Cluster API to production: authentication with service accounts and RBAC using External Secrets and Kyverno

Part 3 of Cluster API to Production

January 23rd, 2025

Service accounts are the Kubernetes way of authenticating non-human actors. Providing tenant clusters with a service account for the management cluster is useful in many cases: from accessing resources on the management cluster, to accessing services that support Kubernetes authentication, such as secret managers, container registries and object stores.

Providing each tenant cluster with its own service account solves the problem of authenticating from tenant clusters to such services, and gives each tenant cluster unique credentials and its own access control policy.

Getting the service account token in the tenant cluster

Creating a service account with an associated role and role binding is easy. The big question is: how do we give tenant clusters access to the service account?

After a few iterations, I landed on using External Secrets on the management cluster with a PushSecret to push the service account token secret. External Secrets pushes the token secret faster than Argo CD would, and avoids displaying the token in the Argo CD dashboard. External Secrets is also useful for pulling secrets into the management cluster, so it serves more than the single task of pushing service account tokens to tenant clusters.

Requirements for following the guide

This guide assumes you have Kyverno, External Secrets, and Cluster API with Cluster API provider KubeVirt installed on the management cluster. The guide will work with other Cluster API providers with minimal modification of the Kyverno policy.

This article refers to the cluster running KubeVirt and Cluster API as the management cluster, and to clusters managed by Cluster API as tenant clusters.
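
To quickly verify these components are installed, you can list their controller deployments on the management cluster. The namespaces below assume default installations (kyverno, external-secrets, capi-system, and capk-system for the KubeVirt provider); adjust them if you installed the components elsewhere.

kubectl get deployments --namespace kyverno
kubectl get deployments --namespace external-secrets
kubectl get deployments --namespace capi-system
kubectl get deployments --namespace capk-system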

Kyverno policy for generating SecretStore

Similar to part 2 of the series, we need to configure an External Secrets SecretStore on the management cluster based on the kubeconfig secret generated by Cluster API. The SecretStore must be generated because it expects credentials in a different structure than the kubeconfig secret provides.

The following policy generates a SecretStore from Cluster API kubeconfig secrets. Using this policy, we can push secrets to tenant clusters by specifying a SecretStore with the same name as the cluster.

# Inside secretstore-policy.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: secretstore-generation-from-capi
spec:
  rules:
    - name: auth-secret
      match:
        all:
          - resources:
              kinds:
                - v1/Secret
              names:
                - "*-kubeconfig"
              selector:
                matchLabels:
                  cluster.x-k8s.io/cluster-name: "*"
      context:
        - name: clusterName
          variable:
            jmesPath: 'request.object.metadata.labels."cluster.x-k8s.io/cluster-name"'
        - name: kubeconfig
          variable:
            jmesPath: "request.object.data.value | base64_decode(@) | parse_yaml(@)"
      generate:
        synchronize: true
        apiVersion: v1
        kind: Secret
        namespace: "{{ request.object.metadata.namespace }}"
        name: "{{ clusterName }}-secretstore-auth"
        data:
          metadata:
            labels:
              argocd.argoproj.io/secret-type: cluster
          type: Opaque
          data:
            caData: '{{ kubeconfig.clusters[0].cluster."certificate-authority-data" }}'
            clientCert: '{{ kubeconfig.users[0].user."client-certificate-data" }}'
            clientKey: '{{ kubeconfig.users[0].user."client-key-data" }}'
    - name: secretstore
      match:
        all:
          - resources:
              kinds:
                - v1/Secret
              names:
                - "*-kubeconfig"
              selector:
                matchLabels:
                  cluster.x-k8s.io/cluster-name: "*"
      context:
        - name: clusterName
          variable:
            jmesPath: 'request.object.metadata.labels."cluster.x-k8s.io/cluster-name"'
        - name: kubeconfig
          variable:
            jmesPath: "request.object.data.value | base64_decode(@) | parse_yaml(@)"
      generate:
        synchronize: true
        apiVersion: external-secrets.io/v1beta1
        kind: SecretStore
        namespace: "{{ request.object.metadata.namespace }}"
        name: "{{ clusterName }}"
        data:
          spec:
            provider:
              kubernetes:
                remoteNamespace: kube-system
                server:
                  url: "{{ kubeconfig.clusters[0].cluster.server }}"
                  caProvider:
                    type: Secret
                    name: "{{ clusterName }}-secretstore-auth"
                    key: caData
                auth:
                  cert:
                    clientCert:
                      name: "{{ clusterName }}-secretstore-auth"
                      key: clientCert
                    clientKey:
                      name: "{{ clusterName }}-secretstore-auth"
                      key: clientKey

For a step-by-step walk-through of the policy, check out part 2 of the series. It contains a detailed explanation of a very similar policy.

Note that the generated SecretStore sets kube-system as the remote namespace. If you pick a namespace that does not exist at cluster creation and is only applied later by Argo CD, External Secrets must wait for Argo CD to create the namespace before it can push the secret. With a namespace that already exists at cluster creation, External Secrets can push the secret immediately.

Apply the policy to the management cluster:

kubectl apply -f secretstore-policy.yaml

Creating a tenant cluster

The next step is to create a new tenant cluster. A new cluster is required for Kyverno to pick up the new kubeconfig secret and generate a matching SecretStore. This step is omitted for brevity, as this guide assumes you already know how to create a new Cluster API cluster.

See the cluster Helm chart from my infrastructure charts repository for a full example including cluster creation.
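
Once the tenant cluster exists, you can verify that Kyverno generated the SecretStore and its auth secret. The commands below assume a cluster named example-cluster in the foo namespace (the namespace used for the service account later in this guide); substitute your own cluster name and namespace.

kubectl get --namespace foo secretstore example-cluster
kubectl get --namespace foo secret example-cluster-secretstore-auth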

Configuring RBAC and a service account

The next step is to configure RBAC and a service account for our tenant cluster.

We’ll start with the role. Note this role grants access to a secret named external-dns-credentials in the secrets-example namespace. We’re going to access this secret later in this guide as an example of using the service account token from the tenant cluster.

# Inside manifests/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: foo-example-cluster
  namespace: secrets-example
rules:
  - apiGroups:
    - ""
    resourceNames:
    - external-dns-credentials
    resources:
    - secrets
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - authorization.k8s.io
    resources:
    - selfsubjectrulesreviews
    verbs:
    - create

Next come the service account for the tenant cluster and the role binding that binds it to the role we just created.

# Inside manifests/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-cluster
# Inside manifests/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-cluster
  namespace: secrets-example
subjects:
  - kind: ServiceAccount
    name: example-cluster
    namespace: foo
roleRef:
  kind: Role
  name: foo-example-cluster
  apiGroup: rbac.authorization.k8s.io

The last step is to create a service account token secret. This is the secret we’re going to push to the tenant cluster with External Secrets.

# Inside manifests/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-cluster-token
  annotations:
    kubernetes.io/service-account.name: example-cluster
type: kubernetes.io/service-account-token

Pushing the service account token to the tenant cluster

Now that we have a secret containing a service account token, we can push it to the tenant cluster by creating the following PushSecret:

# Inside manifests/pushsecret.yaml
apiVersion: external-secrets.io/v1alpha1
kind: PushSecret
metadata:
  name: example-cluster-serviceaccount
spec:
  refreshInterval: 1h
  secretStoreRefs:
    - name: example-cluster
      kind: SecretStore
  selector:
    secret:
      name: example-cluster-token
  template:
    metadata:
      annotations: {}
      labels: {}
  data:
    - match:
        secretKey: ca.crt
        remoteRef:
          remoteKey: management-serviceaccount-token
          property: ca.crt
    - match:
        secretKey: namespace
        remoteRef:
          remoteKey: management-serviceaccount-token
          property: namespace
    - match:
        secretKey: token
        remoteRef:
          remoteKey: management-serviceaccount-token
          property: token

Note that spec.template.metadata contains empty annotations and labels maps. Without them, External Secrets attempts to copy the secret’s UID, causing a failure.

We now have everything ready. Apply the manifests on the management cluster:

kubectl apply -f manifests
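
Before checking the tenant cluster, you can optionally confirm that Kubernetes populated the token secret on the management cluster; the describe output should list the ca.crt, namespace and token data keys. Run this in the namespace you applied the manifests to:

kubectl describe secret example-cluster-token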

Once everything is applied, check that the tenant cluster contains the secret with the service account token:

kubectl get --namespace kube-system secret management-serviceaccount-token
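
As an optional sanity check, you can use the pushed token to query the management cluster directly from the tenant cluster. This is a minimal sketch that assumes the management cluster API is reachable at https://10.1.0.1:6443, as in the ClusterSecretStore below; replace the address with your own.

# Extract the pushed token and CA certificate:
TOKEN=$(kubectl get --namespace kube-system secret management-serviceaccount-token \
  -o jsonpath='{.data.token}' | base64 -d)
kubectl get --namespace kube-system secret management-serviceaccount-token \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > /tmp/management-ca.crt
# Ask the management cluster whether the service account may read the example secret.
# --kubeconfig=/dev/null makes sure only the flags below are used for authentication:
kubectl --kubeconfig=/dev/null --server=https://10.1.0.1:6443 --token="$TOKEN" \
  --certificate-authority=/tmp/management-ca.crt \
  auth can-i get secret/external-dns-credentials --namespace secrets-example

If the RBAC manifests were applied correctly, the command should print yes.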

Using the service account to pull secrets

Now that we have a service account token in the tenant cluster, we can use it to authenticate to the management cluster. This quick example uses External Secrets to pull a secret from the management cluster to the tenant cluster.

This usage example assumes you have External Secrets installed in the tenant cluster.

Create a ClusterSecretStore pointing to the management-serviceaccount-token secret we pushed from the management cluster:

# Inside tenant-manifests/clustersecretstore.yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: management
spec:
  provider:
    kubernetes:
      remoteNamespace: secrets-example
      server:
        # Replace with the url of the management cluster Kubernetes API:
        url: https://10.1.0.1:6443
        caProvider:
          type: Secret
          name: management-serviceaccount-token
          namespace: kube-system
          key: ca.crt
      auth:
        token:
          bearerToken:
            name: management-serviceaccount-token
            namespace: kube-system
            key: token

Don’t forget to replace spec.provider.kubernetes.server.url with the URL of the management cluster’s Kubernetes API.

Create an ExternalSecret referring to the secret we gave the service account access to in the role:

# Inside tenant-manifests/externalsecret.yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: example-external-dns-credentials
  namespace: external-dns
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: management
  target:
    name: external-dns-credentials
  data:
  - secretKey: id
    remoteRef:
      key: external-dns-credentials
      property: id
  - secretKey: secret
    remoteRef:
      key: external-dns-credentials
      property: secret

Apply the manifests in the tenant cluster:

kubectl apply -f tenant-manifests

Check the tenant cluster for the secret pulled by External Secrets:

kubectl get --namespace external-dns secret external-dns-credentials
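
If the secret is missing, inspecting the ExternalSecret status usually points at the problem, for example an unreachable management API or missing RBAC permissions:

kubectl get --namespace external-dns externalsecret example-external-dns-credentials
kubectl describe --namespace external-dns externalsecret example-external-dns-credentials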

Full example in a Helm chart

As always with this article series, it ends with a link to a full example. See the cluster chart from my infrastructure charts repository. Specifically, check out the rbac templates subfolder, which contains templates for all resources described in this guide.

You can also check out the cluster-components chart, which is deployed into the tenant cluster by an Argo CD application in the cluster chart. The cluster-components chart includes templates for ClusterSecretStore and ExternalSecret resources similar to the usage example in this guide.