Deploy CockroachDB with the CockroachDB Operator


This page describes how to start and stop a secure 3-node CockroachDB cluster in a single Kubernetes cluster.

Note:

The CockroachDB operator is in Preview.

Prerequisites and best practices

Kubernetes version

To deploy CockroachDB v25.1 or later, Kubernetes 1.30 or higher is required. Cockroach Labs strongly recommends that you use a Kubernetes version that is eligible for patch support by the Kubernetes project.
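
You can confirm which version your cluster is running with kubectl, which prints both the client and server versions:

kubectl version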

Helm version

The CockroachDB Helm chart requires Helm 3.0 or higher. If you attempt to use an incompatible Helm version, an error like the following occurs:

Error: UPGRADE FAILED: template: cockroachdb/templates/tests/client.yaml:6:14: executing "cockroachdb/templates/tests/client.yaml" at <.Values.networkPolicy.enabled>: nil pointer evaluating interface {}.enabled
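
To confirm that your installed Helm version meets this requirement:

helm version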

There are two Helm charts that must be deployed:

  • operator: The CockroachDB operator chart, which must be installed first.
  • cockroachdb: The CockroachDB application chart, which is installed after the operator is ready.

Network

Server Name Indication (SNI) is an extension to the TLS protocol that allows a client to indicate, at the start of the TLS handshake, which hostname it is attempting to reach. This lets the server present multiple certificates on the same IP address and TCP port number, so one server can serve multiple secure websites or API services even if they use different certificates.

Due to its order of operations, the PostgreSQL wire protocol's implementation of TLS is incompatible with SNI-based routing in the Kubernetes ingress controller. Instead, use a TCP load balancer for CockroachDB that is not shared with other services.
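
As a sketch, a dedicated TCP load balancer can be declared as a Kubernetes Service of type LoadBalancer. The name, selector, and annotation below are illustrative assumptions; adapt them to your environment and pod labels:

apiVersion: v1
kind: Service
metadata:
  name: cockroachdb-lb            # hypothetical name
  namespace: cockroach-ns
  annotations:
    # On AWS, request a Layer 4 Network Load Balancer (annotation varies by platform):
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: cockroachdb   # adjust to match your CockroachDB pod labels
  ports:
    - name: sql
      port: 26257
      targetPort: 26257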

If you want to secure your cluster to use TLS certificates for all network communications, Helm must be installed with RBAC privileges. Otherwise, you will get an attempt to grant extra privileges error.

Localities

CockroachDB clusters use localities to distribute replicas efficiently. This is especially important in multi-region deployments. With the CockroachDB operator, you specify mappings between locality levels and the Kubernetes node labels where the value for each locality can be found.

In cloud provider deployments (e.g., GKE, EKS, or AKS), the topology.kubernetes.io/region and topology.kubernetes.io/zone values on Kubernetes nodes are populated by the cloud provider. For further granularity, you can define arbitrary locality labels (e.g., province, datacenter, rack), but these must be applied individually to each Kubernetes node when it is initialized so that CockroachDB can understand where the node lives and distribute replicas accordingly.

On bare metal Kubernetes deployments, you must plan a hierarchy of localities that suit your CockroachDB node distribution, then apply these values individually to nodes when they are initialized. Although you can set most of these values arbitrarily, you must set region and zone locations in the reserved topology.kubernetes.io/region and topology.kubernetes.io/zone namespaces, respectively.
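
For example, on a Kubernetes node that lives in the us-east1 region and us-east1-b zone, the reserved labels could be applied as follows. The node name and locality values are placeholders, and the custom example.datacenter.locality label mirrors the custom locality label used later in this guide:

kubectl label node {node-name} topology.kubernetes.io/region=us-east1
kubectl label node {node-name} topology.kubernetes.io/zone=us-east1-b
kubectl label node {node-name} example.datacenter.locality=dc2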

For more information on how locality labels are used by CockroachDB, refer to the --locality documentation.

Architecture

The CockroachDB operator is only supported in environments with an ARM64 or AMD64 architecture.

Resources

When starting Kubernetes, select machines with at least 4 vCPUs and 16 GiB of memory, and provision at least 2 vCPUs and 8 GiB of memory to CockroachDB per pod. These minimum settings are used by default in this deployment guide, and are appropriate for testing purposes only. On a production deployment, you should adjust the resource settings for your workload.

Storage

Kubernetes deployments use external persistent volumes that are often replicated by the provider. CockroachDB replicates data automatically, and this redundant layer of replication can impact performance. Using local volumes may improve performance.
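
For example, on GKE an SSD-backed storage class could be defined as follows. This is a minimal sketch: the name is illustrative, and the provisioner and parameters vary by platform:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: crdb-ssd                       # hypothetical name
provisioner: pd.csi.storage.gke.io     # GKE CSI driver; differs on other platforms
parameters:
  type: pd-ssd
volumeBindingMode: WaitForFirstConsumer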

Step 1. Start Kubernetes

You can use the hosted Google Kubernetes Engine (GKE) service, hosted Amazon Elastic Kubernetes Service (EKS), or Microsoft Azure Kubernetes Service (AKS) to quickly start Kubernetes.

Tip:

Cloud providers such as GKE, EKS, and AKS are not required to run CockroachDB on Kubernetes. You can use any cluster hardware with the minimum recommended Kubernetes version and at least 3 pods, each presenting sufficient resources to start a CockroachDB node. However, note that support for other deployments may vary.

Hosted GKE

  1. Complete the Before You Begin steps described in the Google Kubernetes Engine Quickstart documentation.

    This includes installing gcloud, which is used to create and delete Kubernetes Engine clusters, and kubectl, which is the command-line tool used to manage Kubernetes from your workstation.

    The documentation offers the choice of using Google's Cloud Shell product or using a local shell on your machine. Choose to use a local shell if you want to be able to view the DB Console using the steps in this guide.

  2. From your local workstation, start the Kubernetes cluster, specifying one of the available regions (e.g., us-east1).

    The process can take a few minutes, so do not move on to the next step until you see a Creating cluster cockroachdb...done message and details about your cluster.

    gcloud container clusters create cockroachdb --machine-type n2-standard-4 --region {region-name} --num-nodes 1
    
    Creating cluster cockroachdb...done.
    
    Note:

    Since this region can differ from your default gcloud region, be sure to include the --region flag to run gcloud commands against this cluster.

    This creates GKE instances and joins them into a single Kubernetes cluster named cockroachdb. The --region flag specifies a regional three-zone cluster, and --num-nodes specifies one Kubernetes worker node in each zone.

    The --machine-type flag tells the node pool to use the n2-standard-4 machine type (4 vCPUs, 16 GB memory), which meets our recommended CPU and memory configuration.

    Note:

    Consider creating a separate, dedicated node pool for the operator pod to ensure system resource availability.
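
    For example, a dedicated node pool for the operator could be created as follows; the pool name and machine type are illustrative:

    gcloud container node-pools create operator-pool \
      --cluster cockroachdb \
      --region {region-name} \
      --num-nodes 1 \
      --machine-type n2-standard-2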

  3. Get the email address associated with your Google Cloud account:

    gcloud info | grep Account
    
    Account: [your.google.cloud.email@example.org]
    

    The preceding command returns your email address in all lowercase. However, in the next step, you must enter the address with its exact capitalization. For example, if your address is YourName@example.com, you must use YourName@example.com and not yourname@example.com.

  4. Create the RBAC roles CockroachDB needs for running on GKE, using the address from the previous step:

    kubectl create clusterrolebinding $USER-cluster-admin-binding \
      --clusterrole=cluster-admin \
      --user={your.google.cloud.email@example.org}
    
    clusterrolebinding.rbac.authorization.k8s.io/your.username-cluster-admin-binding created
    

Hosted EKS

  1. Complete the steps described in the EKS Getting Started documentation.

    This includes installing and configuring the AWS CLI and eksctl, which is the command-line tool used to create and delete Kubernetes clusters on EKS, and kubectl, which is the command-line tool used to manage Kubernetes from your workstation.

    If you are running EKS-Anywhere, CockroachDB requires that you configure your default storage class to auto-provision persistent volumes. Alternatively, you can define a custom storage configuration as required by your install pattern.
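
    For example, an existing storage class can be marked as the default; the storage class name is a placeholder:

    kubectl patch storageclass {storage-class-name} \
      -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'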

  2. From your local workstation, start the Kubernetes cluster:

    To ensure that each of the 3 nodes can be placed in a different availability zone, you may want to first confirm that at least 3 zones are available in the region for your account.

    Cluster provisioning usually takes between 10 and 15 minutes. Do not move on to the next step until you see a message like [✔] EKS cluster "cockroachdb" in "us-east-1" region is ready and details about your cluster.

    eksctl create cluster \
      --name cockroachdb \
      --nodegroup-name standard-workers \
      --node-type m6i.xlarge \
      --nodes 3 \
      --nodes-min 1 \
      --nodes-max 4 \
      --node-ami auto
    

    This creates EKS instances and joins them into a single Kubernetes cluster named cockroachdb. The --node-type flag tells the node pool to use the m6i.xlarge instance type (4 vCPUs, 16 GB memory), which meets our recommended CPU and memory configuration.

    Note:

    Consider creating a separate, dedicated node group for the operator pod to ensure system resource availability.
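
    For example, a dedicated node group for the operator could be added with eksctl; the group name and instance type are illustrative:

    eksctl create nodegroup \
      --cluster cockroachdb \
      --name operator-workers \
      --node-type m6i.large \
      --nodes 1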

  3. Open the AWS CloudFormation console to verify that the stacks eksctl-cockroachdb-cluster and eksctl-cockroachdb-nodegroup-standard-workers were successfully created. Be sure that your region is selected in the console.

Hosted AKS

  1. Complete the Before you begin, Define environment variables, and Create a resource group steps described in the AKS quickstart guide. This includes setting up the Azure CLI and its az command, the command-line tool used to create and manage Azure cloud resources.

    Set the environment variables as desired for your CockroachDB deployment. For these instructions, set the MY_AKS_CLUSTER_NAME variable to cockroachdb.

    Do not follow the Create an AKS cluster steps or the sections that follow in the AKS quickstart guide; those topics are covered specifically for CockroachDB in this documentation.
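
    For example, the following exports match the commands in the next step; the resource group name is illustrative:

    export MY_RESOURCE_GROUP_NAME=cockroachdb-rg
    export MY_AKS_CLUSTER_NAME=cockroachdb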

  2. From your workstation, create the Kubernetes cluster:

    az aks create \
      --resource-group $MY_RESOURCE_GROUP_NAME \
      --name $MY_AKS_CLUSTER_NAME \
      --node-count 3 \
      --generate-ssh-keys
    
  3. Create an application in your Azure tenant, then create a secret named azure-cluster-identity-credentials-secret that contains AZURE_CLIENT_ID and AZURE_CLIENT_SECRET to hold the application credentials. You can use the following example YAML to define this secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: azure-cluster-identity-credentials-secret
    type: Opaque
    stringData:
      azure-credentials: |
        azure_client_id: 11111111-1111-1111-1111-111111111111
        azure_client_secret: s3cr3t
    

    For more information on how to use these variables, refer to the Azure.Identity documentation.

Bare metal deployments

For bare metal deployments, the specific Kubernetes infrastructure deployment steps should be similar to those described in Hosted GKE and Hosted EKS.

  • You must plan a hierarchy of locality labels that suit your CockroachDB node distribution, then apply these labels individually to nodes when they are initialized. Although you can set most of these values arbitrarily, you must set region and zone locations in the reserved topology.kubernetes.io/region and topology.kubernetes.io/zone namespaces, respectively.

Step 2. Start CockroachDB

Install the operator sub-chart

  1. Clone the CockroachDB helm-charts repository from GitHub:

    git clone https://github.com/cockroachdb/helm-charts.git
    
  2. Set your environment variables. This step is optional but recommended so that you can use the example commands and templates in the following instructions. Note the default Kubernetes namespace of cockroach-ns.

    export CRDBOPERATOR=crdb-operator
    export CRDBCLUSTER=cockroachdb
    export NAMESPACE=cockroach-ns
    
  3. Install the operator sub-chart:

    kubectl create namespace $NAMESPACE
    
    helm install $CRDBOPERATOR ./cockroachdb-parent/charts/operator -n $NAMESPACE
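
    Before moving on, verify that the operator pod reaches a Running status; the exact pod name varies:

    kubectl get pods -n $NAMESPACE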
    

Initialize the cluster

  1. Open cockroachdb-parent/charts/cockroachdb/values.yaml, a values file that tells Helm how to configure the Kubernetes cluster, in your text editor.

  2. Modify the cockroachdb.crdbCluster.regions section to describe the number of CockroachDB nodes to deploy and what region(s) to deploy them in. Replace the default cloudProvider with the appropriate value (gcp, aws, azure). For bare metal deployments, you can remove the cloudProvider field. The following example initializes three nodes on Google Cloud in the us-central1 region:

    cockroachdb:
      crdbCluster:
        regions:
          - code: us-central1
            nodes: 3
            cloudProvider: gcp
            namespace: cockroach-ns
    
    Note:

    If you intend to deploy CockroachDB nodes across multiple different regions, follow the additional steps described in Deploy across multiple regions.

  3. Uncomment and modify cockroachdb.crdbCluster.resources in the values file with the CPU and memory requests and limits for each node to use. The default values are 4 vCPUs and 16 GiB of memory, as shown in the sketch below.

    For more information on configuring node resource allocation, refer to Resource management.
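
    The following is a minimal sketch of this section. It assumes the chart follows standard Kubernetes requests and limits conventions, so confirm the exact field names against the comments in values.yaml:

    cockroachdb:
      crdbCluster:
        resources:
          requests:
            cpu: "4"
            memory: 16Gi
          limits:
            cpu: "4"
            memory: 16Gi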

  4. Modify the TLS configuration as desired. For a secure deployment, set cockroachdb.tls.enabled in the values file to true. You can either allow the operator to generate self-signed certificates, provide a custom CA certificate and generate other certificates, or use your own certificates.

    • All self-signed certificates: By default, the certificates are created automatically by a self-signer utility, which requires no configuration beyond setting a custom certificate duration if desired. This utility creates self-signed certificates for the nodes and root client, which are stored in Kubernetes secrets. You can see these certificates by running kubectl get secrets:

      kubectl get secrets
      
      NAME                             TYPE                DATA   AGE
      crdb-cockroachdb-ca-secret       Opaque              2      23s
      crdb-cockroachdb-client-secret   kubernetes.io/tls   3      22s
      crdb-cockroachdb-node-secret     kubernetes.io/tls   3      23s
      
      Note:

      If you are deploying on OpenShift, you must also set cockroachdb.tls.selfSigner.securityContext.enabled to false to accommodate OpenShift's stricter security policies.

    • Custom CA certificate: If you wish to supply your own CA certificates to the deployed nodes but allow automatic generation of client certificates, create a Kubernetes secret with the custom CA certificate. To perform these steps using the cockroach cert command:

      mkdir certs
      
      mkdir my-safe-directory
      
      cockroach cert create-ca --certs-dir=certs --ca-key=my-safe-directory/ca.key
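
      Then upload the CA certificate and key to the Kubernetes cluster as a secret. This command is a sketch; the key names the chart expects may vary by version, so confirm them against the chart's values file:

      kubectl create secret generic {ca-secret-name} \
        --from-file=ca.crt=certs/ca.crt \
        --from-file=ca.key=my-safe-directory/ca.key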
      

      Set cockroachdb.tls.selfSigner.caProvided to true and specify the secret where the certificate is stored:

      cockroachdb:
        tls:
          enabled: true
          selfSigner:
            enabled: true
            caProvided: true
            caSecret: {ca-secret-name}
      
      Note:

      If you are deploying on OpenShift, you must also set cockroachdb.tls.selfSigner.securityContext.enabled to false to accommodate OpenShift's stricter security policies.

    • All custom certificates: Set up your certificates and load them into your Kubernetes cluster as secrets using the following commands:

      mkdir certs
      
      mkdir my-safe-directory
      
      cockroach cert create-ca --certs-dir=certs --ca-key=my-safe-directory/ca.key
      
      cockroach cert create-client root --certs-dir=certs --ca-key=my-safe-directory/ca.key
      
      kubectl create secret generic cockroachdb-root --from-file=certs
      
      secret/cockroachdb-root created
      
      cockroach cert create-node \
        localhost \
        127.0.0.1 \
        my-release-cockroachdb-public \
        my-release-cockroachdb-public.cockroach-ns \
        my-release-cockroachdb-public.cockroach-ns.svc.cluster.local \
        *.my-release-cockroachdb \
        *.my-release-cockroachdb.cockroach-ns \
        *.my-release-cockroachdb.cockroach-ns.svc.cluster.local \
        --certs-dir=certs \
        --ca-key=my-safe-directory/ca.key

      kubectl create secret generic cockroachdb-node --from-file=certs
      
      secret/cockroachdb-node created
      
      Note:

      The subject alternative names are based on a release called my-release in the cockroach-ns namespace. Make sure they match the services created with the release during Helm install.

      If you wish to supply certificates with cert-manager, set cockroachdb.tls.certManager.enabled to true, and cockroachdb.tls.certManager.issuer to an IssuerRef (as they appear in certificate resources) pointing to a clusterIssuer or issuer that you have set up in the cluster:

      cockroachdb:
        tls:
          enabled: true
          certManager:
            enabled: true
            caConfigMap: cockroachdb-ca
            nodeSecret: cockroachdb-node
            clientRootSecret: cockroachdb-root
            issuer:
              group: cert-manager.io
              kind: Issuer
              name: cockroachdb-cert-issuer
              clientCertDuration: 672h
              clientCertExpiryWindow: 48h
              nodeCertDuration: 8760h
              nodeCertExpiryWindow: 168h
      

      The following Kubernetes manifests describe an example CA secret and issuer.

      apiVersion: v1
      kind: Secret
      metadata:
        name: cockroachdb-ca
        namespace: cockroach-ns
      data:
        tls.crt: [BASE64 Encoded ca.crt]
        tls.key: [BASE64 Encoded ca.key]
      type: kubernetes.io/tls
      ---
      apiVersion: cert-manager.io/v1
      kind: Issuer
      metadata:
        name: cockroachdb-cert-issuer
        namespace: cockroach-ns
      spec:
        ca:
          secretName: cockroachdb-ca
      

      If your certificates are stored in TLS secrets, such as secrets generated by cert-manager, the secret will contain files named: ca.crt, tls.crt, and tls.key.

      For CockroachDB, rename these files as applicable to match the following naming scheme: ca.crt, node.crt, node.key, client.root.crt, and client.root.key.
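
      For example, a node secret with the expected filenames could be assembled from cert-manager output as follows. This is a sketch that assumes the ca.crt, tls.crt, and tls.key files have been extracted to the current directory:

      kubectl create secret generic {node_secret_name} \
        --from-file=ca.crt=ca.crt \
        --from-file=node.crt=tls.crt \
        --from-file=node.key=tls.key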

      Add the following to the values file:

      cockroachdb:
        tls:
          enabled: true
          externalCertificates:
            enabled: true
            certificates:
              nodeSecretName: {node_secret_name}
              nodeClientSecretName: {client_secret_name}
      

      Replace the following:

      • {node_secret_name}: The name of the Kubernetes secret that contains the generated node certificate and key.
      • {client_secret_name}: The name of the Kubernetes secret that contains the generated client certificate and key.

      For a detailed tutorial of a TLS configuration with manual certificates, refer to Authenticate with cockroach cert.

  5. In cockroachdb.crdbCluster.localityMappings, provide locality mappings that define locality levels and map them to node labels where the locality information of each Kubernetes node is stored. When CockroachDB is initialized on a node, it processes these values as though they are provided through the cockroach start --locality flag.

    If localityMappings is not configured, by default the CockroachDB operator uses the region and zone locality labels, mapped implicitly to the topology.kubernetes.io/region and topology.kubernetes.io/zone node labels.

    • In cloud provider deployments, the topology.kubernetes.io/region and topology.kubernetes.io/zone values on a node are populated by the cloud provider.
    • In bare metal deployments, the topology.kubernetes.io/region and topology.kubernetes.io/zone node label values are not set implicitly by a cloud provider when initializing the node, so you must set them manually or configure custom locality labels.

    To add more granular levels of locality to your nodes or use different locality labels, add custom locality levels as values in the cockroachdb.crdbCluster.localityMappings list. Any custom localityMappings configuration overrides the default region and zone configuration, so if you append an additional locality level but wish to keep the region and zone labels, you must declare them manually.

    The following example uses the existing region and zone labels and adds an additional datacenter locality mapping that is more granular than zone. This example declares that the dc locality information is stored in the example.datacenter.locality node label:

    cockroachdb:
      crdbCluster:
        localityMappings:
          - nodeLabel: "topology.kubernetes.io/region"
            localityLabel: "region"
          - nodeLabel: "topology.kubernetes.io/zone"
            localityLabel: "zone"
          - nodeLabel: "example.datacenter.locality"
            localityLabel: "dc"
    

    The list of localityMappings is processed in a top-down hierarchy, where each entry is processed as a lower locality level than the previous locality. In this example, if a Kubernetes node is initialized in the us-central1 region, us-central1-c zone, and dc2 datacenter, its cockroach start --locality flag would be equivalent to the following:

    cockroach start --locality region=us-central1,zone=us-central1-c,dc=dc2
    

    Optionally, review the cockroachdb.crdbCluster.topologySpreadConstraints configuration and set topologyKey to the nodeLabel value of a locality level that has distinct values for each node. By default the lowest locality level is zone, so the following configuration sets that value as the topologyKey:

    cockroachdb:
      crdbCluster:
        topologySpreadConstraints:
          topologyKey: topology.kubernetes.io/zone
    

    For more information on localities and topology planning, see the topology patterns documentation.

  6. Modify other relevant parts of the configuration, such as the remaining topologySpreadConstraints fields and service.ports, as needed for your deployment.

  7. Run the following command to install the CockroachDB chart using Helm:

    helm install $CRDBCLUSTER ./cockroachdb-parent/charts/cockroachdb -n $NAMESPACE
    

    You can override the default parameters using the --set key=value[,key=value] argument while installing the chart:

    helm install $CRDBCLUSTER ./cockroachdb-parent/charts/cockroachdb --set clusterDomain=cluster-test.local -n $NAMESPACE
    

Deploy across multiple regions

The Helm chart supports specifying multiple region definitions in cockroachdb.crdbCluster.regions with their respective node counts. You must ensure the required networking is set up to allow for service discovery across regions. Also, ensure that the same CA cert is used across all the regions.

For each region, modify the regions configuration as described in Initialize the cluster and perform helm install against the respective Kubernetes cluster. While applying the installation in a given region, do the following:

  • Verify that the domain matches cockroachdb.clusterDomain in the values file.
  • Ensure that cockroachdb.crdbCluster.regions captures the information for regions that have already been deployed, including the current region. This allows CockroachDB in the current region to connect to clusters deployed in the existing regions.

The following example shows a configuration across two regions, us-central1 and us-east1, with 3 nodes in each cluster:

cockroachdb:
  clusterDomain: cluster.gke.gcp-us-east1
  crdbCluster:
    regions:
      - code: us-central1
        nodes: 3
        cloudProvider: gcp
        domain: cluster.gke.gcp-us-central1
        namespace: cockroach-ns
      - code: us-east1
        nodes: 3
        cloudProvider: gcp
        domain: cluster.gke.gcp-us-east1
        namespace: cockroach-ns

Step 3. Use the built-in SQL client

To use the CockroachDB SQL client, follow these steps to launch a secure pod running the cockroach binary.

  1. Download the secure client Kubernetes application:

    curl -O https://raw.githubusercontent.com/cockroachdb/helm-charts/master/examples/client-secure.yaml
    
    Warning:

    This client tool logs into CockroachDB as root using the root certificates.

  2. Edit the YAML file to set the following values:

    • spec.serviceAccountName: my-release-cockroachdb
    • spec.image: cockroachdb/cockroach:
    • spec.volumes[0].project.sources[0].secret.name: my-release-cockroachdb-client-secret
  3. Launch a pod using this file and keep it running indefinitely:

    kubectl create -f client-secure.yaml
    
  4. Get a shell into the pod and start the CockroachDB built-in SQL client:

    kubectl exec -it cockroachdb-client-secure \
    -- ./cockroach sql \
    --certs-dir=/cockroach/cockroach-certs \
    --host=cockroachdb-public
    
    # Welcome to the CockroachDB SQL shell.
    # All statements must be terminated by a semicolon.
    # To exit, type: \q.
    #
    # Server version: CockroachDB CCL v21.1.0 (x86_64-unknown-linux-gnu, built 2021/04/23 13:54:57, go1.13.14) (same version as client)
    # Cluster ID: a96791d9-998c-4683-a3d3-edbf425bbf11
    #
    # Enter \? for a brief introduction.
    #
    root@cockroachdb-public:26257/defaultdb>
    

    This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other cockroach client commands (e.g., cockroach node), repeat this step using the appropriate cockroach command. If you'd prefer to delete the pod and recreate it when needed, run kubectl delete pod cockroachdb-client-secure.

  5. Run some basic CockroachDB SQL statements:

    CREATE DATABASE bank;
    CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
    INSERT INTO bank.accounts VALUES (1, 1000.50);
    SELECT * FROM bank.accounts;
      id | balance
    -----+----------
       1 | 1000.50
    (1 row)
    
  6. Create a user with a password:

    CREATE USER roach WITH PASSWORD 'Q7gc8rEdS';
    

    You will need this username and password to access the DB Console later.

  7. Exit the SQL shell and pod:

    \q

Step 4. Access the DB Console

To access the cluster's DB Console:

  1. On secure clusters, certain pages of the DB Console can only be accessed by admin users.

    Get a shell into the pod and start the CockroachDB built-in SQL client:

    kubectl exec -it cockroachdb-client-secure \
    -- ./cockroach sql \
    --certs-dir=/cockroach/cockroach-certs \
    --host=cockroachdb-public
    
  2. Assign roach to the admin role (you only need to do this once):

    GRANT admin TO roach;
    
  3. Exit the SQL shell and pod:

    \q
    
  4. In a new terminal window, port-forward from your local machine to the cockroachdb-public service:

    kubectl port-forward service/cockroachdb-public 8080
    
    Forwarding from 127.0.0.1:8080 -> 8080
    

    Run the port-forward command on the same machine as the web browser in which you want to view the DB Console. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring kubectl locally and running the preceding port-forward command on your local machine.

  5. Go to https://localhost:8080 and log in with the username and password you created earlier.

    Note:

    If you are using Google Chrome, and get an error about not being able to reach localhost because its certificate has been revoked, go to chrome://flags/#allow-insecure-localhost, enable "Allow invalid certificates for resources loaded from localhost", and then restart the browser. This degrades security for all sites running on localhost, not just CockroachDB's DB Console, so enable the feature only temporarily.

  6. In the DB Console, verify that the cluster is running as expected:

    1. View the Node List to ensure that all nodes successfully joined the cluster.
    2. Click the Databases tab on the left to verify that bank is listed.

Next steps

Read the following pages for detailed information on cluster scaling, certificate management, resource management, best practices, and other cluster operation details.

Examples

Authenticate with cockroach cert

The following example uses cockroach cert commands to generate and sign the CockroachDB node and client certificates. To learn more about the supported methods of signing certificates, refer to Authentication.

  1. Create two directories:

    mkdir certs my-safe-directory
    
  2. Create the CA certificate and key pair:

    cockroach cert create-ca \
      --certs-dir=certs \
      --ca-key=my-safe-directory/ca.key
    
  3. Create a client certificate and key pair for the root user:

    cockroach cert create-client root \
      --certs-dir=certs \
      --ca-key=my-safe-directory/ca.key
    
  4. Upload the client certificate and key to the Kubernetes cluster as a secret, renaming them to the filenames required by the CockroachDB operator:

    kubectl create secret generic cockroachdb.client.root \
      --from-file=tls.key=certs/client.root.key \
      --from-file=tls.crt=certs/client.root.crt \
      --from-file=ca.crt=certs/ca.crt
    
    secret/cockroachdb.client.root created
    
  5. Create the certificate and key pair for your CockroachDB nodes, specifying the namespace you used when deploying the cluster. This example uses the cockroach-ns namespace:

    cockroach cert create-node localhost \
      127.0.0.1 \
      cockroachdb-public \
      cockroachdb-public.cockroach-ns \
      cockroachdb-public.cockroach-ns.svc.cluster.local \
      *.cockroachdb \
      *.cockroachdb.cockroach-ns \
      *.cockroachdb.cockroach-ns.svc.cluster.local \
      --certs-dir=certs \
      --ca-key=my-safe-directory/ca.key
    
  6. Upload the node certificate and key to the Kubernetes cluster as a secret, renaming them to the filenames required by the CockroachDB operator:

    kubectl create secret generic cockroachdb.node \
      --from-file=tls.key=certs/node.key \
      --from-file=tls.crt=certs/node.crt \
      --from-file=ca.crt=certs/ca.crt
    
    secret/cockroachdb.node created
    
  7. Check that the secrets were created on the cluster:

    kubectl get secrets
    
    NAME                      TYPE                                  DATA   AGE
    cockroachdb.client.root   Opaque                                3      13s
    cockroachdb.node          Opaque                                3      3s
    default-token-6js7b       kubernetes.io/service-account-token   3      9h
    
  8. Add cockroachdb.tls.externalCertificates.certificates.nodeSecretName and cockroachdb.tls.externalCertificates.certificates.nodeClientSecretName to the values file used to deploy the cluster:

    cockroachdb:
      tls:
        enabled: true
        externalCertificates:
          enabled: true
          certificates:
            nodeSecretName: cockroachdb.node
            nodeClientSecretName: cockroachdb.client.root
    