Kubernetes Install Guide

How to install Kubernetes in different environments

Kubernetes and Zookeeper form the backbone of running the Altinity Kubernetes Operator in a cluster. The following guides detail how to set up Kubernetes in different environments.

1 - Install minikube for Linux

How to install Kubernetes through minikube

One popular option for installing Kubernetes is minikube, which creates a local Kubernetes cluster in a variety of environments. Test scripts and examples for the clickhouse-operator are based on using minikube to set up the Kubernetes environment.

The following guide demonstrates how to install a minikube environment that supports the clickhouse-operator on the following operating systems:

  • Linux (Deb based)

Minikube Installation for Deb Based Linux

The following instructions assume an installation on x86-64 based Linux distributions that use Deb packages. Please see the referenced documentation for instructions for other Linux distributions and platforms.

To install a minikube environment that supports running the clickhouse-operator:

kubectl Installation for Deb

The following instructions are based on Install and Set Up kubectl on Linux

  1. Download the kubectl binary:

    curl -LO 'https://dl.k8s.io/release/v1.22.0/bin/linux/amd64/kubectl'
    
  2. Verify the SHA-256 hash:

    curl -LO "https://dl.k8s.io/release/v1.22.0/bin/linux/amd64/kubectl.sha256"
    
    echo "$(<kubectl.sha256) kubectl" | sha256sum --check
    
  3. Install kubectl into the /usr/local/bin directory (this assumes that your PATH includes /usr/local/bin):

    sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    
  4. Verify the installation and the version:

    kubectl version
    

Install Docker for Deb

These instructions are based on Docker’s documentation Install Docker Engine on Ubuntu

  1. Install the Docker repository links.

    1. Update the apt-get repository:

      sudo apt-get update
      
  2. Install the prerequisites ca-certificates, curl, gnupg, and lsb-release:

    sudo apt-get install -y ca-certificates curl gnupg lsb-release
    
  3. Add the Docker repository keys:

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --yes --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
    
    1. Add the Docker repository:

      echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
      
  4. Install Docker:

    1. Update the apt-get repository:

      sudo apt-get update
      
    2. Install Docker and other libraries:

      sudo apt install docker-ce docker-ce-cli containerd.io
    
  5. Add non-root accounts to the docker group. This allows these users to run Docker commands without requiring root access.

    1. Add the current user to the docker group and activate the change:

      sudo usermod -aG docker $USER && newgrp docker
      
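To confirm that the non-root account can reach the Docker daemon, a common check (not part of the Docker installation steps above, but safe to run) is to start the hello-world test image:

    docker run --rm hello-world

If the welcome message prints, Docker is ready for minikube to use.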

Install Minikube for Deb

The following instructions are taken from minikube start.

  1. Update the apt-get repository:

    sudo apt-get update
    
  2. Install the prerequisite conntrack:

    sudo apt install conntrack
    
  3. Download minikube:

    curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
    
  4. Install minikube:

    sudo install minikube-linux-amd64 /usr/local/bin/minikube
    
  5. To correct issues with the kube-proxy and the storage-provisioner, set nf_conntrack_max=524288 before starting minikube:

    sudo sysctl net/netfilter/nf_conntrack_max=524288
    
  6. Start minikube:

    minikube start && echo "ok: started minikube successfully"
    
  7. Once installation is complete, verify that the user owns the ~/.kube and ~/.minikube directories:

    sudo chown -R $USER:$USER ~/.kube
    
    sudo chown -R $USER:$USER ~/.minikube
    
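As an optional check, not part of the minikube documentation steps above, confirm that the local cluster is healthy and that kubectl can reach it:

    minikube status

    kubectl get nodes

Both commands should report the single minikube node as running and Ready.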

2 - Altinity Kubernetes Operator on GKE

How to install the Altinity Kubernetes Operator using Google Kubernetes Engine

Organizations can host their Altinity Kubernetes Operator on the Google Kubernetes Engine (GKE). This can be done either through Altinity.Cloud or through a separate installation on GKE.

To set up a basic Altinity Kubernetes Operator environment, use the following steps. They rely on the currently free Google Cloud services to set up a minimally viable Kubernetes and ClickHouse environment.

Prerequisites

  1. Register a Google Cloud Account: https://cloud.google.com/.
  2. Create a Google Cloud project: https://cloud.google.com/resource-manager/docs/creating-managing-projects
  3. Install gcloud and run gcloud init or gcloud init --console to set up your environment: https://cloud.google.com/sdk/docs/install
  4. Enable the Google Compute Engine: https://cloud.google.com/endpoints/docs/openapi/enable-api
  5. Enable GKE on your project: https://console.cloud.google.com/apis/enableflow?apiid=container.googleapis.com.
  6. Select a default Compute Engine zone.
  7. Select a default Compute Engine region. (A gcloud config sketch for setting these defaults follows this list.)
  8. Install kubectl on your local system. For sample instructions, see the Minikube on Linux installation instructions.
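For steps 6 and 7, the defaults can also be set from the command line. The following is a minimal sketch that assumes the us-west1 region used in the examples below; substitute your own zone and region:

    gcloud config set compute/zone us-west1-a

    gcloud config set compute/region us-west1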

Altinity Kubernetes Operator on GKE Installation instructions

Installing the Altinity Kubernetes Operator in GKE involves the following main steps, covered in the sections below:

Create the Network

The first step in setting up the Altinity Kubernetes Operator in GKE is creating the network. The complete details can be found on the Google Cloud documentation site for the gcloud compute networks create command. The following command creates a network called kubernetes-1 that will work for our minimal Altinity Kubernetes Operator cluster. Note that this network will not be reachable from external networks unless additional steps are taken, such as the firewall rule sketched below; consult the Google Cloud documentation site for more details.
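For example, internal traffic between instances on the new network could be allowed with a firewall rule similar to the ones suggested in the command output in step 2 below. The rule name and source range here are placeholders to adjust for your environment:

    gcloud compute firewall-rules create kubernetes-1-allow-internal --network kubernetes-1 --allow tcp,udp,icmp --source-ranges 10.0.0.0/8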

  1. See a list of current networks available. In this example, there are no networks setup in this project:

    gcloud compute networks list
    NAME     SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
    default  AUTO         REGIONAL
    
  2. Create the network in your Google Cloud project:

    gcloud compute networks create kubernetes-1 --bgp-routing-mode regional --subnet-mode custom
    Created [https://www.googleapis.com/compute/v1/projects/betadocumentation/global/networks/kubernetes-1].
    NAME          SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
    kubernetes-1  CUSTOM       REGIONAL
    
    Instances on this network will not be reachable until firewall rules
    are created. As an example, you can allow all internal traffic between
    instances as well as SSH, RDP, and ICMP by running:
    
    $ gcloud compute firewall-rules create <FIREWALL_NAME> --network kubernetes-1 --allow tcp,udp,icmp --source-ranges <IP_RANGE>
    $ gcloud compute firewall-rules create <FIREWALL_NAME> --network kubernetes-1 --allow tcp:22,tcp:3389,icmp
    
  3. Verify its creation:

    gcloud compute networks list
    NAME          SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
    default       AUTO         REGIONAL
    kubernetes-1  CUSTOM       REGIONAL
    

Create the Cluster

Now that the network has been created, we can set up our cluster. The following cluster uses the e2-micro machine type, which is still within the free tier and provides just enough power to run our basic cluster. The cluster will be called cluster-1, but you can replace that with whatever name you feel is appropriate. It uses the kubernetes-1 network specified earlier and creates a new subnet for the cluster under k-subnet-1.

To create and launch the cluster:

  1. Verify the existing clusters with the gcloud command. For this example there are no pre-existing clusters.

    gcloud container clusters list
    
  2. From the command line, issue the following gcloud command to create the cluster:

    gcloud container clusters create cluster-1 --region us-west1 --node-locations us-west1-a --machine-type e2-micro --network kubernetes-1 --create-subnetwork name=k-subnet-1 --enable-ip-alias &
    
  3. Use the clusters list command to verify when the cluster is available for use:

    gcloud container clusters list
    Created [https://container.googleapis.com/v1/projects/betadocumentation/zones/us-west1/clusters/cluster-1].
    To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-west1/cluster-1?project=betadocumentation
    kubeconfig entry generated for cluster-1.
    NAME       LOCATION  MASTER_VERSION   MASTER_IP      MACHINE_TYPE  NODE_VERSION     NUM_NODES  STATUS
    cluster-1  us-west1  1.21.6-gke.1500  35.233.231.36  e2-micro      1.21.6-gke.1500  3          RUNNING
    NAME       LOCATION  MASTER_VERSION   MASTER_IP      MACHINE_TYPE  NODE_VERSION     NUM_NODES  STATUS
    cluster-1  us-west1  1.21.6-gke.1500  35.233.231.36  e2-micro      1.21.6-gke.1500  3          RUNNING
    [1]+  Done                    gcloud container clusters create cluster-1 --region us-west1 --node-locations us-west1-a --machine-type e2-micro --network kubernetes-1 --create-subnetwork name=k-subnet-1 --enable-ip-alias
    

Get Cluster Credentials

Importing the cluster credentials into your kubectl environment will allow you to issue commands directly to the cluster on Google Cloud. To import the cluster credentials:

  1. Retrieve the credentials for the newly created cluster:

    gcloud container clusters get-credentials cluster-1 --region us-west1 --project betadocumentation
    Fetching cluster endpoint and auth data.
    kubeconfig entry generated for cluster-1.
    
  2. Verify the cluster information from the kubectl environment:

    kubectl cluster-info
    Kubernetes control plane is running at https://35.233.231.36
    GLBCDefaultBackend is running at https://35.233.231.36/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
    KubeDNS is running at https://35.233.231.36/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    Metrics-server is running at https://35.233.231.36/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    
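As an additional check (not part of the original steps), you can list the worker nodes to confirm that kubectl is now pointed at the GKE cluster rather than a local one:

    kubectl get nodes -o wide

The three e2-micro nodes created earlier should appear with a Ready status.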

Install the Altinity ClickHouse Operator

Our cluster is up and ready to go, so it is time to install the Altinity Kubernetes Operator through the following steps. Note that we are specifying the version of the Altinity Kubernetes Operator to install. This ensures maximum compatibility with your applications and other Kubernetes environments.

As of the time of this article, the most current version is 0.18.1.

  1. Apply the Altinity Kubernetes Operator manifest by either downloading it and applying it, or referring to the GitHub repository URL; a download-and-apply sketch follows this list. For more information, see the Altinity Kubernetes Operator Installation Guides.

    kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/0.18.1/deploy/operator/clickhouse-operator-install-bundle.yaml
    
  2. Verify the installation by running:

    kubectl get pods --namespace kube-system
    NAME                                                  READY   STATUS    RESTARTS   AGE
    clickhouse-operator-77b54889b4-g98kk                  2/2     Running   0          53s
    event-exporter-gke-5479fd58c8-7h6bn                   2/2     Running   0          108s
    fluentbit-gke-b29c2                                   2/2     Running   0          79s
    fluentbit-gke-k8f2n                                   2/2     Running   0          80s
    fluentbit-gke-vjlqh                                   2/2     Running   0          80s
    gke-metrics-agent-4ttdt                               1/1     Running   0          79s
    gke-metrics-agent-qf24p                               1/1     Running   0          80s
    gke-metrics-agent-szktc                               1/1     Running   0          80s
    konnectivity-agent-564f9f6c5f-59nls                   1/1     Running   0          40s
    konnectivity-agent-564f9f6c5f-9nfnl                   1/1     Running   0          40s
    konnectivity-agent-564f9f6c5f-vk7l8                   1/1     Running   0          97s
    konnectivity-agent-autoscaler-5c49cb58bb-zxzlp        1/1     Running   0          97s
    kube-dns-697dc8fc8b-ddgrx                             4/4     Running   0          98s
    kube-dns-697dc8fc8b-fpnps                             4/4     Running   0          71s
    kube-dns-autoscaler-844c9d9448-pqvqr                  1/1     Running   0          98s
    kube-proxy-gke-cluster-1-default-pool-fd104f22-8rx3   1/1     Running   0          36s
    kube-proxy-gke-cluster-1-default-pool-fd104f22-gnd0   1/1     Running   0          29s
    kube-proxy-gke-cluster-1-default-pool-fd104f22-k2sv   1/1     Running   0          12s
    l7-default-backend-69fb9fd9f9-hk7jq                   1/1     Running   0          107s
    metrics-server-v0.4.4-857776bc9c-bs6sl                2/2     Running   0          44s
    pdcsi-node-5l9vf                                      2/2     Running   0          79s
    pdcsi-node-gfwln                                      2/2     Running   0          79s
    pdcsi-node-q6scz                                      2/2     Running   0          80s
    
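As noted in step 1, the manifest can also be downloaded first and then applied from the local copy. A minimal sketch using the same URL and version as above:

    curl -LO https://github.com/Altinity/clickhouse-operator/raw/0.18.1/deploy/operator/clickhouse-operator-install-bundle.yaml

    kubectl apply -f clickhouse-operator-install-bundle.yaml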

Create a Simple ClickHouse Cluster

The Altinity Kubernetes Operator allows the easy creation and modification of ClickHouse clusters in whatever format works best for your organization. Now that the Google Cloud cluster is running and has the Altinity Kubernetes Operator installed, let’s create a very simple ClickHouse cluster to test on.

The following example will create an Altinity Kubernetes Operator controlled cluster with 1 shard and 1 replica, 500 MB of persistent storage, and a user named demo whose password is topsecret. For more information on customizing the Altinity Kubernetes Operator, see the Altinity Kubernetes Operator Configuration Guides.
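The demo user's password is stored as a SHA-256 hex digest (the demo/password_sha256_hex field in the manifest below). To use a password other than topsecret, one way to generate the digest on Linux is shown here; yourpassword is a placeholder for your own value:

    echo -n 'yourpassword' | sha256sum | awk '{print $1}'

Paste the resulting hex string into the demo/password_sha256_hex field.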

  1. Create the following manifest and save it as gcp-example01.yaml.

    
    apiVersion: "clickhouse.altinity.com/v1"
    kind: "ClickHouseInstallation"
    metadata:
      name: "gcp-example"
    spec:
      configuration:
        # What does my cluster look like?
        clusters:
          - name: "gcp-example"
            layout:
              shardsCount: 1
              replicasCount: 1
            templates:
              podTemplate: clickhouse-stable
              volumeClaimTemplate: pd-ssd
        # Where is Zookeeper?
        zookeeper:
          nodes:
            - host: zookeeper.zoo1ns
              port: 2181
        # What are my users?
        users:
          # Password = topsecret
          demo/password_sha256_hex: 53336a676c64c1396553b2b7c92f38126768827c93b64d9142069c10eda7a721
          demo/profile: default
          demo/quota: default
          demo/networks/ip:
            - 0.0.0.0/0
            - ::/0
      templates:
        podTemplates:
          # What is the definition of my server?
          - name: clickhouse-stable
            spec:
              containers:
                - name: clickhouse
                  image: altinity/clickhouse-server:21.8.10.1.altinitystable
            # Keep servers on separate nodes!
            podDistribution:
              - scope: ClickHouseInstallation
                type: ClickHouseAntiAffinity
        volumeClaimTemplates:
          # How much storage and which type on each node?
          - name: pd-ssd
            # Do not delete PVC if installation is dropped.
            reclaimPolicy: Retain
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 500Mi
    
  2. Create a namespace in your GKE environment. For this example, we will be using test:

    kubectl create namespace test
    namespace/test created
    
  3. Apply the manifest to the namespace:

    kubectl -n test apply -f gcp-example01.yaml
    clickhouseinstallation.clickhouse.altinity.com/gcp-example created
    
  4. Verify that the installation is complete. The chi resource reports a Completed status when the cluster is ready (a pod-level check for the Running state is sketched after this list):

    kubectl -n test get chi -o wide
    NAME          VERSION   CLUSTERS   SHARDS   HOSTS   TASKID                                 STATUS      UPDATED   ADDED   DELETED   DELETE   ENDPOINT
    gcp-example   0.18.1    1          1        1       f859e396-e2de-47fd-8016-46ad6b0b8508   Completed             1                          clickhouse-gcp-example.test.svc.cluster.local
    
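A quick pod-level check, referenced in step 4 above, is to list the pods in the test namespace; all of them should be in a Running state before you connect to the cluster:

    kubectl -n test get pods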

Login to the Cluster

This example does not have any open external ports, but we can still access our ClickHouse database through kubectl exec. In this case, the specific pod we are connecting to is chi-gcp-example-gcp-example-0-0-0. Replace this with the designation of your pod.

Use the following procedure to verify the Altinity Stable build installation in your GKE environment.

  1. Log in to the clickhouse-client in one of your existing pods:

    kubectl -n test exec -it chi-gcp-example-gcp-example-0-0-0 -- clickhouse-client
    
  2. Verify the cluster configuration:

    kubectl -n test exec -it chi-gcp-example-gcp-example-0-0-0  -- clickhouse-client -q "SELECT * FROM system.clusters  FORMAT PrettyCompactNoEscapes"
    ┌─cluster──────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───────────────────────┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
    │ all-replicated                                │         1 │            1 │           1 │ chi-gcp-example-gcp-example-0-0 │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ all-sharded                                   │         1 │            1 │           1 │ chi-gcp-example-gcp-example-0-0 │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ gcp-example                                   │         1 │            1 │           1 │ chi-gcp-example-gcp-example-0-0 │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ test_cluster_two_shards                       │         1 │            1 │           1 │ 127.0.0.1                       │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ test_cluster_two_shards                       │         2 │            1 │           1 │ 127.0.0.2                       │ 127.0.0.2    │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
    │ test_cluster_two_shards_internal_replication  │         1 │            1 │           1 │ 127.0.0.1                       │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ test_cluster_two_shards_internal_replication  │         2 │            1 │           1 │ 127.0.0.2                       │ 127.0.0.2    │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
    │ test_cluster_two_shards_localhost             │         1 │            1 │           1 │ localhost                       │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ test_cluster_two_shards_localhost             │         2 │            1 │           1 │ localhost                       │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ test_shard_localhost                          │         1 │            1 │           1 │ localhost                       │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ test_shard_localhost_secure                   │         1 │            1 │           1 │ localhost                       │ 127.0.0.1    │ 9440 │        0 │ default │                  │            0 │               0 │                       0 │
    │ test_unavailable_shard                        │         1 │            1 │           1 │ localhost                       │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ test_unavailable_shard                        │         2 │            1 │           1 │ localhost                       │ 127.0.0.1    │    1 │        0 │ default │                  │            0 │               0 │                       0 │
    └──────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴─────────────────────────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘
    
  3. Exit out of your cluster:

    chi-gcp-example-gcp-example-0-0-0.chi-gcp-example-gcp-example-0-0.test.svc.cluster.local :) exit
    Bye.
    

Further Steps

This simple example demonstrates how to build and manage a ClickHouse cluster run by the Altinity Kubernetes Operator. Further steps would be to open the cluster to external network connections, set up replication schemes, and so on.
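One quick way to reach the cluster from your workstation without opening external ports is to forward the ClickHouse service ports with kubectl. This is a sketch for testing only; it assumes the service name shown in the chi output above and that clickhouse-client is installed locally:

    kubectl -n test port-forward service/clickhouse-gcp-example 9000:9000 8123:8123

    clickhouse-client --host 127.0.0.1 --port 9000 --user demo --password topsecret

Run the port-forward command in one terminal and connect with clickhouse-client from another, using the demo user created by the manifest.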

For more information, see the Altinity Kubernetes Operator guides and the Altinity Kubernetes Operator repository.