First Clusters

Create your first ClickHouse Cluster

If you followed the Quick Installation guide, then you have the Altinity Kubernetes Operator installed. Let’s give it something to work with.

Create Your Namespace

For our examples, we’ll set up our own Kubernetes namespace called test. All of the examples will be installed into that namespace so we can track how the cluster changes with each update.

Create the namespace with the following kubectl command:

kubectl create namespace test
namespace/test created
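
If you’d rather not pass -n test to every command, you can optionally make test the default namespace for your current kubectl context. The examples below still spell out -n test, so they work either way.

kubectl config set-context --current --namespace=test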

Just to make sure we’re in a clean environment, let’s check for any resources in our namespace:

kubectl get all -n test
No resources found in test namespace.

The First Cluster

We’ll start with the simplest cluster: one shard, one replica. This template, along with others, is available on the Altinity Kubernetes Operator GitHub examples site, and contains the following:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo-01"
spec:
  configuration:
    clusters:
      - name: "demo-01"
        layout:
          shardsCount: 1
          replicasCount: 1

Save this as sample01.yaml and launch it with the following:

kubectl apply -n test -f sample01.yaml
clickhouseinstallation.clickhouse.altinity.com/demo-01 created

Verify that the new cluster is running. When the status shows Completed, the installation is done.

kubectl -n test get chi -o wide
NAME      VERSION   CLUSTERS   SHARDS   HOSTS   TASKID                                 STATUS      UPDATED   ADDED   DELETED   DELETE   ENDPOINT
demo-01   0.18.1    1          1        1       6d1d2c3d-90e5-4110-81ab-8863b0d1ac47   Completed             1                          clickhouse-demo-01.test.svc.cluster.local
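
You can also check the pod itself. With one shard and one replica the operator creates a single pod, which in this guide shows up as chi-demo-01-demo-01-0-0-0:

kubectl -n test get pods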

To retrieve the IP information, use the get service option:

kubectl get service -n test
NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
chi-demo-01-demo-01-0-0   ClusterIP      None           <none>        8123/TCP,9000/TCP,9009/TCP      2s
clickhouse-demo-01        LoadBalancer   10.111.27.86   <pending>     8123:31126/TCP,9000:32460/TCP   19s

So we can see that our pod is running, and that we have a load balancer for the cluster.
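
If your environment never assigns an external IP (the EXTERNAL-IP column stays <pending>, as it often does on minikube and similar local setups), one option is to forward the service ports to your workstation; that is also why later examples can simply connect to localhost:

# forward ClickHouse's HTTP (8123) and native protocol (9000) ports to localhost
kubectl -n test port-forward service/clickhouse-demo-01 8123:8123 9000:9000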

Connect To Your Cluster With Exec

Let’s talk to our cluster and run some simple ClickHouse queries.

We can hop in directly through Kubernetes and run the clickhouse-client that’s part of the image with the following command:

kubectl -n test exec -it chi-demo-01-demo-01-0-0-0 -- clickhouse-client
ClickHouse client version 20.12.4.5 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.12.4 revision 54442.

chi-demo-01-demo-01-0-0-0.chi-demo-01-demo-01-0-0.test.svc.cluster.local :)

From within ClickHouse, we can check out the current clusters:

SELECT * FROM system.clusters
┌─cluster─────────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───────────────┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
│ all-replicated                                   │         1 │            1 │           1 │ chi-demo-01-demo-01-0-0 │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ all-sharded                                      │         1 │            1 │           1 │ chi-demo-01-demo-01-0-0 │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ demo-01                                          │         1 │            1 │           1 │ chi-demo-01-demo-01-0-0 │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_one_shard_three_replicas_localhost  │         1 │            1 │           1 │ 127.0.0.1               │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_one_shard_three_replicas_localhost  │         1 │            1 │           2 │ 127.0.0.2               │ 127.0.0.2    │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_one_shard_three_replicas_localhost  │         1 │            1 │           3 │ 127.0.0.3               │ 127.0.0.3    │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards                          │         1 │            1 │           1 │ 127.0.0.1               │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards                          │         2 │            1 │           1 │ 127.0.0.2               │ 127.0.0.2    │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_internal_replication     │         1 │            1 │           1 │ 127.0.0.1               │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_internal_replication     │         2 │            1 │           1 │ 127.0.0.2               │ 127.0.0.2    │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_localhost                │         1 │            1 │           1 │ localhost               │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_localhost                │         2 │            1 │           1 │ localhost               │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_shard_localhost                             │         1 │            1 │           1 │ localhost               │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_shard_localhost_secure                      │         1 │            1 │           1 │ localhost               │ 127.0.0.1    │ 9440 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_unavailable_shard                           │         1 │            1 │           1 │ localhost               │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_unavailable_shard                           │         2 │            1 │           1 │ localhost               │ 127.0.0.1    │    1 │        0 │ default │                  │            0 │               0 │                       0 │
└───────────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴─────────────────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘
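
If you only need a quick one-off check rather than an interactive session, clickhouse-client also accepts a --query flag, so something like this works non-interactively through exec:

# returns the server version and the hostname answering the query
kubectl -n test exec chi-demo-01-demo-01-0-0-0 -- clickhouse-client --query="SELECT version(), hostName()"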

Exit out of your cluster:

chi-demo-01-demo-01-0-0-0.chi-demo-01-demo-01-0-0.test.svc.cluster.local :) exit
Bye.

Connect to Your Cluster with Remote Client

You can also use a remote client such as clickhouse-client to connect to your cluster through the LoadBalancer.

  • The default username and password are set by the clickhouse-operator-install.yaml file. These values can be altered by changing the chUsername and chPassword values in the ClickHouse Credentials section:

    • Default Username: clickhouse_operator
    • Default Password: clickhouse_operator_password
# ClickHouse credentials (username, password and port) to be used
# by operator to connect to ClickHouse instances for:
# 1. Metrics requests
# 2. Schema maintenance
# 3. DROP DNS CACHE
# User with such credentials can be specified in additional ClickHouse
# .xml config files,
# located in `chUsersConfigsPath` folder
chUsername: clickhouse_operator
chPassword: clickhouse_operator_password
chPort: 8123

In either case, the command to connect to your new cluster will resemble the following, replacing {LoadBalancer hostname} with the name or IP address of your LoadBalancer and providing the proper password (here we use the default operator credentials shown above). In our examples so far, that’s been localhost.
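
clickhouse-client --host {LoadBalancer hostname} --user=clickhouse_operator --password=clickhouse_operator_password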

From there, just make your ClickHouse SQL queries as you please - but remember that this particular cluster has no persistent storage. If the cluster is modified in any way, any databases or tables created will be wiped clean.
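
If you want to see that for yourself, you could create a throwaway database and table now and check whether they still exist after the next change. The names demo_scratch and events below are just examples:

clickhouse-client --host {LoadBalancer hostname} --user=clickhouse_operator --password=clickhouse_operator_password \
  --query="CREATE DATABASE IF NOT EXISTS demo_scratch"
clickhouse-client --host {LoadBalancer hostname} --user=clickhouse_operator --password=clickhouse_operator_password \
  --query="CREATE TABLE demo_scratch.events (id UInt64, message String) ENGINE = MergeTree ORDER BY id"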

Update Your First Cluster To 2 Shards

Well that’s great - we have a cluster running. Granted, it’s really small and doesn’t do much, but what if we want to upgrade it?

Sure - we can do that any time we want.

Take your sample01.yaml and save it as sample02.yaml.

Let’s update it to give us two shards running with one replica:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo-01"
spec:
  configuration:
    clusters:
      - name: "demo-01"
        layout:
          shardsCount: 2
          replicasCount: 1

Save your YAML file, and apply it. Since we’ve defined the name in the metadata, the operator knows exactly which cluster to update.

kubectl apply -n test -f sample02.yaml
clickhouseinstallation.clickhouse.altinity.com/demo-01 configured

Verify that the cluster is running - this may take a few minutes depending on your system:

kubectl -n test get chi -o wide
NAME      VERSION   CLUSTERS   SHARDS   HOSTS   TASKID                                 STATUS      UPDATED   ADDED   DELETED   DELETE   ENDPOINT
demo-01   0.18.1    1          2        2       80102179-4aa5-4e8f-826c-1ca7a1e0f7b9   Completed             1                          clickhouse-demo-01.test.svc.cluster.local

kubectl get service -n test
NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
chi-demo-01-demo-01-0-0   ClusterIP      None           <none>        8123/TCP,9000/TCP,9009/TCP      26s
chi-demo-01-demo-01-1-0   ClusterIP      None           <none>        8123/TCP,9000/TCP,9009/TCP      3s
clickhouse-demo-01        LoadBalancer   10.111.27.86   <pending>     8123:31126/TCP,9000:32460/TCP   43s
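
Scaling up can take a little while as the operator creates the StatefulSet for the new shard. If you want to follow along, watch the pods until the second pod (chi-demo-01-demo-01-1-0-0) reaches Running, then press Ctrl+C:

kubectl -n test get pods --watch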

Once again, we can reach right into our cluster with clickhouse-client and look at the clusters.

clickhouse-client --host localhost --user=clickhouse_operator --password=clickhouse_operator_password
ClickHouse client version 20.12.4.5 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.12.4 revision 54442.

chi-demo-01-demo-01-1-0-0.chi-demo-01-demo-01-1-0.test.svc.cluster.local :)
SELECT * FROM system.clusters
┌─cluster─────────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───────────────┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
│ all-replicated                                   │         1 │            1 │           1 │ chi-demo-01-demo-01-0-0 │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ all-sharded                                      │         1 │            1 │           1 │ chi-demo-01-demo-01-0-0 │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ demo-01                                          │         1 │            1 │           1 │ chi-demo-01-demo-01-0-0 │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_one_shard_three_replicas_localhost  │         1 │            1 │           1 │ 127.0.0.1               │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_one_shard_three_replicas_localhost  │         1 │            1 │           2 │ 127.0.0.2               │ 127.0.0.2    │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_one_shard_three_replicas_localhost  │         1 │            1 │           3 │ 127.0.0.3               │ 127.0.0.3    │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards                          │         1 │            1 │           1 │ 127.0.0.1               │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards                          │         2 │            1 │           1 │ 127.0.0.2               │ 127.0.0.2    │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_internal_replication     │         1 │            1 │           1 │ 127.0.0.1               │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_internal_replication     │         2 │            1 │           1 │ 127.0.0.2               │ 127.0.0.2    │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_localhost                │         1 │            1 │           1 │ localhost               │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_localhost                │         2 │            1 │           1 │ localhost               │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_shard_localhost                             │         1 │            1 │           1 │ localhost               │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_shard_localhost_secure                      │         1 │            1 │           1 │ localhost               │ 127.0.0.1    │ 9440 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_unavailable_shard                           │         1 │            1 │           1 │ localhost               │ 127.0.0.1    │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_unavailable_shard                           │         2 │            1 │           1 │ localhost               │ 127.0.0.1    │    1 │        0 │ default │                  │            0 │               0 │                       0 │
└───────────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴─────────────────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘
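
The built-in test_* clusters make that listing noisy. Since demo-01 is the cluster the operator manages for us, you can filter the output down to it, for example:

kubectl -n test exec chi-demo-01-demo-01-0-0-0 -- clickhouse-client --query="SELECT cluster, shard_num, replica_num, host_name FROM system.clusters WHERE cluster = 'demo-01'"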

So far, so good. We can create some basic clusters. If we want to do more, we’ll have to move ahead with replication and ZooKeeper in the next section.