First Clusters
If you followed the Quick Installation guide, then you have the Altinity Kubernetes Operator installed. Let’s give it something to work with.
Create Your Namespace
For our examples, we’ll be setting up our own Kubernetes namespace test. All of the examples will be installed into that namespace so we can track how the cluster is modified with new updates.
Create the namespace with the following kubectl command:
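```bash
kubectl create namespace test
```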
Just to make sure we’re in a clean environment, let’s check for any resources in our namespace:
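```bash
kubectl get all -n test
```

On a fresh namespace, kubectl should report that no resources were found.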
The First Cluster
We’ll start with the simplest cluster: one shard, one replica. This template and others are available on the Altinity Kubernetes Operator GitHub example site; it contains the following:
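The published example may differ slightly in its names, but a minimal manifest along these lines defines a single shard and single replica (the installation and cluster name demo-01 here is just an example):

```yaml
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo-01"
spec:
  configuration:
    clusters:
      - name: "demo-01"
        layout:
          shardsCount: 1
          replicasCount: 1
```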
Save this as sample01.yaml and launch it with the following:
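```bash
kubectl apply -n test -f sample01.yaml
```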
Verify that the new cluster is running. When the status is Running, then it’s complete.
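```bash
kubectl get pods -n test
```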
To retrieve the IP information, use the get service option:
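```bash
kubectl get service -n test
```

Assuming the example manifest above, the output should include a LoadBalancer service named something like clickhouse-demo-01, along with its external IP.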
So we can see our pod is running, and that we have a load balancer for the cluster.
Connect To Your Cluster With Exec
Let’s talk to our cluster and run some simple ClickHouse queries.
We can hop in directly through Kubernetes and run the clickhouse-client that’s part of the image with the following command:
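Substitute the pod name reported by kubectl get pods; with the example manifest above it will look something like this:

```bash
kubectl -n test exec -it chi-demo-01-demo-01-0-0-0 -- clickhouse-client
```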
From within ClickHouse, we can check out the current clusters:
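```sql
SELECT * FROM system.clusters
```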
Exit out of your cluster:
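```
exit
```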
Connect to Your Cluster with Remote Client
You can also use a remote client such as clickhouse-client to connect to your cluster through the LoadBalancer.
- The default username and password are set by the clickhouse-operator-install.yaml file. These values can be altered by changing the chUsername and chPassword values in the ClickHouse Credentials section:
  - Default Username: clickhouse_operator
  - Default Password: clickhouse_operator_password
In either case, the command to connect to your new cluster will resemble the following, replacing {LoadBalancer hostname} with the name or IP address of your LoadBalancer, then providing the proper password. In our examples so far, that’s been localhost.
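```bash
clickhouse-client --host {LoadBalancer hostname} --user clickhouse_operator --password clickhouse_operator_password
```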
From there, just make your ClickHouse SQL queries as you please - but remember that this particular cluster has no persistent storage. If the cluster is modified in any way, any databases or tables created will be wiped clean.
Update Your First Cluster To 2 Shards
Well that’s great - we have a cluster running. Granted, it’s really small and doesn’t do much, but what if we want to upgrade it?
Sure - we can do that any time we want.
Take your sample01.yaml and save it as sample02.yaml.
Let’s update it to give us two shards running with one replica:
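Assuming the same demo-01 names used in the earlier example, the only change is shardsCount:

```yaml
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo-01"
spec:
  configuration:
    clusters:
      - name: "demo-01"
        layout:
          shardsCount: 2
          replicasCount: 1
```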
Save your YAML file, and apply it. We’ve defined the name in the metadata, so the operator knows exactly which cluster to update.
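```bash
kubectl apply -n test -f sample02.yaml
```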
Verify that the cluster is running - this may take a few minutes depending on your system:
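```bash
kubectl get pods -n test
```

With two shards and one replica each, there should now be two pods.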
Once again, we can reach right into our cluster with clickhouse-client and look at the clusters:
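Again assuming the demo-01 names from the example manifests, and substituting your own pod name:

```bash
kubectl -n test exec -it chi-demo-01-demo-01-0-0-0 -- clickhouse-client -q "SELECT cluster, shard_num, replica_num, host_name FROM system.clusters WHERE cluster = 'demo-01'"
```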
So far, so good. We can create some basic clusters. If we want to do more, we’ll have to move ahead with replication and ZooKeeper in the next section.