Appendix: Using Altinity.Cloud Anywhere with minikube
This guide covers setting up minikube so that you can use Altinity.Cloud Anywhere to provision ClickHouse clusters inside minikube. Any computer or cloud instance that can run minikube and support the resource requirements of the Kubernetes cluster we describe here should work.
Note that while minikube is okay to use for development purposes, it should not be used for production. Seriously. We can’t stress that enough. It’s great for development, but don’t use it for production.
Server requirements
In the deployment you’ll do here, you’ll build a minikube cluster with seven nodes. Using the Docker runtime on a MacBook Pro M2 Max, the system provisioned 6 vCPUs and 7.7 GB of RAM per node, along with roughly 60 GB of disk space per node. It’s unlikely all of your nodes will run at capacity, but there’s no guarantee your machine will have enough resources to do whatever you want to do in your minikube cluster. (Did we mention it’s not for production use?) And, of course, the default provisioning may be different on other operating systems, hardware architectures, or virtualization engines.
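If those defaults don’t fit your machine, minikube lets you size the nodes explicitly when you start the cluster in the next section. The numbers below are purely illustrative, not a recommendation:
# Illustration only: override the per-node sizing the driver would otherwise choose.
minikube start --nodes 7 --kubernetes-version=v1.22.8 --cpus 4 --memory 6g --disk-size 40g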
Before you get started, you’ll need to sign up for an Altinity.Cloud Anywhere trial account. At the end of that process, you’ll have an email with a link to the Altinity Cloud Manager (ACM). You’ll use that link to set up the connection between minikube and Altinity.
Finally, of course, you’ll need to install minikube itself. See the minikube start page for complete install instructions. Just install the software at this point; we’ll talk about how to start minikube in the next section.
Starting minikube
If you’ve used minikube on your machine before, we recommend that you delete its existing configuration:
minikube delete
Now start a minikube cluster with seven nodes:
minikube start --nodes 7 --kubernetes-version=v1.22.8
You’ll see results like this:
😄 minikube v1.30.1 on Darwin 13.5.2 (arm64)
✨ Automatically selected the docker driver. Other choices: qemu2, parallels, ssh
📌 Using Docker Desktop driver with root privileges
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=3500MB) ...
🐳 Preparing Kubernetes v1.22.8 on Docker 23.0.2 ...
❌ Unable to load cached images: loading cached images: stat /Users/dougtidwell/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.22.8: no such file or directory
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring CNI (Container Networking Interface) ...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🔎 Verifying Kubernetes components...
👍 Starting worker node minikube-m02 in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=3500MB) ...
🌐 Found network options:
▪ NO_PROXY=192.168.49.2
🐳 Preparing Kubernetes v1.22.8 on Docker 23.0.2 ...
▪ env NO_PROXY=192.168.49.2
🔎 Verifying Kubernetes components...
. . .
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
NOTE: Defining the environment variable MINIKUBE_IN_STYLE=0 disables the emojis that appear in front of every minikube message. You’re welcome.
At this point minikube is up and running. The kubectl get nodes command shows our seven nodes:
NAME           STATUS   ROLES                  AGE     VERSION
minikube       Ready    control-plane,master   4m22s   v1.22.8
minikube-m02   Ready    <none>                 4m2s    v1.22.8
minikube-m03   Ready    <none>                 3m48s   v1.22.8
minikube-m04   Ready    <none>                 3m33s   v1.22.8
minikube-m05   Ready    <none>                 3m17s   v1.22.8
minikube-m06   Ready    <none>                 3m2s    v1.22.8
minikube-m07   Ready    <none>                 2m46s   v1.22.8
When using Altinity.Cloud Anywhere with a traditional cloud vendor, there are node types, availability zones, and storage classes. We need to label our minikube nodes to simulate those things. First, run these commands to define the node types and availability zones:
kubectl --context=minikube label nodes minikube \
node.kubernetes.io/instance-type=minikube-node \
topology.kubernetes.io/zone=minikube-zone-a
kubectl --context=minikube label nodes minikube-m02 \
node.kubernetes.io/instance-type=minikube-node \
topology.kubernetes.io/zone=minikube-zone-a
kubectl --context=minikube label nodes minikube-m03 \
node.kubernetes.io/instance-type=minikube-node \
topology.kubernetes.io/zone=minikube-zone-a
kubectl --context=minikube label nodes minikube-m04 \
node.kubernetes.io/instance-type=minikube-node \
topology.kubernetes.io/zone=minikube-zone-b
kubectl --context=minikube label nodes minikube-m05 \
node.kubernetes.io/instance-type=minikube-node \
topology.kubernetes.io/zone=minikube-zone-b
kubectl --context=minikube label nodes minikube-m06 \
node.kubernetes.io/instance-type=minikube-node \
topology.kubernetes.io/zone=minikube-zone-c
kubectl --context=minikube label nodes minikube-m07 \
node.kubernetes.io/instance-type=minikube-node \
topology.kubernetes.io/zone=minikube-zone-c
Now all of our minikube nodes are defined to be of type minikube-node; we’ll see that node type again later. We’ve also defined availability zones named minikube-zone-a, minikube-zone-b, and minikube-zone-c.
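To confirm the labels took effect, you can ask kubectl to display them as extra columns. This is just a sanity check and isn’t required:
kubectl --context=minikube get nodes \
  -L node.kubernetes.io/instance-type \
  -L topology.kubernetes.io/zone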
On to our storage classes. We want to use the local-path storage class instead of minikube’s default standard storage class. This command defines the new storage class:
curl -sSL https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.22/deploy/local-path-storage.yaml | \
sed 's/\/opt\/local-path-provisioner/\/var\/opt\/local-path-provisioner/' | \
kubectl --context=minikube apply -f -
Now that we’ve defined the new storage class, we need to tell minikube that the local-path class is the default:
kubectl --context=minikube patch storageclass standard \
-p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl --context=minikube patch storageclass local-path \
-p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Running kubectl get storageclasses shows the new default class:
NAME                   PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path      Delete          WaitForFirstConsumer   false                  36s
standard               k8s.io/minikube-hostpath   Delete          Immediate              false                  15m
Connecting Altinity.Cloud Anywhere to minikube
Now that we have the minikube cluster running and configured, it’s time to connect it to Altinity.Cloud Anywhere. That’s the final step for enabling Altinity to provision ClickHouse clusters in minikube.
Step 1. Setting up the tunnel
First we need to set up the TLS tunnel between minikube and Altinity. Click the emailed link you got when you signed up for an Altinity.Cloud Anywhere account. You’ll see this screen:
Make sure the “Provisioned by User” box is selected at the top of the page, and make sure you’ve installed altinitycloud-connect from the link beneath it.
Copy and paste the text in the center box at the command line and run it. This doesn’t generate any output at the command line, but it does create a cloud-connect.pem file in the current directory.
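The exact text comes from your own ACM page, since it embeds a registration token tied to your account, but it generally takes a form like the sketch below. The login subcommand and --token flag here are assumptions based on typical altinitycloud-connect usage, so always use what the page gives you:
# Assumed form only -- copy the real command from the ACM page.
altinitycloud-connect login --token=<registration token>
# The command prints nothing on success; the key file should now exist.
ls -l cloud-connect.pem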
Now that you have the cloud-connect.pem file, run the following command to set up the TLS tunnel:
altinitycloud-connect kubernetes --url=https://anywhere.altinity.cloud --release=latest-master | kubectl --context=minikube apply -f -
Note: The command you run is different from the one in the text box at the bottom of Figure 1. Make sure that the --url parameter matches the URL in that text box, as it is dependent on the Altinity.Cloud Anywhere endpoint you’re using.
The altinitycloud-connect kubernetes command generates YAML that has configuration information along with the keys from the .pem file. That YAML data is passed to kubectl.
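If you’d rather inspect that YAML before it reaches the cluster, you can split the piped command above into two steps; the file name here is arbitrary:
# Write the generated manifest to a file for review instead of piping it straight to kubectl.
altinitycloud-connect kubernetes --url=https://anywhere.altinity.cloud --release=latest-master > cloud-connect-resources.yaml
# After reviewing the file, apply it to the minikube cluster.
kubectl --context=minikube apply -f cloud-connect-resources.yaml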
You’ll see results similar to this:
namespace/altinity-cloud-system created
namespace/altinity-cloud-managed-clickhouse created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-view created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:storage-class-view created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
serviceaccount/cloud-connect created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-view created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:storage-class-view created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view created
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
secret/cloud-connect created
deployment.apps/cloud-connect created
Another note: Altinity creates all ClickHouse-related assets in the altinity-cloud-system and altinity-cloud-managed-clickhouse namespaces. You should not create anything in those namespaces yourself.
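You’re free to look at what Altinity puts in those namespaces, as long as you treat them as read-only. For example:
# Read-only peek at the Altinity-managed namespaces -- don't create or change anything here.
kubectl --context=minikube get all -n altinity-cloud-system
kubectl --context=minikube get all -n altinity-cloud-managed-clickhouse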
Click Proceed to go to the next step.
Step 2. Configuring your minikube resources
Next we’ll describe aspects of the minikube environment to Altinity. Use the values highlighted in red in Figure 2.
The specific values to use are:
- Cloud Provider: Not Specified (minikube is a special case)
- Region: minikube-zone (we defined that with a label earlier)
- Number of AZs: 3
- Storage Classes: local-path (defined as the default storage class earlier)
- Node Pools: A single node pool named minikube-node with a capacity of 2. The boxes for ClickHouse and Zookeeper must be checked.
Click Proceed to go to the Confirmation screen.
Step 3. Confirming your choices
A JSON description of all of your choices appears in the text box at the top of Figure 3. You can edit the JSON as needed; currently you need to change the names of the availability zones. Using the values specified in the previous step, the generated availability zones will be minikube-zonea, minikube-zoneb, and minikube-zonec. They should be minikube-zone-a, minikube-zone-b, and minikube-zone-c.
Once everything is correct, click Finish. This begins the process of creating a ClickHouse cluster inside minikube. You’ll see a status bar similar to Figure 4:
It’s quite likely the status bar will reach the end before everything is configured. Just keep clicking Finish until things are, well, finished:
When things are finished, you’ll see this screen:
With everything up and running, kubectl get pods -n altinity-cloud-managed-clickhouse shows the pods Altinity.Cloud Anywhere created:
NAME                                   READY   STATUS    RESTARTS        AGE
chi-minikube-ch1-minikube-ch1-0-0-0    2/2     Running   3 (3m12s ago)   4m54s
chi-minikube-ch1-minikube-ch1-0-1-0    2/2     Running   3 (3m23s ago)   4m42s
clickhouse-operator-85c8855c56-qn98x   2/2     Running   0               4m31s
zookeeper-1638-0                       1/1     Running   0               4m43s
zookeeper-1638-1                       1/1     Running   0               2m54s
zookeeper-1638-2                       1/1     Running   0               4m56s
There are two pods for ClickHouse itself, a pod for the Altinity ClickHouse Operator, and three pods for Zookeeper. These pods are managed for you by Altinity.
Working with Altinity.Cloud Anywhere
Now that your environment is configured, you use the Altinity Cloud Manager (ACM) to perform common user and administrative tasks. The steps and tools to manage your ClickHouse clusters are the same for Altinity.Cloud Anywhere and Altinity.Cloud.
Here are some common tasks from the ACM documentation:
- Launching a new ClickHouse cluster
- Creating tables and adding data
- Running an SQL query
- Rescaling a ClickHouse cluster
- Starting or stopping a ClickHouse cluster
The ACM documentation includes:
- A Quick Start guide,
- A General User Guide, and
- An Administrator Guide.
At the command line you can also connect to a running pod and work with ClickHouse directly.
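For example, using one of the pod names from the listing above, something like the following opens clickhouse-client inside a ClickHouse pod. The container name clickhouse is an assumption and may differ in your deployment:
# The pod name comes from the earlier listing; the container name is an assumption.
kubectl --context=minikube exec -it -n altinity-cloud-managed-clickhouse \
  chi-minikube-ch1-minikube-ch1-0-0-0 -c clickhouse -- clickhouse-client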