Altinity.Cloud Anywhere uses your Kubernetes infrastructure to host your ClickHouse clusters. Your Kubernetes cluster needs to be set up a certain way; in this section we’ll go over those requirements.
Bring your own Kubernetes (BYOK)
- 1: Kubernetes requirements
- 2: Connecting to Altinity.Cloud Anywhere
- 3: Setting up logging
- 4: Setting up backups
- 5: Disconnecting from Altinity.Cloud Anywhere
- 6: Appendix: Using Altinity.Cloud Anywhere with minikube
1 - Kubernetes requirements
Altinity.Cloud Anywhere operates inside your Kubernetes environment. The general requirements for your Kubernetes environment are:
- Kubernetes version 1.23 or higher in EKS (AWS) or GKE (GCP)
- Every Node should have the following labels:
  - `node.kubernetes.io/instance-type`
  - `kubernetes.io/arch`
  - `topology.kubernetes.io/zone`
- A StorageClass with dynamic provisioning is required
- LoadBalancer services must be supported
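As a quick sanity check, you can list the required labels for every node with `kubectl`. This is a sketch using the `custom-columns` output format (the column names are arbitrary; label-key dots are escaped with `\.`):

```shell
kubectl get nodes -o custom-columns='NAME:.metadata.name,TYPE:.metadata.labels.node\.kubernetes\.io/instance-type,ARCH:.metadata.labels.kubernetes\.io/arch,ZONE:.metadata.labels.topology\.kubernetes\.io/zone'
```

Any `<none>` in the output indicates a node that is missing one of the required labels.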
To get the most from Altinity.Cloud Anywhere features:
- Each StorageClass should preferably allow volume expansion
- Multiple zones are preferable for high availability
- Autoscaling is preferable for easier vertical scaling
For platform-specific requirements, see the following sections:
AWS requirements
We recommend setting up karpenter or cluster-autoscaler to launch instances in at least 3 Availability Zones.
If you plan on sharing the Kubernetes cluster with other workloads, it’s recommended that you label the Kubernetes Nodes intended for Altinity.Cloud Anywhere with `altinity.cloud/use=anywhere` and taint them with `dedicated=anywhere:NoSchedule`.
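As a sketch, the label and taint can be applied per node with `kubectl`, following the same `REPLACE_WITH_*` placeholder convention used later in this guide:

```shell
# Mark a node for Altinity.Cloud Anywhere use (replace the node name)
kubectl label nodes REPLACE_WITH_NODE_NAME altinity.cloud/use=anywhere
# Keep other workloads off the node
kubectl taint nodes REPLACE_WITH_NODE_NAME dedicated=anywhere:NoSchedule
```

With Karpenter or cluster-autoscaler you would normally put the label and taint in the node pool/nodegroup definition instead, so that new nodes come up with them automatically.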
Instance types
For Zookeeper and infrastructure nodes:
- `t3.large` or `t4g.large`*

*`t4g` instances are AWS Graviton2-based (ARM).
For ClickHouse nodes:
ClickHouse works best in AWS when using nodes from these instance families:
- `m5`
- `m6i`
- `m6g`*

*`m6g` instances are AWS Graviton2-based (ARM).

Instance sizes from `large` to `8xlarge` are typical.
Storage classes
- `gp2`
- `gp3-encrypted`*

*We recommend using `gp3` storage classes, which provide more flexibility and better performance than `gp2`. The `gp3` storage classes require the Amazon EBS CSI driver; that driver is not installed automatically. See the AWS CSI driver documentation for details on how to install the driver.
A `gp3-encrypted` storage class can be created with the following manifest:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gp3-encrypted
annotations:
storageclass.kubernetes.io/is-default-class: 'true'
provisioner: ebs.csi.aws.com
parameters:
encrypted: 'true'
fsType: ext4
type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
The default throughput for `gp3` is 125 MB/s for any volume size. It can be increased in the AWS console or through storage class parameters. Here is an example:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gp3-encrypted-500
provisioner: ebs.csi.aws.com
parameters:
encrypted: 'true'
fsType: ext4
throughput: '500'
type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
Alternatively, we recommend installing the Altinity EBS parameters controller, which allows you to manage EBS volume throughput dynamically through annotations. It is also integrated into the Altinity.Cloud Manager (ACM) UI.
GCP requirements
Machine types
For Zookeeper and infrastructure nodes
- `e2-standard-2`
For ClickHouse nodes
It’s recommended to taint node pools with `dedicated=clickhouse:NoSchedule` (in addition to `altinity.cloud/use=anywhere:NoSchedule`).
- `n2d-standard-2`
- `n2d-standard-4`
- `n2d-standard-8`
- `n2d-standard-16`
- `n2d-standard-32`
If GCP is out of `n2d-standard-*` instances in the region of your choice, we recommend substituting them with `n2-standard-*`.
Storage classes
- `standard-rwo`
- `premium-rwo`
GKE comes pre-configured with both.
2 - Connecting to Altinity.Cloud Anywhere
This tutorial explains how to use Altinity.Cloud Anywhere to deploy ClickHouse clusters using your choice of a third-party Kubernetes cloud provider, or using your own hardware or private company cloud. The Altinity.Cloud Manager (ACM) is used to manage your ClickHouse clusters.
If you’re just getting started, you can get a trial account in three steps:
- Use your business email address to sign up for a free trial on the Altinity.Cloud Anywhere trial page. NOTE: This must be a business email address; addresses like *@gmail.com or *@yahoo.com are not accepted.
- You’ll get an email from Altinity. Follow the instructions to validate your email address.
- The final email you’ll get contains a login link to create a password to log in to the Altinity Cloud Manager.
Connecting Kubernetes
The first time you log in, you will be directed to the environment setup tab shown in Figure 1. If you have an existing account or restart the installation, just select the Environments tab on the left side of your screen to reach the setup page.
Be sure to select “Provisioned by User” as shown in Figure 1.
Connection setup
Highlighted in red in Figure 1 are the steps to complete before you select the PROCEED button.
- Install the latest version of Altinity.Cloud connect for your system.
- Copy and paste the connection string at the command line:
altinitycloud-connect login --token=<registration token>
This initiates a TLS handshake that creates a certificate file named `cloud-connect.pem` on your machine. There is no output at the command line.
- Run this command to deploy the connector to your Kubernetes cluster.
altinitycloud-connect kubernetes | kubectl apply -f -
The `altinitycloud-connect kubernetes` command generates YAML that includes the `.pem` file generated in the previous step. This step may take several minutes to complete.
The response will be something like this:
namespace/altinity-cloud-system created
namespace/altinity-cloud-managed-clickhouse created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
serviceaccount/cloud-connect created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
secret/cloud-connect created
deployment.apps/cloud-connect created
Notice the `altinity-cloud-system` and `altinity-cloud-managed-clickhouse` namespaces above. All the resources Altinity.Cloud Anywhere creates are in those namespaces; you should not create anything in those namespaces yourself.
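Before moving on, you can optionally confirm that the connector is healthy by checking the `cloud-connect` deployment created by the apply step (names taken from the output above):

```shell
# The deployment should show READY 1/1
kubectl -n altinity-cloud-system get deployment cloud-connect
# Recent connector logs, useful if the connection does not come up
kubectl -n altinity-cloud-system logs deployment/cloud-connect --tail=20
```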
Configuring resources
Once these commands have completed, select the PROCEED button. After the connection is made, you will advance to the Resources Configuration screen shown in Figure 2.
At the Resources Configuration screen, set the resources used for ClickHouse clusters as follows.
- Make sure the correct cloud provider is selected. Altinity.Cloud Anywhere should detect this correctly based on the connection you established earlier.
- Add Storage Class names as needed. These are the block storage classes that allocate block storage for nodes in your environment; use the ADD STORAGE CLASS button to add more.
- In the Node Pools section, inspect the node pool list to ensure the availability zones and pools you wish to use are listed.
- In the Used For section, ClickHouse, Zookeeper, and System must each be selected in at least one node pool. Selecting multiple node pools for ClickHouse nodes is highly recommended.
- Listed are the Instance Types that are currently in use. Click ADD NODE POOL to add anything that’s missing.
The following Resources Configuration example shows the settings for a Google Cloud Platform environment.
Be aware that you can add more node pools later if needed.
Confirming your settings
The Confirmation screen in Figure 3 displays a JSON representation of the settings you just made. Review these settings; you can edit the JSON directly if needed. When the JSON is correct, select FINISH.
It will take a few minutes for all the resources to be provisioned.
Connection completed
Once the connection is fully set up, the ACM Environments dashboard will display your new environment as shown in Figure 4.
If you have any problems, see the Troubleshooting section below.
Administering Altinity.Cloud Anywhere
Once your environment is configured, you use the Altinity Cloud Manager (ACM) to perform common user and administrative tasks. The steps and tools to manage your ClickHouse clusters are the same for Altinity.Cloud Anywhere and Altinity.Cloud.
Here are some common tasks from the ACM documentation:
- Launching a new ClickHouse cluster
- Running an SQL query
- Rescaling a ClickHouse cluster
- Starting or stopping a ClickHouse cluster
The ACM documentation includes:
- A Quick Start guide,
- A User Guide, and
- An Administrator Guide.
Testing ClickHouse inside a ClickHouse pod
This section shows you how to use your machine to log in to the ClickHouse cluster you created in the Altinity Cloud Manager.
Prerequisite
- clickhouse-client (Installation instructions)
Connection String
The connection string comes from your cluster’s Connection Details link (example cluster: test-gcp-anyw). Copy the client connection string highlighted in red in Figure 5 and paste it into your terminal, supplying the password (example: adminpassword).
- Find your pod name:
kubectl -n altinity-cloud-managed-clickhouse get all
# Response
NAME READY STATUS RESTARTS AGE
pod/chi-test-anywhere-6-johndoe-anywhere-6-0-0-0 2/2 Running 8 (3h25m ago) 2d17h
- On your command line terminal, log in to that pod using the name you got from step 1:
kubectl -n altinity-cloud-managed-clickhouse exec -it pod/chi-test-anywhere-6-johndoe-anywhere-6-0-0-0 -- bash
# Response
Defaulted container "clickhouse-pod" out of: clickhouse-pod, clickhouse-backup
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$
- Log in to your ClickHouse database using the clickhouse-client command to get the :) happy face prompt:
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$ clickhouse-client
# Response
<jemalloc>: MADV_DONTNEED does not work (memset will be used instead)
<jemalloc>: (This is the expected behaviour if you are running under QEMU)
ClickHouse client version 22.8.13.21.altinitystable (altinity build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 22.8.13 revision 54460.
test-anywhere-6 :)
- Run a SHOW TABLES SQL command:
test-anywhere-6 :) show tables
# Response
SHOW TABLES
Query id: da01133d-0130-4b98-9090-4ebc6fa4b568
┌─name─────────┐
│ events │
│ events_local │
└──────────────┘
2 rows in set. Elapsed: 0.013 sec.
- Run an SQL query to show the data in the events table:
test-anywhere-6 :) SELECT * FROM events;
# Response
SELECT *
FROM events
Query id: 00fef876-e9b0-44b1-b768-9e662eda0483
┌─event_date─┬─event_type─┬─article_id─┬─title───┐
│ 2023-03-24 │ 1 │ 13 │ Example │
└────────────┴────────────┴────────────┴─────────┘
1 row in set. Elapsed: 0.023 sec.
test-anywhere-6 :)
Troubleshooting
Q-1. Altinity.Cloud Anywhere endpoint not reachable
Problem
- By default, the `altinitycloud-connect` command connects to host anywhere.altinity.cloud on port 443. If this host is not reachable, the following error message appears:
altinitycloud-connect login --token=<token>
Error: Post "https://anywhere.altinity.cloud/sign": dial tcp: lookup anywhere.altinity.cloud on 127.0.0.53:53: no such host
Solution
- Make sure the name is available in DNS and that the resolved IP address is reachable on port 443 (UDP and TCP), then try again. The `altinitycloud-connect` command has a `--url` option if you need to specify a different URL.
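A quick way to check both conditions from your machine, assuming `getent` and `nc` (netcat) are available:

```shell
# DNS: should print one or more IP addresses for the endpoint
getent hosts anywhere.altinity.cloud
# Reachability: should report that the connection to port 443 succeeded
nc -vz anywhere.altinity.cloud 443
```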
Q-2. Insufficient Kubernetes privileges
Problem
- Your Kubernetes account has insufficient permissions.
Solution
- Look at the output from the `altinitycloud-connect kubernetes | kubectl apply -f -` command to see what actions failed, then adjust the permissions for your Kubernetes account accordingly. At a minimum, set the following permissions:
  - cluster-admin for initial provisioning only (it can be revoked afterward)
  - Full access to the `altinity-cloud-system` and `altinity-cloud-managed-clickhouse` namespaces
  - A few optional read-only cluster-level permissions (for observability only)
Q-3. Help! I messed up the resource configuration
Problem
- The resource configuration settings are not correct.
Solution
- From the Environment tab, in the Environment Name column, select the link to your environment.
- Select the menu function ACTIONS > Reconfigure Anywhere.
- Rerun the Environment > Connection Setup and enter the correct values.
Q-4. One of my pods won’t spin up
After you reboot your machine, the Anywhere cluster in your ACM does not start.
Problem
One of the pods won’t start. In the listing below, pod `edge-proxy-66d44f7465-lxjjn` in the `altinity-cloud-system` namespace has not started:
┌──────────────── Pods(altinity-cloud-system)[8] ──────────────────────────┐
│ NAME↑ PF READY RESTARTS STATUS │
1 │ cloud-connect-d6ff8499f-bkc5k ● 1/1 3 Running │
2 │ crtd-665fd5cb85-wqkkk ● 1/1 3 Running │
3 │ edge-proxy-66d44f7465-lxjjn ● 1/2 7 CrashLoopBackOff │
4 │ grafana-5b466574d-4scjc ● 1/1 1 Running │
5 │ kube-state-metrics-58d86c747c-7hj79 ● 1/1 6 Running │
6 │ node-exporter-762b5 ● 1/1 3 Running │
7 │ prometheus-0 ● 1/1 3 Running │
8 │ statuscheck-f7c9b4d98-2jlt6 ● 1/1 3 Running │
└──────────────────────────────────────────────────────────────────────────┘
Solution
Delete the pod using the `kubectl delete pod` command and it will regenerate:
kubectl -n altinity-cloud-system delete pod edge-proxy-66d44f7465-lxjjn
3 - Setting up logging
In order for Altinity.Cloud Anywhere to gather, store, and query logs, you need to configure access to an S3 or GCS bucket. Cloud-specific instructions are provided below.
EKS (AWS)
The recommended way is to use IRSA (IAM Roles for Service Accounts):
apiVersion: v1
kind: ServiceAccount
metadata:
name: log-storage
namespace: altinity-cloud-system
annotations:
eks.amazonaws.com/role-arn: "arn:aws:iam::<aws_account_id>:role/<role_arn>"
Alternatively, you can use a custom Instance Profile or explicit credentials (shown below).
# create bucket
aws s3api create-bucket --bucket REPLACE_WITH_BUCKET_NAME --region REPLACE_WITH_AWS_REGION
# create user with access to the bucket
aws iam create-user --user-name REPLACE_WITH_USER_NAME
aws iam put-user-policy \
--user-name REPLACE_WITH_USER_NAME \
--policy-name REPLACE_WITH_POLICY_NAME \
--policy-document \
'{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:ListBucket",
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::REPLACE_WITH_BUCKET_NAME",
"arn:aws:s3:::REPLACE_WITH_BUCKET_NAME/*"
],
"Effect": "Allow"
}
]
}'
# generate access key
aws iam create-access-key --user-name REPLACE_WITH_USER_NAME |
jq -r '"AWS_ACCESS_KEY_ID="+(.AccessKey.AccessKeyId)+"\nAWS_SECRET_ACCESS_KEY="+(.AccessKey.SecretAccessKey)+"\n"' > credentials.env
# create altinity-cloud-system/log-storage-aws secret containing AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY
kubectl create secret -n altinity-cloud-system generic log-storage-aws \
--from-env-file=credentials.env
rm -i credentials.env
Use your private customer Slack channel to send the bucket name to Altinity in order to finish configuration.
GKE (GCP)
The recommended way is to use Workload Identity.
apiVersion: v1
kind: ServiceAccount
metadata:
name: log-storage
namespace: altinity-cloud-system
annotations:
iam.gke.io/gcp-service-account: "<gcp_sa_name>@<project_id>.iam.gserviceaccount.com"
Alternatively, you can use a GCP service account attached to the instance, or explicit credentials (shown below).
# create bucket
gsutil mb gs://REPLACE_WITH_BUCKET_NAME
# create GCP SA with access to the bucket
gcloud iam service-accounts create REPLACE_WITH_GCP_SA_NAME \
--project=REPLACE_WITH_PROJECT_ID \
--display-name "REPLACE_WITH_DISPLAY_NAME"
gsutil iam ch \
serviceAccount:REPLACE_WITH_GCP_SA_NAME@REPLACE_WITH_PROJECT_ID.iam.gserviceaccount.com:roles/storage.admin \
gs://REPLACE_WITH_BUCKET_NAME
# generate GCP SA key
gcloud iam service-accounts keys create credentials.json \
--iam-account=REPLACE_WITH_GCP_SA_NAME@REPLACE_WITH_PROJECT_ID.iam.gserviceaccount.com \
--project=REPLACE_WITH_PROJECT_ID
# create altinity-cloud-system/log-storage-gcp secret containing credentials.json
kubectl create secret -n altinity-cloud-system generic log-storage-gcp \
--from-file=credentials.json
rm -i credentials.json
Use your private customer Slack channel to send the bucket name to Altinity in order to finish configuration.
4 - Setting up backups
In order for Altinity.Cloud Anywhere to work with backups, you need to configure access to an S3 or GCS bucket. Cloud-specific instructions are provided below.
EKS (AWS)
Use a custom Instance Profile or explicit credentials (shown below).
# create bucket
aws s3api create-bucket --bucket REPLACE_WITH_BUCKET_NAME --region REPLACE_WITH_AWS_REGION
# create user with access to the bucket
aws iam create-user --user-name REPLACE_WITH_USER_NAME
aws iam put-user-policy \
--user-name REPLACE_WITH_USER_NAME \
--policy-name REPLACE_WITH_POLICY_NAME \
--policy-document \
'{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetObject",
"s3:DeleteObject",
"s3:PutObject",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts",
"s3:PutObjectTagging"
],
"Resource": [
"arn:aws:s3:::REPLACE_WITH_BUCKET_NAME",
"arn:aws:s3:::REPLACE_WITH_BUCKET_NAME/*"
],
"Effect": "Allow"
}
]
}'
# generate access key
aws iam create-access-key --user-name REPLACE_WITH_USER_NAME |
jq -r '"AWS_ACCESS_KEY_ID="+(.AccessKey.AccessKeyId)+"\nAWS_SECRET_ACCESS_KEY="+(.AccessKey.SecretAccessKey)+"\n"' > credentials.env
# create altinity-cloud-system/clickhouse-backup secret containing AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY
kubectl create secret -n altinity-cloud-system generic clickhouse-backup \
--from-env-file=credentials.env
rm -i credentials.env
Use your private customer Slack channel to send the bucket name to Altinity in order to finish configuration.
GKE (GCP)
Use a GCP service account for the instance or explicit credentials (shown below).
# create bucket
gsutil mb gs://REPLACE_WITH_BUCKET_NAME
# create GCP SA with access to the bucket
gcloud iam service-accounts create REPLACE_WITH_GCP_SA_NAME \
--project=REPLACE_WITH_PROJECT_ID \
--display-name "REPLACE_WITH_DISPLAY_NAME"
gsutil iam ch \
serviceAccount:REPLACE_WITH_GCP_SA_NAME@REPLACE_WITH_PROJECT_ID.iam.gserviceaccount.com:roles/storage.admin \
gs://REPLACE_WITH_BUCKET_NAME
# generate GCP SA key
gcloud iam service-accounts keys create credentials.json \
--iam-account=REPLACE_WITH_GCP_SA_NAME@REPLACE_WITH_PROJECT_ID.iam.gserviceaccount.com \
--project=REPLACE_WITH_PROJECT_ID
# create altinity-cloud-system/clickhouse-backup secret containing GOOGLE_APPLICATION_CREDENTIALS
kubectl create secret -n altinity-cloud-system generic clickhouse-backup \
--from-file=GOOGLE_APPLICATION_CREDENTIALS=credentials.json
rm -i credentials.json
Use your private customer Slack channel to send the bucket name to Altinity in order to finish configuration.
5 - Disconnecting from Altinity.Cloud Anywhere
Even if you disconnect from Altinity.Cloud Anywhere altogether, your ClickHouse cluster can continue running in your Kubernetes environment. In this section we’ll show you how to do that.
Disconnecting your environment from Altinity.Cloud Anywhere
You can disconnect Altinity Cloud Manager from your Kubernetes environment and the ClickHouse clusters running inside it. This does not delete your running ClickHouse clusters; it merely disconnects them from the Altinity Cloud Manager. Your ClickHouse clusters continue running as usual.
This command disconnects your ClickHouse cluster:
altinitycloud-connect kubernetes-disconnect | kubectl delete -f -
After this command completes, Altinity.Cloud Anywhere will no longer be able to see or connect to your Kubernetes environment.
Deleting managed ClickHouse environments in Kubernetes
If you want to delete the ClickHouse clusters in your environment, enter these two commands in the order shown below.
kubectl -n altinity-cloud-managed-clickhouse delete chi --all
altinitycloud-connect kubernetes | kubectl delete -f -
The first command deletes every ClickHouse installation (`chi`) that Altinity.Cloud Anywhere created. Those are in the `altinity-cloud-managed-clickhouse` namespace. With the ClickHouse clusters deleted, the second command deletes the two Altinity namespaces and any remaining resources they contain.
WARNING: If you delete the namespaces before deleting the ClickHouse installations (`chi`), the operation will hang due to missing finalizers on `chi` resources. Should this occur, use the `kubectl edit` command on each ClickHouse installation and remove the finalizer manually from the resource specification. Here is an example:
kubectl -n altinity-cloud-managed-clickhouse edit clickhouseinstallations.clickhouse.altinity.com/maddie-ch
You can now delete the finalizer from the resource:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
creationTimestamp: "2023-08-29T17:03:58Z"
finalizers:
- finalizer.clickhouseinstallation.altinity.com
generation: 3
name: maddie-ch
. . .
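Alternatively, if you prefer to avoid the interactive editor, a `kubectl patch` can clear the finalizers in one step. This sketch targets the same example resource; a merge patch replaces the whole finalizers list, so an empty list removes them all:

```shell
kubectl -n altinity-cloud-managed-clickhouse \
  patch clickhouseinstallations.clickhouse.altinity.com/maddie-ch \
  --type=merge --patch '{"metadata":{"finalizers":[]}}'
```

Once the finalizers are gone, the pending namespace deletion can complete.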
6 - Appendix: Using Altinity.Cloud Anywhere with minikube
This guide covers setting up minikube so that you can use Altinity.Cloud Anywhere to provision ClickHouse clusters inside minikube. Any computer or cloud instance that can run minikube and support the resource requirements of the Kubernetes cluster we describe here should work.
Note that while minikube is okay to use for development purposes, it should not be used for production. Seriously. We can’t stress that enough. It’s great for development, but don’t use it for production.
Server requirements
In the deployment you’ll do here, you’ll build a minikube cluster with seven nodes. Using the Docker runtime on a MacBook Pro M2 Max, the system provisioned 6 vCPUs and 7.7 GB of RAM per node, along with roughly 60 GB of disk space per node. It’s unlikely all of your nodes will run at capacity, but there’s no guarantee your machine will have enough resources to do whatever you want to do in your minikube cluster. (Did we mention it’s not for production use?) And, of course, the default provisioning may be different on other operating systems, hardware architectures, or virtualization engines.
Before you get started, you’ll need to sign up for an Altinity.Cloud Anywhere trial account. At the end of that process, you’ll have an email with a link to the Altinity Cloud Manager (ACM). You’ll use that link to set up the connection between minikube and Altinity.
Finally, of course, you’ll need to install minikube itself. See the minikube start page for complete install instructions. Just install the software at this point; we’ll talk about how to start minikube in the next section.
Starting minikube
If you’ve used minikube on your machine before, we recommend that you delete its existing configuration:
minikube delete
Now start a minikube cluster with seven nodes:
minikube start --nodes 7 --kubernetes-version=v1.22.8
You’ll see results like this:
😄 minikube v1.30.1 on Darwin 13.5.2 (arm64)
✨ Automatically selected the docker driver. Other choices: qemu2, parallels, ssh
📌 Using Docker Desktop driver with root privileges
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=3500MB) ...
🐳 Preparing Kubernetes v1.22.8 on Docker 23.0.2 ...
❌ Unable to load cached images: loading cached images: stat /Users/dougtidwell/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.22.8: no such file or directory
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring CNI (Container Networking Interface) ...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🔎 Verifying Kubernetes components...
👍 Starting worker node minikube-m02 in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=3500MB) ...
🌐 Found network options:
▪ NO_PROXY=192.168.49.2
🐳 Preparing Kubernetes v1.22.8 on Docker 23.0.2 ...
▪ env NO_PROXY=192.168.49.2
🔎 Verifying Kubernetes components...
. . .
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
NOTE: Defining the environment variable MINIKUBE_IN_STYLE=0 disables the emojis that appear in front of every minikube message. You’re welcome.
At this point minikube is up and running. The `kubectl get nodes` command shows our seven nodes:
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane,master 4m22s v1.22.8
minikube-m02 Ready <none> 4m2s v1.22.8
minikube-m03 Ready <none> 3m48s v1.22.8
minikube-m04 Ready <none> 3m33s v1.22.8
minikube-m05 Ready <none> 3m17s v1.22.8
minikube-m06 Ready <none> 3m2s v1.22.8
minikube-m07 Ready <none> 2m46s v1.22.8
When using Altinity.Cloud Anywhere with a traditional cloud vendor, there are node types, availability zones, and storage classes. We need to label our minikube nodes to simulate those things. First, run these commands to define the node types and availability zones:
kubectl --context=minikube label nodes minikube \
node.kubernetes.io/instance-type=minikube-node \
topology.kubernetes.io/zone=minikube-zone-a
kubectl --context=minikube label nodes minikube-m02 \
node.kubernetes.io/instance-type=minikube-node \
topology.kubernetes.io/zone=minikube-zone-a
kubectl --context=minikube label nodes minikube-m03 \
node.kubernetes.io/instance-type=minikube-node \
topology.kubernetes.io/zone=minikube-zone-a
kubectl --context=minikube label nodes minikube-m04 \
node.kubernetes.io/instance-type=minikube-node \
topology.kubernetes.io/zone=minikube-zone-b
kubectl --context=minikube label nodes minikube-m05 \
node.kubernetes.io/instance-type=minikube-node \
topology.kubernetes.io/zone=minikube-zone-b
kubectl --context=minikube label nodes minikube-m06 \
node.kubernetes.io/instance-type=minikube-node \
topology.kubernetes.io/zone=minikube-zone-c
kubectl --context=minikube label nodes minikube-m07 \
node.kubernetes.io/instance-type=minikube-node \
topology.kubernetes.io/zone=minikube-zone-c
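The seven commands above can also be generated with a short loop rather than typed individually. This sketch prints the commands (pipe its output to `sh` to run them); the zone layout matches the commands above, with nodes 1-3 in zone a, 4-5 in zone b, and 6-7 in zone c:

```shell
# Generate the seven kubectl label commands. The first node is named
# "minikube"; the rest are minikube-m02 through minikube-m07.
i=0
for zone in a a a b b c c; do
  i=$((i + 1))
  if [ "$i" -eq 1 ]; then node="minikube"; else node="minikube-m0$i"; fi
  echo "kubectl --context=minikube label nodes $node" \
       "node.kubernetes.io/instance-type=minikube-node" \
       "topology.kubernetes.io/zone=minikube-zone-$zone"
done
```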
Now all of our minikube nodes are defined to be of type `minikube-node`; we’ll see that node type again later. We’ve also defined availability zones named `minikube-zone-a`, `minikube-zone-b`, and `minikube-zone-c`.
On to our storage classes. We want to use the `local-path` storage class instead of minikube’s default `standard` storage class. This command defines the new storage class:
curl -sSL https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.22/deploy/local-path-storage.yaml | \
sed 's/\/opt\/local-path-provisioner/\/var\/opt\/local-path-provisioner/ ' | \
kubectl --context=minikube apply -f -
Now that we’ve defined the new storage class, we need to tell minikube that the `local-path` class is the default:
kubectl --context=minikube patch storageclass standard \
-p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl --context=minikube patch storageclass local-path \
-p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Running `kubectl get storageclasses` shows the new default class:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 36s
standard k8s.io/minikube-hostpath Delete Immediate false 15m
Connecting Altinity.Cloud Anywhere to minikube
Now that we have the minikube cluster running and configured, it’s time to connect it to Altinity.Cloud Anywhere. That’s the final step for enabling Altinity to provision ClickHouse clusters in minikube.
Step 1. Setting up the tunnel
First we need to set up the TLS tunnel between minikube and Altinity. Click the emailed link you got when you signed up for an Altinity.Cloud Anywhere account. You’ll see this screen:
Make sure the “Provisioned by User” box is selected at the top of the page, and make sure you’ve installed `altinitycloud-connect` from the link beneath it. Copy and paste the text in the center box at the command line and run it. This doesn’t generate any output at the command line, but it does create a `cloud-connect.pem` file in the current directory.
Now that you have the `cloud-connect.pem` file, run the following command to set up the TLS tunnel:
altinitycloud-connect kubernetes --url=https://anywhere.altinity.cloud --release=latest-master | kubectl --context=minikube apply -f -
Note: The command you run is different from the one in the text box at the bottom of Figure 1. Make sure that the `--url` parameter matches the URL in that text box, as it depends on the Altinity.Cloud Anywhere endpoint you’re using.
The `altinitycloud-connect kubernetes` command generates YAML that has configuration information along with the keys from the `.pem` file. That YAML data is passed to `kubectl`.
You’ll see results similar to this:
namespace/altinity-cloud-system created
namespace/altinity-cloud-managed-clickhouse created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-view created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:storage-class-view created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
serviceaccount/cloud-connect created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-view created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:storage-class-view created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view created
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
secret/cloud-connect created
deployment.apps/cloud-connect created
Another note: Altinity creates all ClickHouse-related assets in the `altinity-cloud-system` and `altinity-cloud-managed-clickhouse` namespaces. You should not create anything in those namespaces yourself.
Click Proceed to go to the next step.
Step 2. Configuring your minikube resources
Next we’ll define aspects of the minikube environment to Altinity. Use the values highlighted in red in Figure 2.
The specific values to use are:
- Cloud Provider: Not Specified (minikube is a special case)
- Region: `minikube-zone` (we defined that with a label earlier)
- Number of AZs: 3
- Storage Classes: `local-path` (defined as the default storageclass earlier)
- Node Pools: A single node pool named `minikube-node` with a capacity of 2. The boxes for ClickHouse and Zookeeper must be checked.
Click Proceed to go to the Confirmation screen.
Step 3. Confirming your choices
A JSON description of all of your choices appears in the text box at the top of Figure 3:
You can edit the JSON as needed; currently you need to change the names of the availability zones. Using the values specified in the previous step, the generated availability zones will be `minikube-zonea`, `minikube-zoneb`, and `minikube-zonec`. They should be `minikube-zone-a`, `minikube-zone-b`, and `minikube-zone-c`.
Once everything is correct, click Finish. This begins the process of creating a ClickHouse cluster inside minikube. You’ll see a status bar similar to Figure 4:
It’s quite likely the status bar will reach the end before everything is configured. Just keep clicking Finish until things are, well, finished:
When things are finished, you’ll see this screen:
With everything up and running, `kubectl get pods -n altinity-cloud-managed-clickhouse` shows the pods Altinity.Cloud Anywhere created:
NAME READY STATUS RESTARTS AGE
chi-minikube-ch1-minikube-ch1-0-0-0 2/2 Running 3 (3m12s ago) 4m54s
chi-minikube-ch1-minikube-ch1-0-1-0 2/2 Running 3 (3m23s ago) 4m42s
clickhouse-operator-85c8855c56-qn98x 2/2 Running 0 4m31s
zookeeper-1638-0 1/1 Running 0 4m43s
zookeeper-1638-1 1/1 Running 0 2m54s
zookeeper-1638-2 1/1 Running 0 4m56s
There are two pods for ClickHouse itself, a pod for the Altinity ClickHouse Operator, and three pods for Zookeeper. These pods are managed for you by Altinity.
Working with Altinity.Cloud Anywhere
Now that your environment is configured, you use the Altinity Cloud Manager (ACM) to perform common user and administrative tasks. The steps and tools to manage your ClickHouse clusters are the same for Altinity.Cloud Anywhere and Altinity.Cloud.
Here are some common tasks from the ACM documentation:
- Launching a new ClickHouse cluster
- Running an SQL query
- Rescaling a ClickHouse cluster
- Starting or stopping a ClickHouse cluster
The ACM documentation includes:
- A Quick Start guide,
- A General User Guide, and
- An Administrator Guide.
At the command line you can also connect to a running pod and work with ClickHouse directly.