Setting up EKS with Terraform for Altinity.Cloud
The Altinity eks-clickhouse Terraform module makes it easy to set up an EKS Kubernetes cluster for a Bring Your Own Kubernetes (BYOK) environment.
Prerequisites
- Terraform version >= 1.5
- kubectl
- AWS Command Line Interface (CLI) version 2.0 or higher
- Authentication with your AWS account
To authenticate with your AWS account, set the environment variables `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN`. You must also ensure your AWS account has sufficient permissions for EKS and related services.
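For example, in the shell where you’ll run Terraform (the placeholder values are illustrative; `AWS_SESSION_TOKEN` is only required if you’re using temporary credentials):
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
# only needed with temporary credentials (e.g., from an assumed role)
export AWS_SESSION_TOKEN="<your-session-token>"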
Terraform module for BYOK on EKS
The module makes it easy to spin up an AWS EKS cluster optimized for working with Altinity.Cloud. This configuration is tailored for the best performance of ClickHouse®, following Altinity’s best practices and recommended specs for AWS:
- Instance Types
- Node Labels
- EBS CSI Controller with a custom Storage Class (`gp3-encrypted`)
- Cluster Autoscaler with multi-zone High Availability
See the module’s repo for detailed information about its architecture.
Setting up the EKS cluster and node groups
- Create a new directory for your Terraform project and switch to that directory.
- Go to the eks-clickhouse Terraform module page and make sure you’re using the latest version. Check the dropdown menu at the top of the page:
Figure 1 - Working with the latest version of the eks-clickhouse Terraform module
- Scroll down to the Usage section of the page for the sample script. Copy and paste the code into a file named `main.tf` in the directory you created earlier. Modify the code for your needs (a sketch of the result follows this list):
  - At a minimum you’ll need to change the `eks_cluster_name`. It must be unique across your AWS account.
  - The `region` for the availability zones. See the note below for important details on how availability zone names are created.
  - `install_clickhouse_cluster` - create a ClickHouse cluster in addition to installing the ClickHouse operator. The default is `true`.
  - `clickhouse_cluster_enable_loadbalancer` - create a public LoadBalancer. The default is `false`.
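As a rough illustration, a minimal `main.tf` built around the settings above might look like the sketch below. Treat the Usage example on the module page as the authoritative version: the module source string, instance type, and pool sizes here are assumptions for illustration only, and variable names can change between module versions.
module "eks_clickhouse" {
  # check the module page for the exact source string and latest version to pin
  source = "Altinity/eks-clickhouse/aws"

  # must be unique across your AWS account
  eks_cluster_name = "clickhouse-cluster"

  # availability zone names are derived from this region (see the note below)
  region = "us-east-1"

  eks_availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]

  # set to false to install only the ClickHouse operator (default is true)
  install_clickhouse_cluster = true

  # set to true to expose ClickHouse through a public LoadBalancer (default is false)
  clickhouse_cluster_enable_loadbalancer = false

  # illustrative pool definition; the instance type and sizes are assumptions
  eks_node_pools = [
    {
      name          = "clickhouse"
      instance_type = "m6i.large"
      min_size      = 0
      desired_size  = 2
      max_size      = 10
      zones         = ["us-east-1a", "us-east-1b", "us-east-1c"]
    }
  ]
}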
NOTE: The Terraform script generates availability zone names for you. If the value of `region` is `us-east-1`, the availability zone names will be `us-east-1a`, `us-east-1b`, and `us-east-1c`. Be aware this may not work for every region. For example, as of this writing, the availability zones for `ca-central-1` are `ca-central-1a`, `ca-central-1b`, and `ca-central-1d`; specifying an availability zone of `ca-central-1c` is a fatal error. Check the AWS regions and availability zones documentation to see the correct values for your region. If needed, modify the script in both the `eks_availability_zones` section and the `zones` spec in the `eks_node_pools` section.
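For example, for `ca-central-1` you would override both places along these lines (a sketch; the other arguments are omitted):
eks_availability_zones = ["ca-central-1a", "ca-central-1b", "ca-central-1d"]

eks_node_pools = [
  {
    # ...other pool settings unchanged...
    zones = ["ca-central-1a", "ca-central-1b", "ca-central-1d"]
  }
]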
Visit the documentation for the Altinity eks-clickhouse Terraform module if you need more details. If you need further customization, you can always fork the Terraform module and create your own customized version of it.
Again, remember to authenticate with your AWS account before going forward.
Applying the configuration
Open a terminal, navigate to the directory you created, and run these commands to initialize the Terraform project and apply it:
# initialize the terraform project
terraform init
# apply module changes
# btw, did you remember to authenticate with your AWS account?
terraform apply
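Before running `terraform apply`, you can optionally preview the resources Terraform will create:
# optional: preview the changes without applying them
terraform plan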
The `terraform apply` operation will take several minutes to complete. When it finishes, you’ll have a running AWS EKS cluster with high availability and the other features described above.
Verifying your EKS cluster
- Update your `kubeconfig` with the new AWS EKS cluster data using the following command (adjust the region and cluster name to match the values in your `main.tf`):
aws eks update-kubeconfig --region us-east-1 --name clickhouse-cluster
- List your AWS EKS cluster nodes:
kubectl get nodes
You should see something like this:
NAME STATUS ROLES AGE VERSION
ip-10-0-1-174.ec2.internal Ready <none> 6m17s v1.28.5-eks-5e0fdde
ip-10-0-1-35.ec2.internal Ready <none> 6m55s v1.28.5-eks-5e0fdde
ip-10-0-2-181.ec2.internal Ready <none> 6m35s v1.28.5-eks-5e0fdde
ip-10-0-2-63.ec2.internal Ready <none> 6m38s v1.28.5-eks-5e0fdde
ip-10-0-3-128.ec2.internal Ready <none> 6m37s v1.28.5-eks-5e0fdde
ip-10-0-3-164.ec2.internal Ready <none> 6m36s v1.28.5-eks-5e0fdde
The number of nodes may vary depending on how you modified the Terraform script.
- List the `kube-system` pods:
kubectl get pods -n kube-system
All these pods are created by default under the `kube-system` namespace. You’ll see something like this:
NAME READY STATUS RESTARTS AGE
altinity-clickhouse-operator-ccd67cb44-5qf4s 2/2 Running 0 28m
aws-node-5cgfr 2/2 Running 0 30m
cluster-autoscaler-aws-cluster-autoscaler-58758fbbdf-whq7j 1/1 Running 0 29m
coredns-6b9575c64c-6s2qh 1/1 Running 0 35m
coredns-6b9575c64c-s9khw 1/1 Running 0 35m
ebs-csi-controller-85979cbff5-jj2fc 6/6 Running 0 29m
ebs-csi-controller-85979cbff5-mqbmm 6/6 Running 0 29m
ebs-csi-node-kncqj 3/3 Running 0 29m
kube-proxy-h5vsx 1/1 Running 0 30m
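You can also confirm that the module’s custom `gp3-encrypted` storage class was created:
kubectl get storageclass
You should see `gp3-encrypted` listed alongside the cluster’s default storage class.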
Your AWS EKS cluster is now ready. Remember that the given configuration is just a starting point. Before using this in production, you should review the module documentation and ensure it fits your security needs.
Connecting your new environment to Altinity.Cloud
The final step is to connect your new EKS cluster to the Altinity Cloud Manager (ACM). In a nutshell, you need to create an Altinity.Cloud environment and connect it to your new Kubernetes cluster. See the section Connecting Your Kubernetes Environment to Altinity.Cloud for all the details. (Note: The example used in this link connects to an Azure AKS instance, but the procedure is the same for AWS.)
Deleting the configuration
When you no longer need your EKS cluster, the ClickHouse clusters it hosts, and the Altinity.Cloud environment that manages them, there are two straightforward steps:
- Delete all of your ClickHouse clusters and your Altinity.Cloud environment. Simply delete the environment and select the “Delete clusters” option.
- Run `terraform destroy` to clean up the EKS cluster and all of its resources. When this command finishes, all of the resources associated with your EKS environment are gone.