Setting Up EKS with Terraform for Altinity.Cloud

The easiest way to configure an EKS cluster

The Altinity Terraform module for EKS makes it easy to set up an EKS Kubernetes cluster for a Bring Your Own Kubernetes (BYOK) environment.

Prerequisites

  • Terraform version >= 1.5
  • AWS CLI
  • kubectl

NOTE: Ensure your AWS account has sufficient permissions for EKS and related services, and that you have already authenticated using the AWS CLI.
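
For example, you can quickly confirm that the tools are installed and that your AWS credentials are valid before proceeding:

# check tool versions
> terraform version
> kubectl version --client

# confirm the AWS CLI is authenticated (prints your account and identity)
> aws sts get-caller-identity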

Terraform Module for BYOK on EKS

We created the Terraform EKS module to make it easy to spin up an AWS EKS cluster optimized for Altinity.Cloud Anywhere. The configuration is tuned for ClickHouse performance, following Altinity’s best practices and recommended specifications for AWS:

  • Instance Types
  • Node Labels
  • EBS Controller with custom Storage Class (gp3-encrypted)
  • Cluster Autoscaler with multi-zone High Availability

See the module’s documentation for detailed information about its architecture.
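
If you want reproducible runs, Terraform’s Git source syntax also lets you pin the module to a release tag. The tag below is only a placeholder; check the module’s releases page for actual versions:

module "eks_clickhouse" {
  # pin to a specific release (v1.2.3 is an illustrative tag, not a real one)
  source = "github.com/Altinity/terraform-aws-eks-clickhouse?ref=v1.2.3"
  # ...
}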

EKS Cluster and Node Groups Setup

  1. Create a new directory for your Terraform project and create a file named main.tf.
  2. Copy and paste the following code into the main.tf file:
locals {
  region = "us-east-1"
}

module "eks_clickhouse" {
  source  = "github.com/Altinity/terraform-aws-eks-clickhouse"

  # There is no need to install the operator or a ClickHouse cluster here; the ACM will handle that.
  install_clickhouse_operator = false
  install_clickhouse_cluster  = false

  eks_cluster_name = "clickhouse-cluster"
  eks_region       = local.region
  eks_cidr         = "10.0.0.0/16"

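  # Each availability zone below is paired with one private and one public subnet CIDR.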
  eks_availability_zones = [
    "${local.region}a",
    "${local.region}b",
    "${local.region}c"
  ]
  eks_private_cidr = [
    "10.0.1.0/24",
    "10.0.2.0/24",
    "10.0.3.0/24"
  ]
  eks_public_cidr = [
    "10.0.101.0/24",
    "10.0.102.0/24",
    "10.0.103.0/24"
  ]
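
  # One node group is created for every (availability zone, instance type) pair,
  # and the Cluster Autoscaler scales each group between min_size and max_size.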
  eks_node_pools_config = {
    scaling_config = {
      desired_size = 2
      max_size     = 10
      min_size     = 0
    }

    disk_size      = 20
    instance_types = ["m6i.large", "t3.large"]
  }

  eks_tags = {
    CreatedBy = "mr-robot"
  }
}

This configuration creates six node groups, one for each combination of eks_availability_zones and instance_types (3 zones × 2 instance types).

Be sure to adjust configuration values such as the CIDR ranges, availability zones, and instance types to match your requirements. If you need further customization, you can always fork the Terraform module and create your own version of it.
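
As a sketch of one way to make this configuration reusable, you could replace the hard-coded locals with standard Terraform input variables (the names here are illustrative) and reference them as var.region and var.instance_types inside the module block:

variable "region" {
  description = "AWS region for the EKS cluster"
  type        = string
  default     = "us-east-1"
}

variable "instance_types" {
  description = "EC2 instance types for the ClickHouse node groups"
  type        = list(string)
  default     = ["m6i.large", "t3.large"]
}

Each value can then be overridden per environment, for example terraform apply -var='region=us-west-2'.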

Applying the Configuration

Open a terminal, navigate to the directory you created, and run the following commands to initialize the Terraform project and apply the configuration:

# initializes terraform project
> terraform init

# apply module changes
> terraform apply

This operation will take several minutes to complete. When it finishes, you’ll have a running AWS EKS cluster spanning multiple availability zones, with the Cluster Autoscaler, EBS CSI driver, and custom storage class already in place.
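
If you prefer to review changes before applying them, the standard plan-then-apply workflow works as well:

# write the execution plan to a file for review
> terraform plan -out=tfplan

# apply exactly the reviewed plan
> terraform apply tfplan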

Verify your cluster

  1. Update your kubeconfig with the new AWS EKS cluster’s connection data:
aws eks update-kubeconfig --region us-east-1 --name clickhouse-cluster
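
You can confirm that kubectl now points at the new cluster (aws eks update-kubeconfig names the context after the cluster’s ARN):

kubectl config current-context
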
  2. List your AWS EKS cluster nodes:
kubectl get nodes

You should see six nodes:

NAME                         STATUS   ROLES    AGE     VERSION
ip-10-0-1-174.ec2.internal   Ready    <none>   6m17s   v1.28.5-eks-5e0fdde
ip-10-0-1-35.ec2.internal    Ready    <none>   6m55s   v1.28.5-eks-5e0fdde
ip-10-0-2-181.ec2.internal   Ready    <none>   6m35s   v1.28.5-eks-5e0fdde
ip-10-0-2-63.ec2.internal    Ready    <none>   6m38s   v1.28.5-eks-5e0fdde
ip-10-0-3-128.ec2.internal   Ready    <none>   6m37s   v1.28.5-eks-5e0fdde
ip-10-0-3-164.ec2.internal   Ready    <none>   6m36s   v1.28.5-eks-5e0fdde
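
EKS applies the standard topology labels to every node, so you can also check that the nodes are spread across the three availability zones:

kubectl get nodes -L topology.kubernetes.io/zone
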
  3. List the kube-system pods:
kubectl get pods -n kube-system

These pods are created by default in the kube-system namespace. You’ll see something like this:

NAME                                                        READY   STATUS    RESTARTS   AGE
aws-node-gw24w                                              2/2     Running   0          6m12s
aws-node-j9sr6                                              2/2     Running   0          6m14s
aws-node-kwdgq                                              2/2     Running   0          6m13s
aws-node-spdtd                                              2/2     Running   0          6m32s
aws-node-stnjm                                              2/2     Running   0          6m15s
aws-node-wq528                                              2/2     Running   0          5m54s
cluster-autoscaler-aws-cluster-autoscaler-686c7f4f5-bhw9n   1/1     Running   0          4m34s
coredns-86969bccb4-dpfb4                                    1/1     Running   0          10m
coredns-86969bccb4-mlpt6                                    1/1     Running   0          10m
ebs-csi-controller-68ff8856fc-ffc4s                         6/6     Running   0          4m44s
ebs-csi-controller-68ff8856fc-jqw2k                         6/6     Running   0          4m44s
ebs-csi-node-62h5k                                          3/3     Running   0          4m44s
ebs-csi-node-69924                                          3/3     Running   0          4m44s
ebs-csi-node-9snbj                                          3/3     Running   0          4m44s
ebs-csi-node-qgdtk                                          3/3     Running   0          4m44s
ebs-csi-node-r5lq9                                          3/3     Running   0          4m44s
ebs-csi-node-sjqdw                                          3/3     Running   0          4m44s
kube-proxy-2ntw7                                            1/1     Running   0          6m15s
kube-proxy-9q4rs                                            1/1     Running   0          5m54s
kube-proxy-kv76v                                            1/1     Running   0          6m32s
kube-proxy-lkmp4                                            1/1     Running   0          6m12s
kube-proxy-pzwht                                            1/1     Running   0          6m13s
kube-proxy-sfz7z                                            1/1     Running   0          6m14s
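
You can also confirm that the gp3-encrypted storage class listed in the module’s features was created:

kubectl get storageclass gp3-encrypted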

Your AWS EKS cluster is now ready to connect to the Altinity Cloud Manager (ACM). Remember that this configuration is just a starting point; before using it in production, review the module documentation and make sure it meets your security requirements.

To connect your new Kubernetes cluster to the ACM, see the section Connecting to Altinity.Cloud Anywhere.