Setting up EKS with Terraform for Altinity.Cloud
The Altinity Terraform module for EKS makes it easy to set up an EKS Kubernetes cluster for a Bring Your Own Kubernetes (BYOK) environment.
Prerequisites
- Terraform version >= 1.5
- kubectl
- AWS command line interface (CLI) version 2.0 or higher
- Authentication with your AWS account
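If you want to verify the installed versions before proceeding, the standard version commands for each tool will do:
# check tool versions
terraform version
kubectl version --client
aws --version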
To authenticate with your AWS account, set the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN. You must also ensure your AWS account has sufficient permissions for EKS and related services.
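For example, in a shell session you might export the credentials like this (the values below are placeholders, not real credentials):
# replace the placeholder values with your own AWS credentials
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export AWS_SESSION_TOKEN="<your-session-token>"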
Terraform module for BYOK on EKS
We created the Terraform EKS module to make it easy to spin up an AWS EKS cluster optimized for working with Altinity.Cloud Anywhere. This configuration is tailored for the best performance of ClickHouse®, following Altinity’s best practices and recommended specs for AWS:
- Instance Types
- Node Labels
- EBS Controller with custom Storage Class (gp3-encrypted)
- Cluster Autoscaler with multi-zone High Availability
See the module’s repo for detailed information about the module's architecture.
Setting up the EKS cluster and node groups
- Create a new directory for your Terraform project and create a file named main.tf.
- Copy and paste the following code into the main.tf file:
locals {
env_name = "acme-staging"
region = "us-east-1"
zones = ["${local.region}a", "${local.region}b", "${local.region}c"]
clickhouse_instance_type = "m6i.large"
system_instance_type = "t3.large"
altinity_labels = { "altinity.cloud/use" = "anywhere" }
}
provider "aws" {
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs
region = local.region
}
provider "kubernetes" {
# https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs
host = module.eks_clickhouse.eks_cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_clickhouse.eks_cluster_ca_certificate)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
args = [
"eks",
"get-token",
"--cluster-name",
local.env_name,
"--region",
local.region
]
command = "aws"
}
}
module "eks_clickhouse" {
source = "github.com/Altinity/terraform-aws-eks-clickhouse"
install_clickhouse_operator = false
install_clickhouse_cluster = false
eks_cluster_name = local.env_name
eks_region = local.region
eks_cidr = "10.0.0.0/16"
eks_availability_zones = local.zones
eks_private_cidr = [
"10.0.1.0/24",
"10.0.2.0/24",
"10.0.3.0/24"
]
eks_public_cidr = [
"10.0.101.0/24",
"10.0.102.0/24",
"10.0.103.0/24"
]
eks_node_pools = [
{
name = "clickhouse"
instance_type = local.clickhouse_instance_type
desired_size = 0
max_size = 10
min_size = 0
zones = local.zones
labels = local.altinity_labels
},
{
name = "system"
instance_type = local.system_instance_type
desired_size = 0
max_size = 10
min_size = 0
zones = local.zones
labels = local.altinity_labels
}
]
eks_tags = {
CreatedBy = "mr-robot"
}
}
module "altinitycloud_connect" {
source = "altinity/connect/altinitycloud"
pem = altinitycloud_env_certificate.this.pem
// "depends_on" is here to enforce "this module, then module.eks_clickhouse" order on destroy.
depends_on = [module.eks_clickhouse]
}
resource "altinitycloud_env_certificate" "this" {
env_name = local.env_name
}
resource "altinitycloud_env_k8s" "this" {
name = altinitycloud_env_certificate.this.env_name
distribution = "EKS"
node_groups = [
{
name = local.clickhouse_instance_type,
node_type = local.clickhouse_instance_type,
capacity_per_zone = 10,
reservations = ["CLICKHOUSE"],
zones = local.zones
tolerations = [
{
key = "dedicated"
value = "clickhouse"
effect = "NO_SCHEDULE"
operator = "EQUAL"
}
]
},
{
name = local.system_instance_type,
node_type = local.system_instance_type,
capacity_per_zone = 10,
reservations = ["SYSTEM", "ZOOKEEPER"],
zones = local.zones
}
]
load_balancers = {
public = {
enabled = true
}
}
// "depends_on" is here to enforce "this resource, then module.altinitycloud_connect" order on destroy.
depends_on = [module.altinitycloud_connect]
}
This configuration will create 6 different node groups: one for each combination of the three eks_availability_zones and the two instance types.
Be sure to adjust configuration values such as eks_cluster_name, instance_types, the CIDR ranges, and region (which determines the availability zones) to fit your requirements. If you need further customization, you can always fork the Terraform module and create your own version of it.
NOTE: The value for region is required. If region is us-east-1, the configuration builds the availability zone names us-east-1a, us-east-1b, and us-east-1c. Be aware this may not work for every region. For example, as of this writing, the availability zones for ca-central-1 are ca-central-1a, ca-central-1b, and ca-central-1d; specifying an availability zone of ca-central-1c is a fatal error. Check the AWS regions and availability zones documentation for the correct values for your region, and modify the configuration as needed in both the eks_availability_zones setting and the zones spec in the eks_node_pools section.
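One way to check the zone names for a region is with the AWS CLI; for example, this command lists the availability zone names for ca-central-1:
# list the availability zone names for a region
aws ec2 describe-availability-zones --region ca-central-1 --query 'AvailabilityZones[].ZoneName' --output text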
Again, remember to authenticate with your AWS account before going forward.
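A quick way to confirm that your credentials are in place is to check which identity they resolve to:
# verify the AWS identity Terraform will use
aws sts get-caller-identity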
Applying the configuration
Open a terminal, navigate to the directory you created, and run these commands to initialize the Terraform project and apply it:
# initialize the terraform project
terraform init
# apply module changes
# btw, did you remember to authenticate with your AWS account?
terraform apply
This operation will take several minutes to complete. When it finishes, you’ll have a running AWS EKS cluster with the node groups, autoscaler, and storage class described above.
Verifying your EKS cluster
- Update your kubeconfig with the new AWS EKS cluster data using the following command (the cluster name matches eks_cluster_name, which is acme-staging in this example):
aws eks update-kubeconfig --region us-east-1 --name acme-staging
- List your AWS EKS cluster nodes:
kubectl get nodes
You should see six nodes:
NAME STATUS ROLES AGE VERSION
ip-10-0-1-174.ec2.internal Ready <none> 6m17s v1.28.5-eks-5e0fdde
ip-10-0-1-35.ec2.internal Ready <none> 6m55s v1.28.5-eks-5e0fdde
ip-10-0-2-181.ec2.internal Ready <none> 6m35s v1.28.5-eks-5e0fdde
ip-10-0-2-63.ec2.internal Ready <none> 6m38s v1.28.5-eks-5e0fdde
ip-10-0-3-128.ec2.internal Ready <none> 6m37s v1.28.5-eks-5e0fdde
ip-10-0-3-164.ec2.internal Ready <none> 6m36s v1.28.5-eks-5e0fdde
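If you’d like to confirm the altinity.cloud/use label that this configuration applies to the node pools, you can show it as an extra column (assuming the default labels from the locals block above):
# display the altinity.cloud/use label on each node
kubectl get nodes -L altinity.cloud/use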
- List the kube-system pods:
kubectl get pods -n kube-system
All these pods are created by default under the kube-system namespace. You’ll see something like this:
NAME READY STATUS RESTARTS AGE
aws-node-7ll4k 2/2 Running 0 4m7s
cluster-autoscaler-aws-cluster-autoscaler-56c7fdf75c-nlb6l 1/1 Running 0 3m2s
coredns-68bd859788-m9zbn 1/1 Running 0 7m33s
coredns-68bd859788-q778b 1/1 Running 0 7m33s
ebs-csi-controller-6d76764dcd-bgzbv 6/6 Running 0 3m1s
ebs-csi-controller-6d76764dcd-f7vx5 6/6 Running 0 3m2s
ebs-csi-node-vb8xf 3/3 Running 0 3m2s
kube-proxy-msws4 1/1 Running 0 4m7s
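You can also check for the gp3-encrypted storage class mentioned earlier (the exact name comes from the module’s defaults):
# list storage classes; gp3-encrypted should appear
kubectl get storageclass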
Your AWS EKS cluster is now ready. Remember that the given configuration is just a starting point. Before using this in production, you should review the module documentation and ensure it fits your security needs.
Connecting your new environment to Altinity.Cloud
The final step is to connect your new EKS cluster to the Altinity Cloud Manager (ACM). In a nutshell, you need to create an Altinity.Cloud environment and connect it to your new Kubernetes cluster. See the section Connecting to Altinity.Cloud Anywhere for all the details. (Note: The example used in this link connects to an Azure AKS instance, but the procedure is the same for AWS.)
Deleting the configuration
When you no longer need your EKS cluster, the ClickHouse clusters it hosts, and the Altinity.Cloud environment that manages them, there are two straightforward steps:
- Delete all of your ClickHouse clusters and your Altinity.Cloud environment. Simply delete the environment and select the “Delete clusters” option.
- Run terraform destroy to clean up the EKS cluster and all of its resources. When this command finishes, all of the resources associated with your EKS environment are gone.
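If you want to double-check that the cluster is gone, you can list the EKS clusters in the region used in this example:
# the acme-staging cluster should no longer appear
aws eks list-clusters --region us-east-1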