
Altinity.Cloud Anywhere

Manuals, quick start guides, code samples and tutorials on how to use Altinity.Cloud Anywhere.

Altinity.Cloud Anywhere is a zero-maintenance, open source-based SaaS for ClickHouse that gives you control of your data, letting you chart your own path and choose between working with vendors or running your infrastructure yourself.

Your data. Your control. Our tools.

1 - Altinity.Cloud Anywhere 101

What is Altinity.Cloud Anywhere?

17 May 2023 · Read time 4 min

Introduction and Benefits

Altinity.Cloud Anywhere provides the convenient cloud management of Altinity.Cloud but allows users to keep data within their own cloud VPCs and private data centers, and run managed ClickHouse in their own Kubernetes clusters. We call these clusters Altinity.Cloud Anywhere environments.

Altinity.Cloud Anywhere offers several important benefits for users.

  • Compliance - Retain full control of data (including backups) as well as the operating environment and impose your policies for security and privacy.
  • Cost - Optimize infrastructure costs by running in your accounts.
  • Location - Place ClickHouse clusters close to data sources and applications.
  • Vendor Unlocking - Disconnect at any time and continue to operate ClickHouse using open-source components.

The rest of this document explains concepts that help users understand Altinity.Cloud Anywhere and maximize benefits.

The Altinity.Cloud Manager UI manages Altinity.Cloud Anywhere environments just like fully hosted Altinity.Cloud environments. Users can control multiple environments from the same Altinity.Cloud account and can mix and match environment types. ClickHouse management operations are identical in all environments.

Service Architecture

The Altinity.Cloud service architecture consists of a shared management plane that serves as a single point of management for all tenants and a data plane that consists of isolated environments for each tenant. The following diagram shows the service architecture and data plane relationships.

Altinity.Cloud Management Plane
Figure 1 - Service Architecture.


Each environment is a dedicated Kubernetes cluster. In the case of Altinity.Cloud environments, Kubernetes clusters run on Altinity’s cloud accounts and are completely hidden from users. In the Altinity.Cloud Anywhere case, Kubernetes clusters run in the user’s cloud account or data center.

For example, the user may run an EKS cluster within a VPC belonging to the user’s AWS cloud account.

Altinity.Cloud Anywhere environments can also use on-prem Kubernetes clusters. They can even use development versions of Kubernetes running on a user’s PC or laptop.

Open Source Analytic Stack

Altinity.Cloud Anywhere uses open-source software for the analytic stack and selected management services: the Altinity Operator for ClickHouse, Prometheus, and Grafana. The following diagram shows how the principal components map to resources in AWS. (GCP is essentially identical.) Open-source components are marked in orange.

Altinity.Cloud Management Plane
Figure 2 - Management and observability.


Users can terminate the service, disconnect the Altinity.Cloud Anywhere environment from Altinity.Cloud, and run ClickHouse services themselves. There is no migration, since all data, software, and support services are already in the user's Kubernetes cluster.

Altinity.Cloud Anywhere Connectivity Model

Altinity.Cloud Anywhere environments use the Altinity Connector to establish a management connection from the user Kubernetes cluster to Altinity.Cloud. The Altinity Connector establishes an outbound HTTPS connection to a management endpoint secured by certificates. This allows management commands and monitoring data to move securely between locations.

Users connect an Altinity.Cloud Anywhere environment to Altinity.Cloud in three simple steps.

  1. Download the Altinity Connector executable program (altinitycloud-connect).
  2. Run and register Altinity Connector with Altinity.Cloud Manager.
    • If Altinity Connector is installed on a separate VM, it can provision the Kubernetes cluster (EKS, GKE, AKS). This process also deploys a new instance of Altinity Connector into the provisioned Kubernetes cluster.
    • When Altinity Connector is installed directly in Kubernetes, it runs the provisioning of Kubernetes resources.
  3. Complete registration in the Altinity.Cloud Manager.

Altinity.Cloud Anywhere environments run all services in two namespaces.

  • The altinity-cloud-system namespace contains system services including the Altinity Connector.
  • The altinity-cloud-managed-clickhouse namespace contains ClickHouse and ZooKeeper. Users can run services in other namespaces provided they do not make changes to the Altinity-managed namespaces.
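
Once the connector is deployed, you can inspect these namespaces with standard kubectl commands (the pod names in your cluster will differ):

    # List the Altinity-managed namespaces
    kubectl get namespaces | grep altinity-cloud

    # Show the system services and the managed ClickHouse/ZooKeeper pods
    kubectl get pods -n altinity-cloud-system
    kubectl get pods -n altinity-cloud-managed-clickhouse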

See the Quickstart page for steps to register an Altinity.Cloud Anywhere environment.

Kubernetes Cluster Preparation for Use

Kubernetes clusters must meet a small number of requirements to serve as an Altinity.Cloud Anywhere environment for production use.

  • Configure storage classes that can allocate block storage on-demand, for example using the AWS EBS CSI driver.
  • Enable auto-provisioning, e.g., node groups or Karpenter. This allows Altinity.Cloud to expand or contract clusters as well as rescale server pods efficiently.
  • Kubernetes pods must be able to connect to S3-compatible object storage or GCS (Google Cloud Storage). Object storage is used for backups.
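
As a quick sanity check before connecting a production cluster, you can confirm that a suitable storage class exists and, on EKS, that the EBS CSI driver is running (the grep pattern below assumes the standard AWS EBS CSI add-on pod names):

    # Storage classes available for dynamic provisioning
    kubectl get storageclass

    # On EKS: confirm the EBS CSI driver pods are present
    kubectl get pods -n kube-system | grep ebs-csi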

These requirements can be relaxed for non-production environments, such as Minikube. Check the Kubernetes Requirements page for more recommendations on specific Kubernetes distributions.

Shared Administration between Altinity.Cloud and User

In Altinity.Cloud Anywhere environments the responsibility for administration is shared between Altinity and users. The following table shows major system components.

Table 1 - Altinity.Cloud Anywhere Environment - Administrative Responsibility.

Altinity is developing a new model called Altinity.Cloud Anywhere Plus. It will shift responsibility for Kubernetes and VPC management to Altinity. Contact Altinity Support for more information on this model.

2 - Altinity.Cloud Anywhere Quickstart

How to use Altinity.Cloud Anywhere to connect to your on-prem or 3rd-party ClickHouse host environment.

Overview - Quickstart

This tutorial explains how to use Altinity.Cloud Anywhere to deploy ClickHouse clusters using your choice of a third-party Kubernetes cloud provider, or using your own hardware or private company cloud. The Altinity.Cloud Manager (ACM) is used to manage your ClickHouse clusters.


More Information

If you encounter difficulties with any part of the tutorial, check the Troubleshooting section. Contact Altinity support for additional help if the troubleshooting advice does not resolve the problem.

Prerequisites

Preparing Kubernetes

Altinity.Cloud Anywhere supports the following Kubernetes environments:

  • Amazon EKS
  • Google GKE (the example provider used on this Quickstart page)
  • Azure AKS
  • DigitalOcean Kubernetes
  • Red Hat OpenShift
  • SUSE Rancher
  • Other (example: Minikube)

The following guidelines are provided to help you create your Kubernetes cluster.

Minikube

For non-production use, a Minikube-based tutorial is provided to show how to use an Altinity.Cloud Anywhere deployment on a home computer. This is a 20-minute read that includes creating a new database and adding tables and data using the ACM.

Free Trial Account

Get your Altinity.Cloud Anywhere free trial account from the following link and fill in the information:

Signup Page
Figure 1 - The Altinity.Cloud Anywhere Free Trial signup page that shows Google GKE selected for the Kubernetes type.


Submitting the Free Trial form

  1. Fill in the form and select your Kubernetes option (Example: Google GKE).
    • NOTE: Public email domains such as Gmail or Hotmail are not allowed; you must use a company domain.
  2. From the first Altinity email you receive after clicking SUBMIT, follow the instructions in the signup process to validate your email. This notifies Altinity technical support to provision your new account.
  3. You will receive the next email after Altinity completes your account setup. It contains a link to log in to Altinity.Cloud, where you will create a password to log in to the Altinity Cloud Manager (ACM).

Now you are ready to connect your Kubernetes cluster.

Connecting Kubernetes

The first time you log in, you will be directed to the environment setup page shown in Figure 2. If you have an existing account or are restarting the installation, select the Environments tab on the left side of your screen to reach the setup page.

Environment - Connection Setup Tab
Figure 2 - Environments > Connection Setup tab in the Altinity.Cloud Manager.


Connection Setup

Highlighted in red in Figure 2 are the steps to complete before you select the PROCEED button.

  1. In the first step labeled Altinity.Cloud connect, download the correct binary for your system.

  2. In step 2 Connect to Altinity.Cloud, copy and paste the connection string to your terminal. Note that there is no output, so the command prompt is immediately ready for the next command.

    altinitycloud-connect login --token=<registration token>
    
  3. Run the Deploy connector to your Kubernetes cluster command.

    altinitycloud-connect kubernetes | kubectl apply -f -
    

    This step takes several minutes to complete depending on the speed of your host system.

    The response displays as follows:

    namespace/altinity-cloud-system created
    namespace/altinity-cloud-managed-clickhouse created
    clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
    clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
    clusterrole.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
    clusterrole.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
    clusterrole.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
    serviceaccount/cloud-connect created
    clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
    clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
    clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
    clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
    clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
    rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
    rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
    secret/cloud-connect created
    deployment.apps/cloud-connect created
    

Note: To display the Kubernetes roles and resources before applying them, run the following command.

altinitycloud-connect kubernetes

Resources Configuration

Once these commands have completed, select the PROCEED button. After the connection is made, you will advance to the Resources Configuration screen.

At the Resources Configuration screen, set the resources used for ClickHouse clusters as follows.

  1. Select your Kubernetes provider using the Cloud Provider radio button (Example: GCP).
  2. Add Storage Class names, which provide the block storage for your nodes. Use the ADD STORAGE CLASS button to add additional storage classes as needed to allocate block storage for nodes in your environment.
  3. In the Node Pools section, inspect the node pool list to ensure that the availability zones and pools you wish to use are listed.
    • Note that the Used For column must have at least one selection of ClickHouse, Zookeeper, or System.
    • The Availability Zones that are currently in use are listed. If zones are missing, add them using the ADD NODE POOL button.
    • The ACM Availability Zones UI path is: Environments > clustername > ACTIONS > Edit > Container Options tab.

The following Resources Configuration example shows red boxes around the settings made for a Google Cloud Platform GKE environment.

Resources Configuration Tab
Figure 3 - The Resources Configuration setup page for connecting cloudv2-gcp to Altinity.Cloud.


  1. The Cloud Provider is set to GCP.
  2. The Storage Classes section uses the ADD STORAGE CLASS button to add the following: premium-rwo, standard, standard-rwo
  3. The Node Pools section uses the ADD NODE POOL button to add the Zone and Instance Type, storage Capacity in GB, and the Used For settings as follows:
    Zone       Instance Type   Capacity  Used for
    ---------  --------------  --------  ---------------------------------------------------
    us-east-b  e2-standard-2      10     [True] ClickHouse  [True] Zookeeper  [False] System
    us-east-a  e2-standard-2       3     [True] ClickHouse  [True] Zookeeper  [False] System
    

Confirmation of Settings

The Confirmation screen displays a JSON representation of the settings you just made. Review these settings then select FINISH.

Confirmation Tab
Figure 4 - Confirmation page showing the JSON version of the settings.

Connection Completed, Nodes Running

Once the connection is fully set up, the ACM Environments dashboard displays your new environment as shown in Figure 5 (example: cloudv2-gcp).

Provisioned Environment Tab
Figure 5 - Environment dashboard page showing your running Anywhere cluster.

Creating your first ClickHouse cluster

To create your first cluster, switch to the Cluster page as indicated by the red keylines in Figure 6:

  • From the Environments page, select the MANAGE CLUSTERS link located just below the blue banner.
  • Select Clusters from the left navigation panel.

The Cluster Launch Wizard document covers how to create a new cluster.

The result shown in Figure 6 is a ClickHouse cluster added to the Clusters dashboard.


Figure 6 - The result: The ACM displays a new ClickHouse cluster (Example cluster name: free-trial-any) deployed by Altinity.Cloud Anywhere.


Troubleshooting

Q-1. Altinity.Cloud Anywhere endpoint not reachable

Problem

  • The altinitycloud-connect command has a --url option that defaults to host anywhere.altinity.cloud on port 443. If this host is not reachable, the following error message appears.

    altinitycloud-connect login --token=<token>
    Error: Post "https://anywhere.altinity.cloud/sign":
       dial tcp: lookup anywhere.altinity.cloud on 127.0.0.53:53: no such host
    

Solution

  • Make sure the name is available in DNS and that the resolved IP address is reachable on port 443 (UDP and TCP), then try again.

  • Note: if you are using a non-production Altinity.Cloud environment you must specify the correct URL explicitly. Contact Altinity support for help.
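
A quick way to check DNS resolution and TCP reachability from the host running altinitycloud-connect (using standard networking tools):

    # Check DNS resolution
    nslookup anywhere.altinity.cloud

    # Check TCP reachability on port 443 (requires netcat)
    nc -vz anywhere.altinity.cloud 443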

Q-2. Insufficient Kubernetes privileges

Problem

  • Your Kubernetes account has insufficient permissions.

Solution

  • Set the following permissions for your Kubernetes account:

    • cluster-admin for initial provisioning only (it can be revoked afterwards)
    • Give full access to altinity-cloud-system and altinity-cloud-managed-clickhouse namespaces
    • A few optional read-only cluster-level permissions (for observability only)
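
As a minimal sketch of the namespace-level grant (the user name anywhere-installer is an example, and the namespaces must already exist), the built-in admin ClusterRole can be bound inside the two Altinity namespaces:

    # User name "anywhere-installer" is an example placeholder
    kubectl create rolebinding anywhere-admin --clusterrole=admin \
      --user=anywhere-installer -n altinity-cloud-system
    kubectl create rolebinding anywhere-admin --clusterrole=admin \
      --user=anywhere-installer -n altinity-cloud-managed-clickhouse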

Q-3. Help! I messed up the resource configuration

Problem

  • The resource configuration settings are not correct.

Solution

  1. From the Environment tab, in the Environment Name column, select the link to your environment.
  2. Select the menu function ACTIONS > Reset Anywhere.
  3. Rerun the Environment > Connection Setup and enter the correct values.

Q-4. One of my pods won’t spin up

After you reboot your Mac, the Anywhere cluster shown in your ACM has not started.

Problem

One of the pods won’t start. (Example: see line 3 edge-proxy-66d44f7465-lxjjn)

    ┌──────────────── Pods(altinity-cloud-system)[8] ──────────────────────────┐
    │ NAME↑                                PF READY RESTARTS STATUS            │
 1  │ cloud-connect-d6ff8499f-bkc5k        ●  1/1       3    Running           │
 2  │ crtd-665fd5cb85-wqkkk                ●  1/1       3    Running           │
 3  │ edge-proxy-66d44f7465-lxjjn          ●  1/2       7    CrashLoopBackOff  │
 4  │ grafana-5b466574d-4scjc              ●  1/1       1    Running           │
 5  │ kube-state-metrics-58d86c747c-7hj79  ●  1/1       6    Running           │
 6  │ node-exporter-762b5                  ●  1/1       3    Running           │
 7  │ prometheus-0                         ●  1/1       3    Running           │
 8  │ statuscheck-f7c9b4d98-2jlt6          ●  1/1       3    Running           │
    └──────────────────────────────────────────────────────────────────────────┘

Terminal listing 1 - The pod in Line 3 edge-proxy-66d44f7465-lxjjn won’t start.


Solution

Delete the pod using the kubectl delete pod command and it will regenerate. (Example: see line 3 edge-proxy-66d44f7465-lxjjn)

kubectl -n altinity-cloud-system delete pod edge-proxy-66d44f7465-lxjjn
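
After a few seconds, confirm that a replacement pod has been created:

kubectl -n altinity-cloud-system get pods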

3 - Kubernetes Requirements

Kubernetes Requirements.

Altinity.Cloud Anywhere operates inside a user’s Kubernetes environment. Kubernetes can be provisioned by Altinity or provided by the user, as described in the following sections.


A Kubernetes installation should satisfy the following criteria in order to work with Altinity.Cloud:

  • Nodes must carry the following well-known labels (you can verify them with kubectl, as shown below):
    • node.kubernetes.io/instance-type
    • kubernetes.io/arch
    • topology.kubernetes.io/zone
  • A storage class with dynamic provisioning is required.
  • LoadBalancer services must be supported.
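
For example:

    # The node labels that Altinity.Cloud relies on
    kubectl get nodes -L node.kubernetes.io/instance-type,kubernetes.io/arch,topology.kubernetes.io/zone

    # Storage classes with dynamic provisioning
    kubectl get storageclass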

The following Kubernetes capabilities are preferable in order to get the most from Altinity.Cloud features:

  • The storage class should allow volume expansion.
  • Multiple zones are preferable for HA.
  • Autoscaling is preferable for easier vertical scaling.

See cloud-specific requirements in the following sections:

3.1 - Recommendations for EKS (AWS)

Altinity.Cloud Anywhere recommendations for EKS (AWS)

20 March 2023 · Read time 1 min

We recommend setting up karpenter or cluster-autoscaler to launch instances in at least 3 Availability Zones.

If you plan on sharing the Kubernetes cluster with other workloads, it’s recommended that you label Kubernetes nodes intended for Altinity.Cloud Anywhere with altinity.cloud/use=anywhere and taint them with dedicated=anywhere:NoSchedule, as shown below.
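
For example, on an existing node group this can be done with kubectl (the node name below is a placeholder); with Karpenter or managed node groups you would set the same label and taint in the provisioner or node group configuration instead:

    kubectl label node ip-10-1-2-3.us-east-2.compute.internal altinity.cloud/use=anywhere
    kubectl taint node ip-10-1-2-3.us-east-2.compute.internal dedicated=anywhere:NoSchedule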

Instance Types

for Zookeeper and infrastructure nodes

  • t3.large or t4g.large*

* t4g instances are AWS Graviton2-based (ARM).

for ClickHouse nodes

ClickHouse works best in AWS when using nodes from these instance families:

  • m5
  • m6i
  • m6g*

* m6g instances are AWS Graviton2-based (ARM).

Instance sizes from large to 8xlarge are typical.

Storage Classes

  • gp2
  • gp2-encrypted
  • gp3*
  • gp3-encrypted*

* gp3 storage classes require the Amazon EBS CSI driver, which does not come pre-installed.

Example manifests:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  fsType: ext4
  type: gp2
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-encrypted
provisioner: kubernetes.io/aws-ebs
parameters:
  encrypted: 'true'
  fsType: ext4
  type: gp2
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  fsType: ext4
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3-encrypted
  annotations:
    storageclass.kubernetes.io/is-default-class: 'true'
provisioner: ebs.csi.aws.com
parameters:
  encrypted: 'true'
  fsType: ext4
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

Notes:

  • We do not recommend using gp2 storage classes. gp3 is better and less expensive.
  • gp3 default throughput is 125 MB/s for any volume size. It can be increased in the AWS console or using storage class parameters. Here is an example:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3-encrypted-500
provisioner: ebs.csi.aws.com
parameters:
  encrypted: 'true'
  fsType: ext4
  throughput: '500'
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
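
To create these storage classes, save the manifests to a file (the filename below is only an example) and apply them:

kubectl apply -f gp3-storage-classes.yaml
kubectl get storageclass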

3.2 - Recommendations for GKE (GCP)

Altinity.Cloud Anywhere recommendations for GKE (GCP)

20 March 2023 · Read time 1 min

Machine Types

NOTE: Depending on the machine types and the number of instances you plan to use, you may need to request a GCE quota increase.

In the Quotas table, filter on the Quota column (example: Persistent Disk SSD) and the Dimensions column (example: specify the region name us-west1), select EDIT QUOTAS, then change the Limit value (example: change 500 GB to 600 GB).

Property name filter example

  • Persistent Disk SSD (GB)
  • N2 CPUs
  • us-west1

Altinity recommends setting up each node pool except the default one in at least 3 zones.

If you plan on sharing the Kubernetes cluster with other workloads, it’s recommended that you label Kubernetes nodes intended for Altinity.Cloud Anywhere with altinity.cloud/use=anywhere and taint them with dedicated=anywhere:NoSchedule.

for Zookeeper and infrastructure nodes

  • e2-standard-2

for ClickHouse nodes

It’s recommended to taint the node pools below with dedicated=clickhouse:NoSchedule (in addition to the altinity.cloud/use=anywhere label); see the example after this list.

  • n2d-standard-2
  • n2d-standard-4
  • n2d-standard-8
  • n2d-standard-16
  • n2d-standard-32

If GCP is out of n2d-standard-* instances in the region of your choice, we recommend substituting them with n2-standard-*.
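
As a sketch (the cluster name, region, node count, and pool name below are assumptions), such a node pool could be created with gcloud as follows:

gcloud container node-pools create clickhouse-n2d-standard-4 \
  --cluster cluster-1 \
  --region us-west1 \
  --machine-type n2d-standard-4 \
  --num-nodes 1 \
  --node-labels altinity.cloud/use=anywhere \
  --node-taints dedicated=clickhouse:NoSchedule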

Storage Classes

  • standard-rwo
  • premium-rwo

GKE comes pre-configured with both.

4 - Kubernetes Installation

How to install Altinity.Cloud Anywhere on a Google Cloud Kubernetes environment (GKE).

End-to-end instructions that show you how to install Kubernetes clusters for Altinity.Cloud Anywhere on Amazon (EKS), Google (GKE), or Minikube running on Docker. These include instructions on how to use the Altinity Cloud Manager to create a ClickHouse cluster in your Altinity.Cloud Anywhere Kubernetes installation.

4.1 - AWS Remote Provisioning

Altinity.Cloud Anywhere operates inside a user’s Kubernetes environment. Kubernetes can be provided by the user (see the “Kubernetes Installation” section) or provisioned by Altinity.

10 May 2023 · Read time 6 min

Introduction

Altinity.Cloud Anywhere operates inside a user’s Kubernetes environment.

Kubernetes can be provided by the user (see the “Kubernetes Installation” section) or provisioned by Altinity.

Altinity technical support can remotely provision AWS EKS clusters with an Altinity.Cloud Anywhere environment on your Amazon account. The instructions on this page describe how to configure your EKS environment to give Altinity permission to provision ClickHouse in your Amazon EKS Kubernetes environment. Figure 1 shows a high-level view of the Altinity.Cloud Kubernetes infrastructure.

Figure 1 - Altinity.Cloud Kubernetes architecture, using Altinity Cloud Manager.


Summary of the Bootstrap Process

This section summarizes the bootstrap process so that you can use Altinity.Cloud to deploy a ClickHouse cluster to your AWS EKS environment.

  1. Follow the Altinity.Cloud Anywhere Quickstart.

  2. Provision an AWS EKS cluster using an EC2 instance running under a user account.
    The EC2 instance is required in order to deploy altinitycloud-connect, which will establish an outbound connection to Altinity.Cloud and start the EKS provisioning process.
    The EC2 instance can be set up in two ways:

    • Automatically, using the AWS CloudFormation template.
    • Manually, by a user following the Altinity documentation.
  3. Follow this document to complete the provisioning process.

  4. In the Altinity Cloud Manager, complete the configuration of EKS resources.



Automated Provisioning of EKS Using an EC2 Instance Created from the AWS CloudFormation Template

An Amazon AWS EC2 instance is required to deploy altinitycloud-connect, which will establish an outbound connection to Altinity.Cloud and start the EKS provisioning process.

In Altinity.Cloud

  1. Get an Altinity.Cloud account.

  2. Get an Altinity.Cloud Anywhere environment record.

  3. Get the connection token from Altinity Cloud Manager connection wizard.

AWS CloudFormation Stack
Figure 2 - AWS CloudFormation Stack.


  1. Go to the Create stack CloudFormation URL as shown in Figure 2. NOTE: The URL will be different for other regions.
    Log in to your AWS account, then navigate to:
us-west-2.console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/create
  2. From the altinitycloud-connect release page, download the CloudFormation YAML file:
altinitycloud-connect-<release-tag>.aws-cloudformation.yaml
  3. Choose Upload a template file and select the Altinity CloudFormation template YAML file as shown in Figure 2.

  4. Fill in the missing fields on the Specify Stack Details page:

    • Set ‘Stack Name’ to altinitycloud-connect-$USER-$ENV_NAME (replace $USER and $ENV_NAME as needed).

    • Set ‘Subnets’ where the altinitycloud-connect EC2 instance(s) should be launched (Example: subnet-17c1674a, subnet-2d5c8855, subnet-e0d425aa).

    • Set ‘Token presented by https://acm.altinity.cloud’ to the token value obtained from the connection wizard.

  5. Important: At the last step of the wizard, check the acknowledgment notice: “I acknowledge that AWS CloudFormation might create IAM resources with custom names.”

  6. Complete the wizard and submit the form.


EC2 background processing explained

The EC2 instance is processed in the background as follows:

  • The EC2 instance is started from the CloudFormation template.
  • The EC2 instance connects to Altinity.Cloud using altinitycloud-connect.
  • The EKS cluster is provisioned.
  • The EKS cluster connects to Altinity.Cloud using altinitycloud-connect.

In Altinity.Cloud

  1. Select the ‘Proceed’ button in the connection wizard. NOTE: It is OK to select Proceed more than once, since provisioning takes some time. Once the EKS cluster is provisioned, the wizard will switch to the ‘Resources Configuration’ page.

  2. Finish configuration of node pools as described in the Resource Configuration section.



Manual Provisioning of the EC2 instance

The AWS EC2 instance should meet the following requirements:

EC2 Instance Requirements

  • Instance type: t2.micro minimum
  • OS: Ubuntu Server v20.04

Creating a Role with IAM policies

Set up a role with IAM policies to access IAM, EC2, VPC, EKS, S3 & Lambda as follows:

  arn:aws:iam::aws:policy/IAMFullAccess
  arn:aws:iam::aws:policy/AmazonEC2FullAccess
  arn:aws:iam::aws:policy/AmazonVPCFullAccess
  arn:aws:iam::aws:policy/AmazonS3FullAccess
  arn:aws:iam::aws:policy/AWSLambda_FullAccess
  arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
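
As a sketch (the role name altinitycloud-connect-ec2 is an assumption), these managed policies can be attached to the role with the AWS CLI:

# Role name "altinitycloud-connect-ec2" is an example placeholder
for POLICY in IAMFullAccess AmazonEC2FullAccess AmazonVPCFullAccess \
              AmazonS3FullAccess AWSLambda_FullAccess AmazonSSMManagedInstanceCore; do
  aws iam attach-role-policy \
    --role-name altinitycloud-connect-ec2 \
    --policy-arn "arn:aws:iam::aws:policy/${POLICY}"
done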


Creating a policy for EKS full access

  1. Create a standard policy for EKS full access as follows:
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "eks     :*"
         ],
         "Resource":"*"
      },
      {
         "Effect":"Allow",
         "Action":"iam:PassRole",
         "Resource":"*",
         "Condition":{
            "StringEquals":{
               "iam:PassedToService":"eks.amazonaws.com"
            }
         }
      }
   ]
}
  2. To give this instance access to the EC2 metadata service and the Internet, set the security group to:

    • deny all inbound traffic
    • allow all outbound traffic

Installing Altinity.Cloud Connect

  1. Download altinitycloud-connect.

  2. Install the altinitycloud-connect binary using the following terminal command. NOTE: The following example is for an Intel (amd64) Linux installation.

    curl -sSL https://github.com/altinity/altinitycloud-connect/releases/download/v0.20.0/altinitycloud-connect-0.20.0-linux-amd64 -o altinitycloud-connect \
    && chmod a+x altinitycloud-connect \
    && sudo mv altinitycloud-connect /usr/local/bin/
    
  3. Log in to Altinity.Cloud and get a connection token. NOTE: A cloud-connect.pem file is created in the current working directory.

    altinitycloud-connect login --token=<registration token>
    
  4. Connect to Altinity.Cloud:

    altinitycloud-connect --capability aws
    

Start EKS provisioning

The following data is required in order to create the VPC and EKS cluster properly:

  • The CIDR for the Kubernetes VPC (at least /21 recommended, e.g. 10.1.0.0/21)
  • The Number of Availability Zones (3 are recommended)

Please send this information to your Altinity support representative to start the EKS provisioning process. When completed, the Altinity Cloud Manager (ACM) will be updated then you can create your ClickHouse clusters.

The remainder of the provisioning process is handled by Altinity.Cloud. Users may switch back to the ACM and wait for the connection to be established in order to finish the configuration.


In Altinity.Cloud

  1. Select the Proceed button in the connection wizard. You may repeat this step more than once to see if the connection has completed, since provisioning takes some time. Once the EKS cluster is provisioned, the connection wizard will switch to the Resources Configuration page.

  2. Finish the configuration of the node pools as described in the Resources Configuration section.



Break Glass Procedure

The “Break Glass” procedure gives Altinity SSH access to the EC2 instance, using AWS SSM, in order to troubleshoot the altinitycloud-connect instance running on it.

  1. Create an AnywhereAdmin IAM role with the following trust policy:

    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Effect":"Allow",
             "Principal":{
                "AWS":"arn:aws:iam::313342380333:role/AnywhereAdmin"
             },
             "Action":"sts:AssumeRole"
          }
       ]
    }
    
  2. Add the following permission policy:

    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Effect":"Allow",
             "Action":"ssm:StartSession",
             "Resource":[
                "arn:aws:ec2:$REGION:$ACCOUNT_ID:instance/$INSTANCE_ID",
                "arn:aws:ssm:*:*:document/AWS-StartSSHSession"
             ]
          }
       ]
    }
    
  3. Send the following ARN string to Altinity. NOTE: To revoke the Break Glass access later, change or remove the permission policy.

    arn:aws:ec2:$REGION:$ACCOUNT_ID:instance/$INSTANCE_ID
    

4.2 - Google GKE Installation

How to install Altinity.Cloud Anywhere on Google Cloud GKE (Google Kubernetes Engine).

4.2.1 - Introduction

How to install Altinity.Cloud Anywhere on Google Cloud Platform Google Kubernetes Engine (GKE).

8 May 2023 · Read time 1 min

Overview - Google GKE Installation

This guide covers how to use Altinity.Cloud Anywhere to install a Kubernetes ClickHouse environment on the Google Cloud Platform Google Kubernetes Engine (GKE).

This guide contains the following sections:

What Accounts do you need?

This page assumes you have an Altinity.Cloud account, have requested an Anywhere environment, and have a developer Google Console environment set up.

Google Accounts

Altinity Accounts

Google API Permissions

In the Google Console, you must ENABLE the following APIs for your project (you can also enable them from the command line, as shown after the list):

  • Compute Engine API
  • Kubernetes Engine API
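
These can also be enabled from the command line once the gcloud CLI is installed (see the installation steps later in this guide):

gcloud services enable compute.googleapis.com container.googleapis.com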

Software Requirements

Because a client computer (or cloud machine instance) is used from the terminal to perform the installation instructions on this page, the following software items must first be installed and various configurations completed.

For the terminal

4.2.2 - Google Installation

How to install Google Components.

This section covers the setup and configuration of the Google command-line software and how to create a Kubernetes container (VPC network) and GKE cluster.

4.2.2.1 - Installing GKE

Installing Google GKE from the terminal.

8 May 2023 · Read time 5 min

Introduction

This section covers the creation of a GKE Kubernetes container and cluster.

We start this section with the assumption that you already have a Google Cloud account and know how to create a Kubernetes environment. Included are links to installation sections that guide you through the process of installing the Google development environment. When you finish this section, you will be ready to use the Altinity Cloud Manager to provision and manage ClickHouse clusters.

Prerequisites

Check that each of the items in the following list is complete:

Preparing your client computer to install Google GKE from the terminal includes the following sections:

After completing the preliminary setup, you are ready to set up an Altinity.Cloud ClickHouse connection to your Google environment:



Create Kubernetes Container

The networks create command creates a Kubernetes container (a VPC network) to host the GKE cluster.

For more information, check out the following Google GKE documentation.


To create a GKE Kubernetes container:

  1. Copy and paste the following command to your terminal. NOTE: This step takes a few minutes to complete.

    # Create Kubernetes 'kubernetes-1'
    gcloud compute networks create kubernetes-1 \
    --bgp-routing-mode regional \
    --subnet-mode custom
    

Figure 1 - The command to create kubernetes-1 and Google’s response.

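
You can confirm that the network was created:

    gcloud compute networks list --filter="name=kubernetes-1"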


Create a Google cluster

The clusters create command creates a GKE cluster inside the Google Kubernetes container. Altinity then uses this cluster to set up ClickHouse when you use the Connection Setup wizard.

Use your browser to review the Google Kubernetes console to see the new cluster.


To create a new GKE cluster:

  1. Copy and paste the following command to your terminal:

    # Create Cluster 'cluster-1' inside Kubernetes 'kubernetes-1'
    gcloud container clusters create cluster-1 \
    --region us-west1 \
    --node-locations us-west1-a,us-west1-b \
    --machine-type n2-standard-4 \
    --network kubernetes-1 \
    --create-subnetwork name=k-subnet-1 \
    --enable-ip-alias
    

Figure 2 - Running the clusters create command. The blue-highlighted items in the example screenshot are example values that you can alter by following Altinity recommendations.

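
Once the command finishes, you can confirm that the cluster is up and note its status:

    gcloud container clusters list
    gcloud container clusters describe cluster-1 --region us-west1 --format="value(status)"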

Credential Setup

The get-credentials command sets up your local config file so that kubectl commands are authorized to talk to the Google GKE cluster.

Once the cluster is ready, use the following get-credentials command to allow kubectl to issue commands to Kubernetes. Highlighted in blue in the terminal screenshot are the name of the cluster cluster-1, the region us-west1, and the project name, in this example any-test-gke.

NOTE: Figure 1 shows the project name as any-test-gke.


To authorize kubectl commands to access Google clusters:

  1. Copy and paste the following command to your terminal:

    gcloud container clusters get-credentials cluster-1  \
    --region   us-west1                                  \
    --project  my-project
    

Figure 3 - Running the Google get-credentials command so that kubectl commands are authorized to talk to the Google GKE environment.

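
A quick way to confirm that kubectl now points at the new cluster:

    kubectl config current-context
    kubectl get nodes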


Altinity Cloud Manager Connection Setup

After completing the preliminary setup, you are ready to set up an Altinity.Cloud ClickHouse connection to your Google environment:

4.2.2.2 - Installing Google Components

Installing from the terminal.

8 May 2023 · Read time 3 min

Installing Google components

For Linux Debian (Ubuntu) installations, run the following terminal commands on your client computer, entering the commands one at a time. For more details on each line, jump to the specific sections on this page:

# Certificates Install
sudo apt-get install apt-transport-https ca-certificates gnupg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

# Install
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update && sudo apt-get install google-cloud-cli
sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin

# Enable Services
gcloud services enable container.googleapis.com




Installation Details

This section includes example responses after each command.

Certificates Install

Certificates are installed to validate the software is from a trusted source and allow the use of software repositories.

# Certificates Install 1
sudo apt-get install apt-transport-https ca-certificates gnupg

# Response
# ----------
# Reading package lists... Done
# Building dependency tree       
# Reading state information... Done
# ca-certificates is already the newest version (20211016ubuntu0.20.04.1).
# gnupg is already the newest version (2.2.19-3ubuntu2.2).
# apt-transport-https is already the newest version (2.0.9).
# 0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.

Add Google CLI packages

Add the gcloud CLI distribution URI as a package source to your local OS sources list. (Linux Debian example.)

# Certificates Install 2
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg]      \
    https://packages.cloud.google.com/apt cloud-sdk main" |     \
    sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
 
# Response
# ----------
# deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main
# ubuntu@ip-172-31-16-238:~$

Add Keyring

Downloads the Google Cloud keyring and installs it to /usr/share/keyrings/cloud.google.gpg.

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -

# Response
# ----------
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1210  100  1210    0     0   6205      0 --:--:-- --:--:-- --:--:--  6237
OK
ubuntu@ip-172-31-16-238:~$ 

Install google-cloud-cli

Installs the Google Cloud CLI (gcloud).

sudo apt-get update && sudo apt-get install google-cloud-cli

# Response
# ----------
Hit:1 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal InRelease
Get:2 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]                                                                               
Get:3 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]                                                                             
Hit:4 https://download.docker.com/linux/ubuntu bionic InRelease                                                                                                     
Get:5 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]                                                                                           
Get:6 https://packages.cloud.google.com/apt cloud-sdk InRelease [6361 B]                                                                               
Get:7 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [2534 kB]
Get:8 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [1059 kB]
Get:9 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/universe amd64 c-n-f Metadata [24.2 kB]
Hit:10 https://packages.clickhouse.com/deb stable InRelease                      
Get:11 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages [2151 kB]
Get:12 https://packages.cloud.google.com/apt cloud-sdk/main amd64 Packages [438 kB]
Get:13 http://security.ubuntu.com/ubuntu focal-security/universe amd64 Packages [834 kB]    
Get:14 http://security.ubuntu.com/ubuntu focal-security/universe amd64 c-n-f Metadata [17.6 kB]
Fetched 7401 kB in 1s (5429 kB/s)                                  
Reading package lists... Done
W: Target Packages (main/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list.d/google-cloud-sdk.list:1 and /etc/apt/sources.list.d/google-cloud-sdk.list:2
W: Target Packages (main/binary-all/Packages) is configured multiple times in /etc/apt/sources.list.d/google-cloud-sdk.list:1 and /etc/apt/sources.list.d/google-cloud-sdk.list:2
... (the same "configured multiple times" warnings repeat for each duplicate entry in google-cloud-sdk.list)
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Suggested packages:
  google-cloud-cli-app-engine-java google-cloud-cli-app-engine-python google-cloud-cli-pubsub-emulator google-cloud-cli-bigtable-emulator google-cloud-cli-datastore-emulator kubectl
The following packages will be upgraded:
  google-cloud-cli
1 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
Need to get 154 MB of archives.
After this operation, 1508 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt cloud-sdk/main amd64 google-cloud-cli all 430.0.0-0 [154 MB]
Fetched 154 MB in 3s (56.6 MB/s)           
(Reading database ... 164975 files and directories currently installed.)
Preparing to unpack .../google-cloud-cli_430.0.0-0_all.deb ...
Unpacking google-cloud-cli (430.0.0-0) over (429.0.0-0) ...
Setting up google-cloud-cli (430.0.0-0) ...
Processing triggers for man-db (2.9.1-1) ...
ubuntu@ip-172-31-16-238:~$ 

Install gcloud auth plugin

Installs the Google GKE gcloud auth plugin, which is used to manage authentication between your Kubernetes client tools and Google Kubernetes Engine.

sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin

# Response
# ----------
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  google-cloud-sdk-gke-gcloud-auth-plugin
0 upgraded, 1 newly installed, 0 to remove and 5 not upgraded.
Need to get 3129 kB of archives.
After this operation, 11.0 MB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt cloud-sdk/main amd64 google-cloud-sdk-gke-gcloud-auth-plugin amd64 430.0.0-0 [3129 kB]
Fetched 3129 kB in 0s (7282 kB/s)                               
Selecting previously unselected package google-cloud-sdk-gke-gcloud-auth-plugin.
(Reading database ... 165046 files and directories currently installed.)
Preparing to unpack .../google-cloud-sdk-gke-gcloud-auth-plugin_430.0.0-0_amd64.deb ...
Unpacking google-cloud-sdk-gke-gcloud-auth-plugin (430.0.0-0) ...
dpkg: error processing archive /var/cache/apt/archives/google-cloud-sdk-gke-gcloud-auth-plugin_430.0.0-0_amd64.deb (--unpack):
 trying to overwrite '/usr/lib/google-cloud-sdk/.install/gke-gcloud-auth-plugin.snapshot.json', which is also in package google-cloud-cli-gke-gcloud-auth-plugin 429.0.0-0
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
Errors were encountered while processing:
 /var/cache/apt/archives/google-cloud-sdk-gke-gcloud-auth-plugin_430.0.0-0_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
ubuntu@ip-172-31-16-238:~$ 
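If the install fails with the dpkg file-conflict error shown above, the plugin is already provided under the newer google-cloud-cli package naming (the package name appears in the error message). In that case, install or confirm that package instead:

sudo apt-get install google-cloud-cli-gke-gcloud-auth-plugin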

Enable Google API services for container

Enables the Kubernetes Engine API (container.googleapis.com) for your Google Cloud project.

gcloud services enable container.googleapis.com

# Response
# ----------
# none
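To confirm that the Kubernetes Engine API is enabled, you can list the enabled services and filter for it (a quick check; the exact output formatting may vary by SDK version):

gcloud services list --enabled | grep container.googleapis.com

# Example response
# container.googleapis.com     Kubernetes Engine API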

4.2.2.3 - Logging into Google Cloud

Logging into Google Cloud from the terminal.

8 May 2023 · Read time 1 min

Introduction

For remote terminal installations, you need to log in with your Google account. Use the same account that you use to log in to the Google Cloud console.

Workflow

The workflow is:

  • You enter the gcloud auth login command from your terminal as shown in Figure 1.
  • Google provides you with a browser link.
  • You copy the Authorization code from the Google authentication page.
  • You paste the code into your terminal.
  • You are now authenticated and can check your status with the gcloud config list command.

To login to your Google account from a terminal:

  1. Log in with your Google account using the command:

    gcloud auth login
    
  2. Copy the resulting link URL that is displayed and paste it into your browser.

  3. Copy the authorization code string.

  4. Return to the terminal, and at the Enter authorization code prompt, paste in the string as shown in the following terminal screenshot.

  5. Check which account you are logged into with:

    gcloud config list
    

Figure 1 - Log into your Google account using the gcloud auth login command on your terminal.

Data
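For reference, the output of the gcloud config list command from step 5 looks similar to the following sketch (the account and project values here are examples; yours will differ):

gcloud config list

# Example response
# [core]
# account = your-name@example.com
# project = any-test-gke
#
# Your active configuration is: [default]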



4.2.2.4 - Setting the Google Project ID

How to set the Google Project ID used for tracking and billing before connecting and provisioning your Google Cloud Platform (GCP) Google Kubernetes Engine (GKE) environment.

8 May 2023 · Read time 1 min

Setting your Google Project ID

The Google Project ID determines which project is used for resource tracking and billing purposes. It must be set before you begin the connection and provisioning process.

Check your Google Project ID

From your Google console, choose your project ID from the menu in your web browser, then check that the same project is selected in the terminal as follows:

gcloud config get-value project

# Example value
any-test-gke

If your Google Project ID is NOT listed

List your available projects and set the one to use by running:

gcloud projects list

# Example response
PROJECT_ID                      NAME                   PROJECT_NUMBER
any-test-gke                    any-test-gke           1234567890

# Set project ID example
gcloud config set project any-test-gke

4.2.3 - Altinity Cloud Manager Connection Setup

Altinity Cloud Manager Connection Setup.

The Altinity Cloud Manager includes a Connection Setup wizard that displays any time a new Environment is created that has not yet been connected. This section covers the use of the Connection Setup wizard, how to install the altinitycloud-connect command line software, and how to create a ClickHouse cluster and database.

4.2.3.1 - Altinity Connect Setup Wizard

How to use the Altinity Cloud Manager Connection Wizard to provision a ClickHouse-ready environment to your Google Cloud Platform (GCP) Google Kubernetes Engine (GKE).

8 May 2023 · Read time 4 min

Introduction

This section shows you how to create a secure connection between your Google GKE environment and the Altinity Cloud Manager using the Altinity.Cloud Anywhere Connection Setup wizard on your web browser.

  • Included is the free altinitycloud-connect software, a tunneling daemon that is part of Altinity.Cloud Anywhere and allows the Altinity Cloud Manager to communicate with your GKE-hosted ClickHouse cluster.
  • An altinitycloud-connect login token is provided for the connection.
  • The provisioning step uses a deployment script to configure your GKE environment.

Prerequisites

Recommended

  • Use the watch command or k9s monitoring tool to view the progress of the altinity-cloud nodes as you start the connection and provisioning process.

Setting your Google Project ID

The Google Project ID determines which project is used for resource tracking and billing purposes. It must be set before you begin the connection and provisioning process.

Check your Google Project ID

From your Google console, choose your project ID from the menu in your web browser, then check that the same project is selected in the terminal as follows:

gcloud config get-value project

# Example value
any-test-gke

If your Google Project ID is NOT listed

List your available projects and set the one to use by running:

gcloud projects list

# Example response
PROJECT_ID                      NAME                   PROJECT_NUMBER
any-test-gke                    any-test-gke           1234567890

# Set project ID example
gcloud config set project any-test-gke

Check if altinitycloud-connect is installed

To verify you have altinitycloud-connect installed, run the following command:

altinitycloud-connect version

# Example Response
0.20.0

# Installation location
type altinitycloud-connect
altinitycloud-connect is /usr/local/bin/altinitycloud-connect

Running the Connection Setup Wizard

In the Altinity Cloud Manager, the Connection Setup wizard is located in the Environments section of the ACM. This instruction assumes that you have either:

  • Asked Altinity to provide you with an Environment name (Example: gkeanywhere).
  • You have been given Altinity.Cloud Anywhere access and you can create your own environment name.

As shown in Figure 1, the Connection Setup wizard displays this screen when the selected environment (Example: gkeanywhere, available from the top right Environment menu) does not yet have a connection between the ACM and GKE.

The Connection Setup wizard provides 3 sections: Connection Setup, Resources Configuration, and Confirmation.

After that, you proceed to Creating a ClickHouse Cluster.

1 of 3 Connection Setup

As indicated in Figure 1, step 1 directs you to download and install altinitycloud-connect. Binaries for various systems are provided.

From the Altinity Cloud Manager Connection Setup page, select the green PROCEED button.

Data
Figure 1 - The Environments > Connection Setup screen.

  1. In the terminal window, copy the string from step 2 of the Connection Setup tab (the Connect to Altinity.Cloud box) and paste it into the terminal.

From the same Altinity.Cloud Anywhere environment, copy the next string and paste it into your terminal. This begins the provisioning process in your GKE environment.

altinitycloud-connect kubernetes | kubectl apply -f -

Response The response appears similar to the following:

namespace/altinity-cloud-system created
namespace/altinity-cloud-managed-clickhouse created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
serviceaccount/cloud-connect created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
secret/cloud-connect created
deployment.apps/cloud-connect created
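Before proceeding, you can confirm that the connector deployment created above is running in the altinity-cloud-system namespace (a quick check; the pod name suffix and age will differ):

kubectl -n altinity-cloud-system get pods

# Example response
# NAME                             READY   STATUS    RESTARTS   AGE
# cloud-connect-xxxxxxxxxx-xxxxx   1/1     Running   0          1m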

2 of 3 Resources Configuration

Confirm the following settings then select the green PROCEED button:

Data
Figure 2 - The Resources Configuration screen.

  • Cloud Provider = GCP
  • Storage Classes = premium-rwo
  • Storage Classes = standard
  • Storage Classes = standard-rwo
  • Node Pools:
    • Zone = us-west1-b and Instance Type = n2-standard-4 (2)
    • Capacity = 10 GB (this is an example setting)
    • Used for: (checkmark each of these items)
      • ClickHouse (checked on)
      • Zookeeper (checked on)
      • System (checked on)


3 of 3 Confirmation

Review the JSON data. Note: If a message saying “Connection is not ready yet.” appears, you can select “Continue waiting…” until the next screen appears.

Confirm the settings, then select the green FINISH button:

Data
Figure 3 - The Confirmation screen showing the Resources Specification JSON.
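For reference, the Resources Specification JSON for the GCP settings above looks similar to the following sketch; the storage class names, instance type, zone, and capacity mirror the example values from the Resources Configuration step, and yours may differ:

 {
    "storageClasses": [
      { "name": "premium-rwo" },
      { "name": "standard" },
      { "name": "standard-rwo" }
    ],
    "nodePools": [
      {
        "for": [
          "CLICKHOUSE",
          "ZOOKEEPER",
          "SYSTEM"
        ],
        "instanceType" : "n2-standard-4",
        "zone"         : "us-west1-b",
        "capacity"     : 10
      }
    ]
}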

4.2.3.2 - Installing altinitycloud-connect

How to install altinitycloud-connect, the tunneling daemon that connects your Google Cloud Platform (GCP) Google Kubernetes Engine (GKE) environment to the Altinity Cloud Manager.

8 May 2023 · Read time 1 min

Install altinitycloud-connect

Altinity.Cloud Anywhere includes altinitycloud-connect, a tunneling daemon that creates a secure connection between your Google GKE Kubernetes environment and the Altinity Cloud Manager.

Install altinitycloud-connect from the following links:
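The download links are not reproduced in this printable view. As a general sketch for Linux, once you have the download link for your platform, the install amounts to downloading the binary, making it executable, and placing it on your PATH (the <download-link> value below is a placeholder for the link you were given):

curl -sSL -o altinitycloud-connect "<download-link>"
chmod +x altinitycloud-connect
sudo mv altinitycloud-connect /usr/local/bin/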

Version check

To verify you have altinitycloud-connect installed, run the following command:

altinitycloud-connect version

# Example Response
0.20.0

# Installation location
type altinitycloud-connect
altinitycloud-connect is /usr/local/bin/altinitycloud-connect

Command-line help

Running the altinitycloud-connect command with no parameters displays the following options.

altinitycloud-connect

Usage:
  cloud-connect [flags]
  cloud-connect [command]

Available Commands:
  completion            Generate the autocompletion script for the specified shell
  kubernetes            Print Kubernetes manifest
  kubernetes-disconnect Print Kubernetes disconnect manifest
  login                 Log in
  version               Print version

Flags:
      --ca-crt string        /path/to/custom/ca.crt (defaults to $ALTINITY_CLOUD_CACERT)
      --capability strings   List of capabilities. Supported: aws, gcp, kubernetes (includes all by defaults)
      --debug-addr string    Address to serve /metrics & /healthz on (default ":0")
  -i, --input string         /path/to/cloud-connect.pem produced by login command (default "cloud-connect.pem")
  -u, --url string           URL to connect to (defaults to $ALTINITY_CLOUD_URL, and if not specified, to https://anywhere.altinity.cloud) (default "https://anywhere.altinity.cloud")

Use "cloud-connect [command] --help" for more information about a command.

4.2.3.3 - Creating a ClickHouse Cluster

How to create a new ClickHouse database cluster using the Altinity Cloud Manager inside your Google Cloud Kubernetes Environment (GKE).

7 May 2023 · Read time 2 min

Creating a ClickHouse Cluster

This section covers how to use the Altinity Cloud Manager to create a ClickHouse cluster in your Google GKE Kubernetes environment.

To create a cluster (see Figure 1 for reference):

  1. Use the top left Environment menu to select where your Google GKE environment is located. In this example, as shown in Figure 1, the environment name is gkeanywhere.

  2. Select Clusters from the navigation menu.

  3. Select the LAUNCH CLUSTER blue button to launch the wizard.

  4. A cluster panel named test-gcp-anyw is created.

  5. When the cluster has started, the status indicators shown in green will appear. These are nodes online and checks passed.

Data
Figure 1 - The Clusters dashboard showing the ClickHouse cluster named test-gcp-anyw created in your Google GKE Kubernetes environment.


First time creating a cluster

If this is the first time you are viewing the Altinity Cloud Manager Clusters page, there will be no clusters and the screen will appear as shown in Figure 2. The following steps lead you through the screens displayed by the Launch Cluster wizard.

Data
Figure 2 - The Clusters dashboard before any ClickHouse clusters have been created in your Google GKE Kubernetes environment.

NOTE: In each of the 6 steps of the wizard, you can navigate back and forth between the previously filled-in screens by selecting the title links on the left, or by using the BACK and NEXT buttons.

To create a new ClickHouse cluster:

  1. From your web browser in the Altinity Cloud Manager, select Clusters.
  2. Select the LAUNCH CLUSTERS blue button.
  3. In step 1, the ClickHouse Setup screen, fill in the following and select the blue NEXT button:
    • Name = test-gcp-anyw (15-character limit, lower-case letters only)
    • ClickHouse Version = ALTINITY BUILDS: 22.8.15 Stable Build
    • ClickHouse User Name = admin
    • ClickHouse User Password = admin-password (example password) then select NEXT.
  4. In step 2, the Resources Configuration screen, fill in the following then select NEXT button:
    • Node Type = n2-standard-4 (CPU x4, RAM 13 GB)
    • Node Storage = 50 GB
    • Volume Type = premium-rwo
    • Number of Shards = 1 then select NEXT.
  5. In step 3, the High Availability Configuration screen, fill in the following then select NEXT:
    • Number of Replicas = 1
    • Zookeeper Configuration = Dedicated
    • Zookeeper Node Type = default
    • Backup Schedule = Monthly, Day of Week/Month = 1, Time (GMT) = 05:00 AM, Backups to Keep = 7
    • Number of Backups to keep = 0 (leave blank) then select NEXT.
  6. In step 4, Connection Configuration screen, fill in the following then select NEXT:
    • Endpoint = test-gcp-anyw.<your-environment-name>.altinity.cloud (autofilled)
    • Use TLS = Checked
    • Load Balancer Type = Altinity Edge Ingress
    • Protocols: Binary Protocol (port:9440) - is checked ON
    • Protocols: HTTP Protocol (port:8443) - is checked ON
    • Datadog integration = disabled (greyed out, ask Altinity to enable)
    • IP restrictions = OFF (Enabled is unchecked)
  7. In step 5, Uptime Schedule screen, select ALWAYS ON then select NEXT.
  8. In step 6, the final screen Review & Launch, select the green LAUNCH button.

Your new ClickHouse cluster will start building and will complete with green boxes under your cluster name test-gcp-anyw (you can also watch the pods start from your terminal with the command shown below):

  • 1 / 1 nodes online
  • Health: 6/6 checks passed
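If you want to watch the cluster pods start from your terminal while the dashboard updates, you can run the same watch command used elsewhere in this guide:

watch kubectl -n altinity-cloud-managed-clickhouse get all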

4.2.3.4 - Creating a ClickHouse Database

How to use the Altinity Cloud Manager (ACM) to create a ClickHouse database on a Google Kubernetes (GKE) cluster.

7 May 2023 · Read time 3 min

Introduction

In this section you will create a ClickHouse database and tables on your Google GKE-cluster using the ACM. You will then use your cluster’s Explore menu in the ACM to run the database-creation scripts and queries. Finally, you will use the clickhouse-client command line tool from your local terminal using the Connection Details string to test data-retrieval queries.

Creating a ClickHouse Database

In the following steps you will use the cluster's EXPLORE menu and the Query tab.

Data
Figure 1 - Using the Cluster > EXPLORE > Query tab to create and query ClickHouse databases and tables.


To create a new database on your Altinity.Cloud Anywhere cluster from the ACM:

  1. Login to the ACM and select Clusters, then select EXPLORE on your cluster.
  2. In the Query text box, enter the following CREATE TABLE SQL query:
CREATE TABLE IF NOT EXISTS events_local ON CLUSTER '{cluster}' (
    event_date  Date,
    event_type  Int32,
    article_id  Int32,
    title       String
) ENGINE = ReplicatedMergeTree('/clickhouse/{cluster}/tables/{shard}/{database}/{table}', '{replica}')
    PARTITION BY toYYYYMM(event_date)
    ORDER BY (event_type, article_id);
  3. Create a second table:
CREATE TABLE events ON CLUSTER '{cluster}' AS events_local
   ENGINE = Distributed('{cluster}', default, events_local, rand())
  4. Add some data with this query:
INSERT INTO events VALUES(today(), 1, 13, 'Example');
  5. List the data you just entered:
SELECT * FROM events;

# Response
test-anywhere-6.johndoetest-a123.altinity.cloud:8443 (query time: 0.196s)
┌─event_date─┬─event_type─┬─article_id─┬─title───┐
│ 2023-03-24 │          1 │         13 │ Example │
└────────────┴────────────┴────────────┴─────────┘
  6. Show all the tables:
show tables

# Response
test-anywhere-6.johndoetest-a123.altinity.cloud:8443 (query time: 0.275s)
┌─name─────────┐
│ events       │
│ events_local │
└──────────────┘

Testing ClickHouse on your local terminal

This section shows you how to use your local computer terminal to log in to the ClickHouse cluster that you created in the Altinity Cloud Manager.

Prerequisite

Connection String

The connection string comes from your cluster's (Example: test-gcp-anyw) Connection Details link. The Copy/Paste for client connections string, highlighted in red in Figure 2, is used in your terminal (you supply the password; Example: admin-password).

Data
Figure 2 - The cluster Connection Details dialog showing the Copy/Paste client connection string.
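As a sketch of what the copied client connection string looks like when run from a local terminal, the following uses the example endpoint, port, and user values from earlier in this guide; the exact string to use is the one shown in your cluster's Connection Details dialog:

clickhouse-client --host=test-gcp-anyw.<your-environment-name>.altinity.cloud \
  --port=9440 --secure --user=admin --password <your-password>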

  1. Find your pod name:
kubectl -n altinity-cloud-managed-clickhouse get all

# Response
NAME                                               READY   STATUS    RESTARTS        AGE
pod/chi-test-anywhere-6-johndoe-anywhere-6-0-0-0   2/2     Running   8 (3h25m ago)   2d17h
  2. On your command line terminal, log in to that pod using the name you got from step 1:
kubectl -n altinity-cloud-managed-clickhouse exec -it pod/chi-test-anywhere-6-johndoe-anywhere-6-0-0-0 -- bash

# Response
Defaulted container "clickhouse-pod" out of: clickhouse-pod, clickhouse-backup
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$ 
  3. Log in to your ClickHouse database using the clickhouse-client command to get the :) happy face prompt:
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$ 
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$ clickhouse-client

# Response
<jemalloc>: MADV_DONTNEED does not work (memset will be used instead)
<jemalloc>: (This is the expected behaviour if you are running under QEMU)
ClickHouse client version 22.8.13.21.altinitystable (altinity build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 22.8.13 revision 54460.

test-anywhere-6 :) 
  4. Run a show tables SQL command:
test-anywhere-6 :) show tables

# Response

SHOW TABLES

Query id: da01133d-0130-4b98-9090-4ebc6fa4b568

┌─name─────────┐
│ events       │
│ events_local │
└──────────────┘

2 rows in set. Elapsed: 0.013 sec.  
  5. Run an SQL query to show the data in the events table:
test-anywhere-6 :) SELECT * FROM events;

# Response

SELECT * 
FROM events

Query id: 00fef876-e9b0-44b1-b768-9e662eda0483

┌─event_date─┬─event_type─┬─article_id─┬─title───┐
│ 2023-03-24 │          1 │         13 │ Example │
└────────────┴────────────┴────────────┴─────────┘

1 row in set. Elapsed: 0.023 sec. 

test-anywhere-6 :) 


4.2.4 - Appendix

Reference section for Google-related resources.

4.2.4.1 - Google Web Console Pages

How to install Altinity.Cloud Anywhere on Google Cloud Platform Google Kubernetes Engine (GKE).

4 May 2023 · Read time 2 min

Google Project ID

The Google Console Home is where a NEW PROJECT is created.

The screenshot of the Google Console Welcome page is shown in Figure 1. Points of interest are marked in red.

  • The menu at the top showing proj-anywhere-gke is where you can switch to different projects
  • Clicking on the proj-anywhere-gke link is where you create a new project or select other projects
  • The Create a GKE cluster is the web console method of creating a Google GKE cluster
  • The Billing button is where you must set up a credit card for your project
  • The Kubernetes Engine section is where the terminal-created Kubernetes network will appear
  • The Compute Engine section is where the nodes in your Altinity-created ClickHouse cluster appear.

Data
Figure 1 - Google GKE Kubernetes Console web page.

A Google Project ID is first created in your Google Console. Create a Google Project with the NEW PROJECT button, and set up Billing.

Once you have a project name, you can select it in the terminal and complete the steps for creating a Kubernetes network and starting a cluster that you will then connect to Altinity.Cloud.



Kubernetes Engine

From the home page, when you select the Kubernetes Engine button, you will see a page that displays the cluster that will be created from the terminal after following the instructions on this page.

Data
Figure 2 - The Google Kubernetes Engine page shows the installed instance of the Altinity-installed Kubernetes.



Compute Engine

From the home page, when you select the Compute Engine button, you will see a page that displays all the nodes created from the terminal. The names match what you see in the k9s monitoring windows that view the Altinity and ClickHouse Kubernetes namespaces.

Data
Figure 3 - The Google Compute Engine page shows the installed instances of the Altinity ClickHouse Nodes.

4.2.4.2 - kubectl Commands

Installing from the terminal.

8 May 2023 · Read time 8 min

Installing kubectl

To install kubectl according to the Google GKE instructions, use gcloud components (or apt-get if the Google Cloud CLI was installed as a Debian package):

gcloud components install kubectl
sudo apt-get install kubectl

Use curl to install kubectl according to the Kubernetes website:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

kubectl cluster-info dump

This is an expanded listing produced by the cluster-info dump command (many hundreds of lines).

kubectl cluster-info dump

# Example response for a very long cluster information dump
# -----------------------------------------------------------
# {
#     "kind": "NodeList",
#     "apiVersion": "v1",
#     "metadata": {
#         "resourceVersion": "8685921"
#    },
#     "items": [
#         {
#             "metadata": {
#                 "name": "gke-cluster-1-default-pool-36e9706c-0fxb",
#                 "uid": "0b89edcc-d46b-4783-84f9-a7672f0bd922",
# ...
# ... several hundred lines
# ...
# 13. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
#  (version 21.8.10.1.altinitystable (altinity build))
# 2023.04.14 06:24:31.919921 [ 115 ] {} <Debug> DNSResolver: Updated DNS cache
# 2023.04.14 06:24:35.915268 [ 54 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 97.44 GiB.
# ==== END logs for container clickhouse of pod default/chi-first-first-1-1-0 ====

kubectl exec - Enter a ClickHouse pod

To enter the pod and run the ClickHouse client directly, first locate the pod name using watch or k9s, or find it from the ACM.

kubectl exec -it chi-first-first-0-0-0 -- bash

# You are now inside the pod, run a list command:
root@chi-first-first-0-0-0:/# ls

## bin  boot  cloud-connect.pem  dev  docker-entrypoint-initdb.d  entrypoint.sh  etc  home  kubectl  kubectl.sha256  lib  lib32  lib64  libx32  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

# To exit out of the pod
exit
# ubuntu@ip-123-45-67-890:~$

kubectl get ns

This lists the currently registered Kubernetes namespaces in the current cluster (cluster-1) using the kubectl get ns command.

Data
Figure 3 - Running the kubectl get ns command to list all of the namespaces.
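For reference, the text output of the kubectl get ns command looks similar to the following once the Altinity namespaces have been created (ages will differ):

kubectl get ns

# Example response
# NAME                                STATUS   AGE
# altinity-cloud-managed-clickhouse   Active   12d
# altinity-cloud-system               Active   12d
# default                             Active   12d
# kube-node-lease                     Active   12d
# kube-public                         Active   12d
# kube-system                         Active   12d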


kubectl get pod

List the pods in a ClickHouse cluster along with the node each pod runs on.

kubectl get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName

# Example response to list pods
# ------------------------------
# NAME                    STATUS    NODE
# chi-first-first-0-0-0   Running   gke-cluster-1-default-pool-36e9706c-xj7p
# chi-first-first-0-1-0   Running   gke-cluster-1-default-pool-aa3988ca-nth7
# chi-first-first-1-0-0   Running   gke-cluster-1-default-pool-36e9706c-0fxb
# chi-first-first-1-1-0   Running   gke-cluster-1-default-pool-36e9706c-wrbm

kubectl get pvc

List the storage volumes.

kubectl get pvc

# Example response to list volumes
# --------------------------------
# NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
# pd-ssd-chi-first-first-0-0-0   Bound    pvc-5bc72a03-2ae5-41d1-9e93-92b92829c435   100Gi      RWO            premium-rwo    8d
# pd-ssd-chi-first-first-0-1-0   Bound    pvc-ec8f143d-c51d-4125-938a-76ad103fb7f2   100Gi      RWO            premium-rwo    8d
# pd-ssd-chi-first-first-1-0-0   Bound    pvc-014d010b-d282-4b47-91ef-b332bd381a28   100Gi      RWO            premium-rwo    8d
# pd-ssd-chi-first-first-1-1-0   Bound    pvc-c11d5819-1935-4b6f-ad54-60fa196fe013   100Gi      RWO            premium-rwo    8d

kubectl get all -n zoo1ns

To list the Zookeeper nodes:

kubectl get all -n zoo1ns

# Example response to list zookeeper nodes and services
# -----------------------------------------------------
# NAME              READY   STATUS    RESTARTS   AGE
# pod/zookeeper-0   1/1     Running   0          8d
# 
# NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
# service/zookeeper    ClusterIP   10.72.131.73   <none>        2181/TCP,7000/TCP   8d
# service/zookeepers   ClusterIP   None           <none>        2888/TCP,3888/TCP   8d
# 
# NAME                         READY   AGE
# statefulset.apps/zookeeper   1/1     8d

kubectl describe storageclass

List Storage Classes. (See ACM 》 Resources Configuration 》 Storage Classes )

kubectl describe storageclass

# Example response to list all the storage
# ----------------------------------------
# Name:                  premium-rwo
# IsDefaultClass:        No
# Annotations:           components.gke.io/component-name=pdcsi,components.gke.io/component-version=0.13.7,components.gke.io/layer=addon
# Provisioner:           pd.csi.storage.gke.io
# Parameters:            type=pd-ssd
# AllowVolumeExpansion:  True
# MountOptions:          <none>
# ReclaimPolicy:         Delete
# VolumeBindingMode:     WaitForFirstConsumer
# Events:                <none>
# 
# 
# Name:                  standard
# IsDefaultClass:        No
# Annotations:           components.gke.io/layer=addon,storageclass.kubernetes.io/is-default-class=false
# Provisioner:           kubernetes.io/gce-pd
# Parameters:            type=pd-standard
# AllowVolumeExpansion:  True
# MountOptions:          <none>
# ReclaimPolicy:         Delete
# VolumeBindingMode:     Immediate
# Events:                <none>
# 
# 
# Name:                  standard-rwo
# IsDefaultClass:        Yes
# Annotations:           components.gke.io/layer=addon,storageclass.kubernetes.io/is-default-class=true
# Provisioner:           pd.csi.storage.gke.io
# Parameters:            type=pd-balanced
# AllowVolumeExpansion:  True
# MountOptions:          <none>
# ReclaimPolicy:         Delete
# VolumeBindingMode:     WaitForFirstConsumer
# Events:                <none>

kubectl config view

To verify the config file is updated with the correct credentials, review it by running the kubectl config view command.

Data
Figure 1 - Running the kubectl config view command to verify that the config file is updated with credentials.


gcloud container clusters list

This lists the current container information with the gcloud container clusters list command.

Data
Figure 2 - Running the gcloud container clusters list command.


kubectl cluster-info

Run the kubectl cluster-info command to list the Kubernetes control plane and services. At this point the Google setup is complete. Now you can use Altinity.Cloud Anywhere to connect Google GKE to the Altinity Cloud Manager.

Data
Figure 4 - Running the kubectl cluster-info command.

kubectl version

The computer or cloud compute instance that you use to communicate with Google Cloud requires installation of the Google Cloud CLI and kubectl.

The following list of software needs to be installed:

  • kubectl (check with kubectl get namespaces)

The version checks below will also show which items you do not yet have installed.

Checking Versions

To make sure the prerequisites have been met, check the versions of the installed software.

# Version checks
kubectl version --short        # v1.27.1
cat /etc/os-release            # Ubuntu 20.04
altinitycloud-connect version  # Altinity 0.20.0
gcloud version                 # Google Cloud SDK 429.0.0
kubectl version --short
# Client Version: v1.26.3
# Kustomize Version: v4.5.7
# Unable to connect to the server: net/http: TLS handshake timeout

# Another variation to display version
kubectl version  --output=yaml

4.2.4.3 - Miscellaneous terminal commands

How to install Altinity.Cloud Anywhere on Google Cloud Platform Google Kubernetes Engine (GKE).

8 May 2023 · Read time 1 min

Check the OS version

Before doing terminal software installations, you need to know which operating system is being used so you can choose the correct binaries.

In the Google Cloud console terminal (or your SSH terminal), check the OS version:

cat /etc/os-release

# Response from Google console
-----------
# PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
# NAME="Debian GNU/Linux"
# VERSION_ID="11"
# VERSION="11 (bullseye)"
# VERSION_CODENAME=bullseye
# ID=debian
# HOME_URL="https://www.debian.org/"
# SUPPORT_URL="https://www.debian.org/support"
# BUG_REPORT_URL="https://bugs.debian.org/"

# Response from Ubuntu SSH Terminal
-----------
# PRETTY_NAME="Ubuntu 22.04.2 LTS"
# NAME="Ubuntu"
# VERSION_ID="22.04"
# VERSION="22.04.2 LTS (Jammy Jellyfish)"
# VERSION_CODENAME=jammy
# ID=ubuntu
# ID_LIKE=debian
# HOME_URL="https://www.ubuntu.com/"
# SUPPORT_URL="https://help.ubuntu.com/"
# BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
# PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
# UBUNTU_CODENAME=jammy

altinitycloud-connect kubernetes

Displays the Kubernetes roles and resources that kubectl apply will use. Run the following command and save the output from your terminal as a text file to review.

altinitycloud-connect kubernetes

# Response example
---------------
# apiVersion: v1
# kind: Namespace
# metadata:
#   name: altinity-cloud-system
# ---
# apiVersion: v1
# kind: Namespace
# metadata:
#   name: altinity-cloud-managed-clickhouse
# ---
# apiVersion: rbac.authorization.k8s.io/v1
# kind: ClusterRole
#
# ... several more lines
# 
#         name: cloud-connect
#         volumeMounts:
#         - mountPath: /etc/cloud-connect
#           name: secret
#       serviceAccountName: cloud-connect
#       volumes:
#       - name: secret
#         secret:
#           secretName: cloud-connect
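To keep a copy of the manifest for review before applying it, you can redirect the output to a file (the file name here is arbitrary):

altinitycloud-connect kubernetes > altinity-cloud-connect-manifest.yaml
less altinity-cloud-connect-manifest.yaml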

Watch - Real-Time Monitoring

Use a watch command when you want to monitor node activity in the altinity-cloud namespaces in real time. This is useful for installations that take a long time, when you want to watch the provisioning process.

Run the watch commands on the two altinity-cloud prefixed namespaces using the following commands:

watch kubectl -n altinity-cloud-system             get all
watch kubectl -n altinity-cloud-managed-clickhouse get all

# Example response
# ---------------------------

Every 2.0s: kubectl -n altinity-cloud-system get all                     john.doe-MacBook-Pro.local: Sun Mar 19 23:03:18 2023

NAME                                      READY   STATUS    RESTARTS   AGE
pod/cloud-connect-d6ff8499f-bkc5k         1/1     Running   0          10h
pod/crtd-665fd5cb85-wqkkk                 1/1     Running   0          10h
pod/edge-proxy-66d44f7465-t9446           2/2     Running   0          10h
pod/grafana-5b466574d-vvt9p               1/1     Running   0          10h
pod/kube-state-metrics-58d86c747c-7hj79   1/1     Running   0          10h
pod/node-exporter-762b5                   1/1     Running   0          10h
pod/prometheus-0                          1/1     Running   0          10h
pod/statuscheck-f7c9b4d98-2jlt6           1/1     Running   0          10h

NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                       AGE
service/edge-proxy            ClusterIP      10.109.2.17      <none>        443/TCP,8443/TCP,9440/TCP                     10h
service/edge-proxy-lb         LoadBalancer   10.100.216.192   <pending>     443:31873/TCP,8443:32612/TCP,9440:31596/TCP   10h
service/grafana               ClusterIP      10.108.24.91     <none>        3000/TCP                                      10h
service/prometheus            ClusterIP      10.102.103.141   <none>        9090/TCP                                      10h
service/prometheus-headless   ClusterIP      None             <none>        9090/TCP                                      10h
service/statuscheck           ClusterIP      10.101.224.247   <none>        80/TCP                                        10h

NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/node-exporter   1         1         1       1            1           <none>          10h

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cloud-connect        1/1     1            1           10h
deployment.apps/crtd                 1/1     1            1           10h
deployment.apps/edge-proxy           1/1     1            1           10h
deployment.apps/grafana              1/1     1            1           10h
deployment.apps/kube-state-metrics   1/1     1            1           10h
deployment.apps/statuscheck          1/1     1            1           10h

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/cloud-connect-d6ff8499f         1         1         1       10h
replicaset.apps/crtd-665fd5cb85                 1         1         1       10h
replicaset.apps/edge-proxy-66d44f7465           1         1         1       10h
replicaset.apps/grafana-5b466574d               1         1         1       10h
replicaset.apps/grafana-6478f89b7c              0         0         0       10h
replicaset.apps/kube-state-metrics-58d86c747c   1         1         1       10h
replicaset.apps/statuscheck-f7c9b4d98           1         1         1       10h

NAME                          READY   AGE
statefulset.apps/prometheus   1/1     10h

Figure 1 - The watch monitoring window for the namespace altinity-cloud-system, listing each node name, IP address, and run status.

watch kubectl -n altinity-cloud-managed-clickhouse get all

# Example response
# ---------------------------

Every 2.0s: kubectl -n altinity-cloud-managed-clickhouse get all        john.doe-MacBook-Pro.local: Mon Mar 20 00:14:44 2023

NAME                                            READY   STATUS    RESTARTS   AGE
pod/chi-test-anywhere-6-test-anywhere-6-0-0-0   2/2     Running   0          11h
pod/clickhouse-operator-996785fc-rgfvl          2/2     Running   0          11h
pod/zookeeper-5244-0                            1/1     Running   0          11h

NAME                                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/chi-test-anywhere-6-test-anywhere-6-0-0   ClusterIP   10.98.202.85    <none>        8123/TCP,9000/TCP,9009/TCP   11h
service/clickhouse-operator-metrics               ClusterIP   10.109.90.202   <none>        8888/TCP                     11h
service/clickhouse-test-anywhere-6                ClusterIP   10.100.48.57    <none>        8443/TCP,9440/TCP            11h
service/zookeeper-5244                            ClusterIP   10.101.71.82    <none>        2181/TCP,7000/TCP            11h
service/zookeepers-5244                           ClusterIP   None            <none>        2888/TCP,3888/TCP            11h

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/clickhouse-operator   1/1     1            1           11h

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/clickhouse-operator-996785fc   1         1         1       11h

NAME                                                       READY   AGE
statefulset.apps/chi-test-anywhere-6-test-anywhere-6-0-0   1/1     11h
statefulset.apps/zookeeper-5244                            1/1     11h

Figure 2 - The watch monitoring window for the namespace altinity-cloud-managed-clickhouse, listing each node name, IP address, and the run status.

K9S Real-Time Monitoring

K9s is similar to the watch command for monitoring nodes in real time, but it displays in color in a smaller interactive window. K9s is a free utility that lets you monitor the progress of a provisioning installation in real time.

To open monitoring windows for the altinity-cloud namespaces, open a new terminal instance for each and run the k9s command:

k9s -n altinity-cloud-system
k9s -n altinity-cloud-managed-clickhouse

Data
Figure 3 - The K9S monitoring windows for the two namespaces altinity-cloud-system and altinity-cloud-managed-clickhouse listing each node name, IP address, and the run status.

4.2.4.4 - Maintenance Tasks

How to install Altinity.Cloud Anywhere on Google Cloud Platform Google Kubernetes Engine (GKE).

8 May 2023 · Read time 4 min

Introduction

This section lists administration tasks for rescaling a GKE-hosted ClickHouse cluster, resetting your Anywhere environment, and deleting your Anywhere and GKE clusters.

How to rescale a GKE-hosted ClickHouse Cluster using the Altinity Cloud Manager

For detailed instructions with screenshots on rescaling your GKE cluster using the Altinity Cloud Manager cluster tools, follow the instructions on this page.

Use the Altinity Cloud Manager menu in your cluster: Actions 》Rescale to change:

  • CPU
  • Node Storage size
  • Volumes
  • Number of Shards
  • Number of Replicas

To rescale your GKE cluster using the Altinity Cloud Manager cluster tools:

  1. Select Clusters from the ACM left pane then select a running cluster to rescale.
  2. Select the menu ACTIONS 》Rescale item.
  3. In the Rescale Cluster window, adjust the following settings as needed in the column labelled Desired:
  • Number of Shards (Example: 2)
  • Number of Replicas (Example: 2)
  • Node Type (Example: n2d-standard-32)
  • Node Storage (GB) > (Example: 50)
  • Number of Volumes > (Example: 2)
  4. Select OK, then CONFIRM at the Rescale Confirmation window.
  5. Confirm that the new values appear in your cluster dashboard panel; you can also verify from the terminal, as shown after this list. NOTE: Cluster Node Storage size may not be decreased, only increased by at least 10%.
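To verify the rescale from your terminal, you can list the managed ClickHouse pods and watch the new shard and replica pods appear (pod names depend on your cluster name):

kubectl -n altinity-cloud-managed-clickhouse get pods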


How to reset your Anywhere environment

Resetting your Altinity.Cloud Anywhere ClickHouse cluster from the ACM and your GKE environment lets you create a new connection.

In the Environment section, selecting your Anywhere environment name displays the Connection Setup wizard.

Use the ACM Reset Anywhere function, then run the terminal commands to delete the ClickHouse services and namespaces (covered in the next section).

  1. In the ACM, select Environments from the left-hand navigation pane.
  2. From the environment menu located beside your login name at the top right of the ACM, select your environment name.
  3. In the ACTION menu, select Reset Anywhere.

The result is that you will see the Anywhere Connection Setup screen and provisioning wizard that shows you the connection string to copy and paste to deploy a new Anywhere environment.

How to delete your Anywhere cluster

This section covers how to delete your GKE cluster using the ACM’s Reset Anywhere function, then removing the altinity-cloud namespaces from your GKE environment.

Check your namespaces to confirm that the altinity-cloud namespaces are present.

kubectl get ns

NAME                                STATUS   AGE
altinity-cloud-managed-clickhouse   Active   12d
altinity-cloud-system               Active   12d
default                             Active   12d
kube-node-lease                     Active   12d
kube-public                         Active   12d
kube-system                         Active   12d

To delete ClickHouse services and altinity-cloud namespaces, run the following commands in sequence:

kubectl delete chi --all -n altinity-cloud-managed-clickhouse
kubectl delete ns altinity-cloud-managed-clickhouse
kubectl delete ns altinity-cloud-system

Check the namespace to verify the two with the prefix altinity-cloud are deleted.

kubectl get ns
NAME                                STATUS   AGE
default                             Active   12d
kube-node-lease                     Active   12d
kube-public                         Active   12d
kube-system                         Active   12d


How to delete your GKE Cluster

From your gcloud terminal, delete your GKE cluster as follows:

gcloud container clusters list

# Example response
# NAME       LOCATION  MASTER_VERSION    MASTER_IP       MACHINE_TYPE   NODE_VERSION      NUM_NODES  STATUS
# cluster-1  us-west1  1.24.10-gke.2300  35.230.115.228  n2-standard-4  1.24.10-gke.2300  6          RUNNING

gcloud container clusters delete cluster-1 --zone=us-west1



4.2.4.5 - Overview of the Rescale operation

How to remotely rescale an on-prem ClickHouse cluster using the Altinity Cloud Manager.

Overview of the Rescale operation

This page shows how to use the Altinity Cloud Manager with an Altinity.Cloud Anywhere installation to remotely rescale a customer's on-prem cluster.

Select a cluster, then use the Actions > Rescale menu to bring up the Rescale Cluster window. In the Desired Cluster Size settings, change the Number of Shards from 1 to 2, then press OK, then CONFIRM.

Altinity Anywhere Overview
Figure 1 - Selecting Actions > Rescale from the cluster to modify.


Altinity Anywhere Overview
Figure 2 - Changing the number of Shards from 1 to 2.


Altinity Anywhere Overview
Figure 3 - Rescale confirmation.


Altinity Anywhere Overview
Figure 4 - Nodes in the process of rescaling.

Verify rescale from the terminal

The Ubuntu host where the Kubernetes installation runs is used for the various commands that verify the changes made from the ACM.

The nodes online pill box will show grey with 2/4 nodes online, then after several minutes turn green, showing 4/4 nodes online. If you do not see the grey 2/4 nodes online, and the nodes online box is green and shows 2/2 nodes online, try the rescale operation again.


Use the command kubectl -n altinity-cloud-managed-clickhouse... to show the Altinity clusters before the rescale operation.

Altinity Anywhere Overview
Figure 5 - Kubernetes command kubectl -n <Altinity cluster name> running on-prem, which is also managed by the ACM.


Altinity Anywhere Overview
Figure 6 - The newly added nodes …-demo-1-0-0 after the rescale operation are now listed, showing Pending.

Ubuntu command kubectl get nodes before the rescale operation.

Altinity Anywhere Overview
Figure 7 - Kubernetes command kubectl get nodes shows all the nodes on the Altinity ClickHouse cluster.


Altinity Anywhere Overview
Figure 8 - The pending node is added as 192.168.149.238 and is spinning up.


The newly spun up shard in cluster-x now reads 4/4 nodes online.

Altinity Clusters ACM
Figure 9 - The Altinity Cloud Manager showing the remotely managed cluster-y with 4/4 nodes online.

4.3 - Minikube Installation (for test or development only)

How to install Altinity.Cloud Anywhere on Minikube. For testing and development use only.

24 April 2023 · Read time 30 min

Overview - Minikube Installation (for testing and development use only)

This guide covers installing Altinity.Cloud Anywhere on Minikube in your own environment, using Altinity.Cloud Anywhere to do the provisioning. Any computer or cloud instance that can run Kubernetes and Minikube will work. Note that while Minikube is OK to use for development purposes, it should not be used for production.

These instructions have been tested on:

  • Ubuntu 22.04 server
  • Windows 10 with WSL2 Ubuntu 20.04
  • VMWare running Ubuntu on Intel & M1 ARM
  • M1 Silicon Mac running Monterey (v12.6.3) and Ventura (v13.3.1)
  • Intel Mac running Big Sur (v11.7.4)

Requirements

The following Altinity.Cloud service subscriptions are needed:

Server requirements

Minikube needs a minimum of 2 processors. Allocate RAM and disk space to accommodate your clusters. Check the values by running the terminal commands (Example: lscpu).

  • Minimum of 2 CPU (lscpu, or sysctl -a [for Mac])
  • Minimum 8 GB RAM (grep MemTotal /proc/meminfo )
  • 30 GB disk space ( df -h)

The following software must first be installed on your Minikube installation:

The following software is installed as part of the provisioning:

Installation

From a terminal, check the versions of all the installed software by running each command in turn.

Checking versions

To make sure you have the required software installed, check the versions for each using the following commands:

docker --version
docker-machine --version
docker-compose --version
minikube version
kubectl version -o json
watch --version
k9s version 
altinitycloud-connect version

Starting Minikube

From the terminal, run the command:

minikube start

Linux ARM Ubuntu 22.04

This is Minikube's response from an Ubuntu 22.04 server running on ARM:

# minikube start

😄  minikube v1.30.1 on Ubuntu 22.04 (arm64)
✨  Using the qemu2 driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing qemu2 VM for "minikube" ...
🐳  Preparing Kubernetes v1.26.3 on Docker 20.10.23 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Linux ARM Apple Macintosh M1

This is Minikube's response from a Mac running Ventura:

# minikube start

😄  minikube v1.29.0 on Darwin 13.2.1 (arm64)
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🏃  Updating the running docker "minikube" container ...
🐳  Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Windows Intel (Ubuntu on WSL2)

This is Minikube's response from a Microsoft Windows system running Ubuntu:

# minikube start

😄  minikube v1.30.1 on Ubuntu 20.04
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.26.3 on Docker 23.0.2 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Checking Minikube’s status

If you are not sure if Minikube is already running, run a status check as follows:

minikube status
# minikube
# type: Control Plane
# host: Running
# kubelet: Running
# apiserver: Running
# kubeconfig: Configured

Checking the Kubernetes kubectl command

This step checks that the kubectl command works on your Minikube host. Running the kubectl get ns command lists the namespaces that are currently running on your Minikube server.

Run the kubectl namespace list command:

kubectl get ns

# Example response:
# -------------------
# NAME              STATUS   AGE
# default           Active   15d
# kube-node-lease   Active   15d
# kube-public       Active   15d
# kube-system       Active   15d

Altinity Connection Setup

To start the Connection Setup:

  1. From the Altinity Cloud Manager, select the Environments section, then make sure you are in the correct environment by selecting it from the menu located at the top right of the screen.

  2. In the Connection Setup screen shown in Figure 1, select and copy all the text in the step 2 Connect to Altinity.Cloud text box.

  3. In your Minikube terminal, copy and paste the text and press the return key. A command prompt appears immediately.

altinitycloud-connect login --token=
eyJhbGciOiJSUzI1Ni        808 characters           Rpbml0eS5jbG91ZCIsImV4cCI6MTY3
OTMzNzMwMywic3ViIjoicm1rYzIzdGVzdC1kNDgxIn0.tODyYF8WnTSx6mbAZA5uwW176... cont.

Example altinitycloud-connect login token string from the Altinity Cloud Manager Connection Setup wizard step 2, Connect to Altinity.Cloud.

Starting the Provisioning

From Figure 1, in the Connection Setup screen step 3, Deploy connector to your Kubernetes cluster, copy the string and paste it into your terminal. This begins the provisioning process inside your Minikube Kubernetes environment.

altinitycloud-connect kubernetes | kubectl apply -f -

Response The response appears similar to the following:

namespace/altinity-cloud-system created
namespace/altinity-cloud-managed-clickhouse created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
serviceaccount/cloud-connect created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
secret/cloud-connect created
deployment.apps/cloud-connect created

1 of 3 Connection Setup

From the Altinity Cloud Manager Connection Setup page, select the green PROCEED button.

Data
Figure 1 - The Environments > Connection Setup screen.

2 of 3 Resources Configuration

Confirm the following settings, then select the green PROCEED button:

Data
Figure 2 - The Resources Configuration screen.


NOTE: In Figure 2, if the table for the Node Pools section does not include a row for your Minikube server, select the ADD NODE POOL button and add the Zone name and Instance Type name and Capacity, and check each of the Used For checkboxes as shown.

  • Cloud Provider = Not Specified
  • Storage Classes = Standard
  • Node Pools:
    • Zone = minikube-zone-a
    • Instance Type = minikube-node
    • Capacity = 10 GB (this is an example setting)
    • Used for: (checkmark each of these items)
      • ClickHouse (checked on)
      • Zookeeper (checked on)
      • System (checked on)
    • Tolerations = dedicated=clickhouse:NoSchedule

3 of 3 Confirmation

In Figure 3, the Confirmation tab displays the Resources Specifications text box. Review these values and correct them if necessary by selecting the Resources Configuration tab to make changes.

To complete the Connection Setup wizard:

  1. Select the green Finish button.

    • A progress bar and message appear: “Connection is not ready yet.”
  2. Select “Continue waiting…” until the next screen appears.

Data
Figure 3 - The Confirmation screen showing the Resources Specification JSON and the Connection is not ready yet message that appears until the connection to your Minikube is established.


In the Confirmation screen shown in Figure 3, an example Resources Specification JSON string appears with the names of the storageClasses, nodePools and instanceType, zone and capacity value.

 {
    "storageClasses": [
      {
        "name": "standard"
      }
    ],
    "nodePools": [
      {
        "for": [
          "CLICKHOUSE",
          "ZOOKEEPER",
          "SYSTEM"
        ],
        "instanceType" : "minikube-node",
        "zone"         : "minikube-zone-a",
        "capacity"     : 10
      } 
    ]
}

Node Registration

The following step registers labels on your Minikube node so that the ACM can find the ClickHouse Kubernetes server that Altinity.Cloud just provisioned for you. Refer to the Resources Specification JSON for where the instanceType value minikube-node and the zone name minikube-zone-a are set.

Run the following string from your Kubernetes host terminal.

kubectl --context=minikube label nodes minikube \
  node.kubernetes.io/instance-type=minikube-node \
  topology.kubernetes.io/zone=minikube-zone-a
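To confirm that the labels were applied, you can display them on the minikube node (a quick check; the label keys match the ones set above):

kubectl --context=minikube get node minikube \
  -L node.kubernetes.io/instance-type -L topology.kubernetes.io/zone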

Optional Watch Commands

To monitor in real time the progress of a provisioning installation, run the watch commands on the two altinity-cloud prefixed namespaces.

Running Watch command 1 of 2

To monitor the progress of the provisioning, use the watch or k9s command utility to monitor altinity-cloud-system. The display updates every 2 seconds.

watch kubectl -n altinity-cloud-system get all

Response The result appears similar to the following display:

Every 2.0s: kubectl -n altinity-cloud-system get all                     john.doe-yourcomputer.local: Sun Mar 19 23:03:18 2023

NAME                                      READY   STATUS    RESTARTS   AGE
pod/cloud-connect-d6ff8499f-bkc5k         1/1     Running   0          10h
pod/crtd-665fd5cb85-wqkkk                 1/1     Running   0          10h
pod/edge-proxy-66d44f7465-t9446           2/2     Running   0          10h
pod/grafana-5b466574d-vvt9p               1/1     Running   0          10h
pod/kube-state-metrics-58d86c747c-7hj79   1/1     Running   0          10h
pod/node-exporter-762b5                   1/1     Running   0          10h
pod/prometheus-0                          1/1     Running   0          10h
pod/statuscheck-f7c9b4d98-2jlt6           1/1     Running   0          10h

NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                       AGE
service/edge-proxy            ClusterIP      10.109.2.17      <none>        443/TCP,8443/TCP,9440/TCP                     10h
service/edge-proxy-lb         LoadBalancer   10.100.216.192   <pending>     443:31873/TCP,8443:32612/TCP,9440:31596/TCP   10h
service/grafana               ClusterIP      10.108.24.91     <none>        3000/TCP                                      10h
service/prometheus            ClusterIP      10.102.103.141   <none>        9090/TCP                                      10h
service/prometheus-headless   ClusterIP      None             <none>        9090/TCP                                      10h
service/statuscheck           ClusterIP      10.101.224.247   <none>        80/TCP                                        10h

NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/node-exporter   1         1         1       1            1           <none>          10h

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cloud-connect        1/1     1            1           10h
deployment.apps/crtd                 1/1     1            1           10h
deployment.apps/edge-proxy           1/1     1            1           10h
deployment.apps/grafana              1/1     1            1           10h
deployment.apps/kube-state-metrics   1/1     1            1           10h
deployment.apps/statuscheck          1/1     1            1           10h

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/cloud-connect-d6ff8499f         1         1         1       10h
replicaset.apps/crtd-665fd5cb85                 1         1         1       10h
replicaset.apps/edge-proxy-66d44f7465           1         1         1       10h
replicaset.apps/grafana-5b466574d               1         1         1       10h
replicaset.apps/grafana-6478f89b7c              0         0         0       10h
replicaset.apps/kube-state-metrics-58d86c747c   1         1         1       10h
replicaset.apps/statuscheck-f7c9b4d98           1         1         1       10h

NAME                          READY   AGE
statefulset.apps/prometheus   1/1     10h

Running Watch command 2 of 2: Open a second terminal window to monitor the altinity-cloud-managed-clickhouse namespace.

watch kubectl -n altinity-cloud-managed-clickhouse get all

Response The result appears similar to the following display:

Every 2.0s: kubectl -n altinity-cloud-managed-clickhouse get all        john.doe-yourcomputer.local: Mon Mar 20 00:14:44 2023

NAME                                            READY   STATUS    RESTARTS   AGE
pod/chi-test-anywhere-6-test-anywhere-6-0-0-0   2/2     Running   0          11h
pod/clickhouse-operator-996785fc-rgfvl          2/2     Running   0          11h
pod/zookeeper-5244-0                            1/1     Running   0          11h

NAME                                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/chi-test-anywhere-6-test-anywhere-6-0-0   ClusterIP   10.98.202.85    <none>        8123/TCP,9000/TCP,9009/TCP   11h
service/clickhouse-operator-metrics               ClusterIP   10.109.90.202   <none>        8888/TCP                     11h
service/clickhouse-test-anywhere-6                ClusterIP   10.100.48.57    <none>        8443/TCP,9440/TCP            11h
service/zookeeper-5244                            ClusterIP   10.101.71.82    <none>        2181/TCP,7000/TCP            11h
service/zookeepers-5244                           ClusterIP   None            <none>        2888/TCP,3888/TCP            11h

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/clickhouse-operator   1/1     1            1           11h

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/clickhouse-operator-996785fc   1         1         1       11h

NAME                                                       READY   AGE
statefulset.apps/chi-test-anywhere-6-test-anywhere-6-0-0   1/1     11h
statefulset.apps/zookeeper-5244                            1/1     11h

Optional K9S Commands

Similar to watch, but with color and an interactive display, k9s is a free utility that lets you monitor the progress of a provisioning installation in real time.

To open a monitoring window for each altinity-cloud namespace, open a new terminal instance for each and run the k9s command:

k9s -n altinity-cloud-system
k9s -n altinity-cloud-managed-clickhouse

k9s monitoring terminal windows

Data
Figure 4 - The k9s monitoring windows for the two namespaces altinity-cloud-system and altinity-cloud-managed-clickhouse listing each node name, IP address, and the run status.

Environment Dashboard

When provisioning is complete and the connection is established, the ACM displays the dashboard page showing the green connected icon. Since there is no cluster yet, the dashboard shows zeros for the number of Nodes and Clusters.

Data
Figure 5 - The Environments dashboard screen shows you a snapshot of your Minikube server configuration, including the green connected status.

Listing Namespaces

To verify the presence of the new namespaces on your Minikube server, open a third terminal window and list the namespaces to show the two altinity-cloud additions:

kubectl get ns

Response Note the two new altinity-cloud namespaces at the top:

NAME                                STATUS   AGE
altinity-cloud-managed-clickhouse   Active   8h
altinity-cloud-system               Active   8h
default                             Active   16d
kube-node-lease                     Active   16d
kube-public                         Active   16d
kube-system                         Active   16d

Creating a ClickHouse Cluster

These instructions use the Altinity.Cloud Manager (ACM) Clusters > LAUNCH CLUSTER wizard to create a ClickHouse cluster running in a Minikube Kubernetes environment. The Cluster dashboard in Figure 6 shows the finished result.

Data
Figure 6 - The Clusters dashboard screen showing your new cluster on your Minikube server created by the Altinity Cloud Manager.


To create a new ClickHouse Cluster using the Launch Cluster wizard:

NOTE: The Cluster Launch Wizard lets you navigate back and forth between the previously filled-in screens by selecting the title links on the left, or using the BACK and NEXT buttons.

  1. In the Altinity Cloud Manager, select Clusters.
  2. Select the LAUNCH CLUSTER blue button.
  3. In step 1 ClickHouse Setup screen, fill in the following, and select the blue NEXT button:
    • Name = test-anywhere (15-character limit, lower-case letters only)
    • ClickHouse Version = ALTINITY BUILDS: 22.8.13 Stable Build
    • ClickHouse User Name = admin
    • ClickHouse User Password = admin-password
  4. In step 2 Resources Configuration screen, fill in the following then select the NEXT button:
    • Node Type = minikube-node (CPU xnull, RAM pending)
    • Node Storage = 10 GB
    • Number of Volumes = 1
    • Volume Type = standard
    • Number of Shards = 1
  5. In step 3 High Availability Configuration screen, fill in the following then select NEXT:
    • Number of Replicas = 1
    • Zookeeper Configuration = Dedicated
    • Zookeeper Node Type = default
    • Enable Backups = OFF (unchecked)
    • Number of Backups to keep = 0 (leave blank)
  6. In step 4 Connection Configuration screen, fill in the following then select NEXT:
    • Endpoint = test-anywhere5.your-environment-name-a123.altinity.cloud
    • Use TLS = Checked
    • Load Balancer Type = Altinity Edge Ingress
    • Protocols: Binary Protocol (port:9440) - is checked ON
    • Protocols: HTTP Protocol (port:8443) - is checked ON
    • Datadog integration = disabled
    • IP restrictions = OFF (Enabled is unchecked)
  7. In step 5 Uptime Schedule screen, select ALWAYS ON then NEXT:
  8. In the final screen step 6 Review & Launch, select the green LAUNCH button.

Your new ClickHouse cluster will start building inside your Minikube. When the cluster is finished building and running, the cluster dashboard appears, similar to the screenshot in Figure 6. Beside your cluster name, two green status boxes appear: nodes online and checks passed.
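
If you prefer to follow the build from the terminal as well, an optional way to check progress (a sketch, using the same managed namespace shown earlier in this guide) is to list the ClickHouseInstallation resource and its pods:

# Watch the ClickHouseInstallation (chi) resource and pods while the cluster builds
kubectl -n altinity-cloud-managed-clickhouse get clickhouseinstallations
kubectl -n altinity-cloud-managed-clickhouse get pods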

Creating a Database and Running Queries

In this section, you will create tables on your cluster using the ACM and run queries from both the ACM and then from your local terminal.

Testing your database on ACM

To create a new database on your Altinity.Cloud Anywhere cluster from the ACM:

  1. Login to the ACM and select Clusters, then select EXPLORE on your cluster.
  2. In the Query text box, enter the following create table SQL query:
CREATE TABLE IF NOT EXISTS events_local ON CLUSTER '{cluster}' (
    event_date  Date,
    event_type  Int32,
    article_id  Int32,
    title       String
) ENGINE = ReplicatedMergeTree('/clickhouse/{cluster}/tables/{shard}/{database}/{table}', '{replica}')
    PARTITION BY toYYYYMM(event_date)
    ORDER BY (event_type, article_id);
  3. Create a second table:
CREATE TABLE events ON CLUSTER '{cluster}' AS events_local
   ENGINE = Distributed('{cluster}', default, events_local, rand())
  4. Add some data with this query:
INSERT INTO events VALUES(today(), 1, 13, 'Example');
  5. List the data you just entered:
SELECT * FROM events;

# Response
test-anywhere-6.johndoetest-a123.altinity.cloud:8443 (query time: 0.196s)
┌─event_date─┬─event_type─┬─article_id─┬─title───┐
│ 2023-03-24 │          1 │         13 │ Example │
└────────────┴────────────┴────────────┴─────────┘
  6. Show all the tables:
show tables

# Response
test-anywhere-6.johndoetest-a123.altinity.cloud:8443 (query time: 0.275s)
┌─name─────────┐
│ events       │
│ events_local │
└──────────────┘

Testing ClickHouse on your local terminal

This section shows you how to use your local Minikube computer terminal to log into the ClickHouse cluster that the ACM created. NOTE: With Minikube, you cannot use your cluster Connection Details strings to run clickhouse-client commands directly; you must first log into the ClickHouse pod as described in the following steps.

  1. Find your pod name:
kubectl -n altinity-cloud-managed-clickhouse get all

# Response
NAME                                               READY   STATUS    RESTARTS        AGE
pod/chi-test-anywhere-6-johndoe-anywhere-6-0-0-0   2/2     Running   8 (3h25m ago)   2d17h
  2. On your Minikube computer terminal, log into that pod using the name you got from step 1:
kubectl -n altinity-cloud-managed-clickhouse exec -it pod/chi-test-anywhere-6-johndoe-anywhere-6-0-0-0 -- bash

# Response
Defaulted container "clickhouse-pod" out of: clickhouse-pod, clickhouse-backup
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$ 
  3. Log into your ClickHouse database using the clickhouse-client command to get the :) happy face prompt:
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$ 
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$ clickhouse-client

# Response
<jemalloc>: MADV_DONTNEED does not work (memset will be used instead)
<jemalloc>: (This is the expected behavior if you are running under QEMU)
ClickHouse client version 22.8.13.21.altinitystable (altinity build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 22.8.13 revision 54460.

test-anywhere-6 :) 
  4. Run a show tables SQL command:
test-anywhere-6 :) show tables

# Response

SHOW TABLES

Query id: da01133d-0130-4b98-9090-4ebc6fa4b568

┌─name─────────┐
│ events       │
│ events_local │
└──────────────┘

2 rows in set. Elapsed: 0.013 sec.  
  5. Run the following SQL query to show data in the events table:
test-anywhere-6 :) SELECT * FROM events;

# Response

SELECT * 
FROM events

Query id: 00fef876-e9b0-44b1-b768-9e662eda0483

┌─event_date─┬─event_type─┬─article_id─┬─title───┐
│ 2023-03-24 │          1 │         13 │ Example │
└────────────┴────────────┴────────────┴─────────┘

1 row in set. Elapsed: 0.023 sec. 

test-anywhere-6 :) 
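
As an optional alternative to the interactive session above, you can run a single query from your local terminal without opening a shell in the pod. This is a sketch that reuses the example pod and container names from step 1 and step 2; substitute your own:

kubectl -n altinity-cloud-managed-clickhouse exec \
  pod/chi-test-anywhere-6-johndoe-anywhere-6-0-0-0 -c clickhouse-pod -- \
  clickhouse-client --query "SELECT * FROM events"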

Exiting from ClickHouse client and your pod

  1. To leave the ClickHouse client, enter the exit command.
  2. To leave the pod and return to the Linux prompt enter the exit command again.
  3. Verify you are back at the Linux command prompt by entering a command such as pwd (print working directory) to see your current directory.
aws-anyw-test :) exit
Bye.

clickhouse@chi-was-anyw-test-was-anyw-test-0-0-0:/$ exit
exit
ubuntu@ip-172-31-16-238:~$ 

ubuntu@ip-172-31-16-238:~$ pwd
/home/ubuntu


Appendix

This section provides a few commonly used Minikube maintenance operations.

Rescaling a cluster

Use the Altinity Cloud Manager ACTIONS > Rescale menu to change the CPU (Node Type), Node Storage, Volumes, and the Number of Shards and Replicas.

  1. From the list of Clusters, select a running cluster.
  2. Select the menu ACTIONS > Rescale item.
  3. In the Rescale Cluster window, adjust the following settings as needed:
  • Desired Cluster Size > Number of Shards
  • Desired Cluster Size > Number of Replicas
  • Desired Node Size > Node Type
  • Desired Node Storage (GB) > (integer: example 50)
  • Number of Volumes > (integer: example 2)
  4. Select OK, then CONFIRM at the Rescale Confirmation window.
  5. Confirm that the new values appear in your cluster dashboard panel.

Note that cluster Node Storage size cannot be decreased; it can only be increased, and each increase must be at least 10%.

Resetting Altinity.Cloud Anywhere

Reset your Altinity.Cloud Anywhere cluster from the ACM and your Minikube installation to create a new Altinity.Cloud Anywhere connection.

To use the Reset Anywhere function:

  1. In the ACM, select Environments from the left-hand navigation pane.
  2. From the environment menu located beside your login name at the top right of the ACM, select your environment name.
  3. In the ACTION menu, select Reset Anywhere.

The Anywhere Connection Setup screen and provisioning wizard then appear, showing the connection string to copy and paste in order to deploy a new Anywhere environment.

Deleting a cluster

Deletion steps involve the ACM and the server hosting your cluster. If necessary, first Reset Anywhere.

From the Altinity Cloud Manager:

  1. In the Clusters section, select from your cluster menu ACTIONS > Destroy.
  2. At the Delete Cluster confirmation dialog box, type in the name of your cluster (example-cluster) and select OK.
  3. From the Environments section, select your Environment Name link.
  4. Select the menu ACTIONS > Reset Anywhere.

To remove the Altinity-managed Kubernetes namespaces from your server, first list the namespaces and then delete them in the order shown by the following commands. (NOTE: Make sure you have run the minikube start command first.)

# List the namespaces
kubectl get ns

# Delete the following in this order
kubectl -n altinity-cloud-managed-clickhouse delete chi --all
kubectl delete ns altinity-cloud-managed-clickhouse
kubectl delete ns altinity-cloud-system
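
To confirm the cleanup (an optional check), list the namespaces again and verify that the two altinity-cloud entries are gone:

kubectl get ns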

Deleting a pod

Deleting a pod may be necessary if it is not starting up.

Problem

One of the pods won’t start. (Example: see line 3 edge-proxy-66d44f7465-lxjjn)

    ┌──────────────── Pods(altinity-cloud-system)[8] ──────────────────────────┐
    │ NAME↑                                PF READY RESTARTS STATUS            │
 1  │ cloud-connect-d6ff8499f-bkc5k        ●  1/1       3    Running           │
 2  │ crtd-665fd5cb85-wqkkk                ●  1/1       3    Running           │
 3  │ edge-proxy-66d44f7465-lxjjn          ●  1/2       7    CrashLoopBackOff  │
 4  │ grafana-5b466574d-4scjc              ●  1/1       1    Running           │
 5  │ kube-state-metrics-58d86c747c-7hj79  ●  1/1       6    Running           │
 6  │ node-exporter-762b5                  ●  1/1       3    Running           │
 7  │ prometheus-0                         ●  1/1       3    Running           │
 8  │ statuscheck-f7c9b4d98-2jlt6          ●  1/1       3    Running           │
    └──────────────────────────────────────────────────────────────────────────┘

Terminal listing 1 - The pod in Line 3 edge-proxy-66d44f7465-lxjjn won’t start.


Solution

Delete the pod using the kubectl delete pod command and it will regenerate.
(Example: see line 3 edge-proxy-66d44f7465-lxjjn)

kubectl -n altinity-cloud-system delete pod edge-proxy-66d44f7465-lxjjn
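
To confirm that the deployment recreates the pod (an optional check), you can watch the namespace until a new edge-proxy pod reaches Running:

kubectl -n altinity-cloud-system get pods -w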

Stopping minikube

To stop the Minikube service, run the following command:

minikube stop
✋  Stopping node "minikube"  ...
🛑  1 node stopped.

4.4 - Overview of the Rescale operation

This page shows how the Altinity Cloud Manager, working with an Altinity.Cloud Anywhere installation, remotely rescales a customer’s on-prem cluster.

The Ubuntu host where Kubernetes is installed is used to run the commands that verify the changes made from the ACM.

Altinity Anywhere Overview
Figure 1 - Selecting Actions > Rescale from the cluster to modify.


Altinity Anywhere Overview
Figure 2 - Changing the number of Shards from 1 to 2.


Altinity Anywhere Overview
Figure 3 - Rescale confirmation.


Altinity Anywhere Overview
Figure 4 - Nodes in the process of rescaling.

Rescaling a cluster on the ACM

Select a cluster, then use the ACTIONS > Rescale menu to bring up the Rescale Cluster window. Under Desired Cluster Size, change the Number of Shards from 1 to 2, then press OK, then CONFIRM.

The nodes online pill box first shows grey with 2/4 nodes online, then after several minutes turns green showing 4/4 nodes online. If instead the pill box stays green at 2/2 nodes online, the rescale did not take effect; try the operation again.
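
Before moving to the figures below, a quick way to follow the rescale from the Ubuntu host (an optional sketch; the namespace is the one used throughout this guide) is to watch the pods appear as the new shard is provisioned:

watch kubectl -n altinity-cloud-managed-clickhouse get pods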

Ubuntu Kubernetes Commands

The Ubuntu command kubectl -n altinity-cloud-managed-clickhouse... shows the Altinity clusters before the rescale operation.

Altinity Anywhere Overview
Figure 5 - The Kubernetes command kubectl -n <Altinity cluster name> running on-prem, on a cluster that is also managed by the ACM.


Altinity Anywhere Overview
Figure 6 - The newly added nodes …-demo-1-0-0 are now listed after the rescale operation, showing a Pending status.

Ubuntu command kubectl get nodes before the rescale operation.

Altinity Anywhere Overview
Figure 7 - Kubernetes command kubectl get nodes shows all the nodes on the Altinity ClickHouse cluster.

Altinity Anywhere Overview
Figure 8 - The pending node is added as 192.168.149.238 and is spinning up.


The newly spun up shard in cluster-x now reads 4/4 nodes online.

Altinity Clusters ACM
Figure 9 - The Altinity Cloud Manager showing the remotely managed cluster-y with 4/4 nodes online.

4.5 - Prerequisites

20 March 2023 · Read time 1 min


Before installing Altinity.Cloud Anywhere into your environment, verify that the following requirements are met.

Security Requirements

  • A current Altinity.Cloud account.
  • An Altinity.Cloud API token. For more details, see Account Settings.

Software Requirements

The following instructions can be used to install some of the prerequisites.

kubectl Installation for Deb

The following instructions are based on Install and Set Up kubectl on Linux.

  1. Download the kubectl binary:

    curl -LO 'https://dl.k8s.io/release/v1.22.0/bin/linux/amd64/kubectl'
    
  2. Verify the SHA-256 hash:

    curl -LO "https://dl.k8s.io/v1.22.0/bin/linux/amd64/kubectl.sha256"
    
    echo "$(<kubectl.sha256) kubectl" | sha256sum --check
    
  3. Install kubectl into the /usr/local/bin directory (this assumes that your PATH includes /usr/local/bin):

    sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    
  4. Verify the installation and the version:

    kubectl version
    

5 - Administration

Administration functions of Altinity.Cloud Anywhere.

Altinity.Cloud Anywhere Administration

5.1 - Altinity.Cloud connect

Setting up Altinity.Cloud connect

What is Altinity.Cloud connect?

Altinity.Cloud connect (altinitycloud-connect) is a tunneling daemon for Altinity.Cloud. It enables management of ClickHouse clusters through Altinity.Cloud Anywhere.

Required permissions

altinitycloud-connect requires the following permissions:

Open outbound ports:

  • 443 tcp/udp (egress; stateful)

Kubernetes permissions:

  • cluster-admin for initial provisioning only; it can be revoked afterwards
  • full access to the altinity-cloud-system and altinity-cloud-managed-clickhouse namespaces, plus a few optional read-only cluster-level permissions (for observability)

Install and Connect to Altinity.Cloud

See the steps in the Quickstart Connect to Altinity.Cloud procedure.

Batch operation of altinitycloud-connect

altinitycloud-connect login produces a cloud-connect.pem file used to connect to the Altinity.Cloud Anywhere control plane (--token is short-lived, while cloud-connect.pem does not expire until revoked). If you need to reconnect the environment in unattended/batch mode (i.e. without requesting a token), you can do so with the following command:

altinitycloud-connect kubernetes -i /path/to/cloud-connect.pem | kubectl apply -f -
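
After applying the manifest, you can check that the connector is up (an optional verification; the deployment name cloud-connect matches the output shown earlier in this guide):

kubectl -n altinity-cloud-system get deployment cloud-connect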

Disconnecting your environment from Altinity.Cloud

  1. Locate your environment in the Environment tab in your Altinity.Cloud account.

  2. Select ACTIONS > Delete.

  3. Toggle the Delete Clusters switch only if you want to delete managed clusters.

  4. Press OK to complete.

After this is complete Altinity.Cloud will no longer be able to see or connect to your Kubernetes environment via the connector.

Cleaning up managed environments in Kubernetes

To clean up managed ClickHouse installations and namespaces in a disconnected Kubernetes cluster, issue the following commands in the exact order shown below.

kubectl -n altinity-cloud-managed-clickhouse delete chi --all
kubectl delete ns altinity-cloud-managed-clickhouse
kubectl delete ns altinity-cloud-system

If you delete the namespaces before deleting the ClickHouse installations (chi), the operation will hang because the finalizers on the chi resources can no longer be processed. Should this occur, issue kubectl edit commands on each ClickHouse installation and remove the finalizer manually from the resource specification. Here is an example.

 kubectl -n altinity-cloud-managed-clickhouse edit clickhouseinstallations.clickhouse.altinity.com/test2
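
As an alternative to editing interactively, a kubectl patch can clear the finalizers in one step. This is a sketch that reuses the example resource name test2; adjust it to your installation and verify the result before proceeding:

kubectl -n altinity-cloud-managed-clickhouse patch \
  clickhouseinstallations.clickhouse.altinity.com/test2 \
  --type merge -p '{"metadata":{"finalizers":null}}'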

5.2 - Setting up logging

Setting up Altinity.Cloud Anywhere logging

20 March 2023 · Read time 2 min

Configuring logging

In order for Altinity.Cloud Anywhere to gather, store, and query logs, you need to configure access to an S3 or GCS bucket. Cloud-specific instructions are provided below.

EKS (AWS)

The recommended way is to use IRSA (IAM Roles for Service Accounts).

apiVersion: v1
kind: ServiceAccount
metadata:
  name: log-storage
  namespace: altinity-cloud-system
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<aws_account_id>:role/<role_arn>"

Alternatively, you can use a custom Instance Profile or explicit credentials (shown below).

# create bucket
aws s3api create-bucket --bucket REPLACE_WITH_BUCKET_NAME --region REPLACE_WITH_AWS_REGION

# create user with access to the bucket
aws iam create-user --user-name REPLACE_WITH_USER_NAME
aws iam put-user-policy \
    --user-name REPLACE_WITH_USER_NAME \
    --policy-name REPLACE_WITH_POLICY_NAME \
    --policy-document \
'{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket",
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::REPLACE_WITH_BUCKET_NAME",
                "arn:aws:s3:::REPLACE_WITH_BUCKET_NAME/*"
            ],
            "Effect": "Allow"
        }
    ]
}'

# generate access key
aws iam create-access-key --user-name REPLACE_WITH_USER_NAME |
  jq -r '"AWS_ACCESS_KEY_ID="+(.AccessKey.AccessKeyId)+"\nAWS_SECRET_ACCESS_KEY="+(.AccessKey.SecretAccessKey)+"\n"' > credentials.env

# create altinity-cloud-system/log-storage-aws secret containing AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY
kubectl create secret -n altinity-cloud-system generic log-storage-aws \
  --from-env-file=credentials.env

rm -i credentials.env
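
Before sending the bucket name to Altinity, you can confirm that the secret was created (an optional check):

kubectl -n altinity-cloud-system get secret log-storage-aws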

Please send the bucket name back to Altinity in order to finish the configuration.

GKE (GCP)

The recommended way is to use Workload Identity.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: log-storage
  namespace: altinity-cloud-system
  annotations:
    iam.gke.io/gcp-service-account: "<gcp_sa_name>@<project_id>.iam.gserviceaccount.com"

Alternatively, you can use a GCP service account attached to the instance, or explicit credentials (shown below).

# create bucket
gsutil mb gs://REPLACE_WITH_BUCKET_NAME

# create GCP SA with access to the bucket
gcloud iam service-accounts create REPLACE_WITH_GCP_SA_NAME \
  --project=REPLACE_WITH_PROJECT_ID \
  --display-name "REPLACE_WITH_DISPLAY_NAME"
gsutil iam ch \
  serviceAccount:REPLACE_WITH_GCP_SA_NAME@REPLACE_WITH_PROJECT_ID.iam.gserviceaccount.com:roles/storage.admin \
  gs://REPLACE_WITH_BUCKET_NAME

# generate GCP SA key
gcloud iam service-accounts keys create credentials.json \
--iam-account=REPLACE_WITH_GCP_SA_NAME@REPLACE_WITH_PROJECT_ID.iam.gserviceaccount.com \
--project=REPLACE_WITH_PROJECT_ID

# create altinity-cloud-system/log-storage-gcp secret containing credentials.json
kubectl create secret -n altinity-cloud-system generic log-storage-gcp \
  --from-file=credentials.json

rm -i credentials.json

Please send the bucket name back to Altinity in order to finish the configuration.