Altinity.Cloud Anywhere is a zero-maintenance, open source-based SaaS for ClickHouse that keeps you in control of your data, letting you chart your own path and choose between working with vendors or running your infrastructure yourself.
Your data. Your control. Our tools.
1 - Altinity.Cloud Anywhere 101
What is Altinity.Cloud Anywhere?
17 May 2023 · Read time 4 min
Introduction and Benefits
Altinity.Cloud Anywhere provides the convenient cloud management of Altinity.Cloud but allows users to keep data within their own cloud VPCs and private data centers, and run managed ClickHouse in their own Kubernetes clusters. We call these clusters Altinity.Cloud Anywhere environments.
Altinity.Cloud Anywhere offers several important benefits for users.
Compliance - Retain full control of data (including backups) as well as the operating environment and impose your policies for security and privacy.
Cost - Optimize infrastructure costs by running in your accounts.
Location - Place ClickHouse clusters close to data sources and applications.
Vendor Unlocking - Disconnect at any time and continue to operate ClickHouse using open-source components.
The rest of this document explains concepts that help users understand Altinity.Cloud Anywhere and maximize benefits.
The Altinity.Cloud Manager UI manages Altinity.Cloud Anywhere environments just like fully hosted Altinity.Cloud environments. Users can control multiple environments from the same Altinity.Cloud account and can mix and match environment types. ClickHouse management operations are identical in all environments.
Service Architecture
The Altinity.Cloud service architecture consists of a shared management plane that serves as a single point of management for all tenants and a data plane that consists of isolated environments for each tenant. The following diagram shows the service architecture and data plane relationships.
Figure 1 - Service Architecture.
Each environment is a dedicated Kubernetes cluster. In the case of Altinity.Cloud environments, Kubernetes clusters run on Altinity’s cloud accounts and are completely hidden from users. In the Altinity.Cloud Anywhere case, Kubernetes clusters run in the user’s cloud account or data center.
For example, the user may run an EKS cluster within a VPC belonging to the user’s AWS cloud account.
Altinity.Cloud Anywhere environments can also use on-prem Kubernetes clusters. They can even use development versions of Kubernetes running on a user’s PC or laptop.
Open Source Analytic Stack
Altinity.Cloud Anywhere uses open-source software for the analytic stack and selected management services: the Altinity Operator for ClickHouse, Prometheus, and Grafana. The following diagram shows how the principal components map to resources in AWS. (GCP is essentially identical.) Open-source components are marked in orange.
Figure 2 - Management and observability.
Users can terminate the service, disconnect the Altinity.Cloud Anywhere environment from Altinity.Cloud, and run ClickHouse services themselves. There is no migration, since all data, software, and support services are already in the user's Kubernetes cluster.
Altinity.Cloud Anywhere Connectivity Model
Altinity.Cloud Anywhere environments use the Altinity Connector to establish a management connection from the user Kubernetes cluster to Altinity.Cloud. The Altinity Connector establishes an outbound HTTPS connection to a management endpoint secured by certificates. This allows management commands and monitoring data to move securely between locations.
Users connect an Altinity.Cloud Anywhere environment to Altinity.Cloud in three simple steps.
Download the Altinity Connector executable program (altinitycloud-connect).
Run and register Altinity Connector with Altinity.Cloud Manager.
If Altinity Connector is installed on a separate VM, it may run provisioning of the Kubernetes cluster (EKS, GKE, AKS). This process deploys a new instance of Altinity Connector into the provisioned Kubernetes cluster as well.
When Altinity Connector is installed directly in Kubernetes, it runs the provisioning of Kubernetes resources.
Complete registration in the Altinity.Cloud Manager.
Altinity.Cloud Anywhere environments run all services in two namespaces.
The altinity-cloud-system namespace contains system services including the Altinity Connector.
The altinity-cloud-managed-clickhouse namespace contains ClickHouse and ZooKeeper. Users can run services in other namespaces provided they do not make changes to the Altinity-managed namespaces.
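For example, you can list the Altinity-managed workloads with standard kubectl commands (assuming your kubeconfig points at the environment's cluster):
kubectl get pods -n altinity-cloud-system
kubectl get pods -n altinity-cloud-managed-clickhouse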
See the Quickstart page for steps to register an Altinity.Cloud Anywhere environment.
Kubernetes Cluster Preparation for Use
Kubernetes clusters must meet a small number of requirements to serve as an Altinity.Cloud Anywhere environment for production use.
Configure storage classes that can allocate block storage on-demand, for example using the AWS EBS CSI driver (a sample StorageClass is shown below).
Enable auto-provisioning, e.g., node groups or Karpenter. This allows Altinity.Cloud to expand or contract clusters as well as rescale server pods efficiently.
Kubernetes pods must be able to connect to S3-compatible object storage or GCS (Google Cloud Storage). Object storage is used for backups.
These requirements can be relaxed for non-production environments, such as Minikube. Check the Kubernetes Requirements page for more recommendations on specific Kubernetes distributions.
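As a minimal sketch of the storage-class recommendation above, the following manifest defines an expandable gp3 class on AWS; it assumes the EBS CSI driver is already installed, and the name and parameters are illustrative only:
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-encrypted              # illustrative name
provisioner: ebs.csi.aws.com       # requires the AWS EBS CSI driver
allowVolumeExpansion: true         # lets volumes be rescaled in place
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  encrypted: "true"
EOF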
Shared Administration between Altinity.Cloud and User
In Altinity.Cloud Anywhere environments the responsibility for administration is shared between Altinity and users. The following table shows major system components.
Altinity is developing a new model called Altinity.Cloud Anywhere Plus. It will shift responsibility for Kubernetes and VPC management to Altinity.
Contact Altinity Support for more information on this model.
2 - Altinity.Cloud Anywhere Quickstart
How to use Altinity.Cloud Anywhere to connect to your on-prem or 3rd-party ClickHouse host environment.
Overview - Quickstart
This tutorial explains how to use Altinity.Cloud Anywhere to deploy ClickHouse clusters using your choice of a third-party Kubernetes cloud provider, or using your own hardware or private company cloud. The Altinity.Cloud Manager (ACM) is used to manage your ClickHouse clusters.
The end result of the tutorial on this page is shown in Figures 5 and 6.
More Information
If you encounter difficulties with any part of the tutorial, check the Troubleshooting section.
Contact Altinity support for additional help if the troubleshooting advice does not resolve the problem.
For non-production use, a Minikube-based tutorial is provided to show how to use an Altinity.Cloud Anywhere deployment on a home computer. This is a 20-minute read that includes creating a new database and adding tables and data using the ACM.
Figure 1 - The Altinity.Cloud Anywhere Free Trial signup page that shows Google GKE selected for the Kubernetes type.
Submitting the Free Trial form
Fill in the form and select your Kubernetes option (Example: Google GKE).
NOTE: Public email domains such as Gmail or Hotmail are not allowed; you must use a company domain.
In the first Altinity email you receive after clicking SUBMIT, follow the instructions to validate your email address. This notifies Altinity technical support to provision your new account.
You will receive the next email after Altinity completes your account setup. It contains a link to log in to Altinity.Cloud, where you will create a password for the Altinity Cloud Manager (ACM).
Now you are ready to connect your Kubernetes cluster.
Connecting Kubernetes
The first time you log in, you will be directed to the environment setup page shown in Figure 2. If you have an existing account or restart the installation, just select the Environments tab on the left side of your screen to reach the setup page.
Figure 2 - Environments > Connection Setup tab in the Altinity.Cloud Manager.
Connection Setup
Highlighted in red in Figure 2 are the steps to complete before you select the PROCEED button.
In the first step labeled Altinity.Cloud connect, download the correct binary for your system.
In step 2, Connect to Altinity.Cloud, copy and paste the connection commands into your terminal. The login command produces no output on success, so the command prompt is immediately ready for the next command. This step can take several minutes to complete, depending on the speed of your host system.
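The wizard generates the exact commands with a registration token unique to your environment; a typical sequence looks like the following sketch (the token value is a placeholder):
altinitycloud-connect login --token=<registration-token>
altinitycloud-connect kubernetes | kubectl apply -f -
Applying the Kubernetes manifest produces a response similar to the following: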
namespace/altinity-cloud-system created
namespace/altinity-cloud-managed-clickhouse created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
serviceaccount/cloud-connect created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
secret/cloud-connect created
deployment.apps/cloud-connect created
Note: To display the Kubernetes roles and resources before applying them, run the following command.
altinitycloud-connect kubernetes
Resources Configuration
Once these commands have completed, select the PROCEED button. After the connection is made, you will advance to the Resources Configuration screen.
At the Resources Configuration screen, set the resources used for ClickHouse clusters as follows.
Select your Kubernetes provider using the Cloud Provider radio button (Example: GCP).
Add Storage Class names, which define the block storage for your nodes. Use the ADD STORAGE CLASS button to add additional storage classes as needed.
In the Node Pools section, inspect the node pool list to ensure the availability zones and pools you wish to use are listed.
Note that the Used For column must have at least one of ClickHouse, Zookeeper, or System selected.
The availability zones currently in use are listed; if any zones are missing, add them using the ADD NODE POOL button.
The ACM Availability Zones UI path is: Environments > clustername > ACTIONS > Edit > Container Options tab.
The following Resources Configuration example shows the red boxes around the settings made for the Google Cloud Platform GKE environment.
Figure 3 - The Resources Configuration setup page for connecting cloudv2-gcp to Altinity.Cloud.
The Cloud Provider is set to GCP.
In Storage Classes, use the ADD STORAGE CLASS button to add the following:
premium-rwo
standard
standard-two
In the Node Pools section, use the ADD NODE POOL button to add the Zone, Instance Type, storage Capacity (in GB), and Used For settings as follows:
Zone        Instance Type    Capacity   Used for
---------   --------------   --------   -----------------------------------------
us-east-b   e2-standard-2    10         [x] ClickHouse  [x] Zookeeper  [ ] System
us-east-a   e2-standard-2    3          [x] ClickHouse  [x] Zookeeper  [ ] System
Confirmation of Settings
The Confirmation screen displays a JSON representation of the settings you just made. Review these settings then select FINISH.
Figure 4 - Confirmation page showing the JSON version of the settings.
Connection Completed, Nodes Running
Once the connection is fully set up, the ACM Environments dashboard will display your new environment as shown in Figure 5 (example: cloudv2-gcp).
The result shown in Figure 6 is a ClickHouse cluster added to the Clusters dashboard.
Figure 6 - The result: The ACM displays a new ClickHouse cluster (Example cluster name: free-trial-any) deployed by Altinity.Cloud Anywhere.
Troubleshooting
Q-1. Altinity.Cloud Anywhere endpoint not reachable
Problem
The altinitycloud-connect command has a --url option that defaults to host anywhere.altinity.cloud on port 443. If this host is not reachable, the following error message appears.
altinitycloud-connect login --token=<token>
Error: Post "https://anywhere.altinity.cloud/sign":
dial tcp: lookup anywhere.altinity.cloud on 127.0.0.53:53: no such host
Solution
Make sure the name is available in DNS and that the resolved IP address is reachable on port 443 (UDP and TCP), then try again.
Note: if you are using a non-production Altinity.Cloud environment you must specify the correct URL explicitly. Contact Altinity support for help.
Q-2. Insufficient Kubernetes privileges
Problem
Your Kubernetes account has insufficient permissions.
Solution
Set the following permissions for your Kubernetes account:
cluster-admin for initial provisioning only (it can be revoked afterwards)
Full access to the altinity-cloud-system and altinity-cloud-managed-clickhouse namespaces
A few optional read-only cluster-level permissions (for observability only)
Q-3. Help! I messed up the resource configuration
Problem
The resource configuration settings are not correct.
Solution
From the Environment tab, in the Environment Name column, select the link to your environment.
Q-4. A pod won't start
Problem
Terminal listing 1 - The pod in Line 3, edge-proxy-66d44f7465-lxjjn, won't start.
Solution
Delete the pod using the kubectl delete pod command and it will regenerate. (Example: see line 3 edge-proxy-66d44f7465-lxjjn)
kubectl -n altinity-cloud-system delete pod edge-proxy-66d44f7465-lxjjn
3 - Kubernetes Requirements
Kubernetes Requirements.
Altinity.Cloud Anywhere operates inside a user's Kubernetes environment. Kubernetes can be provisioned by Altinity or provided by a user, as described in the following sections.
The following Kubernetes capabilities are preferable in order to get the most from Altinity.Cloud features:
Storage classes should allow volume expansion.
Multiple zones are preferable for HA.
Autoscaling is preferable for easier vertical scaling.
See cloud specific requirements in the following sections:
3.1 - Recommendations for EKS (AWS)
Altinity.Cloud Anywhere recommendations for EKS (AWS)
20 March 2023 · Read time 1 min
We recommend setting up Karpenter or cluster-autoscaler to launch instances in at least 3 Availability Zones.
If you plan on sharing the Kubernetes cluster with other workloads, it is recommended that you label the Kubernetes nodes intended for Altinity.Cloud Anywhere with altinity.cloud/use=anywhere and taint them with dedicated=anywhere:NoSchedule.
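For example (the node name is a placeholder):
kubectl label node <node-name> altinity.cloud/use=anywhere
kubectl taint node <node-name> dedicated=anywhere:NoSchedule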
Instance Types
for Zookeeper and infrastructure nodes
t3.large or t4g.large*
* t4g instances are AWS Graviton2-based (ARM).
for ClickHouse nodes
ClickHouse works best in AWS when using nodes from the following instance families:
m5
m6i
m6g*
* m6g instances are AWS Graviton2-based (ARM).
Instance sizes from large to 8xlarge are typical.
Storage Classes
gp2
gp2-encrypted
gp3*
gp3-encrypted*
* gp3 storage classes require the Amazon EBS CSI driver, which does not come pre-installed.
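One way to install the driver, assuming an EKS cluster whose IAM prerequisites for the add-on are already in place (the cluster name is a placeholder), is as an EKS add-on:
aws eks create-addon --cluster-name <cluster-name> --addon-name aws-ebs-csi-driver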
3.2 - Recommendations for GKE (GCP)
Altinity.Cloud Anywhere recommendations for GKE (GCP)
To raise GCP quotas, open the Quotas page in the Google Cloud console. From the table, filter on the Quota (example: Persistent Disk SSD) and Dimensions (example: the region name us-west1) columns, select EDIT QUOTAS, then change the Limit value (example: change 500 GB to 600 GB).
Property name filter examples
Persistent Disk SSD (GB)
N2 CPUs
us-west1
Altinity recommends setting up each node pool except the default one in at least 3 zones.
If you plan on sharing the Kubernetes cluster with other workloads, it is recommended that you label the Kubernetes nodes intended for Altinity.Cloud Anywhere with altinity.cloud/use=anywhere and taint them with dedicated=anywhere:NoSchedule.
for Zookeeper and infrastructure nodes
e2-standard-2
for ClickHouse nodes
It’s recommended to taint node pools below with dedicated=clickhouse:NoSchedule (in addition to altinity.cloud/use=anywhere).
n2d-standard-2
n2d-standard-4
n2d-standard-8
n2d-standard-16
n2d-standard-32
If GCP is out of n2d-standard-* instances in the region of your choice, we recommend substituting them with n2-standard-*.
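A sketch of a node pool that follows these recommendations, using the example cluster name, region, and machine type from later in this guide (all values are illustrative and should be adjusted to your environment):
gcloud container node-pools create clickhouse-pool \
  --cluster cluster-1 --region us-west1 \
  --machine-type n2d-standard-4 \
  --node-labels altinity.cloud/use=anywhere \
  --node-taints dedicated=clickhouse:NoSchedule \
  --enable-autoscaling --min-nodes 0 --max-nodes 3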
Storage Classes
standard-rwo
premium-rwo
GKE comes pre-configured with both.
4 - Kubernetes Installation
How to install Kubernetes clusters for Altinity.Cloud Anywhere on Amazon EKS, Google GKE, or Minikube.
End-to-end instructions that show you how to install Kubernetes clusters for Altinity.Cloud Anywhere on Amazon (EKS), Google (GKE), or Minikube running on Docker. This includes instructions on how to use the Altinity Cloud Manager to create a ClickHouse cluster in your Altinity.Cloud Anywhere Kubernetes installation.
4.1 - AWS Remote Provisioning
Altinity.Cloud Anywhere operates inside a user's Kubernetes environment. Kubernetes can be provided by a user (see the "Kubernetes Installation" section), or provisioned by Altinity.
Altinity technical support can remotely provision AWS EKS clusters with an Altinity.Cloud Anywhere environment on your Amazon account.
Instructions on this page describe how to configure your EKS clusters to give Altinity permission to provision ClickHouse into your Amazon EKS Kubernetes environment. Figure 1 shows a high-level view of the Altinity.Cloud Kubernetes infrastructure.
Figure 1 - Altinity.Cloud Kubernetes architecture, using Altinity Cloud Manager.
Summary of the Bootstrap Process
This section summarizes the bootstrap process so that you can use Altinity.Cloud to deploy a ClickHouse cluster to your AWS EKS environment.
Provision an AWS EKS cluster using an EC2 instance running under a user account.
The EC2 instance is required in order to deploy altinitycloud-connect, which will establish an outbound connection to Altinity.Cloud and start the EKS provisioning process.
The EC2 instance can be set up in two ways:
Automatically, using the AWS CloudFormation template.
Manually, set up by a user following the Altinity documentation.
Follow this document to complete the provisioning process.
In the Altinity Cloud Manager, complete the configuration of EKS resources.
Automated EKS Provisioning Using an EC2 Instance Created from the AWS CloudFormation Template
An Amazon AWS EC2 instance is required to deploy altinitycloud-connect, which will establish an outbound connection to Altinity.Cloud and start the EKS provisioning process.
Get the connection token from the Altinity Cloud Manager connection wizard.
Figure 2 - AWS CloudFormation Stack.
Go to the URL for the CloudFormation Create stack page as shown in Figure 2. (NOTE: The URL will be different for other regions.) Log in to your AWS account, then navigate to:
Choose Upload a template file and select the Altinity Cloud Formation Template YAML file as shown in Figure 2.
Fill in the missing fields on the Specify Stack Details page:
Set ‘Stack Name’ to altinitycloud-connect-$USER-$ENV_NAME (replace $USER and $ENV_NAME as needed).
Set ‘Subnets’ where altinitycloud-connect EC2 instance(s) should be launched
(Example: subnet-17c1674a, subnet-2d5c8855, subnet-e0d425aa)
Set the ‘Token presented by https://acm.altinity.cloud’ field with the token value from the connection wizard.
Important: At the last step of the wizard, checkmark the notice:
“I acknowledge that AWS CloudFormation might create IAM resources with custom names”
Complete the wizard and submit the form.
EC2 background processing explained
The EC2 instance is processed in the background as follows:
The EC2 instance is started from the CloudFormation template.
The EC2 instance connects to Altinity.Cloud using altinitycloud-connect.
The EKS cluster is provisioned.
The EKS cluster is connected to Altinity.Cloud using altinitycloud-connect.
In Altinity.Cloud
Select the ‘Proceed’ button in the connection wizard.
NOTE: It is ok to select Proceed more than once, since provisioning takes some time.
Once the EKS cluster is provisioned, the wizard will switch to the ‘Resources Configuration’ page.
The following data is required in order to create the VPC and EKS cluster properly:
The CIDR for the Kubernetes VPC (at least /21 recommended, e.g. 10.1.0.0/21)
The Number of Availability Zones (3 are recommended)
Please send this information to your Altinity support representative to start the EKS provisioning process.
When completed, the Altinity Cloud Manager (ACM) is updated, and you can then create your ClickHouse clusters.
The remainder of the provisioning process is handled by Altinity.Cloud.
Users may switch back to the ACM and wait for the connection to be established in order to finish the configuration.
In Altinity.Cloud
Select the Proceed button in the connection wizard.
You may repeat this step more than once to see if the connection has completed, since provisioning takes some time.
Once the EKS cluster is provisioned, the connection wizard will switch to the Resources Configuration page.
Finish the configuration of the node pools as described in the Resources Configuration section.
Break Glass Procedure
The “Break Glass” procedure allows Altinity to access the EC2 instance with SSH, using AWS SSM, in order to troubleshoot the altinitycloud-connect instance running on it.
Create an AnywhereAdmin IAM role with trust policy set:
4.2 - GKE Installation (GCP)
How to install Altinity.Cloud Anywhere on Google Cloud GKE (Google Kubernetes Engine).
4.2.1 - Introduction
How to install Altinity.Cloud Anywhere on Google Cloud Platform Google Kubernetes Engine (GKE).
8 May 2023 · Read time 1 min
Overview - Google GKE Installation
This guide covers how to use Altinity.Cloud Anywhere to install a ClickHouse-ready Kubernetes environment on the Google Cloud Platform Google Kubernetes Engine (GKE).
This page assumes you have an Altinity.Cloud account, have requested an Anywhere environment, and have a developer Google Console environment set up.
In the Google Console, you must ENABLE the following APIs for your project:
Compute Engine API
Kubernetes Engine API
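If you prefer the command line, the same APIs can be enabled with gcloud once your project is set (see Setting the Google Project ID later in this guide):
gcloud services enable compute.googleapis.com container.googleapis.com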
Software Requirements
Because a client computer (or cloud machine instance) is used from a terminal to perform the installation instructions on this page, the following software must first be installed and configured.
For the terminal
Terminal (SSH) with certificate to log into the client computer
This section covers the setup and configuration of the Google command-line software and how to create a Kubernetes container and GKE cluster.
4.2.2.1 - Installing GKE
Installing Google GKE from the terminal.
8 May 2023 · Read time 5 min
Introduction
This section covers the creation of a GKE Kubernetes container and cluster.
This section assumes that you already have a Google Cloud account and know how to create a Kubernetes environment. Included are links to installation sections that guide you through installing the Google development environment. When you finish this section, you will be ready to use the Altinity Cloud Manager to provision and manage ClickHouse clusters.
Prerequisites
Check that each of the items in the following list is complete:
Figure 1 - The command to create kubernetes-1 and Google’s response.
Create a Google cluster
The clusters create command creates a GKE cluster inside the Google Kubernetes container.
Altinity then uses this cluster to set up ClickHouse when you use the Connection Setup wizard.
Use your browser to review the Google Kubernetes console to see the new cluster.
Figure 2 - Running the clusters create command. The blue-highlighted items in the example screenshot are example values that you can alter by following Altinity recommendations.
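A sketch of the command, using example values consistent with the rest of this guide (the cluster name, project, region, machine type, and node count are all illustrative and can be altered per the Altinity recommendations):
gcloud container clusters create cluster-1 \
  --project any-test-gke \
  --region us-west1 \
  --machine-type n2-standard-4 \
  --num-nodes 1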
Credential Setup
The get-credentials command sets up your local config file so that kubectl commands are authorized to talk to the Google GKE.
Once the cluster is ready, use the following get-credentials command to allow kubectl to issue commands to Kubernetes. Highlighted in blue in the terminal screenshot is the name of the cluster cluster-1, the region us-west1, and the project name in this example any-test-gke.
NOTE: Figure 1 shows the project name as any-test-gke.
To authorize kubectl commands to access Google clusters:
Copy and paste the following command to your terminal:
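A sketch of the command, using the example cluster name, region, and project mentioned above:
gcloud container clusters get-credentials cluster-1 --region us-west1 --project any-test-gke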
Installing the Google Cloud CLI
For Linux Debian (Ubuntu) installations, install the following packages on your client computer, entering the commands one at a time.
For more details on each step, jump to the specific sections on this page:
# Certificates install
sudo apt-get install apt-transport-https ca-certificates gnupg

# Response
# ----------
# Reading package lists... Done
# Building dependency tree
# Reading state information... Done
# ca-certificates is already the newest version (20211016ubuntu0.20.04.1).
# gnupg is already the newest version (2.2.19-3ubuntu2.2).
# apt-transport-https is already the newest version (2.0.9).
# 0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
Add Google CLI packages
Add the gcloud CLI distribution URI as a package source to your local OS sources list.
(Linux Debian example.)
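The command that adds the package source is not shown above; per Google's standard installation instructions it looks like the following (run it only once, since repeating it adds duplicate entries to google-cloud-sdk.list):
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
Next, import the Google Cloud public signing key: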
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
Update the package lists and install the Google Cloud CLI:
sudo apt-get update && sudo apt-get install google-cloud-cli
# Response (abbreviated)
Hit:1 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal InRelease
Get:2 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:6 https://packages.cloud.google.com/apt cloud-sdk InRelease [6361 B]
Get:12 https://packages.cloud.google.com/apt cloud-sdk/main amd64 Packages [438 kB]
...
Fetched 7401 kB in 1s (5429 kB/s)
Reading package lists... Done
W: Target Packages (main/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list.d/google-cloud-sdk.list:1 and /etc/apt/sources.list.d/google-cloud-sdk.list:2
... (the same "configured multiple times" warning repeats for each duplicate entry in google-cloud-sdk.list; these warnings appear when the Google Cloud package source has been added more than once and are harmless)
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
google-cloud-cli-app-engine-java google-cloud-cli-app-engine-python google-cloud-cli-pubsub-emulator google-cloud-cli-bigtable-emulator google-cloud-cli-datastore-emulator kubectl
The following packages will be upgraded:
google-cloud-cli
1 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
Need to get 154 MB of archives.
After this operation, 1508 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt cloud-sdk/main amd64 google-cloud-cli all 430.0.0-0 [154 MB]
Fetched 154 MB in 3s (56.6 MB/s)
(Reading database ... 164975 files and directories currently installed.)
Preparing to unpack .../google-cloud-cli_430.0.0-0_all.deb ...
Unpacking google-cloud-cli (430.0.0-0) over (429.0.0-0) ...
Setting up google-cloud-cli (430.0.0-0) ...
Processing triggers for man-db (2.9.1-1) ...
ubuntu@ip-172-31-16-238:~$
Install gcloud auth plugin
Installs the Google GKE auth plugin, which manages authentication between your client (kubectl) and the Google Kubernetes Engine.
sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin
# Response
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  google-cloud-sdk-gke-gcloud-auth-plugin
0 upgraded, 1 newly installed, 0 to remove and 5 not upgraded.
Need to get 3129 kB of archives.
After this operation, 11.0 MB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt cloud-sdk/main amd64 google-cloud-sdk-gke-gcloud-auth-plugin amd64 430.0.0-0 [3129 kB]
Fetched 3129 kB in 0s (7282 kB/s)
Selecting previously unselected package google-cloud-sdk-gke-gcloud-auth-plugin.
(Reading database ... 165046 files and directories currently installed.)
Preparing to unpack .../google-cloud-sdk-gke-gcloud-auth-plugin_430.0.0-0_amd64.deb ...
Unpacking google-cloud-sdk-gke-gcloud-auth-plugin (430.0.0-0) ...
dpkg: error processing archive /var/cache/apt/archives/google-cloud-sdk-gke-gcloud-auth-plugin_430.0.0-0_amd64.deb (--unpack):
 trying to overwrite '/usr/lib/google-cloud-sdk/.install/gke-gcloud-auth-plugin.snapshot.json', which is also in package google-cloud-cli-gke-gcloud-auth-plugin 429.0.0-0
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
Errors were encountered while processing:
 /var/cache/apt/archives/google-cloud-sdk-gke-gcloud-auth-plugin_430.0.0-0_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

NOTE: The dpkg error shown above occurs when the equivalent google-cloud-cli-gke-gcloud-auth-plugin package is already installed; in that case the auth plugin is already present. Alternatively, install google-cloud-cli-gke-gcloud-auth-plugin instead of the google-cloud-sdk-* package.
Authenticating with gcloud
You enter the gcloud auth login command from your terminal as shown in Figure 1.
Google provides you with a browser link.
You copy the Authorization code from the Google authentication page.
You paste the code into your terminal.
You are now authenticated and can check your status with the gcloud config list command.
To log in to your Google account from a terminal:
Log in with your Google account using the command:
gcloud auth login
Copy the resulting link URL that is displayed and paste it into your browser.
Copy the authorization code string.
Return to the terminal, and at the Enter authorization code prompt, paste in the string as shown in the following terminal screenshot.
Check which account you are logged into with:
gcloud config list
Figure 1 - Log into your Google account using gcloud auth login command on your terminal.
4.2.2.4 - Setting the Google Project ID
How to set the Google Cloud project ID used by your Altinity.Cloud Anywhere GKE environment.
8 May 2023 · Read time 1 min
Setting your Google Project ID
The Google project ID determines which project is used for tracking and billing purposes. It must be set before you begin the connection and provisioning process.
Check your Google Project ID
From your Google console, choose your project ID from the menu in your web browser, and check in the terminal that the same project is selected, as follows:
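For example, the active project can be confirmed from the terminal with:
gcloud config get-value project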
If your Google Project ID is NOT listed, then list and set your Google project ID(s) by running:
gcloud projects list
# Example response
PROJECT_ID     NAME           PROJECT_NUMBER
any-test-gke   any-test-gke   1234567890

# Set project ID example
gcloud config set project any-test-gke
4.2.3 - Altinity Cloud Manager Connection Setup
Altinity Cloud Manager Connection Setup.
The Altinity Cloud Manager includes a Connection Setup wizard that displays any time a new Environment is created that has not yet been connected. This section covers the use of the Connection Setup wizard, how to install the altinitycloud-connect command line software, and how to create a ClickHouse cluster and database.
4.2.3.1 - Altinity Connect Setup Wizard
How to use the Altinity Cloud Manager Connection Wizard to provision a ClickHouse-ready environment to your Google Cloud Platform Google (GCP) Kubernetes Engine (GKE).
8 May 2023 · Read time 4 min
Introduction
This section shows you how to create a secure connection between your Google GKE environment and the Altinity Cloud Manager using the Altinity.Cloud Anywhere Connection Setup wizard on your web browser.
Included is the free altinitycloud-connect software, a tunneling daemon that is part of Altinity.Cloud Anywhere and allows the Altinity Cloud Manager to communicate with your GKE-hosted ClickHouse cluster.
An altinitycloud-connect login token is provided for the connection.
The provisioning step uses a deployment script to configure your GKE environment.
Use the watch command or k9s monitoring tool to view the progress of the altinity-cloud nodes as you start the connection and provisioning process.
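For example, to watch the Altinity system pods come up (repeat with the altinity-cloud-managed-clickhouse namespace to follow the ClickHouse and Zookeeper pods):
watch kubectl get pods -n altinity-cloud-system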
Setting your Google Project ID
The Google project ID determines which project is used for tracking and billing purposes. It must be set before you begin the connection and provisioning process.
Check your Google Project ID
From your Google console, choose your project ID from the menu in your web browser, and check in the terminal that the same project is selected, as follows:
If your Google Project ID is NOT listed
If your Google Project ID is NOT listed, then list and set your Google project ID(s) by running:
gcloud projects list
# Example response
PROJECT_ID     NAME           PROJECT_NUMBER
any-test-gke   any-test-gke   1234567890

# Set project ID example
gcloud config set project any-test-gke
Check if altinitycloud-connect is installed
To verify you have altinitycloud-connect installed, run the following command:
altinitycloud-connect version

# Example response
0.20.0

# Installation location
where altinitycloud-connect
altinitycloud-connect is /usr/local/bin/altinitycloud-connect
Running the Connection Setup Wizard
In the Altinity Cloud Manager, the Connection Setup wizard is located in the Environments section of the ACM.
This instruction assumes that you have either:
Asked Altinity to provide you with an environment name (Example: gkeanywhere), or
Been given Altinity.Cloud Anywhere access so that you can create your own environment name.
As shown in Figure 1, the Connection Setup wizard displays this screen when the selected environment (Example: gkeanywhere, available from the top-right Environment menu) is not yet connected between the ACM and GKE.
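The wizard supplies the download link and the exact connection commands, including a registration token unique to your environment; a typical sequence looks like the following sketch (the token value is a placeholder):
altinitycloud-connect login --token=<registration-token>
altinitycloud-connect kubernetes | kubectl apply -f -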
Response
The response appears similar to the following:
namespace/altinity-cloud-system created
namespace/altinity-cloud-managed-clickhouse created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
serviceaccount/cloud-connect created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
secret/cloud-connect created
deployment.apps/cloud-connect created
2 of 3 Resources Configuration
Confirm the following settings then select the green PROCEED button:
Figure 2 - The Resources Configuration screen.
Cloud Provider = GCP
Storage Classes = premium-rwo
Storage Classes = standard
Storage Classes = standard-rwo
Node Pools:
Zone = us-west1-b and Instance Type = n2-standard-4 (2)
Capacity = 10 GB (this is an example setting)
Used for: (checkmark each of these items)
ClickHouse (checked on)
Zookeeper (checked on)
System (checked on)
3 of 3 Confirmation
Review the JSON data then select the green Finish button.
Note: If a message saying “Connection is not ready yet.” appears, you can select “Continue waiting…” until the next screen appears.
Confirm the following settings then select the green FINISH button:
Figure 3 - The Confirmation screen showing the Resources Specification JSON.
4.2.3.2 - Installing altinitycloud-connect
How to install altinitycloud-connect, the tunneling daemon that connects your Google GKE environment to the Altinity Cloud Manager.
8 May 2023 · Read time 1 min
Install altinitycloud-connect
Altinity.Cloud Anywhere includes altinitycloud-connect, a tunneling daemon that creates a secure connection between your Google GKE Kubernetes environment and the Altinity Cloud Manager.
Install altinitycloud-connect from the following links:
To verify you have altinitycloud-connect installed, run the following command:
altinitycloud-connect version
# Example response
0.20.0

# Installation location
where altinitycloud-connect
altinitycloud-connect is /usr/local/bin/altinitycloud-connect
Command-line help
Running the altinitycloud-connect command with no parameters displays the following options.
altinitycloud-connect
Usage:
  cloud-connect [flags]
  cloud-connect [command]

Available Commands:
  completion             Generate the autocompletion script for the specified shell
  kubernetes             Print Kubernetes manifest
  kubernetes-disconnect  Print Kubernetes disconnect manifest
  login                  Log in
  version                Print version

Flags:
      --ca-crt string        /path/to/custom/ca.crt (defaults to $ALTINITY_CLOUD_CACERT)
      --capability strings   List of capabilities. Supported: aws, gcp, kubernetes (includes all by default)
      --debug-addr string    Address to serve /metrics & /healthz on (default ":0")
  -i, --input string         /path/to/cloud-connect.pem produced by login command (default "cloud-connect.pem")
  -u, --url string           URL to connect to (defaults to $ALTINITY_CLOUD_URL, and if not specified, to https://anywhere.altinity.cloud) (default "https://anywhere.altinity.cloud")

Use "cloud-connect [command] --help" for more information about a command.
4.2.3.3 - Creating a ClickHouse Cluster
How to create a new ClickHouse database cluster using the Altinity Cloud Manager inside your Google Cloud Kubernetes Environment (GKE).
7 May 2023 · Read time 2 min
Creating a ClickHouse Cluster
This section covers how to use the Altinity Cloud Manager to create a ClickHouse cluster in your Google GKE Kubernetes environment.
To create a cluster (see Figure 1 for reference):
Use the top-left Environment menu to select where your Google GKE environment is located. In this example, as shown in Figure 1, the environment name is gkeanywhere.
Select Clusters from the navigation menu.
Select the LAUNCH CLUSTER blue button to launch the wizard.
A cluster panel named test-gcp-anyw is created.
When the cluster has started, green status indicators appear showing the nodes online and checks passed.
Figure 1 - The Clusters dashboard showing the ClickHouse cluster named test-gcp-anyw created in your Google GKE Kubernetes environment.
First time creating a cluster
If this is the first time you are viewing the Altinity Cloud Manager Clusters page, there will be no clusters and the screen will appear as shown in Figure 2. The following steps lead you through the screens displayed by the Launch Cluster wizard.
Figure 2 - The Clusters dashboard before any ClickHouse clusters have been created in your Google GKE Kubernetes environment.
NOTE: In each of the 6 steps in the wizard, you can navigate back and forth between the previously filled-in screens by selecting the title links on the left or by using the BACK and NEXT buttons.
To create a new ClickHouse cluster:
From your web browser in the Altinity Cloud Manager, select Clusters.
Select the blue LAUNCH CLUSTER button.
In step 1, the ClickHouse Setup screen, fill in the following and select the blue NEXT button:
Name = test-gcp-anyw (15-character limit, lower-case letters only)
ClickHouse Version = ALTINITY BUILDS: 22.8.15 Stable Build
ClickHouse User Name = admin
ClickHouse User Password = admin-password (example password) then select NEXT.
In step 2, the Resources Configuration screen, fill in the following then select NEXT button:
Node Type = n2-standard-4 (CPU x4, RAM 13 GB)
Node Storage = 50 GB
Volume Type = premium-rwo
Number of Shards = 1 then select NEXT.
In step 3, the High Availability Configuration screen, fill in the following then select NEXT:
Number of Replicas = 1
Zookeeper Configuration = Dedicated
Zookeeper Node Type = default
Backup Schedule = Monthly, Day of Week/Month = 1, Time (GMT) = 05:00 AM, Backups to Keep = 7
Number of Backups to keep = 0 (leave blank) then select NEXT.
In step 4, Connection Configuration screen, fill in the following then select NEXT:
Protocols: Binary Protocol (port:9440) - is checked ON
Protocols: HTTP Protocol (port:8443) - is checked ON
Datadog integration = disabled (greyed out, ask Altinity to enable)
IP restrictions = OFF (Enabled is unchecked)
In step 5, Uptime Schedule screen, select ALWAYS ON then select NEXT.
In step 6, the final screen Review & Launch, select the green LAUNCH button.
Your new ClickHouse cluster will start building. When it completes, the following green status boxes appear under your cluster name test-gcp-anyw:
1 / 1 nodes online
Health: 6/6 checks passed
4.2.3.4 - Creating a ClickHouse Database
How to use the Altinity Cloud Manager (ACM) to create a ClickHouse database on a Google Kubernetes (GKE) cluster.
7 May 2023 · Read time 3 min
Introduction
In this section you will create a ClickHouse database and tables on your Google GKE cluster using the ACM. You will then use your cluster's Explore menu in the ACM to run the database-creation scripts and queries. Finally, you will use the clickhouse-client command line tool from your local terminal, using the Connection Details string, to test data-retrieval queries.
Creating a ClickHouse Database
In the following steps you will use the cluster's EXPLORE menu and the Query tab.
Figure 1 - Using the Cluster > EXPLORE > Query tab to create and query ClickHouse databases and tables.
To create a new database on your Altinity.Cloud Anywhere cluster from the ACM:
Login to the ACM and select Clusters, then select EXPLORE on your cluster.
In the Query text box, enter the following CREATE TABLE SQL query:
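The original query is not reproduced here; the following is a minimal sketch that is consistent with the events and events_local tables queried later in this guide. The column types, engine settings, and macros are assumptions that depend on your cluster configuration.

-- Local (per-replica) table; the replication path and macros are assumptions
CREATE TABLE IF NOT EXISTS events_local ON CLUSTER '{cluster}'
(
    event_date  Date,
    event_type  Int32,
    article_id  Int32,
    title       String
)
ENGINE = ReplicatedMergeTree('/clickhouse/{cluster}/tables/{shard}/{database}/{table}', '{replica}')
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_type, article_id);

-- Distributed table that fans queries out over the shards
CREATE TABLE IF NOT EXISTS events ON CLUSTER '{cluster}' AS events_local
ENGINE = Distributed('{cluster}', default, events_local, rand());

-- Sample row matching the SELECT output shown later in this section
INSERT INTO events VALUES ('2023-03-24', 1, 13, 'Example');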
The connection string comes from your cluster's Connection Details link (Example cluster: test-gcp-anyw).
The Copy/Paste for client connections string, highlighted in red in Figure 2, is used in your terminal (you supply the password; Example: adminpassword).
Figure 2 - The cluster Connection Details showing the Copy/Paste for client connections string.
Find your pod name:
kubectl -n altinity-cloud-managed-clickhouse get all
# Response
NAME                                               READY   STATUS    RESTARTS        AGE
pod/chi-test-anywhere-6-johndoe-anywhere-6-0-0-0   2/2     Running   8 (3h25m ago)   2d17h
On your command line terminal, login to that pod using the name you got from step 1:
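A sketch of the login command, using the pod name returned above (adjust the pod name to match your own cluster):

kubectl -n altinity-cloud-managed-clickhouse exec -it chi-test-anywhere-6-johndoe-anywhere-6-0-0-0 -- bash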
Login to your ClickHouse database using the clickhouse-client command to get the :) happy face prompt:
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$ clickhouse-client
# Response
<jemalloc>: MADV_DONTNEED does not work (memset will be used instead)
<jemalloc>: (This is the expected behaviour if you are running under QEMU)
ClickHouse client version 22.8.13.21.altinitystable (altinity build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 22.8.13 revision 54460.
test-anywhere-6 :)
Run a show tables SQL command:
test-anywhere-6 :) show tables
# Response
SHOW TABLES
Query id: da01133d-0130-4b98-9090-4ebc6fa4b568
┌─name─────────┐
│ events │
│ events_local │
└──────────────┘
2 rows in set. Elapsed: 0.013 sec.
Run the following SQL query to show data in the events table:
test-anywhere-6 :) SELECT * FROM events;

# Response
SELECT *
FROM events
Query id: 00fef876-e9b0-44b1-b768-9e662eda0483
┌─event_date─┬─event_type─┬─article_id─┬─title───┐
│ 2023-03-24 │ 1 │ 13 │ Example │
└────────────┴────────────┴────────────┴─────────┘
1 row in set. Elapsed: 0.023 sec.
test-anywhere-6 :)
The following sections review the Google Cloud Console pages related to your Altinity.Cloud Anywhere environment.
A screenshot of the Google Console Welcome page is shown in Figure 1. Points of interest are marked in red:
The menu at the top showing proj-anywhere-gke is where you can switch to different projects
Clicking the proj-anywhere-gke link lets you create a new project or select other projects
The Create a GKE cluster button is the web console method of creating a Google GKE cluster
The Billing button is where you must set up a credit card for your project
The Kubernetes Engine section is where the terminal-created Kubernetes network will appear
The Compute Engine section is where the nodes in your Altinity-created ClickHouse cluster appear.
Figure 1 - Google GKE Kubernetes Console web page.
A Google Project ID is first created in your Google Console. Create a Google project with the NEW PROJECT button, then set up Billing.
Once you have a project name, you can select it from the terminal and complete the steps for creating a Kubernetes network and starting the cluster that you will then connect to Altinity.Cloud.
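A minimal sketch of those terminal steps, assuming the gcloud CLI is installed. The zone and node count below are illustrative values, not settings taken from this guide.

# Point gcloud at your project (example project ID from the console above)
gcloud config set project proj-anywhere-gke

# Create a GKE cluster; adjust the zone, machine type, and node count to your needs
gcloud container clusters create cluster-1 \
  --zone us-central1-a \
  --machine-type n2-standard-4 \
  --num-nodes 3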
Kubernetes Engine
From the home page, when you select the Kubernetes Engine button, you will see a page that displays the cluster that was created from the terminal after following the instructions on this page.
Figure 2 - The Google Kubernetes Engine page shows the installed instance of the Altinity-installed Kubernetes.
Compute Engine
From the home page, when you select the Compute Engine button, you will see a page that displays all the nodes created from the terminal. The names match what you see in the k9s monitoring windows that view the Altinity and ClickHouse Kubernetes namespaces.
Figure 3 - The Google Compute Engine page shows the installed instances of the Altinity ClickHouse Nodes.
4.2.4.2 - kubectl Commands
Installing from the terminal.
8 May 2023 · Read time 8 min
Installing kubectl
To use gcloud to install kubectl according to the Google GKE instructions:
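For example (assuming the gcloud CLI is already installed and initialized):

gcloud components install kubectl
# If gcloud was installed through a system package manager, install kubectl
# with that package manager instead of gcloud components.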
The following is an expanded listing using the kubectl cluster-info dump command (many hundreds of lines).
kubectl cluster-info dump
# Example response for a very long cluster information dump
# -----------------------------------------------------------
# {
#   "kind": "NodeList",
#   "apiVersion": "v1",
#   "metadata": {
#     "resourceVersion": "8685921"
#   },
#   "items": [
#     {
#       "metadata": {
#         "name": "gke-cluster-1-default-pool-36e9706c-0fxb",
#         "uid": "0b89edcc-d46b-4783-84f9-a7672f0bd922",
# ...
# ... several hundred lines
# ...
# 13. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
# (version 21.8.10.1.altinitystable (altinity build))
# 2023.04.14 06:24:31.919921 [ 115 ] {} <Debug> DNSResolver: Updated DNS cache
# 2023.04.14 06:24:35.915268 [ 54 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 97.44 GiB.
# ==== END logs for container clickhouse of pod default/chi-first-first-1-1-0 ====
kubectl exec - Enter a ClickHouse pod
To enter the pod and run the ClickHouse client directly, first locate the pod name of the cluster using watch or k9s, or find it from the ACM.
kubectl exec -it chi-first-first-0-0-0 -- bash
# You are now inside the pod; run a list command:
root@chi-first-first-0-0-0:/# ls
# bin  boot  cloud-connect.pem  dev  docker-entrypoint-initdb.d  entrypoint.sh  etc  home  kubectl  kubectl.sha256  lib  lib32  lib64  libx32  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
# To exit out of the pod:
exit
# ubuntu@ip-123-45-67-890:~$
kubectl get ns
This lists the currently registered Kubernetes namespaces in the current cluster-1 using the kubectl get ns command.
Figure 3 - Running the kubectl get ns command to list all of the namespaces.
kubectl get pod
List the pods in a ClickHouse cluster.
kubectl get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
# Example response to list pods
# ------------------------------
# NAME                    STATUS    NODE
# chi-first-first-0-0-0   Running   gke-cluster-1-default-pool-36e9706c-xj7p
# chi-first-first-0-1-0   Running   gke-cluster-1-default-pool-aa3988ca-nth7
# chi-first-first-1-0-0   Running   gke-cluster-1-default-pool-36e9706c-0fxb
# chi-first-first-1-1-0   Running   gke-cluster-1-default-pool-36e9706c-wrbm
kubectl get pvc
List the storage volumes.
kubectl get pvc
# Example response to list volumes
# --------------------------------
# NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
# pd-ssd-chi-first-first-0-0-0   Bound    pvc-5bc72a03-2ae5-41d1-9e93-92b92829c435   100Gi      RWO            premium-rwo    8d
# pd-ssd-chi-first-first-0-1-0   Bound    pvc-ec8f143d-c51d-4125-938a-76ad103fb7f2   100Gi      RWO            premium-rwo    8d
# pd-ssd-chi-first-first-1-0-0   Bound    pvc-014d010b-d282-4b47-91ef-b332bd381a28   100Gi      RWO            premium-rwo    8d
# pd-ssd-chi-first-first-1-1-0   Bound    pvc-c11d5819-1935-4b6f-ad54-60fa196fe013   100Gi      RWO            premium-rwo    8d
kubectl get all -n zoo1ns
To list the Zookeeper nodes:
kubectl get all -n zoo1ns
# Example response to list zookeeper nodes and services
# -----------------------------------------------------
# NAME              READY   STATUS    RESTARTS   AGE
# pod/zookeeper-0   1/1     Running   0          8d
#
# NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
# service/zookeeper    ClusterIP   10.72.131.73   <none>        2181/TCP,7000/TCP   8d
# service/zookeepers   ClusterIP   None           <none>        2888/TCP,3888/TCP   8d
#
# NAME                         READY   AGE
# statefulset.apps/zookeeper   1/1     8d
To verify the config file is updated with the correct credentials, review it by running the kubectl config view command.
Figure 1 - Running the kubectl config view command to verify that the config file is updated with credentials.
gcloud container clusters list
This lists the current container information with the gcloud container clusters list command.
Figure 2 - Running the gcloud container clusters list command.
kubectl cluster-info
Run the kubectl cluster-info command to list the Kubernetes control plane and services.
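For reference, the command is:

kubectl cluster-info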
At this point the Google setup is complete.
Now you can use Altinity.Cloud Anywhere to connect Google GKE to the Altinity Cloud Manager.
Figure 4 - Running the kubectl cluster-info command.
kubectl version
The computer or cloud compute instance that you use to communicate with Google Cloud requires the Google Cloud CLI (gcloud) and Kubernetes tooling to be installed.
The following software needs to be installed:
kubectl (verify with kubectl get namespaces)
Running the version checks below also shows which items you do not yet have installed.
Checking Versions
To make sure the prerequisites have been met, check the versions of the installed software.
# Version checks
kubectl version --short         # v1.27.1
cat /etc/os-release             # Ubuntu 20.04
altinitycloud-connect version   # Altinity 0.20.0
gcloud version                  # Google Cloud SDK 429.0.0
kubectl version --short
# Client Version: v1.26.3
# Kustomize Version: v4.5.7
# Unable to connect to the server: net/http: TLS handshake timeout
# Another variation to display version
kubectl version --output=yaml
4.2.4.3 - Miscellaneous terminal commands
Miscellaneous terminal commands for working with your Altinity.Cloud Anywhere installation on Google Kubernetes Engine (GKE).
8 May 2023 · Read time 1 min
Check the OS version
To perform terminal software installations, you will need to know which operating system is being used so you can choose the correct binaries.
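For example, on most Linux distributions:

cat /etc/os-release   # distribution name and version
uname -m              # CPU architecture (for example x86_64 or aarch64)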
To display the Kubernetes roles and resources that kubectl apply will use, run the following command and save the output from your terminal as a text file to read.
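A sketch of one way to do this, based on the altinitycloud-connect help shown earlier (the output file name is arbitrary, and the command expects the cloud-connect.pem produced by the login step to be present):

altinitycloud-connect kubernetes > altinity-connect-manifest.yaml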
Use a watch command when you want to monitor node activity of the altinity-cloud namespaces in real time. This is useful for installations that are taking a long time, and you wish to watch the provisioning process.
Run the watch commands on the two altinity-cloud prefixed namespaces using the following commands:
watch kubectl -n altinity-cloud-system get all
watch kubectl -n altinity-cloud-managed-clickhouse get all
watch kubectl -n altinity-cloud-system get all
# Example response
# ---------------------------
Every 2.0s: kubectl -n altinity-cloud-system get all    john.doe-MacBook-Pro.local: Sun Mar 19 23:03:18 2023

NAME                                      READY   STATUS    RESTARTS   AGE
pod/cloud-connect-d6ff8499f-bkc5k 1/1 Running 0 10h
pod/crtd-665fd5cb85-wqkkk 1/1 Running 0 10h
pod/edge-proxy-66d44f7465-t9446 2/2 Running 0 10h
pod/grafana-5b466574d-vvt9p 1/1 Running 0 10h
pod/kube-state-metrics-58d86c747c-7hj79 1/1 Running 0 10h
pod/node-exporter-762b5 1/1 Running 0 10h
pod/prometheus-0 1/1 Running 0 10h
pod/statuscheck-f7c9b4d98-2jlt6 1/1 Running 0 10h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/edge-proxy ClusterIP 10.109.2.17 <none> 443/TCP,8443/TCP,9440/TCP 10h
service/edge-proxy-lb LoadBalancer 10.100.216.192 <pending> 443:31873/TCP,8443:32612/TCP,9440:31596/TCP 10h
service/grafana ClusterIP 10.108.24.91 <none> 3000/TCP 10h
service/prometheus ClusterIP 10.102.103.141 <none> 9090/TCP 10h
service/prometheus-headless ClusterIP None <none> 9090/TCP 10h
service/statuscheck ClusterIP 10.101.224.247 <none> 80/TCP 10h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/node-exporter   1         1         1       1            1           <none>          10h
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cloud-connect        1/1     1            1           10h
deployment.apps/crtd                 1/1     1            1           10h
deployment.apps/edge-proxy           1/1     1            1           10h
deployment.apps/grafana              1/1     1            1           10h
deployment.apps/kube-state-metrics   1/1     1            1           10h
deployment.apps/statuscheck          1/1     1            1           10h
NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/cloud-connect-d6ff8499f         1         1         1       10h
replicaset.apps/crtd-665fd5cb85                 1         1         1       10h
replicaset.apps/edge-proxy-66d44f7465           1         1         1       10h
replicaset.apps/grafana-5b466574d               1         1         1       10h
replicaset.apps/grafana-6478f89b7c              0         0         0       10h
replicaset.apps/kube-state-metrics-58d86c747c   1         1         1       10h
replicaset.apps/statuscheck-f7c9b4d98           1         1         1       10h
NAME READY AGE
statefulset.apps/prometheus 1/1 10h
Figure 1 - The watch monitoring window for the namespace altinity-cloud-system, listing each node name, IP address, and the run status.
watch kubectl -n altinity-cloud-managed-clickhouse get all
# Example response
# ---------------------------
Every 2.0s: kubectl -n altinity-cloud-managed-clickhouse get all    john.doe-MacBook-Pro.local: Mon Mar 20 00:14:44 2023

NAME                                            READY   STATUS    RESTARTS   AGE
pod/chi-test-anywhere-6-test-anywhere-6-0-0-0 2/2 Running 0 11h
pod/clickhouse-operator-996785fc-rgfvl 2/2 Running 0 11h
pod/zookeeper-5244-0 1/1 Running 0 11h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/chi-test-anywhere-6-test-anywhere-6-0-0 ClusterIP 10.98.202.85 <none> 8123/TCP,9000/TCP,9009/TCP 11h
service/clickhouse-operator-metrics ClusterIP 10.109.90.202 <none> 8888/TCP 11h
service/clickhouse-test-anywhere-6 ClusterIP 10.100.48.57 <none> 8443/TCP,9440/TCP 11h
service/zookeeper-5244 ClusterIP 10.101.71.82 <none> 2181/TCP,7000/TCP 11h
service/zookeepers-5244 ClusterIP None <none> 2888/TCP,3888/TCP 11h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/clickhouse-operator   1/1     1            1           11h
NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/clickhouse-operator-996785fc   1         1         1       11h
NAME READY AGE
statefulset.apps/chi-test-anywhere-6-test-anywhere-6-0-0 1/1 11h
statefulset.apps/zookeeper-5244 1/1 11h
Figure 2 - The watch monitoring window for the namespace altinity-cloud-managed-clickhouse, listing each node name, IP address, and the run status.
K9S Real-Time Monitoring
K9s is similar to the watch command for monitoring nodes in real time, but displays its output in color in a smaller interactive window. K9s is a free utility that lets you monitor the progress of a provisioning installation in real time.
To open a monitoring window for each of the altinity-cloud namespaces, open a new terminal instance and run the k9s command:
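For example (k9s accepts a namespace with the -n flag; run each command in its own terminal):

k9s -n altinity-cloud-system
k9s -n altinity-cloud-managed-clickhouse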
Figure 3 - The K9S monitoring windows for the two namespaces altinity-cloud-system and altinity-cloud-managed-clickhouse listing each node name, IP address, and the run status.
4.2.4.4 - Maintenance Tasks
Maintenance tasks for an Altinity.Cloud Anywhere installation on Google Kubernetes Engine (GKE).
How to rescale a GKE-hosted ClickHouse Cluster using the Altinity Cloud Manager
For detailed instructions, with screenshots, on rescaling your GKE cluster using the Altinity Cloud Manager cluster tools, follow the instructions on this page.
Use the Altinity Cloud Manager menu in your cluster: Actions 》Rescale to change:
CPU
Node Storage size
Volumes
Number of Shards
Number of Replicas
To rescale your GKE cluster using the Altinity Cloud Manager cluster tools:
Select Clusters from the ACM left pane then select a running cluster to rescale.
Select the menu ACTIONS 》Rescale item.
In the Rescale Cluster window, adjust the following settings as needed in the column labelled Desired:
Number of Shards (Example: 2)
Number of Replicas (Example: 2)
Node Type (Example: n2d-standard-32)
Node Storage (GB) > (Example: 50)
Number of Volumes > (Example: 2)
Select OK, then CONFIRM at the Rescale Confirmation window.
Confirm that the new values appear in your cluster dashboard panel.
NOTE: Cluster Node Storage size may not be decreased, only increased by at least 10%.
How to reset your Anywhere environment
Resetting your Altinity.Cloud Anywhere ClickHouse cluster from the ACM and your GKE environment lets you create a new connection.
In the Environment section, selecting your Anywhere environment name displays the Connection Setup wizard.
Use the ACM Reset Anywhere function, then run the terminal commands to delete the ClickHouse services and namespaces.
In the ACM, select Environments from the left-hand navigation pane.
From the environment menu located beside your login name at the top right of the ACM, select your environment name.
In the ACTION menu, select Reset Anywhere.
The result is that you will see the Anywhere Connection Setup screen and provisioning wizard that shows you the connection string to copy and paste to deploy a new Anywhere environment.
How to delete your Anywhere cluster
This section covers how to delete your GKE cluster using the ACM’s Reset Anywhere function, then removing the altinity-cloud namespaces from your GKE environment.
Check your namespaces to confirm that the altinity-cloud namespaces are present.
kubectl get ns
NAME STATUS AGE
altinity-cloud-managed-clickhouse Active 12d
altinity-cloud-system Active 12d
default Active 12d
kube-node-lease Active 12d
kube-public Active 12d
kube-system Active 12d
To delete ClickHouse services and altinity-cloud namespaces, run the following commands in sequence:
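The commands are the same deletion sequence shown in the Minikube section later in this guide:

kubectl -n altinity-cloud-managed-clickhouse delete chi --all
kubectl delete ns altinity-cloud-managed-clickhouse
kubectl delete ns altinity-cloud-system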
This page shows how the Altinity Cloud Manager, with an Altinity.Cloud Anywhere installation, can remotely rescale a customer's on-prem cluster.
Select a cluster, then use the Actions > Rescale menu to bring up the Rescale Cluster window, then in the Desired Cluster Size, change the Number of Shards from 1 to 2 then press OK, then CONFIRM.
Figure 1 - Selecting Actions > Rescale from the cluster to modify.
Figure 2 - Changing the number of Shards from 1 to 2.
Figure 3 - Rescale confirmation.
Figure 4 - Nodes in the process of rescaling.
Verify rescale from the terminal
The Ubuntu host on which the Kubernetes installation is running shows the various commands used to verify the changes made from the ACM.
The nodes online pill box will show grey with 2/4 nodes online, then after several minutes turn green showing 4/4 nodes online. If you do not see the grey 2/4 nodes online, and the pill box is green showing 2/2 nodes online, try the rescale operation again.
Use the kubectl -n altinity-cloud-managed-clickhouse... command to show the Altinity clusters before the rescale operation.
Figure 5 - Kubernetes command kubectl -n <Altinity cluster name> running on-prem, which is also managed by the ACM.
Figure 6 - The newly added nodes …-demo-1-0-0 after the rescale operation are now listed, showing Pending.
Ubuntu command kubectl get nodes before the rescale operation.
Figure 7 - Kubernetes command kubectl get nodes shows all the nodes on the Altinity ClickHouse cluster.
Figure 8 - The pending node is added as 192.168.149.238 and is spinning up.
The newly spun up shard in cluster-x now reads 4/4 nodes online.
Figure 9 - The Altinity Cloud Manager showing the remotely managed cluster-y with 4/4 nodes online.
4.3 - Minikube Installation (for test or development only)
How to install Altinity.Cloud Anywhere on Minikube. For testing and development use only.
24 April 2023 · Read time 30 min
Overview - Minikube Installation (for testing and development use only)
This guide covers installing Minikube as your own Kubernetes environment and using Altinity.Cloud Anywhere to do the provisioning. Any computer or cloud instance that can run Kubernetes and Minikube will work. Note that while Minikube is fine for development purposes, it should not be used for production.
These instructions have been tested on:
Ubuntu 22.04 server
Windows 10 with WSL2 Ubuntu 20.04
VMWare running Ubuntu on Intel & M1 ARM
M1 Silicon Mac running Monterey (v12.6.3) and Ventura (v13.3.1)
Intel Mac running Big Sur (v11.7.4)
Requirements
The following Altinity.Cloud service subscriptions are needed:
Server requirements
Minikube needs a minimum of 2 processors. Allocate RAM and disk space to accommodate your clusters. Check the values by running the terminal commands (Example: lscpu).
Minimum of 2 CPU (lscpu, or sysctl -a [for Mac])
Minimum 8 GB RAM (grep MemTotal /proc/meminfo )
30 GB disk space ( df -h)
The following software must first be installed on your Minikube host; the version-check commands in the next section list each required package.
From a terminal, check the versions of all the installed software by running each command in turn.
Checking versions
To make sure you have the required software installed, check the versions for each using the following commands:
docker --version
docker-machine --version
docker-compose --version
minikube version
kubectl version -o json
watch --version
k9s version
altinitycloud-connect version
Starting Minikube
From the terminal, run the command:
minikube start
Linux ARM Ubuntu 22.04
This is Minikube’s response from an Ubuntu 22.04 server running on ARM:
# minikube start
😄  minikube v1.30.1 on Ubuntu 22.04 (arm64)
✨  Using the qemu2 driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing qemu2 VM for "minikube" ...
🐳 Preparing Kubernetes v1.26.3 on Docker 20.10.23 ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Linux ARM Apple Macintosh M1
This is Minikube’s response from a Mac running Ventura:
# minikube start
😄  minikube v1.29.0 on Darwin 13.2.1 (arm64)
✨  Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🏃 Updating the running docker "minikube" container ...
🐳 Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Linux Windows Intel
This is Minikube’s response from a Microsoft Windows system running Ubuntu:
# minikube start
😄  minikube v1.30.1 on Ubuntu 20.04
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.26.3 on Docker 23.0.2 ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Checking Minikube’s status
If you are not sure if Minikube is already running, run a status check as follows:
minikube status
# minikube
# type: Control Plane
# host: Running
# kubelet: Running
# apiserver: Running
# kubeconfig: Configured
Checking the Kubernetes kubectl command
This step checks that the kubectl command works on your Minikube host. Running the kubectl get ns command lists the namespaces that are currently running on your Minikube server.
Run the kubectl namespace list command:
kubectl get ns
# Example response:
# -------------------
# NAME              STATUS   AGE
# default           Active   15d
# kube-node-lease   Active   15d
# kube-public       Active   15d
# kube-system       Active   15d
Altinity Connection Setup
To start the Connection Setup:
From the Altinity Cloud Manager, select the Environments section, then make sure you are in the correct environment by selecting it from the menu located at the top right of the screen.
In Figure 1, in the Connection Setup step 2 Connect to Altinity.Cloud text box, select all the text.
In your Minikube terminal, copy and paste the text and press the return key.
A command prompt appears immediately.
Example: the altinitycloud-connect login token string from the Altinity Cloud Manager Connection Setup wizard step 2, Connect to Altinity.Cloud.
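For illustration only, the pasted command has this general shape; the token value is a placeholder issued by the wizard:

altinitycloud-connect login --token=<token-from-the-wizard>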
Starting the Provisioning
From Figure 1, in the Connection Setup screen step 3, Deploy connector to your Kubernetes cluster, copy the string and paste it into your terminal. This begins the provisioning process inside your Minikube Kubernetes environment.
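Use the exact string shown in the wizard. As a hedged illustration of what it does, the deploy string typically pipes the connector manifest into kubectl:

altinitycloud-connect kubernetes | kubectl apply -f -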
Response
The response appears similar to the following:
namespace/altinity-cloud-system created
namespace/altinity-cloud-managed-clickhouse created
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
clusterrole.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
serviceaccount/cloud-connect created
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:node-metrics-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:storage-class-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/altinity-cloud:persistent-volume-view unchanged
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
rolebinding.rbac.authorization.k8s.io/altinity-cloud:cloud-connect created
secret/cloud-connect created
deployment.apps/cloud-connect created
1 of 3 Connection Setup
From the Altinity Cloud Manager Connection Setup page, select the green PROCEED button.
Figure 1 - The Environments > Connection Setup screen.
2 of 3 Resources Configuration
Confirm the following settings, then select the green PROCEED button:
Figure 2 - The Resources Configuration screen.
NOTE: In Figure 2, if the table for the Node Pools section does not include a row for your Minikube server, select the ADD NODE POOL button and add the Zone name and Instance Type name and Capacity, and check each of the Used For checkboxes as shown.
Cloud Provider = Not Specified
Storage Classes = Standard
Node Pools:
Zone = minikube-zone-a
Instance Type = minikube-node
Capacity = 10 GB (this is an example setting)
Used for: (checkmark each of these items)
ClickHouse (checked on)
Zookeeper (checked on)
System (checked on)
Tolerations = dedicated=clickhouse:NoSchedule
3 of 3 Confirmation
In Figure 3, the Confirmation tab displays the Resources Specifications text box. Review these values and correct them if necessary by selecting the Resources Configuration tab to make changes.
To complete the Connection Setup wizard:
Select the green Finish button.
A progress bar and message appear: “Connection is not ready yet.”
Select “Continue waiting…” until the next screen appears.
Figure 3 - The Confirmation screen showing the Resources Specification JSON and the Connection is not ready yet message that appears until the connection to your Minikube is established.
In the Confirmation screen shown in Figure 3, an example Resources Specification JSON string appears with the names of the storageClasses, nodePools and instanceType, zone and capacity value.
The following step labels your Minikube node so that the ACM can find the ClickHouse Kubernetes server that Altinity.Cloud just provisioned for you.
Refer to the Resources Specification JSON for the instanceType value minikube-node and the zone name minikube-zone-a where they are set.
Run the following command from your Kubernetes host terminal.
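The exact command comes from the wizard. As a hypothetical sketch, labeling the Minikube node with the standard Kubernetes instance-type and zone labels that match the Resources Specification values would look like this (the label keys are an assumption, not taken from this guide):

# Hypothetical: label keys may differ from what the ACM expects
kubectl label node minikube \
  node.kubernetes.io/instance-type=minikube-node \
  topology.kubernetes.io/zone=minikube-zone-a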
To monitor in real time the progress of a provisioning installation, run the watch commands on the two altinity-cloud prefixed namespaces.
Running Watch command 1 of 2
To monitor the provisioning process, use the watch or k9s utility to monitor altinity-cloud-system.
The display updates every 2 seconds.
watch kubectl -n altinity-cloud-system get all
Response
The result appears similar to the following display:
Every 2.0s: kubectl -n altinity-cloud-system get all    john.doe-yourcomputer.local: Sun Mar 19 23:03:18 2023

NAME                                      READY   STATUS    RESTARTS   AGE
pod/cloud-connect-d6ff8499f-bkc5k 1/1 Running 0 10h
pod/crtd-665fd5cb85-wqkkk 1/1 Running 0 10h
pod/edge-proxy-66d44f7465-t9446 2/2 Running 0 10h
pod/grafana-5b466574d-vvt9p 1/1 Running 0 10h
pod/kube-state-metrics-58d86c747c-7hj79 1/1 Running 0 10h
pod/node-exporter-762b5 1/1 Running 0 10h
pod/prometheus-0 1/1 Running 0 10h
pod/statuscheck-f7c9b4d98-2jlt6 1/1 Running 0 10h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/edge-proxy ClusterIP 10.109.2.17 <none> 443/TCP,8443/TCP,9440/TCP 10h
service/edge-proxy-lb LoadBalancer 10.100.216.192 <pending> 443:31873/TCP,8443:32612/TCP,9440:31596/TCP 10h
service/grafana ClusterIP 10.108.24.91 <none> 3000/TCP 10h
service/prometheus ClusterIP 10.102.103.141 <none> 9090/TCP 10h
service/prometheus-headless ClusterIP None <none> 9090/TCP 10h
service/statuscheck ClusterIP 10.101.224.247 <none> 80/TCP 10h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/node-exporter   1         1         1       1            1           <none>          10h
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cloud-connect        1/1     1            1           10h
deployment.apps/crtd                 1/1     1            1           10h
deployment.apps/edge-proxy           1/1     1            1           10h
deployment.apps/grafana              1/1     1            1           10h
deployment.apps/kube-state-metrics   1/1     1            1           10h
deployment.apps/statuscheck          1/1     1            1           10h
NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/cloud-connect-d6ff8499f         1         1         1       10h
replicaset.apps/crtd-665fd5cb85                 1         1         1       10h
replicaset.apps/edge-proxy-66d44f7465           1         1         1       10h
replicaset.apps/grafana-5b466574d               1         1         1       10h
replicaset.apps/grafana-6478f89b7c              0         0         0       10h
replicaset.apps/kube-state-metrics-58d86c747c   1         1         1       10h
replicaset.apps/statuscheck-f7c9b4d98           1         1         1       10h
NAME READY AGE
statefulset.apps/prometheus 1/1 10h
Running Watch command 2 of 2
Open a second terminal window to monitor altinity-cloud-managed-clickhouse.
watch kubectl -n altinity-cloud-managed-clickhouse get all
Response
The result appears similar to the following display:
Every 2.0s: kubectl -n altinity-cloud-managed-clickhouse get all    john.doe-yourcomputer.local: Mon Mar 20 00:14:44 2023

NAME                                            READY   STATUS    RESTARTS   AGE
pod/chi-test-anywhere-6-test-anywhere-6-0-0-0 2/2 Running 0 11h
pod/clickhouse-operator-996785fc-rgfvl 2/2 Running 0 11h
pod/zookeeper-5244-0 1/1 Running 0 11h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/chi-test-anywhere-6-test-anywhere-6-0-0 ClusterIP 10.98.202.85 <none> 8123/TCP,9000/TCP,9009/TCP 11h
service/clickhouse-operator-metrics ClusterIP 10.109.90.202 <none> 8888/TCP 11h
service/clickhouse-test-anywhere-6 ClusterIP 10.100.48.57 <none> 8443/TCP,9440/TCP 11h
service/zookeeper-5244 ClusterIP 10.101.71.82 <none> 2181/TCP,7000/TCP 11h
service/zookeepers-5244 ClusterIP None <none> 2888/TCP,3888/TCP 11h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/clickhouse-operator   1/1     1            1           11h
NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/clickhouse-operator-996785fc   1         1         1       11h
NAME READY AGE
statefulset.apps/chi-test-anywhere-6-test-anywhere-6-0-0 1/1 11h
statefulset.apps/zookeeper-5244 1/1 11h
Optional K9S Commands
Similar to watch, but displayed in color and in a smaller interactive window, K9s is a free utility that lets you monitor the progress of a provisioning installation in real time.
To open a monitoring window for each of the altinity-cloud namespaces, open a new terminal instance and run the k9s command:
Figure 4 - The k9s monitoring windows for the two namespaces altinity-cloud-system and altinity-cloud-managed-clickhouse listing each node name, IP address, and the run status.
Environment Dashboard
When provisioning is complete and the connection is established, the ACM displays the dashboard page showing the green connected icon. Since there is no cluster yet, the dashboard shows zeros for the number of Nodes and Clusters.
Figure 5 - The Environments dashboard screen shows you a snapshot of your Minikube server configuration, including the green connected status.
Listing Namespaces
To verify the presence of the new namespaces on your Minikube server, open a third terminal window and list the namespaces to show the two altinity-cloud additions:
kubectl get ns
Response
Note the two new altinity-cloud namespaces at the top:
NAME STATUS AGE
altinity-cloud-managed-clickhouse Active 8h
altinity-cloud-system Active 8h
default Active 16d
kube-node-lease Active 16d
kube-public Active 16d
kube-system Active 16d
Creating a ClickHouse Cluster
These instructions run through the use of the Altinity.Cloud Manager (ACM) Clusters > LAUNCH CLUSTER wizard to create a ClickHouse cluster running in a Minikube Kubernetes environment. The Cluster dashboard in Figure 6 shows the finished result.
Figure 6 - The Clusters dashboard screen showing your new cluster on your Minikube server created by the Altinity Cloud Manager.
NOTE: The Cluster Launch Wizard lets you navigate back and forth between the previously filled-in screens by selecting the title links on the left, or using the BACK and NEXT buttons.
Protocols: Binary Protocol (port:9440) - is checked ON
Protocols: HTTP Protocol (port:8443) - is checked ON
Datadog integration = disabled
IP restrictions = OFF (Enabled is unchecked)
In step 5 Uptime Schedule screen, select ALWAYS ON then NEXT:
In the final screen step 6 Review & Launch, select the green LAUNCH button.
Your new ClickHouse Cluster will start building inside your Minikube. When the cluster is finished building and running, the cluster dashboard appears, similar to the screenshot shown in Figure 6. Beside your cluster name, two green status boxes nodes online, and checks passed appear.
Creating a Database and Running Queries
In this section, you will create tables on your cluster using the ACM and run queries from both the ACM and then from your local terminal.
Testing your database on ACM
To create a new database on your Altinity.Cloud Anywhere cluster from the ACM:
Login to the ACM and select Clusters, then select EXPLORE on your cluster.
In the Query text box, enter the following create table SQL query:
This section shows you how to use your local Minikube computer terminal to log into the ClickHouse cluster that the ACM created.
NOTE: With Minikube, you cannot use your cluster Connection Details strings to directly run clickhouse-client commands; you must first log into the ClickHouse pod as described in the following steps.
Find your pod name:
kubectl -n altinity-cloud-managed-clickhouse get all
# Response
NAME                                               READY   STATUS    RESTARTS        AGE
pod/chi-test-anywhere-6-johndoe-anywhere-6-0-0-0   2/2     Running   8 (3h25m ago)   2d17h
On your Minikube computer terminal, log into that pod using the name you got from step 1:
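A sketch of the login command, using the pod name returned above:

kubectl -n altinity-cloud-managed-clickhouse exec -it chi-test-anywhere-6-johndoe-anywhere-6-0-0-0 -- bash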
Log into your ClickHouse database using the clickhouse-client command to get the :) happy face prompt:
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$
clickhouse@chi-test-anywhere-6-johndoe-anywhere-6-0-0-0:/$ clickhouse-client
# Response
<jemalloc>: MADV_DONTNEED does not work (memset will be used instead)
<jemalloc>: (This is the expected behavior if you are running under QEMU)
ClickHouse client version 22.8.13.21.altinitystable (altinity build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 22.8.13 revision 54460.
test-anywhere-6 :)
Run a show tables SQL command:
test-anywhere-6 :) show tables
# Response
SHOW TABLES
Query id: da01133d-0130-4b98-9090-4ebc6fa4b568
┌─name─────────┐
│ events │
│ events_local │
└──────────────┘
2 rows in set. Elapsed: 0.013 sec.
Run the following SQL query to show data in the events table:
test-anywhere-6 :) SELECT * FROM events;

# Response
SELECT *
FROM events
Query id: 00fef876-e9b0-44b1-b768-9e662eda0483
┌─event_date─┬─event_type─┬─article_id─┬─title───┐
│ 2023-03-24 │ 1 │ 13 │ Example │
└────────────┴────────────┴────────────┴─────────┘
1 row in set. Elapsed: 0.023 sec.
test-anywhere-6 :)
Exiting from ClickHouse client and your pod
To leave the ClickHouse client, enter the exit command.
To leave the pod and return to the Linux prompt enter the exit command again.
Verify you are at the command prompt by entering a Linux command such as pwd (print working directory) to see what directory you are currently in.
This section provides a few commonly used Minikube maintenance operations.
Rescaling a cluster
Use the Altinity Cloud Manager Actions 》Rescale to change the CPU, Node Storage, Volumes, and Number of Shards and Replicas.
From the list of Clusters, select a running cluster.
Select the menu ACTIONS > Rescale item.
In the Rescale Cluster window, adjust the following settings as needed:
Desired Cluster Size > Number of Shards
Desired Cluster Size > Number of Replicas
Desired Node Size > Node Type
Desired Node Storage (GB) > (integer: example 50)
Number of Volumes > (integer: example 2)
Select OK, then CONFIRM at the Rescale Confirmation window.
Confirm that the new values appear in your cluster dashboard panel.
Note that cluster Node Storage size may not be decreased, only increased by at least 10%.
Resetting Altinity.Cloud Anywhere
Reset your Altinity.Cloud Anywhere cluster from the ACM and your Minikube installation to create a new Altinity.Cloud Anywhere connection.
To use the Reset Anywhere function:
In the ACM, select Environments from the left-hand navigation pane.
From the environment menu located beside your login name at the top right of the ACM, select your environment name.
In the ACTION menu, select Reset Anywhere.
The result is that you will see the Anywhere Connection Setup screen and provisioning wizard that shows you the connection string to copy and paste to deploy a new Anywhere environment.
Deleting a cluster
Deletion steps involve the ACM and the server hosting your cluster. If necessary, first Reset Anywhere.
From the Altinity Cloud Manager:
In the Clusters section, select from your cluster menu ACTIONS > Destroy.
At the Delete Cluster confirmation dialog box, type in the name of your cluster (example-cluster) and select OK.
From the Environments section, select your Environment Name link.
Select the menu ACTIONS > Reset Anywhere.
To delete your Kubernetes-managed environments from your server, list the ClickHouse namespaces and then run the following commands:
(NOTE: Make sure you have run the minikube start command first.)
# List the namespaces
kubectl get ns

# Delete the following in this order
kubectl -n altinity-cloud-managed-clickhouse delete chi --all
kubectl delete ns altinity-cloud-managed-clickhouse
kubectl delete ns altinity-cloud-system
Deleting a pod
Deleting a pod may be necessary if it is not starting up.
Problem
One of the pods won’t start.
(Example: see line 3 edge-proxy-66d44f7465-lxjjn)
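A sketch of the recovery step, using the example pod name above; the owning ReplicaSet recreates the deleted pod automatically:

kubectl -n altinity-cloud-system delete pod edge-proxy-66d44f7465-lxjjn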
This page shows how the Altinity Cloud Manager, with an Altinity.Cloud Anywhere installation, can remotely rescale a customer's on-prem cluster.
The Ubuntu host on which the Kubernetes installation is running shows the various commands used to verify the changes made from the ACM.
Figure 1 - Selecting Actions > Rescale from the cluster to modify.
Figure 2 - Changing the number of Shards from 1 to 2.
Figure 3 - Rescale confirmation.
Figure 4 - Nodes in the process of rescaling.
Rescaling a cluster on the ACM
Select a cluster, then use the Actions > Rescale menu to bring up the Rescale Cluster window, then in the Desired Cluster Size, change the Number of Shards from 1 to 2 then press OK, then CONFIRM.
The nodes online pill box will show grey with 2/4 nodes online, then after several minutes turn green showing 4/4 nodes online. If you do not see the grey 2/4 nodes online, and the pill box is green showing 2/2 nodes online, try the rescale operation again.
Ubuntu Kubernetes Commands
Ubuntu command kubectl -n altinity-cloud-managed-clickhouse... showing the Altinity clusters before the rescale operation.
Figure 5 - Kubernetes command kubectl -n <Altinity cluster name> running on-prem, which is also managed by the ACM.
Figure 6 - The newly added nodes …-demo-1-0-0 after the rescale operation are now listed, showing Pending.
Ubuntu command kubectl get nodes before the rescale operation.
Figure 7 - Kubernetes command kubectl get nodes shows all the nodes on the Altinity ClickHouse cluster.
Figure 8 - The pending node is added as 192.168.149.238 and is spinning up.
The newly spun up shard in cluster-x now reads 4/4 nodes online.
Figure 9 - The Altinity Cloud Manager showing the remotely managed cluster-y with 4/4 nodes online.
4.5 -
20 March 2023 · Read time 1 min
Before installing Altinity.Cloud Anywhere into your environment, verify that the following requirements are met.
Security Requirements
Have a current Altinity.Cloud account.
An Altinity.Cloud API token. For more details, see Account Settings.
Altinity.Cloud connect (altinitycloud-connect) is a tunneling daemon for Altinity.Cloud.
It enables management of ClickHouse clusters through Altinity.Cloud Anywhere.
Required permissions
altinitycloud-connect requires the following permissions:
Open outbound ports:
443 tcp/udp (egress; stateful)
Kubernetes permissions:
cluster-admin for initial provisioning only; it can be revoked afterwards
full access to ‘altinity-cloud-system’ and ‘altinity-cloud-managed-clickhouse’ namespaces and a few optional read-only cluster-level permissions (for observability)
altinitycloud-connect login produces cloud-connect.pem, which is used to connect to the Altinity.Cloud Anywhere control plane (--token is short-lived, while cloud-connect.pem does not expire until revoked).
If you need to reconnect the environment in unattended/batch mode (that is, without requesting a new token), you can do so using the existing cloud-connect.pem, as sketched below.
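A sketch, assuming cloud-connect.pem is still present in the working directory; the flags are those listed in the altinitycloud-connect help earlier in this guide:

altinitycloud-connect kubernetes -i cloud-connect.pem | kubectl apply -f -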
Disconnecting your environment from Altinity.Cloud
Locate your environment in the Environment tab in your Altinity.Cloud account.
Select ACTIONS->Delete.
Toggle the Delete Clusters switch only if you want to delete managed clusters.
Press OK to complete.
After this is complete Altinity.Cloud will no longer be able to see or
connect to your Kubernetes environment via the connector.
Cleaning up managed environments in Kubernetes
To clean up managed ClickHouse installations and namespaces in a
disconnected Kubernetes cluster, issue the following commands in the
exact order shown below.
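The commands, matching the deletion sequence used elsewhere in this guide:

kubectl -n altinity-cloud-managed-clickhouse delete chi --all
kubectl delete ns altinity-cloud-managed-clickhouse
kubectl delete ns altinity-cloud-system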
If you delete the namespaces before deleting the ClickHouse installations
(chi) the operation will hang due to missing finalizers on chi resources.
Should this occur, issue kubectl edit commands on each ClickHouse
installation and remove the finalizer manually from the resource
specification. Here is an example.
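A sketch of the manual cleanup; the cluster name is a placeholder and the finalizer name shown in the comment is illustrative:

# Open the stuck ClickHouseInstallation (chi) resource for editing
kubectl -n altinity-cloud-managed-clickhouse edit chi <cluster-name>

# In the editor, delete the finalizers entry under metadata, for example:
#   metadata:
#     finalizers:
#     - finalizer.clickhouseinstallation.altinity.com   # remove this line, then save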
In order for Altinity.Cloud Anywhere to gather, store, and query logs, you need to configure access to an S3 or GCS bucket.
Cloud-specific instructions are provided below.