Setting up logging
For Altinity.Cloud Anywhere to gather, store, and query logs from your ClickHouse® clusters, you need to configure access to an object storage bucket (S3, GCS, or Azure Blob Storage). Logs are scraped only from Kubernetes nodes that carry the label altinity.cloud/node-group: infra.
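For example, a node can be labeled like this (REPLACE_WITH_NODE_NAME is a placeholder for one of your Kubernetes node names):
kubectl label node REPLACE_WITH_NODE_NAME altinity.cloud/node-group=infra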
Vendor-specific recommendations are below:
EKS (AWS)
The recommended approach is to use IRSA (IAM Roles for Service Accounts), annotating the log-storage service account with the IAM role to assume:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: log-storage
  namespace: altinity-cloud-system
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<aws_account_id>:role/<role_name>"
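If you manage the cluster with eksctl, one way to create such a role is eksctl create iamserviceaccount, which creates and annotates the service account for you. This is a sketch, assuming an existing IAM policy (REPLACE_WITH_POLICY_ARN is a placeholder) that grants the S3 actions listed in the policy document below:
# create an IAM role bound to the altinity-cloud-system/log-storage service account;
# eksctl creates and annotates the service account as part of this command
eksctl create iamserviceaccount \
  --cluster REPLACE_WITH_CLUSTER_NAME \
  --namespace altinity-cloud-system \
  --name log-storage \
  --attach-policy-arn REPLACE_WITH_POLICY_ARN \
  --approve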
Alternatively, you can use a custom EC2 instance profile or explicit credentials (shown below).
# create bucket
# (for regions other than us-east-1, also pass:
#  --create-bucket-configuration LocationConstraint=REPLACE_WITH_AWS_REGION)
aws s3api create-bucket --bucket REPLACE_WITH_BUCKET_NAME --region REPLACE_WITH_AWS_REGION
# create user with access to the bucket
aws iam create-user --user-name REPLACE_WITH_USER_NAME
aws iam put-user-policy \
  --user-name REPLACE_WITH_USER_NAME \
  --policy-name REPLACE_WITH_POLICY_NAME \
  --policy-document \
  '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": [
          "s3:ListBucket",
          "s3:PutObject",
          "s3:GetObject",
          "s3:DeleteObject"
        ],
        "Resource": [
          "arn:aws:s3:::REPLACE_WITH_BUCKET_NAME",
          "arn:aws:s3:::REPLACE_WITH_BUCKET_NAME/*"
        ],
        "Effect": "Allow"
      }
    ]
  }'
# generate access key
aws iam create-access-key --user-name REPLACE_WITH_USER_NAME |
  jq -r '"AWS_ACCESS_KEY_ID="+(.AccessKey.AccessKeyId)+"\nAWS_SECRET_ACCESS_KEY="+(.AccessKey.SecretAccessKey)+"\n"' > credentials.env
# create altinity-cloud-system/log-storage-aws secret containing AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY
kubectl create secret -n altinity-cloud-system generic log-storage-aws \
  --from-env-file=credentials.env
rm -i credentials.env
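As an optional, illustrative sanity check, you can confirm that the secret exists and contains both keys (this shows key names and sizes, not values):
kubectl describe secret -n altinity-cloud-system log-storage-aws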
Use your private customer Slack channel to send the bucket name to Altinity to finish the configuration.
GKE (GCP)
The recommended approach is to use Workload Identity, annotating the log-storage service account with the GCP service account to impersonate:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: log-storage
  namespace: altinity-cloud-system
  annotations:
    iam.gke.io/gcp-service-account: "<gcp_sa_name>@<project_id>.iam.gserviceaccount.com"
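Workload Identity also requires the GCP service account to allow impersonation by the Kubernetes service account. A minimal sketch, assuming Workload Identity is enabled on the cluster and reusing the placeholder names from the commands below:
# allow altinity-cloud-system/log-storage to impersonate the GCP service account
gcloud iam service-accounts add-iam-policy-binding \
  REPLACE_WITH_GCP_SA_NAME@REPLACE_WITH_PROJECT_ID.iam.gserviceaccount.com \
  --project=REPLACE_WITH_PROJECT_ID \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:REPLACE_WITH_PROJECT_ID.svc.id.goog[altinity-cloud-system/log-storage]"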
Alternatively, you can use the GCP service account attached to the instance, or explicit credentials (shown below).
# create bucket
gsutil mb gs://REPLACE_WITH_BUCKET_NAME
# create GCP SA with access to the bucket
gcloud iam service-accounts create REPLACE_WITH_GCP_SA_NAME \
  --project=REPLACE_WITH_PROJECT_ID \
  --display-name "REPLACE_WITH_DISPLAY_NAME"
gsutil iam ch \
  serviceAccount:REPLACE_WITH_GCP_SA_NAME@REPLACE_WITH_PROJECT_ID.iam.gserviceaccount.com:roles/storage.admin \
  gs://REPLACE_WITH_BUCKET_NAME
# generate GCP SA key
gcloud iam service-accounts keys create credentials.json \
  --iam-account=REPLACE_WITH_GCP_SA_NAME@REPLACE_WITH_PROJECT_ID.iam.gserviceaccount.com \
  --project=REPLACE_WITH_PROJECT_ID
# create altinity-cloud-system/log-storage-gcp secret containing credentials.json
kubectl create secret -n altinity-cloud-system generic log-storage-gcp \
  --from-file=credentials.json
rm -i credentials.json
Use your private customer Slack channel to send the bucket name to Altinity to finish the configuration.
AKS (Azure)
The recommended approach is to use Workload Identity, annotating the log-storage service account with the client ID of your Microsoft Entra application or user-assigned managed identity:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: log-storage
  namespace: altinity-cloud-system
  annotations:
    azure.workload.identity/client-id: "replace_with_microsoft_entra_application_client_id"
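Workload Identity also needs a federated identity credential that links the cluster's OIDC issuer to the log-storage service account. A sketch, assuming a user-assigned managed identity and an AKS cluster with the OIDC issuer and workload identity features enabled (the REPLACE_WITH_* names and the credential name log-storage-federated are placeholders):
# look up the cluster's OIDC issuer URL
export AKS_OIDC_ISSUER="$(az aks show --resource-group REPLACE_WITH_RESOURCE_GROUP_NAME \
  --name REPLACE_WITH_CLUSTER_NAME --query oidcIssuerProfile.issuerUrl --output tsv)"
# federate the managed identity with the altinity-cloud-system/log-storage service account
az identity federated-credential create --name log-storage-federated \
  --identity-name REPLACE_WITH_MANAGED_IDENTITY_NAME \
  --resource-group REPLACE_WITH_RESOURCE_GROUP_NAME \
  --issuer "${AKS_OIDC_ISSUER}" \
  --subject system:serviceaccount:altinity-cloud-system:log-storage \
  --audience api://AzureADTokenExchange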
Alternatively, you can use a managed identity or explicit credentials (shown below).
# create storage account & container
az storage account create --resource-group REPLACE_WITH_RESOURCE_GROUP_NAME \
  --name REPLACE_WITH_STORAGE_ACCOUNT_NAME --sku Standard_LRS --kind StorageV2 \
  --location REPLACE_WITH_LOCATION
az storage container create --account-name REPLACE_WITH_STORAGE_ACCOUNT_NAME \
  --name REPLACE_WITH_STORAGE_CONTAINER_NAME --auth-mode key --fail-on-exist
# create altinity-cloud-system/log-storage-azure secret containing AZURE_STORAGE_ACCOUNT & AZURE_STORAGE_KEY
kubectl create secret -n altinity-cloud-system generic log-storage-azure \
  --from-literal="AZURE_STORAGE_ACCOUNT=REPLACE_WITH_STORAGE_ACCOUNT_NAME" \
  --from-literal="AZURE_STORAGE_KEY=$(az storage account keys list --account-name REPLACE_WITH_STORAGE_ACCOUNT_NAME --output tsv --query '[0].value')"
Use your private customer Slack channel to send the storage account and container names to Altinity to finish the configuration.