Altinity Stable Build for ClickHouse 22.3

A few months ago we certified 21.8 as an Altinity Stable release. It was delivered together with the Altinity Stable build for ClickHouse. Since then many things have happened to ClickHouse. At Altinity we continued to work on newer releases and run them in-house. We completed several new features, and many more have been added by community contributors. We started testing the new ClickHouse LTS release 22.3 as soon as it was out in late March. It took us more than three months to confirm that 22.3 is ready for production use and to make sure upgrades go smoothly. As of 22.3.8 we are confident in certifying 22.3 as an Altinity Stable release.

This release is a significant upgrade since the previous Altinity Stable release. It includes more than 3000 pull requests from 415 contributors. Please look below for detailed release notes; read these carefully before upgrading. There are additional notes for point releases.

Major new features in 22.3 since the previous stable release 21.8

A new release introduces a lot of changes and new functions. It is very hard to pick the most essential ones, so refer to the full list in the Appendix. The following major features are worth mentioning on the front page:

  • SQL features:
    • User defined functions as lambda expressions.
    • User defined functions as external executables.
    • Schema inference for INSERT and SELECT from external data sources.
    • -Map combinator for Map data type.
    • WINDOW VIEW for stream processing (experimental). See our blog article “Battle of Views” comparing it to LIVE VIEW.
    • EPHEMERAL columns.
    • Asynchronous inserts.
    • Support expressions in JOIN ON.
    • OPTIMIZE DEDUPLICATE on a subset of columns a).
    • COMMENT on schema objects a).
  • Security features:
    • Predefined named connections (or named collections) for external data sources. Can be used in table functions, dictionaries, table engines.
    • system.session_log table that tracks connections and login attempts a).
    • Disk-level encryption. See the meetup presentation for many interesting details.
    • Support server-side encryption keys for S3 a).
    • Support authentication of users connected via SSL by their X.509 certificate.
  • Replication and Cluster improvements:
    • ClickHouse Keeper – in-process ZooKeeper replacement – has been graduated to production-ready by the ClickHouse team. We also keep testing it on our side: the core functionality looks good and stable, some operational issues and edge cases still exist.
    • Automatic replica discovery – no need to alter remote_servers anymore.
    • Parallel reading from multiple replicas (experimental).
  • Dictionary features:
    • Array attributes, Nullable attributes.
    • New hashed_array dictionary layout that reduces RAM usage for big dictionaries (idea proposed by Altinity).
    • Custom queries for dictionary source.
  • Integrations:
    • SQLite table engine, table function, database engine.
    • Executable function, storage engine and dictionary source.
    • FileLog table engine.
    • Huawei OBS storage support.
    • Aliyun OSS storage support.
  • Remote file system and object storage features:
    • Zero-copy replication for HDFS.
    • Partitioned writes into S3 a), File, URL and HDFS storages.
    • Local data cache for remote filesystems.
  • Other:
    • Store user access management data in ZooKeeper.
    • Production ready ARM support.
    • Significant rework of clickhouse-local, which is now as advanced as clickhouse-client.
    • Projections and window functions are graduated and not experimental anymore.
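
For example, SQL user defined functions can now be created directly as lambda expressions. A minimal sketch (the function name is illustrative; this mirrors the example from the ClickHouse documentation):

```sql
-- Create a UDF as a lambda expression and use it like a built-in function
CREATE FUNCTION linear_equation AS (x, k, b) -> k * x + b;

SELECT linear_equation(number, 2, 1) FROM numbers(3);

DROP FUNCTION linear_equation;
```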

As usual with ClickHouse, there are many performance and operational improvements in different server components.

a) Contributed by Altinity developers.

Backward Incompatible Changes

The following changes are backward incompatible and require user attention during an upgrade:

  • Do not output trailing zeros in text representation of Decimal types. Example: 1.23 will be printed instead of 1.230000 for decimal with scale 6. Serialization in output formats can be controlled with the setting output_format_decimal_trailing_zeros.
  • Now a scalar subquery always returns a Nullable result if its type can be Nullable.
  • Introduce syntax for here documents. Example: SELECT $doc$ VALUE $doc$. This change is backward incompatible if there are identifiers that contain $.
  • Now indices can handle Nullable types, including isNull and isNotNull functions. This required an index file format change – the idx2 file extension. ClickHouse 21.8 cannot read those files, so a correct downgrade may not be possible.
  • MergeTree table-level settings replicated_max_parallel_sends, replicated_max_parallel_sends_for_table, replicated_max_parallel_fetches, replicated_max_parallel_fetches_for_table were replaced with max_replicated_fetches_network_bandwidth, max_replicated_sends_network_bandwidth and background_fetches_pool_size.
  • Change the order of json_path and json arguments in SQL/JSON functions to be consistent with the standard.
  • A “leader election” mechanism is removed from ReplicatedMergeTree, because multiple leaders have been supported since 20.6. If you are upgrading from ClickHouse version older than 20.6, and some replica with an old version is a leader, then the server will fail to start after upgrade. Stop replicas with the old version to make the new version start. Downgrading to versions older than 20.6 is not possible.
  • Change implementation specific behavior on overflow of the function toDateTime. It will be saturated to the nearest min/max supported instant of datetime instead of wraparound. This change is highlighted as “backward incompatible” because someone may unintentionally rely on the old behavior.
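
A quick sketch of the first two syntax changes above:

```sql
-- Trailing zeros are no longer printed for Decimal values;
-- the old behavior can be restored with a setting
SELECT toDecimal32(1.23, 6);                        -- prints 1.23
SELECT toDecimal32(1.23, 6)
SETTINGS output_format_decimal_trailing_zeros = 1;  -- prints 1.230000

-- Heredoc syntax: everything between $doc$ ... $doc$ is a raw string literal
SELECT $doc$VALUE with 'quotes'$doc$ AS s;
```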

Upgrade Notes

There were several changes between versions that may affect the rolling upgrade of big clusters. Upgrading only part of the cluster is not recommended.

  • Data after merge is not byte-identical for tables with MINMAX indexes.
  • Data after merge is not byte-identical for tables created with the old syntax - count.txt is added in 22.1

Rolling upgrade from 20.4 and older is impossible because the “leader election” mechanism is removed from ReplicatedMergeTree.

Known Issues in 22.3.x

The development team continues to improve the quality of the 22.3 release. The following issues still exist in the 22.3.8 version and may affect ClickHouse operation. Please inspect them carefully to decide if those are applicable to your applications.

If you started using 22.3 from the earlier versions, please note that the following important bugs have been fixed, especially related to PREWHERE functionality:

You may also look into GitHub issues using the v22.3-affected label.

Other Important Changes

  • ClickHouse now recreates system tables if the ENGINE definition is different from the one in the config.

  • Behavior of some metrics has changed. For example, written_rows / result_rows may be reported differently. Also, BackgroundPoolTask was split into BackgroundMergesAndMutationsPoolTask and BackgroundCommonPoolTask.

  • New setting background_merges_mutations_concurrency_ratio=2 – this means ClickHouse can schedule two times more merges/mutations than background_pool_size, which is still 16 by default. For some scenarios with stale replicas the behavior may be harder to predict / explain. If needed, you can restore the old behavior by setting background_merges_mutations_concurrency_ratio=1.
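
A sketch of restoring the old scheduling behavior in config.xml (values shown are the defaults mentioned above):

```xml
<!-- config.xml: schedule no more merges/mutations than background_pool_size -->
<clickhouse>
    <background_pool_size>16</background_pool_size>
    <background_merges_mutations_concurrency_ratio>1</background_merges_mutations_concurrency_ratio>
</clickhouse>
```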

  • Pool sizes should now be configured in config.xml (a fallback that reads them from the default profile of users.xml still exists).

  • In queries like SELECT a, b, … GROUP BY (a, b, …) ClickHouse does not untuple the GROUP BY expression anymore. The same is true for ORDER BY and PARTITION BY.
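
A sketch of the difference (table and column names are illustrative):

```sql
-- Before, GROUP BY (a, b) was untupled into GROUP BY a, b.
-- Now it groups by a single tuple key, so these are no longer equivalent:
SELECT a, b, count() FROM t GROUP BY a, b;      -- two grouping keys
SELECT (a, b), count() FROM t GROUP BY (a, b);  -- one tuple grouping key
```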

  • When dropping and renaming schema objects ClickHouse now checks dependencies and throws an Exception if the operation may break the dependency:

    Code: 630. DB::Exception: Cannot drop or rename X because some tables depend on it: Y

    It may be disabled by setting:

        check_table_dependencies = 0

ClickHouse embedded monitoring is since 21.8. It now collects host level metrics, and stores them every second in the table system.asynchronious_metric_log. This can be visible as an increase of background writes, storage usage, etc. To return to the old rate of metrics refresh / flush, adjust those settings in config.xml:
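
For example, the flush interval of those tables can be increased (a sketch; 60 seconds is an arbitrary value):

```xml
<!-- config.xml: flush metric tables less frequently -->
<clickhouse>
    <metric_log>
        <flush_interval_milliseconds>60000</flush_interval_milliseconds>
    </metric_log>
    <asynchronous_metric_log>
        <flush_interval_milliseconds>60000</flush_interval_milliseconds>
    </asynchronous_metric_log>
</clickhouse>
```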


Alternatively, metric_log and asynchronous_metric_log tables can be completely disabled:

    <asynchronous_metric_log remove="1"/>
    <metric_log remove="1"/>

Some new ClickHouse features are now enabled by default. It may lead to a change in behavior, so review those carefully and disable features that may affect your system:

  • check_table_dependencies
  • empty_result_for_aggregation_by_constant_keys_on_empty_set
  • http_skip_not_found_url_for_globs
  • input_format_allow_seeks
  • input_format_csv_empty_as_default
  • input_format_with_types_use_header
  • log_query_views
  • optimize_distributed_group_by_sharding_key
  • remote_filesystem_read_prefetch
  • remote_fs_enable_cache
  • use_hedged_requests
  • use_local_cache_for_remote_storage
  • use_skip_indexes
  • wait_for_async_insert
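
Any of these can be switched off in the default profile of users.xml; a sketch using use_skip_indexes as an arbitrary example:

```xml
<!-- users.xml: disable a newly-default setting for the default profile -->
<clickhouse>
    <profiles>
        <default>
            <use_skip_indexes>0</use_skip_indexes>
        </default>
    </profiles>
</clickhouse>
```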

In the previous releases we recommended disabling optimize_on_insert. This recommendation stays for 22.3 as well, since inserts into SummingMergeTree and AggregatingMergeTree tables can slow down.
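
The setting can be disabled in the user profile or per session:

```sql
SET optimize_on_insert = 0;
```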

Changes Compared to Community Build

ClickHouse Altinity Stable builds are based on the community LTS versions, but we have additionally backported several features we were working on for our clients:

Let’s Install!

ClickHouse Altinity Stable releases are based on the community versions.

Linux packages can be found at for community builds, and at for Altinity Stable builds.

Note: The naming scheme for Altinity.Stable build packages has changed since 21.8.x.

21.8.x                                       22.3.x
<package>_<ver>.altinitystable_all.deb       <package>_<ver>.altinitystable_amd64.deb
<package>-<ver>.altinitystable-2.noarch.rpm  <package>-<ver>.altinitystable.x86_64.rpm

Docker images for community versions have been moved from ‘yandex’ to ‘clickhouse’ organization, and should be referenced as ‘clickhouse/clickhouse-server:22.3’. Altinity stable build images are available as ‘altinity/clickhouse-server:22.3’.

Mac users are welcome to use Homebrew Formulae. Ready-to-use bottles are available for both M1 and Intel Macs running Monterey.

For more information on installing ClickHouse from either the Altinity Builds or the Community Builds, see the ClickHouse Altinity Stable Release Build Install Guide.

Please contact us at if you experience any issues with the upgrade.


New table functions

  • executable
  • format
  • hdfsCluster
  • hive
  • sqlite

New table engines

  • Executable
  • ExecutablePool
  • FileLog
  • Hive
  • SQLite
  • WindowView

New aggregate function combinators

  • -Map
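
A sketch of the new combinator applied to sum, assuming a Map(String, UInt64)-typed expression:

```sql
-- sumMap aggregates Map values key by key
SELECT sumMap(m)
FROM (
    SELECT map('a', 1, 'b', 2) AS m
    UNION ALL
    SELECT map('a', 3)
);
```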

New functions

  • bayesAB (removed)
Renames (geometric functions):
  • maxMap -> maxMappedArrays
  • minMap -> minMappedArrays
  • sumMap -> sumMappedArrays
  • readWktMultiPolygon -> readWKTMultiPolygon
  • readWktPoint -> readWKTPoint
  • readWktPolygon -> readWKTPolygon
  • readWktRing -> readWKTRing
  • svg -> SVG
  • Basic / SQL compatibility
    • left + leftUTF8
    • right + rightUTF8
  • JSON
    • JSONExtractKeys
  • NLP / text split
    • stem
    • lemmatize
    • ngrams
    • synonyms
    • splitByNonAlpha
    • splitByWhitespace
    • tokens
  • text classification
    • detectCharset
    • detectLanguage
    • detectLanguageMixed
    • detectLanguageUnknown
    • detectProgrammingLanguage
    • detectTonality
  • UTF8
    • normalizeUTF8NFC
    • normalizeUTF8NFD
    • normalizeUTF8NFKC
    • normalizeUTF8NFKD
  • bitwise string processing
    • bitSlice
  • subBitmap
  • MD4
  • SHA384
  • SHA512
  • arrayLast
  • arrayLastIndex
  • singleValueOrNull (if all values are equal it returns the value, NULL otherwise)
  • sparkbar
  • addressToLineWithInlines
  • currentProfiles
  • currentRoles
  • defaultProfiles
  • defaultRoles
  • enabledProfiles
  • enabledRoles
  • initialQueryID
  • initial_query_id
  • queryID
  • serverUUID
  • shardCount
  • shardNum
  • zookeeperSessionUptime
  • getOSKernelVersion
  • getServerPort
  • query_id
  • monthName
snowflake identifiers
  • snowflakeToDateTime
  • snowflakeToDateTime64
  • dateTime64ToSnowflake
  • dateTimeToSnowflake
  • decodeURLFormComponent
  • encodeURLComponent
  • encodeURLFormComponent
  • degrees
  • radians
  • h3
    • h3CellAreaM2
    • h3CellAreaRads2
    • h3EdgeLengthKm
    • h3ExactEdgeLengthKm
    • h3ExactEdgeLengthM
    • h3ExactEdgeLengthRads
    • h3GetFaces
    • h3HexAreaKm2
    • h3IsPentagon
    • h3IsResClassIII
    • h3NumHexagons
    • h3ToCenterChild
    • h3ToGeo
    • h3ToGeoBoundary
  • s2
    • geoToS2
    • s2CapContains
    • s2CapUnion
    • s2CellsIntersect
    • s2GetNeighbors
    • s2RectAdd
    • s2RectContains
    • s2RectIntersection
    • s2RectUnion
    • s2ToGeo

multidimensional vectors


  • L1 space

    • distanceL1
    • normL1
    • normalizeL1
    • L1Distance
    • L1Norm
    • L1Normalize
  • L2 space

    • distanceL2
    • normL2
    • normalizeL2
    • L2Distance
    • L2Norm
    • L2Normalize
  • Linf space

    • distanceLinf
    • normLinf
    • normalizeLinf
    • LinfDistance
    • LinfNorm
    • LinfNormalize
  • Lp space

    • distanceLp
    • normLp
    • normalizeLp
    • LpDistance
    • LpNorm
    • LpNormalize
  • Math

    • tuplePlus
    • tupleMinus
    • tupleNegate
    • tupleDivide
    • tupleDivideByNumber
    • tupleMultiply
    • tupleMultiplyByNumber
    • vectorSum
    • vectorDifference
    • max2
    • min2
    • dotProduct
    • scalarProduct
    • cosineDistance
  • mapApply
  • mapFilter
  • mapContainsKeyLike
  • mapExtractKeyLike
  • mapUpdate
aggregation & statistical functions
  • meanZTest
  • proportionsZTest
  • theilsU
  • cramersV
  • cramersVBiasCorrected
  • contingency
  • exponentialMovingAverage
  • exponentialTimeDecayedAvg
  • exponentialTimeDecayedCount
  • exponentialTimeDecayedMax
  • exponentialTimeDecayedSum
  • quantilesBFloat16Weighted
  • medianBFloat16Weighted
  • quantileBFloat16Weighted
  • nothing (aggregate function that takes an arbitrary number of arbitrary arguments and does nothing :) )
  • tumble
  • tumbleEnd
  • tumbleStart
  • hop
  • hopEnd
  • hopStart
  • windowID


  • orDefault variants

    • IPv4StringToNumOrDefault
    • IPv6StringToNumOrDefault
    • toDateOrDefault
    • toDateTime64OrDefault
    • toDateTimeOrDefault
    • toDecimal128OrDefault
    • toDecimal256OrDefault
    • toDecimal32OrDefault
    • toDecimal64OrDefault
    • toFloat32OrDefault
    • toFloat64OrDefault
    • toIPv4OrDefault
    • toIPv6OrDefault
    • toInt128OrDefault
    • toInt16OrDefault
    • toInt256OrDefault
    • toInt32OrDefault
    • toInt64OrDefault
    • toInt8OrDefault
    • toUInt16OrDefault
    • toUInt256OrDefault
    • toUInt32OrDefault
    • toUInt64OrDefault
    • toUInt8OrDefault
    • toUUIDOrDefault
  • orNull casting variants

    • IPv4StringToNumOrNull
    • IPv6StringToNumOrNull
    • toIPv4OrNull
    • toIPv6OrNull
  • new types

    • toBool
    • toDate32
    • toDate32OrDefault
    • toDate32OrNull
    • toDate32OrZero
  • tuple to key-value

    • tupleToNameValuePairs
  • accurate cast

    • accurateCastOrDefault
    • accurate_CastOrNull
    • _CAST

New system tables

  • asynchronous_inserts
  • rocksdb
  • session_log
  • warnings

New columns in system tables

  • columns:
    • character_octet_length, numeric_precision, numeric_precision_radix, numeric_scale, datetime_precision
  • data_skipping_indices:
    • data_compressed_bytes, data_uncompressed_bytes, marks
  • databases:
    • comment
  • dictionaries:
    • comment
  • distributed_ddl_queue:
    • entry_version, initiator_host, initiator_port, settings, query_create_time, host, exception_text
  • functions:
    • create_query, origin
  • graphite_retentions:
    • Rule_type
  • part_moves_between_shards:
    • dst_part_name, rollback
  • parts:
    • secondary_indices_compressed_bytes, secondary_indices_uncompressed_bytes, secondary_indices_marks_bytes, projections
  • parts_columns:
    • serialization_kind, subcolumns.names, subcolumns.types, subcolumns.serializations
  • processes:
    • distributed_depth
  • query_log:
    • formatted_query, views, distributed_depth
  • query_thread_log:
    • distributed_depth
  • replicas:
    • last_queue_update_exception, replica_is_active
  • tables:
    • as_select, has_own_data, loading_dependencies_database, loading_dependencies_table, loading_dependent_database, loading_dependent_table
  • users:
    • default_database

New metrics and events

  • CompiledExpressionCacheBytes (removed)
  • CompiledExpressionCacheCount (removed)
  • ActiveAsyncDrainedConnections
  • ActiveSyncDrainedConnections
  • AsyncDrainedConnections
  • AsynchronousReadWait
  • BackgroundCommonPoolTask
  • BackgroundMergesAndMutationsPoolTask
  • BackgroundPoolTask (removed)
  • MaxPushedDDLEntryID
  • PartsActive
  • PartsPreActive
  • PendingAsyncInsert
  • SyncDrainedConnections

Also, please refer to the release notes from the development team available at the following URLs:


Release Notes

Based on upstream/v22.3.15.33-lts

Changes Compared to Community Build v22.3.15.33-lts

Bug fix
  • Fix for exponential time decaying window functions. Now respecting boundaries of the window. #36944 by @excitoon
  • Fixes for objects removal in S3ObjectStorage #37882 by @excitoon
  • Fixed Unknown identifier (aggregate-function) exception #39762 by @quickhouse
  • Fixed point of origin for exponential decay window functions #39593 by @quickhouse
  • Fix unused unknown columns introduced by WITH statement #39131 by @amosbird
  • Fix memory leak while pushing to MVs w/o query context (from Kafka/…) by @azat
  • Fix ArrowColumn dictionary conversion to LowCardinality strings. #40037 by @arthurpassos
  • Optimized processing of ORDER BY in window functions #34632 by @excitoon
  • Support batch_delete capability for GCS #37659 by @frew
  • Add support for extended (chunked) arrays for Parquet format. #40485 by @arthurpassos

Changes in upstream from v22.3.12.19-lts to v22.3.15.33-lts

Bug fix
  • Fixed primary key analysis with conditions involving toString(enum). #43596 (Nikita Taranov).
  • Fixed queries with SAMPLE BY with prewhere optimization on tables using Merge engine. #43315 (Antonio Andelic).
  • This closes #42453. #42573 (Alexey Milovidov).
  • Choose correct aggregation method for LowCardinality with BigInt. #42342 (Duc Canh Le).
  • Fix a bug with projections and the aggregate_functions_null_for_empty setting. This bug is very rare and appears only if you enable the aggregate_functions_null_for_empty setting in the server’s config. This closes #41647. #42198 (Alexey Milovidov).
  • Fix possible crash in SELECT from Merge table with enabled optimize_monotonous_functions_in_order_by setting. Fixes #41269. #41740 (Nikolai Kochetov).
  • Fix possible pipeline stuck exception for queries with OFFSET. The error was found with enable_optimize_predicate_expression = 0 and always false condition in WHERE. Fixes #41383. #41588 (Nikolai Kochetov).
  • Writing data in Apache ORC format might lead to a buffer overrun. #41458 (Alexey Milovidov).
  • The aggregate function categorialInformationValue was having incorrectly defined properties, which might cause a null pointer dereferencing at runtime. This closes #41443. #41449 (Alexey Milovidov).
  • Malicious data in Native format might cause a crash. #41441 (Alexey Milovidov).
  • Add column type check before UUID insertion in MsgPack format. #41309 (Kruglov Pavel).
  • Queries with OFFSET clause in subquery and WHERE clause in outer query might return incorrect result, it’s fixed. Fixes #40416. #41280 (Alexander Tokmakov).
  • Fix possible segfaults, use-heap-after-free and memory leak in aggregate function combinators. Closes #40848. #41083 (Kruglov Pavel).
  • Fix memory leak while pushing to MVs w/o query context (from Kafka/…). #40732 (Azat Khuzhin).
  • Fix potential data loss due to a bug in AWS SDK. The bug can be triggered only when ClickHouse is used over S3. #40506 (alesapin).
  • Proxy resolver stop on first successful request to endpoint. #40353 (Maksim Kita).
  • Fix rare bug with column TTL for MergeTree engines family: in case of repeated vertical merge the error Cannot unlink file ColumnName.bin ... No such file or directory could happen. #40346 (alesapin).
  • Fix potential deadlock in WriteBufferFromS3 during task scheduling failure. #40070 (Maksim Kita).
  • Update simdjson. This fixes #38621. #38838 (Alexey Milovidov).
  • Fix cast lowcard of nullable in JoinSwitcher, close #37385. #37453 (Vladimir C).

Full list of changes

Available at:


Release Notes

Based on upstream/v22.3.12.19-lts

Changes Compared to Community Build v22.3.12.19-lts

Bug fix
  • Fix for exponential time decaying window functions. Now respecting boundaries of the window #36944 by @excitoon (via #164)
  • Fixes for objects removal in S3ObjectStorage #37882 by @excitoon (via #164)
  • Fixed Unknown identifier (aggregate-function) exception #39762 by @quickhouse (via #189)
  • Fixed point of origin for exponential decay window functions #39593 by @quickhouse (via #190)
  • Fix unused unknown columns introduced by WITH statement #39131 by @amosbird
  • Fix memory leak while pushing to MVs w/o query context (from Kafka/…) by @azat
  • Optimized processing of ORDER BY in window functions #34632 by @excitoon (via #164)
  • Support batch_delete capability for GCS #37659 by @frew (via #164)
  • Add support for extended (chunked) arrays for Parquet format #40485 by @arthurpassos
Build/Testing/Packaging Improvement
  • Build/Testing/Packaging Improvement: Allow Github workflows to run on Altinity’s infrastructure

Changes in upstream from v22.3.10.22-lts to v22.3.12.19-lts

Bug fix
  • Fix clickhouse-server #40681 by @Felixoid.
  • fix heap buffer overflow by limiting http chunk size. #40292 by @CheSema.
  • Fix possible segfault in CapnProto input format. #40241 by @Avogar.
  • Fix insufficient argument check for encryption functions. #40194 by @alexey-milovidov.
  • Fix HashMethodOneNumber with const column. #40020 by @canhld94.
  • Fix seeking while reading from encrypted disk. #39687 by @vitlibar.
  • Fix number of threads for pushing to views. #39253 by @azat.
  • collectFilesToSkip() in MutateTask now supports new index file extension .idx2 for MinMax. #40122 by @robot-ch-test-poll2.
  • Clean out clickhouse-server.service from /etc. #39323 by @Felixoid.

Full list of changes

Available at:


Release Notes

Changes from ClickHouse v22.3.10.22-LTS

Changes Compared to Community Build v22.3.10.22-lts:

Bug Fixes
  • Fix for exponential time decaying window functions. Now respecting boundaries of the window #36944 by @excitoon (via #164)
  • Fixes for objects removal in S3ObjectStorage #37882 by @excitoon (via #164)
  • Fixed Unknown identifier (aggregate-function) exception #39762 by @quickhouse (via #189)
  • Fixed point of origin for exponential decay window functions #39593 by @quickhouse (via #190)
  • Reverted ‘Fix seeking while reading from encrypted disk’, known to cause issues in 22.3 (via #
  • Reverted: ‘Enable using constants from with and select in aggregate function parameters.’, causes problems on 22.3 (via #194)
  • Improvement: Optimized processing of ORDER BY in window functions #34632 by @excitoon (via #164)
  • Improvement: Support batch_delete capability for GCS #37659 by @frew (via #164)
Build/Testing/Packaging Improvements
  • Build/Testing/Packaging Improvement: Allow Github workflows to run on Altinity’s infrastructure

Changes from v22.3.8.39-lts to v22.3.10.22-lts

Bug Fixes


Full list of changes

Available at: