
Clickhouse Server - Troubleshooting

Abstract

  • When we hit a ClickHouse performance issue such as high CPU usage, we need to understand the ClickHouse system/user config attributes in order to investigate the real problem and find a solution or a workaround.

Table Of Contents

  • max_part_loading_threads
  • max_part_removal_threads
  • number_of_free_entries_in_pool_to_execute_mutation
  • background_pool_size
  • background_schedule_pool_size
  • max_threads
  • Get tables size
  • Understand ClickHouse compression
  • Enable allow_introspection_functions for query profiling
  • parts_to_throw_insert

max_part_loading_threads

The maximum number of threads that read parts when ClickHouse starts.

  • Possible values:
    Any positive integer.
    Default value: auto (number of CPU cores).

  • During startup ClickHouse reads all parts of all tables (reads files with metadata of parts) to build a list of all parts in memory. In some systems with a large number of parts this process can take a long time, and this time might be shortened by increasing max_part_loading_threads (if this process is not CPU and disk I/O bound). A config sketch follows the query check below.

  • Query check

SELECT *
FROM system.merge_tree_settings
WHERE name = 'max_part_loading_threads'

Query id: 5f8c7c7a-5dec-4e89-88dc-71f06d800e04

name                      value      changed  description                                            type
max_part_loading_threads  'auto(4)'  0        The number of threads to load data parts at startup.  MaxThreads

1 rows in set. Elapsed: 0.003 sec.
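  • One way to raise it is a merge_tree override in config.xml, the same pattern this article uses for other MergeTree settings below. A minimal sketch (the value 8 is just an example, not a recommendation):

    <merge_tree>
        <max_part_loading_threads>8</max_part_loading_threads>
    </merge_tree>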

max_part_removal_threads

The number of threads for concurrent removal of inactive data parts. One is usually enough, but in Google Compute Environment SSD Persistent Disks file removal (unlink) operation is extraordinarily slow and you probably have to increase this number (recommended is up to 16).
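  • Query check, analogous to the one above (assuming the setting exists in your ClickHouse version):

SELECT *
FROM system.merge_tree_settings
WHERE name = 'max_part_removal_threads'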

number_of_free_entries_in_pool_to_execute_mutation

  • This attribute must be aligned with background_pool_size: its value must be <= the value of background_pool_size. A side-by-side comparison query is sketched after the check below.

SELECT *
FROM system.merge_tree_settings
WHERE name = 'number_of_free_entries_in_pool_to_execute_mutation'

name                                                value  changed  description
number_of_free_entries_in_pool_to_execute_mutation  10     0        When there is less than specified number of free entries in pool, do not execute part mutations. This is to leave free threads for regular merges and avoid "Too many parts"
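  • To compare the two values side by side, one option is a sketch like the following (the mutation setting lives in system.merge_tree_settings, while background_pool_size lives in system.settings):

SELECT name, value
FROM system.merge_tree_settings
WHERE name = 'number_of_free_entries_in_pool_to_execute_mutation'
UNION ALL
SELECT name, value
FROM system.settings
WHERE name = 'background_pool_size'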

background_pool_size

  • background_pool_size
    Sets the number of threads performing background operations in table engines (for example, merges in MergeTree engine tables). This setting is applied from the default profile at ClickHouse server start and can't be changed in a user session. By adjusting this setting, you manage CPU and disk load. A smaller pool size utilizes less CPU and disk resources, but background processes advance slower, which might eventually impact query performance.

  • Before changing it, please also take a look at related MergeTree settings, such as number_of_free_entries_in_pool_to_lower_max_size_of_merge and number_of_free_entries_in_pool_to_execute_mutation.

  • Possible values:
    Any positive integer.
    Default value: 16.

  • Start log

2021.08.29 04:22:30.824446 [ 12372 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 16 threads
2021.08.29 04:22:47.891697 [ 12363 ] {} <Information> Application: Available RAM: 15.08 GiB; physical cores: 4; logical cores: 8.
  • How to update this value (e.g. to 5)

Update config.xml

    <merge_tree>
        <number_of_free_entries_in_pool_to_execute_mutation>5</number_of_free_entries_in_pool_to_execute_mutation>
    </merge_tree>

Update users.xml

    <profiles>
        <default>
            <background_pool_size>5</background_pool_size>
        </default>
    </profiles>

background_schedule_pool_size

  • background_schedule_pool_size
    Sets the number of threads performing background tasks for replicated tables, Kafka streaming, and DNS cache updates. This setting is applied at ClickHouse server start and can't be changed in a user session.

  • Possible values:
    Any positive integer.
    Default value: 128.

  • How to update this value? At the user profile: update users.xml (you can set background_schedule_pool_size to 0 if the ReplicatedMergeTree engine is not used)

    <profiles>
        <default>
            <background_schedule_pool_size>0</background_schedule_pool_size>
        </default>
    </profiles>
  • Get pool size
SELECT
    name,
    value
FROM system.settings
WHERE name LIKE '%pool%'

name                                          value
connection_pool_max_wait_ms                   0
distributed_connections_pool_size             1024
background_buffer_flush_schedule_pool_size    16
background_pool_size                          100
background_move_pool_size                     8
background_fetches_pool_size                  8
background_schedule_pool_size                 0
background_message_broker_schedule_pool_size  16
background_distributed_schedule_pool_size     16
  • Get background pool task
SELECT
    metric,
    value
FROM system.metrics
WHERE metric LIKE 'Background%'

metric                                   value
BackgroundPoolTask                       0
BackgroundFetchesPoolTask                0
BackgroundMovePoolTask                   0
BackgroundSchedulePoolTask               0
BackgroundBufferFlushSchedulePoolTask    0
BackgroundDistributedSchedulePoolTask    0
BackgroundMessageBrokerSchedulePoolTask  0
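  • Get pool load over time. A sketch using system.metric_log, which (assuming metric_log is enabled in config.xml) periodically snapshots the same metrics as CurrentMetric_* columns:

SELECT
    event_time,
    CurrentMetric_BackgroundPoolTask,
    CurrentMetric_BackgroundSchedulePoolTask
FROM system.metric_log
ORDER BY event_time DESC
LIMIT 10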
  • Get BgSchPool
# ps H -o 'tid comm' $(pidof -s clickhouse-server) | tail -n +2 | awk '{ printf("%s %s\n", $1, $2) }' | grep BgSchPool
7346 BgSchPool/D
  • Get clusters

SELECT
    cluster,
    shard_num,
    replica_num,
    host_name
FROM system.clusters

cluster                            shard_num  replica_num  host_name
test_cluster_two_shards            1          1            127.0.0.1
test_cluster_two_shards            2          1            127.0.0.2
test_cluster_two_shards_localhost  1          1            localhost
test_cluster_two_shards_localhost  2          1            localhost
test_shard_localhost               1          1            localhost
test_shard_localhost_secure        1          1            localhost
test_unavailable_shard             1          1            localhost
test_unavailable_shard             2          1            localhost

max_threads


  • The maximum number of query processing threads, excluding threads for retrieving data from remote servers (see the max_distributed_connections parameter).

  • This parameter applies to threads that perform the same stages of the query processing pipeline in parallel.
    For example, when reading from a table, if it is possible to evaluate expressions with functions, filter with WHERE and pre-aggregate for GROUP BY in parallel using at least max_threads number of threads, then max_threads are used.

  • Default value: the number of physical CPU cores.

  • For queries that are completed quickly because of a LIMIT, you can set a lower max_threads. For example, if the necessary number of entries are located in every block and max_threads = 8, then 8 blocks are retrieved, although it would have been enough to read just one.

  • The smaller the max_threads value, the less memory is consumed.

  • Update this value at the user profile, as sketched below
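Following the same users.xml pattern as background_pool_size above (the value 4 is just an example):

    <profiles>
        <default>
            <max_threads>4</max_threads>
        </default>
    </profiles>

Since max_threads is a user-level setting, it can also be changed for a single session with SET max_threads = 4, which is handy for testing before editing the profile.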

Get tables size

clickhouse-get-tables-size.sql

select concat(database, '.', table)                         as table,
       formatReadableSize(sum(bytes))                       as size,
       sum(rows)                                            as rows,
       max(modification_time)                               as latest_modification,
       sum(bytes)                                           as bytes_size,
       any(engine)                                          as engine,
       formatReadableSize(sum(primary_key_bytes_in_memory)) as primary_keys_size
from system.parts
where active
group by database, table
order by bytes_size desc
  • For table details of the current database

select parts.*,
       columns.compressed_size,
       columns.uncompressed_size,
       columns.ratio
from (
         select table,
                formatReadableSize(sum(data_uncompressed_bytes))          AS uncompressed_size,
                formatReadableSize(sum(data_compressed_bytes))            AS compressed_size,
                sum(data_compressed_bytes) / sum(data_uncompressed_bytes) AS ratio
         from system.columns
         where database = currentDatabase()
         group by table
         ) columns
         right join (
    select table,
           sum(rows)                                            as rows,
           max(modification_time)                               as latest_modification,
           formatReadableSize(sum(bytes))                       as disk_size,
           formatReadableSize(sum(primary_key_bytes_in_memory)) as primary_keys_size,
           any(engine)                                          as engine,
           sum(bytes)                                           as bytes_size
    from system.parts
    where active and database = currentDatabase()
    group by database, table
    ) parts on columns.table = parts.table
order by parts.bytes_size desc;

Understand ClickHouse compression
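The per-table query above already derives a compressed/uncompressed ratio from system.columns; the same idea at column granularity shows which columns compress well. A minimal sketch (top 10 columns of the current database by on-disk size):

SELECT
    table,
    name,
    formatReadableSize(data_compressed_bytes)   AS compressed,
    formatReadableSize(data_uncompressed_bytes) AS uncompressed,
    round(data_uncompressed_bytes / data_compressed_bytes, 2) AS ratio
FROM system.columns
WHERE database = currentDatabase()
ORDER BY data_compressed_bytes DESC
LIMIT 10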

Enable allow_introspection_functions for query profiling

        <default>
            <allow_introspection_functions>1</allow_introspection_functions>
        </default>
  • Get thread stack trace
WITH arrayMap(x -> demangle(addressToSymbol(x)), trace) AS all
SELECT
    thread_id,
    query_id,
    arrayStringConcat(all, '\n') AS res
FROM system.stack_trace
WHERE res LIKE '%SchedulePool%'

thread_id: 7346
query_id:
res:       pthread_cond_wait
           DB::BackgroundSchedulePool::delayExecutionThreadFunction()
           ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>)
           start_thread
           clone

parts_to_throw_insert

  • parts_to_throw_insert
    If the number of active parts in a single partition exceeds the parts_to_throw_insert value, INSERT is interrupted with the Too many parts (N). Merges are processing significantly slower than inserts exception.

  • Possible values:
    Any positive integer.
    Default value: 300.

To achieve maximum performance of SELECT queries, it is necessary to minimize the number of parts processed, see Merge Tree.

You can set a larger value such as 600 (or 1200); this will reduce the probability of the Too many parts error, but at the same time SELECT performance might degrade. Also, in case of a merge issue (for example, due to insufficient disk space) you will notice it later than you would with the original 300.

2021.08.30 11:30:44.526367 [ 7369 ] {} <Error> void DB::SystemLog<DB::MetricLogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Code: 252, e.displayText() = DB::Exception: Too many parts (300). Parts cleaning are processing significantly slower than inserts, Stack trace (when copying this message, always include the lines below):
  • If you decide to increase parts_to_throw_insert, update config.xml:

    <merge_tree>
        <parts_to_throw_insert>600</parts_to_throw_insert>
    </merge_tree>
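To watch whether a partition is approaching the limit, count active parts per partition (a sketch based on the system.parts queries earlier in this article):

SELECT
    database,
    table,
    partition,
    count() AS active_parts
FROM system.parts
WHERE active
GROUP BY database, table, partition
ORDER BY active_parts DESC
LIMIT 10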

Original Link: https://dev.to/vumdao/clickhouse-server-troubleshooting-2gb7
