
PostgreSQL 9.6: Parallelizing Sequential Reads

For a long time, one of PostgreSQL's best-known shortcomings was its inability to parallelize queries. With the release of version 9.6 this will no longer be a problem: much work has been done on the subject, and starting from commit 80558c1, parallel sequential scan appears, which we will get acquainted with over the course of this article.



First, it should be noted that development of this functionality has been continuous, and some parameters have changed names between commits. This article was written after a checkout on June 17, and some of the features described here will be present only in version 9.6 beta2.

Compared with release 9.5, new parameters have been added to the configuration file. The ones relevant to this article are:

max_parallel_workers_per_gather: the maximum number of workers that can assist a single sequential scan (default: 2);
min_parallel_relation_size: the minimum size a relation must have for the planner to consider using additional workers (default: 8MB);
parallel_tuple_cost: the planner's estimated cost of transferring one tuple from a worker to the leader process (default: 0.1).

Let's look at how additional workers can be used to speed up the execution of our queries. Create a test table with an INT field and one hundred million rows:

postgres=# CREATE TABLE test (i int);
CREATE TABLE
postgres=# INSERT INTO test SELECT generate_series(1,100000000);
INSERT 0 100000000
postgres=# ANALYSE test;
ANALYZE

In PostgreSQL, max_parallel_workers_per_gather defaults to 2, so two workers will be launched during a parallel sequential scan.

A plain sequential scan brings nothing new:

postgres=# EXPLAIN ANALYSE SELECT * FROM test;
                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Seq Scan on test  (cost=0.00..1442478.32 rows=100000032 width=4) (actual time=0.081..21051.918 rows=100000000 loops=1)
 Planning time: 0.077 ms
 Execution time: 28055.993 ms
(3 rows)

A WHERE clause is, in fact, required for parallelization to kick in:

postgres=# EXPLAIN ANALYZE SELECT * FROM test WHERE i=1;
                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..964311.60 rows=1 width=4) (actual time=3.381..9799.942 rows=1 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Seq Scan on test  (cost=0.00..963311.50 rows=0 width=4) (actual time=6525.595..9791.066 rows=0 loops=3)
         Filter: (i = 1)
         Rows Removed by Filter: 33333333
 Planning time: 0.130 ms
 Execution time: 9804.484 ms
(8 rows)

We can return to the previous query and see the difference in execution with max_parallel_workers_per_gather set to 0:

postgres=# SET max_parallel_workers_per_gather TO 0;
SET
postgres=# EXPLAIN ANALYZE SELECT * FROM test WHERE i=1;
                                               QUERY PLAN
--------------------------------------------------------------------------------------------------------
 Seq Scan on test  (cost=0.00..1692478.40 rows=1 width=4) (actual time=0.123..25003.221 rows=1 loops=1)
   Filter: (i = 1)
   Rows Removed by Filter: 99999999
 Planning time: 0.105 ms
 Execution time: 25003.263 ms
(5 rows)

2.5 times longer.
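A quick sanity check of that ratio, using the execution times reported by the two EXPLAIN ANALYZE runs above (a throwaway Python snippet, not part of the article's psql session):

```python
# Execution times reported by EXPLAIN ANALYZE above, in milliseconds.
serial_ms = 25003.263    # max_parallel_workers_per_gather = 0
parallel_ms = 9804.484   # 2 workers launched

speedup = serial_ms / parallel_ms
print(f"serial run is {speedup:.2f}x slower")  # → serial run is 2.55x slower
```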

The planner does not always consider parallel sequential scan the best option. If a query is not selective enough and many tuples have to be transferred from the workers to the leader process, it may prefer a "classic" sequential scan:

postgres=# SET max_parallel_workers_per_gather TO 2;
SET
postgres=# EXPLAIN ANALYZE SELECT * FROM test WHERE i<90000000;
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Seq Scan on test  (cost=0.00..1692478.40 rows=90116088 width=4) (actual time=0.073..31410.276 rows=89999999 loops=1)
   Filter: (i < 90000000)
   Rows Removed by Filter: 10000001
 Planning time: 0.133 ms
 Execution time: 37939.401 ms
(5 rows)

In fact, if we try to force the planner to use a parallel sequential scan, we get a worse result:

postgres=# SET parallel_tuple_cost TO 0;
SET
postgres=# EXPLAIN ANALYZE SELECT * FROM test WHERE i<90000000;
                                                              QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..964311.50 rows=90116088 width=4) (actual time=0.454..75546.078 rows=89999999 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Seq Scan on test  (cost=0.00..1338795.20 rows=37548370 width=4) (actual time=0.088..20294.670 rows=30000000 loops=3)
         Filter: (i < 90000000)
         Rows Removed by Filter: 3333334
 Planning time: 0.128 ms
 Execution time: 83423.577 ms
(8 rows)

The number of workers can be increased up to max_worker_processes (default: 8). Let's restore parallel_tuple_cost to its default and see what happens when max_parallel_workers_per_gather is raised to 8:

postgres=# SET parallel_tuple_cost TO DEFAULT;
SET
postgres=# SET max_parallel_workers_per_gather TO 8;
SET
postgres=# EXPLAIN ANALYZE SELECT * FROM test WHERE i=1;
                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..651811.50 rows=1 width=4) (actual time=3.684..8248.307 rows=1 loops=1)
   Workers Planned: 6
   Workers Launched: 6
   ->  Parallel Seq Scan on test  (cost=0.00..650811.40 rows=0 width=4) (actual time=7053.761..8231.174 rows=0 loops=7)
         Filter: (i = 1)
         Rows Removed by Filter: 14285714
 Planning time: 0.124 ms
 Execution time: 8250.461 ms
(8 rows)

Even though PostgreSQL was allowed up to 8 workers, it used only six. This is because Postgres also limits the number of workers based on the size of the table and the min_parallel_relation_size parameter: the table gets one worker once it reaches min_parallel_relation_size, plus one more for each further tripling of its size (a geometric progression with ratio 3, scaled by min_parallel_relation_size). With the default value of 8MB, this gives:
Size       Workers
<8 MB      0
<24 MB     1
<72 MB     2
<216 MB    3
<648 MB    4
<1944 MB   5
<5832 MB   6
...        ...

The size of our table is 3458MB, so 6 is the maximum number of workers available to it.

postgres=# \dt+ test
                     List of relations
 Schema | Name | Type  |  Owner   |  Size   | Description
--------+------+-------+----------+---------+-------------
 public | test | table | postgres | 3458 MB |
(1 row)
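The rule above can be sketched in a few lines of Python (my own approximation of the planner's logic, not PostgreSQL code): one worker once the relation reaches min_parallel_relation_size, plus one for every further tripling of the size, capped by max_parallel_workers_per_gather.

```python
import math

def planned_workers(rel_size_mb, min_parallel_relation_size_mb=8,
                    max_parallel_workers_per_gather=8):
    """Rough sketch of the 9.6 planner's worker count: one worker once the
    relation reaches min_parallel_relation_size, plus one per further
    tripling in size, capped by max_parallel_workers_per_gather."""
    if rel_size_mb < min_parallel_relation_size_mb:
        return 0
    extra_triplings = int(math.log(rel_size_mb / min_parallel_relation_size_mb, 3))
    return min(1 + extra_triplings, max_parallel_workers_per_gather)

for size_mb in (7, 100, 3458):
    print(size_mb, "MB ->", planned_workers(size_mb), "workers")
```

For the article's 3458MB table this yields 6, matching the plan's "Workers Planned: 6".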

Finally, here is a brief demonstration of the improvements achieved by this patch. Running our query with a growing number of workers, we get the following results:
Workers   Execution time
0         24767.848 ms
1         14855.961 ms
2         10415.661 ms
3          8041.187 ms
4          8090.855 ms
5          8082.937 ms
6          8061.939 ms

You can see that the execution time improves significantly until it settles at about one third of the initial value. It is also easy to explain why there is no improvement between 3 and 6 workers: the test machine has 4 CPUs, so once 3 workers have been added to the leader process, all cores are busy and the results stabilize.
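The shape of that curve is easy to reproduce from the timings in the table above (a quick Python sketch, not part of the article's test session):

```python
# Workers -> execution time, taken from the table above (milliseconds).
timings_ms = {
    0: 24767.848, 1: 14855.961, 2: 10415.661, 3: 8041.187,
    4: 8090.855, 5: 8082.937, 6: 8061.939,
}

baseline = timings_ms[0]
for workers, ms in timings_ms.items():
    print(f"{workers} workers: {ms:9.3f} ms  speedup {baseline / ms:.2f}x")
```

The speedup peaks at roughly 3x with 3 workers and stays flat afterwards, consistent with the 4-CPU machine described above.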

To conclude: with version 9.6, PostgreSQL has entered a new stage of query parallelization, of which parallel sequential scan is only the first excellent result. Version 9.6 also parallelized aggregation, but that is a topic for another article, coming in the next few weeks!

Source: https://habr.com/ru/post/305662/

