
Hi, Habr! I present to you a translation of the original article "PostgreSQL benchmark on FreeBSD, CentOS, Ubuntu, Debian and openSUSE" by Martin Kováčik. It examines the performance of PostgreSQL 10.1 under conditions close to real-world deployments on different Unix-like systems.
Translation
In this post, I'm going to show the test results of the recently released PostgreSQL 10.1. I tested the database on the following operating systems (all 64-bit):
- Ubuntu 16.04 , kernel 4.10.0-38-generic
- openSUSE 42.3, kernel 4.4.87-25-default
- CentOS 7.4 , kernel 3.10.0-693.2.2.el7.x86_64
- Debian 9.2 , kernel 4.9.0-4-amd64
- FreeBSD 11.1
Testing Methodology
The purpose of the test was to measure PostgreSQL performance under conditions similar to (typical) production deployments:
- clients connect through a connection pool, so that connections to the database are not constantly reopened (rather than running a separate pooler, I simply did not use pgbench's -C flag, which would reconnect for every transaction)
- clients connect over the network, not through a unix socket
- The PostgreSQL data directory is on a RAID 1 mirror.
For each of the tested operating systems, a control database of ~74 GB was created:
pgbench -i -s 5000 pgbench
The test infrastructure consisted of two dedicated servers connected to a 1 Gbps network:
- EX41-SSD: Intel i7-6700, 4 cores, 8 threads, 32 GB DDR4 RAM; used to generate SQL queries with pgbench
- PX121-SSD: Intel Xeon E5-1650 v3, 6 cores, 12 threads, 256 GB DDR4 ECC RAM, 2 × 480 GB SATA 6 Gb/s data-center SSDs; used as the PostgreSQL server
I was interested in these test combinations:
- 32 GB read-only: read-only test (SELECTs only, no data changes); the dataset does not fit into the PostgreSQL cache
- 200 GB read-only: read-only test; the dataset fits into the PostgreSQL cache
- 32 GB TPC-B: read-write test; the dataset does not fit into the PostgreSQL cache
- 200 GB TPC-B: read-write test; the dataset fits into the PostgreSQL cache
pgbench setup
pgbench version 10.1, running on a separate FreeBSD 11.1 machine, was used to generate the load. The test script consisted of three parts: vacuum + warm-up, a read-only test, and a read-write test. Before each test, the pgbench tables were vacuumed (the -v flag). During the test, I gradually increased the number of clients accessing the database.
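The procedure above might be sketched roughly as follows. This is not the author's actual script; the host name, run duration, and client counts are assumptions for illustration, but the flags are standard pgbench options (-S selects the built-in read-only script, -v vacuums the pgbench tables before the run):

```shell
#!/bin/sh
# Hypothetical sketch of the benchmark loop; DBHOST, -T and the
# client counts are assumptions, not the author's exact values.
DBHOST=pgserver     # PostgreSQL server, reached over the network
DB=pgbench

for clients in 10 20 50 100 200; do
    # Read-only run: -S uses pgbench's built-in select-only script,
    # -v vacuums the standard pgbench tables before the test.
    pgbench -h "$DBHOST" -v -S -c "$clients" -j 8 -T 300 "$DB" \
        >> results-readonly.log

    # Read-write run: pgbench's default script is TPC-B-like.
    pgbench -h "$DBHOST" -v -c "$clients" -j 8 -T 300 "$DB" \
        >> results-tpcb.log
done
```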
PostgreSQL server settings
For the Linux distributions, PostgreSQL was installed on an ext4 file system in a RAID 1 configuration (software RAID using mdraid) on two SSDs, with atime disabled. On FreeBSD, the OpenZFS file system was used on the two SSDs in a RAID 1 (mirror) configuration. The ZFS dataset holding the PostgreSQL data was created with the following parameters:
zfs get recordsize,logbias,primarycache,atime,compression zroot/var/db/postgres
NAME                   PROPERTY      VALUE       SOURCE
zroot/var/db/postgres  recordsize    8K          local
zroot/var/db/postgres  logbias       throughput  local
zroot/var/db/postgres  primarycache  all         default
zroot/var/db/postgres  atime         off         inherited from zroot
zroot/var/db/postgres  compression   lz4         local
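For reference, the two storage layouts described above could be created roughly like this. The device names and mount point are assumptions (the post does not give them); the ZFS property values match those listed above:

```shell
# Hypothetical device names (/dev/sda, /dev/sdb, ada0, ada1);
# adjust to the actual hardware.

# Linux: RAID 1 mirror via mdraid, ext4, atime disabled
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mkfs.ext4 /dev/md0
mount -o noatime /dev/md0 /var/lib/postgresql

# FreeBSD: ZFS mirror with the dataset properties shown above
zpool create zroot mirror ada0 ada1
zfs create -o recordsize=8K -o logbias=throughput -o atime=off \
    -o compression=lz4 zroot/var/db/postgres
```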
The PostgreSQL server configuration was the same on all operating systems except for file paths (each OS uses its own directory structure). The contents of the postgresql.conf file (basic settings) for the 32 GB instance:
autovacuum = off
default_statistics_target = 100
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
effective_cache_size = 24GB
work_mem = 104MB
wal_buffers = 16MB
shared_buffers = 8GB
max_connections = 300
The contents of the postgresql.conf file for the 200 GB instance:
autovacuum = off
default_statistics_target = 100
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
effective_cache_size = 144GB
work_mem = 640MB
wal_buffers = 16MB
shared_buffers = 48MB
max_connections = 300
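After editing postgresql.conf and restarting the server, the active values can be confirmed from psql with the standard SHOW command (the host name here is an assumption; the expected values are the ones from the profiles above):

```shell
# Verify the running server picked up the settings; -h pgserver is
# a hypothetical host name. Values should match the profile in use,
# e.g. shared_buffers = 48GB for the 200 GB instance.
psql -h pgserver -d pgbench -c "SHOW shared_buffers;"
psql -h pgserver -d pgbench -c "SHOW effective_cache_size;"
```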
Comparative Testing
I tested PostgreSQL on five different operating systems in two modes - read-only and TPC-B (read-write) - with two different memory profiles. Testing each OS took about 30 hours (not counting the time required to set up the OS). The results of each pgbench run were saved for later evaluation.
Results - Read Only

Results - TPC-B

Conclusions
The test showed that the differences between the GNU/Linux distributions are not very significant. The fastest operating system in the read-only test was openSUSE 42.3, while FreeBSD was about 40% slower. Unfortunately, I did not find out what caused this mediocre FreeBSD performance.
A more realistic picture of PostgreSQL performance came from the read-write (TPC-B) test. Among the GNU/Linux distributions, CentOS 7.4 was the fastest and Debian 9.2 the slowest. I was pleasantly surprised by FreeBSD 11.1, which was more than twice as fast as the best Linux, even though FreeBSD used ZFS, a copy-on-write file system. I assumed this difference was caused by the overhead of software RAID on Linux, so I ran three more TPC-B tests with 100 concurrent clients, this time without software RAID:
- FreeBSD 11.1 + UFS : 5623.86 TPS
- FreeBSD 11.1 + ZFS : 8331.85 TPS
- CentOS 7.4 + ext4 : 8987.65 TPS
The results show the inefficiency of Linux software RAID (or the efficiency of ZFS RAID). The performance of CentOS 7.4 without software RAID is only slightly higher than that of FreeBSD 11.1 with ZFS RAID (in the TPC-B test with 100 concurrent clients).