
PostgreSQL benchmarking on FreeBSD, CentOS, Ubuntu, Debian and openSUSE

Hi, Habr! I present to you a translation of the original article PostgreSQL benchmark on FreeBSD, CentOS, Ubuntu, Debian and openSUSE by Martin Kováčik. It examines tests of the PostgreSQL 10.1 DBMS in conditions close to real-world environments on different Unix-like systems.

Translation


In this post, I'm going to show the test results of the recently released PostgreSQL 10.1. I tested the database on these operating systems (all 64-bit):


Testing Methodology


The purpose of the test was to measure PostgreSQL performance under conditions similar to typical production deployments:

For each of the tested OSs, a control database of ~74 GB was created:

pgbench -i -s 5000 pgbench 
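A pgbench scale factor of 5000 corresponds to 500 million rows in pgbench_accounts; at roughly 15 MB per scale unit (a common rule of thumb, not a figure from the article), that lands near the ~74 GB quoted above:

```shell
#!/bin/sh
# Back-of-the-envelope estimate of pgbench database size from the
# scale factor. The ~15 MB-per-unit figure is a rough rule of thumb,
# not an exact value; real size varies with fillfactor and bloat.
SCALE=5000
ROWS=$((SCALE * 100000))          # rows in pgbench_accounts
SIZE_GB=$((SCALE * 15 / 1024))    # approximate on-disk size in GB
echo "scale=${SCALE} rows=${ROWS} size~${SIZE_GB}GB"
```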

The test infrastructure consisted of two dedicated servers connected to a 1 Gbps network:


I was interested in these test combinations:


pgbench setup


The load was generated with pgbench version 10.1 running on a separate FreeBSD 11.1 machine. The test script consisted of three parts: vacuum + warm-up, a read-only test, and a read-write test. Before each read-write test, the pgbench tables were vacuumed (the -v flag was used). During the test, I gradually increased the number of clients accessing the database.

 #!/bin/sh
 THREADS=8
 DURATION=1800
 PGIP=192.168.1.120

 # warmup
 pgbench -h ${PGIP} -U pgbench -j ${THREADS} -c 10 -T ${DURATION} -S -v pgbench

 for clients in 1 10 20 30 40 50 60 70 80 90 100 110 120
 do
   echo "RO ${clients}"
   pgbench -h ${PGIP} -U pgbench -j ${THREADS} -c ${clients} -T ${DURATION} -S pgbench > pgbench_ro_${clients}.log
 done

 for clients in 1 10 20 30 40 50 60 70 80 90 100 110 120
 do
   echo "RW ${clients}"
   pgbench -h ${PGIP} -U pgbench -j ${THREADS} -c ${clients} -T ${DURATION} -v pgbench > pgbench_rw_${clients}.log
 done

PostgreSQL server settings


For the Linux distributions, PostgreSQL was installed on an ext4 file system in a RAID1 configuration (software RAID using mdraid) over two SSDs, with atime disabled. In the case of FreeBSD, the OpenZFS file system was used, mirroring (RAID1) the two SSDs. The ZFS dataset holding the PostgreSQL data was created with the following parameters:
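For reference, the described Linux storage layout could be sketched as follows (the device names and mount point are assumptions, not taken from the article):

```shell
#!/bin/sh
# Hypothetical sketch of the Linux storage layout described above:
# software RAID1 (mdraid) over two SSDs, ext4, atime disabled.
# /dev/sda, /dev/sdb and /var/lib/pgsql are assumed names.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mkfs.ext4 /dev/md0
mount -o noatime /dev/md0 /var/lib/pgsql
```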

 zfs get recordsize,logbias,primarycache,atime,compression zroot/var/db/postgres
 NAME                   PROPERTY      VALUE       SOURCE
 zroot/var/db/postgres  recordsize    8K          local
 zroot/var/db/postgres  logbias       throughput  local
 zroot/var/db/postgres  primarycache  all         default
 zroot/var/db/postgres  atime         off         inherited from zroot
 zroot/var/db/postgres  compression   lz4         local
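The locally-set properties in that output could have been applied at dataset creation time like this (a sketch; primarycache is at its default and atime is inherited from the pool, so neither needs to be set explicitly):

```shell
# Sketch: create the dataset with the non-default ZFS properties
# shown in the output above (8K recordsize to match PostgreSQL's
# page size, throughput logbias, lz4 compression).
zfs create -o recordsize=8K -o logbias=throughput -o compression=lz4 \
    zroot/var/db/postgres
```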

The PostgreSQL server configuration was the same on all operating systems except for file paths (each OS uses its own directory structure). The contents of the postgresql.conf file (basic settings) for the instance with 32 GB of RAM:

 autovacuum = off
 default_statistics_target = 100
 maintenance_work_mem = 1GB
 checkpoint_completion_target = 0.9
 effective_cache_size = 24GB
 work_mem = 104MB
 wal_buffers = 16MB
 shared_buffers = 8GB
 max_connections = 300

The contents of the postgresql.conf file for the instance with 200 GB of RAM:

 autovacuum = off
 default_statistics_target = 100
 maintenance_work_mem = 2GB
 checkpoint_completion_target = 0.9
 effective_cache_size = 144GB
 work_mem = 640MB
 wal_buffers = 16MB
 shared_buffers = 48GB
 max_connections = 300
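Both configurations roughly follow the common tuning rules of thumb of shared_buffers ≈ 25% of RAM and effective_cache_size ≈ 75% (a heuristic of mine, not something the article states; the 200 GB instance rounds down to 48 GB and 144 GB). A quick sanity check:

```shell
#!/bin/sh
# Sanity-check the usual tuning rules of thumb against the two
# instances: shared_buffers ~ RAM/4, effective_cache_size ~ 3*RAM/4.
for ram in 32 200; do
    echo "RAM ${ram}GB: shared_buffers~$((ram / 4))GB effective_cache_size~$((ram * 3 / 4))GB"
done
```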

Comparative Testing


I tested PostgreSQL on five different operating systems in two modes, read-only and TPC-B (read-write), with two different memory profiles. Each OS test took about 30 hours (not counting the time needed to configure the OS). The results of each pgbench run were saved for later evaluation.

Results - Read Only


[charts: read-only throughput (transactions per second) vs. number of clients for each OS, both memory profiles]

Results - TPC-B


[charts: TPC-B (read-write) throughput (transactions per second) vs. number of clients for each OS, both memory profiles]

Summary


The tests showed that the differences between the GNU/Linux distributions are not very significant. The best operating system in the read-only test was openSUSE 42.3, while FreeBSD was about 40% slower. Unfortunately, I did not find out what caused this mediocre FreeBSD performance.

A more realistic picture of PostgreSQL performance was obtained in the read-write (TPC-B) test. Among the GNU/Linux distributions, CentOS 7.4 was the fastest and Debian 9.2 the slowest. I was pleasantly surprised by FreeBSD 11.1, which ran more than twice as fast as the best Linux, despite the fact that FreeBSD used ZFS, a copy-on-write file system. I assumed this difference was caused by the overhead of software RAID on Linux, so I ran three more TPC-B tests with 100 concurrent clients, this time without software RAID:

[chart: TPC-B throughput at 100 clients without software RAID]
The results show the inefficiency of Linux software RAID (or the efficiency of ZFS RAID): the performance of CentOS 7.4 without software RAID is only slightly higher than that of FreeBSD 11.1 with ZFS RAID (for TPC-B at 100 concurrent clients).
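The follow-up runs presumably reused the read-write invocation from the test script above, pinned at 100 clients (an assumption; the article does not show the exact command):

```shell
# Assumed invocation for the extra TPC-B runs at 100 clients,
# reusing the host, thread count and duration from the script above.
pgbench -h 192.168.1.120 -U pgbench -j 8 -c 100 -T 1800 -v pgbench
```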

Source: https://habr.com/ru/post/458156/
