This post is about my not very successful performance test; it also gives a few approximate figures for ARDB performance with the LMDB embedded database on Amazon EC2 instances.
Where it all started
The project expects that from time to time it will need to write thousands of rows to the database in the shortest possible time. Naturally, I do not want to put that load on the main database, and after a bit of digging I took a liking to LMDB. ARDB is a wrapper that lets you talk to it as if it were Redis.
Unfortunately, I could not find any performance tests of this combination on Amazon EC2, so I decided to run them myself.
DISCLAIMER
This post is not about choosing a NoSQL database; only one database was tested, on different configurations.
Equipment and installation
Tests were run in four configurations. The first three used two t2.micro instances (within the Free Tier); these instances are weak: while CPU credits last they give 100% of one CPU, and the baseline performance is 10% of a CPU.
- gp2 SSD volume: the slowest SSD, baseline 100 IOPS (but it can burst up to 3,000 IOPS from time to time)
- io1 SSD volume with 3,000 Provisioned IOPS: guaranteed 3,000 disk operations per second
- io1 SSD volume with 6,000 Provisioned IOPS: guaranteed 6,000 disk operations per second
- i3.large (DB server) + m4.xlarge (client): the i3.large instance has a dedicated NVMe SSD, which is very fast
All components for the tests were compiled from source. A typical installation scenario:
yum install git
yum install gcc
git clone git://smth
cd smth
make
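For ARDB itself the storage engine is chosen at build time; the sketch below assumes the yinqiwen/ardb repository and its `storage_engine` make variable (exact flags, paths, and the default config name may differ between ARDB versions):

```sh
# Build ARDB from source with LMDB as the storage engine
# (repository URL, make variable and paths are assumptions; check the ARDB README)
yum install -y git gcc gcc-c++ make
git clone https://github.com/yinqiwen/ardb.git
cd ardb
storage_engine=lmdb make
# Start the server with the bundled config; it speaks the Redis protocol
./src/ardb-server ardb.conf
```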
Mistakes
The main mistake was the lack of planning: there was only an idea. I also did not take into account an Amazon peculiarity: most services give a performance burst at the start of active use, after which performance drops. This applies to:
- Disk speeds
- Network speeds
- Processor speeds for higher performance instances
- RAM is unlikely to be affected, but anything is possible
Because of the wrong choice of instances, the CPU of the server under test was often not fully loaded.
Ideally it would have been worth simulating the real usage pattern, but as always there was no time...
With all of these points in mind, the figures below are only indicative. Synthetic benchmarks, gentlemen.
Measurements
Small data set
The data set is modest: I did not want to wait long, but I did want to get a rough estimate.
Number of keys: 1,000,000 (read: roughly 650k unique records end up in the database)
Clients: 50
Record size: 3,000 bytes
Testing was done with several consecutive commands.
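With the parameters above, a run would look roughly like the sketch below (host and port are my assumptions; 16379 is, as far as I recall, ARDB's default Redis-compatible port):

```sh
# Roughly matches the small data set: 50 clients, 1,000,000 requests,
# 3,000-byte values, keys drawn at random from a 1,000,000-key space
ARDB_HOST=10.0.0.10   # placeholder: private IP of the ARDB server
redis-benchmark -h "$ARDB_HOST" -p 16379 \
    -c 50 -n 1000000 -r 1000000 -d 3000 \
    -t set,get
```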
| Characteristic | t2.micro, gp2 | t2.micro, io1 3,000 PIOPS | t2.micro, io1 6,000 PIOPS | i3.large |
| --- | --- | --- | --- | --- |
| Server price | $10 | $10 | $10 | $130 |
| Disk price | $1 | $9 ** | $18 ** | $1 |
| PIOPS price | - | $195 | $390 | - |
| Price, total | $11 | $214 | $418 | $131 |
| CPU | 10-100% | 10-100% | 10-100% | 200% |
| RAM | 1 GB | 1 GB | 1 GB | 16 GB |
| IOPS | 100, up to 3,000 in burst | 3,000 | 6,000 | 100,000 |
| Writes, sustained load | 700 (disk-throttled), 1,700 (CPU-throttled) | 1,300 | 2,300 | 27,000 *** |
| Writes, burst | 2,700 | 10,000 | > 10,000 | |
| Reads from the 100k set, sustained load | < 4,000 | 11,000 * | 21,000 * | |
| Reads from the 100k set, burst | 35,000 | | | |
| Reads from the 1M set, sustained load | < 4,000 | 3,000 | 5,000 | 43,000 *, *** |
| Reads from the 1M set, burst | 6,000 | | | |
* all the data fits in RAM
** an Amazon limit of no more than 50 Provisioned IOPS per GB of volume, so 3,000 PIOPS requires at least a 60 GB volume and 6,000 PIOPS at least 120 GB
*** read as "at least": the bottleneck in these tests was the network transfer rate of the server running redis-benchmark
i3.large
Here testing was done on databases of different sizes.
| Characteristic | 100k keys | 1M keys | 10M keys | 30M keys |
| --- | --- | --- | --- | --- |
| Read speed | 41,407 | 42,977 | 43,220 | 17,286 * |
| Read latency, up to 1 ms | 60.14% | 62.34% | 60.27% | 2.88% |
| Read latency, up to 5 ms | 99.97% | 100.00% ** | 99.99% | 99.16% |
| Maximum read latency | 6 ms ** | 3 ms ** | 13 ms ** | 14 ms ** |
| Write speed | 34,831 | 26,911 | 15,967 * | 10,353 * |
| Write latency, up to 1 ms | 11.96% | 8.66% | 5.22% | 2.88% |
| Write latency, up to 5 ms | 99.53% | 97.50% | 96.15% | 82.65% |
| Write latency, up to 50 ms | 100% | 99.99% | 99.74% | 99.68% |
| Write latency, up to 100 ms | 100% ** | 99.99% | 99.84% | 99.75% |
| Write latency, up to 300 ms | 100% ** | 99.99% | 99.94% | 99.87% |
| Write latency, up to 500 ms | 100% ** | 100% | 99.96% | 99.91% |
| Maximum write latency | 17 ms ** | 604 ms | 3,104 ms | 5,059 ms |
* read as "at least": the bottleneck in these tests was the network transfer rate of the server running redis-benchmark
** results in which requests were delayed by exactly 200 ms for no clear reason were discarded; I blame the Amazon environment
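These runs can be scripted by repeating the benchmark for each database size; the sketch below is only my guess at the approach, and the host, port, and request counts are assumptions rather than the exact commands used:

```sh
# Hypothetical driver for the i3.large tests: one benchmark pass per DB size
ARDB_HOST=10.0.0.10   # placeholder: private IP of the ARDB server
for keys in 100000 1000000 10000000 30000000; do
    redis-benchmark -h "$ARDB_HOST" -p 16379 \
        -c 50 -n "$keys" -r "$keys" -d 3000 \
        -t set,get
done
```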
Findings
- Good performance for a t2.micro instance with a gp2 volume: for $11 a month you can get a database that steadily handles 1,000 write requests per second and from time to time bursts up to 3,000 WPS, which is enough for many applications
- Theoretically, the write performance of ARDB + LMDB, once the database already holds a million records, can be estimated as `diskIOPS / 3` (a quick check against the table above follows this list)
- io1 volumes with Provisioned IOPS do not pay off; it is much cheaper to take a storage-optimized instance with a local SSD
- For the i3.large instance the numbers are decent (for $130 per month)
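As a quick sanity check of the `diskIOPS / 3` estimate against the sustained-write column of the first table:

```sh
# diskIOPS / 3 estimate vs. measured sustained writes (first table)
echo "io1 3000 PIOPS: $((3000 / 3)) wps estimated, ~1,300 measured"
echo "io1 6000 PIOPS: $((6000 / 3)) wps estimated, ~2,300 measured"
```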
Thank you, and I hope the time I spent will be useful to someone, free of charge.