
VMmanager: Comparison of local storage performance

Our clients often ask which of the data storage types supported by VMmanager is the best and fastest, and which one they should choose in their case. Unfortunately, there is no single answer to this question.

Therefore, we decided to test the performance of the storage types ourselves.

In early March 2013, ISPsystem announced the release of a new virtualization management software product, VMmanager. The solution is adapted both for hosting virtual machines and for building a cloud.

VMmanager supports several types of data storage. Here, "data storage" refers to the way virtual disks are stored. Storage types are divided into network and local. Network storage includes iSCSI, NFS, and RBD; local storage includes LVM and DIR (file system).

Each of these types has its own advantages and disadvantages. For example, with network storage there is no need to transfer the disk image to another cluster node when migrating a virtual machine.
As the most accessible and common options, we took the local storage types LVM and DIR (with images in RAW and Qcow2 formats) to answer the question: which of the local storages is the fastest for reading and writing?
For reference:

For testing, a server of this configuration was chosen:

The testing itself was carried out with the fio utility, which was described in amaramo's article "How to measure disk performance". The utility is available on the official website as source code.

To install it on CentOS, we used the EPEL repository.
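The exact commands did not survive the translation; a typical installation would look roughly like this (a sketch, package names assumed):

# Enable the EPEL repository (on older CentOS releases the epel-release
# package may instead have to be installed from an RPM; assumed here)
yum install epel-release
# Install fio from EPEL
yum install fio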

Reading and writing were tested separately. The number of IOPS was chosen as the key metric.

The fio parameters were as follows:

Read test:
[readtest]
blocksize=128k
filename=/dev/vdb
rw=randread
direct=1
buffered=0
ioengine=libaio
iodepth=16

Write test:
[writetest]
blocksize=128k
size=100%
filename=/dev/vdb
rw=randwrite
direct=1
buffered=0
ioengine=libaio
iodepth=16
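
The invocation itself is not shown in the article; with job files like the ones above, the runs would look roughly like this (file names are assumed):

fio readtest.ini
fio writetest.ini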

Each test was run three times. Before each launch, the disk was filled with “zeros” using the dd application:

and the operating system disk cache was flushed:
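Neither command survived the translation; typical equivalents, assuming the test device is /dev/vdb, would look like this:

# fill the device with zeros before each run (assumed invocation)
dd if=/dev/zero of=/dev/vdb bs=1M
# flush dirty pages and drop the page cache (assumed invocation)
sync
echo 3 > /proc/sys/vm/drop_caches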


Since the testing was performed on the same physical array where the operating system is installed, system processes could affect the values obtained; in some runs the results differed by an order of magnitude. For that reason the median was taken as the resulting value.

For testing, a 50000 MB virtual disk was created. The size was not chosen at random: it is one of the most common disk sizes for virtual servers. All disks were connected to the virtual machines through the virtio driver. Paravirtual (virtio) drivers are used for the main devices in a virtual environment, and using them generally improves the performance of the virtual disk subsystem. These drivers have been present in the Linux kernel since version 2.6.25.
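For illustration, a virtio-attached disk in a libvirt domain definition looks roughly like this (a sketch; VMmanager generates this configuration itself, and the image path and file name here are assumptions):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/vm/testvm.qcow2'/>
  <target dev='vdb' bus='virtio'/>
</disk>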

In the first test method, the physical partition sda4 was used (on which both the LVM volumes and the virtual machine disks were later created), with size=50G added to the fio parameters.

For the second method, an LVM volume group was created, and inside it a 50000 MB logical volume, the same size as a virtual machine disk.
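The commands are not given in the article; a typical sequence, with an assumed volume group and volume name, would be:

pvcreate /dev/sda4
vgcreate vgtest /dev/sda4
# 50000 MB logical volume, the same size as the virtual machine disk
lvcreate -L 50000M -n lvtest vgtest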

When testing DIR storage with the RAW and Qcow2 formats, the disk images were located as files on an ext4 file system in the /vm directory.
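Images of both formats can be created with qemu-img; a sketch with assumed file names:

qemu-img create -f raw /vm/test-raw.img 50000M
qemu-img create -f qcow2 /vm/test-qcow2.qcow2 50000M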

In the fifth test method, only one pass was made and the disk was not zero-filled, since what was of interest here was the performance immediately after creating a snapshot.
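An internal snapshot of a qcow2 image can be created with qemu-img while the machine is stopped; a sketch, reusing the assumed file name from above:

qemu-img snapshot -c before-test /vm/test-qcow2.qcow2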

Result table

No.  Storage type                               Read test result, IOPS   Write test result, IOPS
1    Read/write on the physical server          843                       331
2    LVM partition on the physical server       848                       559
3    DIR (RAW) virtual machine                  668                       545
4    DIR (Qcow2) virtual machine                621                       463
5    DIR (Qcow2) virtual machine + snapshot     615                       56
6    LVM virtual machine                        854                       557

Based on the table, the following conclusions can be drawn.

Source: https://habr.com/ru/post/209790/

