Our clients often ask which of the data storage types supported by VMmanager is the best and the fastest, and which one they should choose in their case. Unfortunately, there is no single answer to this question, so we decided to test the performance of the data storages ourselves.

In early March 2013, ISPsystem announced the release of a new virtualization management product, VMmanager. The solution is suited both for hosting virtual machines and for building a cloud.
VMmanager supports several types of data storage; here the term "data storage" refers to the way virtual disks are stored. These types are divided into network and local. The network types are iSCSI, NFS and RBD; the local ones are LVM and DIR (file system).
- iSCSI is a protocol for accessing remote storage: SCSI commands are sent over an IP network, which allows data to be transferred over long distances.
- NFS is network storage. It is considerably slower than local storage, so it is recommended only for additional virtual disks.
- RBD, also known as the Ceph RADOS block device, is a distributed storage system. There is a guide article on Habr on installing Ceph FS, with brief help from the user ilaskov.
- LVM is the best choice for the primary local storage.
- File system (DIR): the server's file system is used as storage, and virtual disk images are stored as RAW files.
Each of these types has its own advantages and disadvantages. For example, with network storage there is no need to transfer the disk image to another cluster node when a virtual machine is migrated.
As the most accessible and widespread options, we took the local storages, LVM and DIR (with the RAW and Qcow2 formats), to answer the question of which local storage is the fastest for reading and writing.
For reference:
- RAW is a complete image of a block device without any internal format.
- Qcow2 is the QEMU disk image format; the name stands for QEMU Copy-On-Write, version 2.
For testing, a server with the following configuration was chosen:
- CPU: Intel Core2Quad CPU Q6600
- RAM: 8GB
- HDD: Adaptec ASR3405 4x150GB SAS RAID-10
- OS: CentOS 6
The testing itself was carried out with the fio utility, which amaramo described in the article "How to measure disk performance". The utility is available as source code on the official website; we installed it on CentOS from the EPEL repository.
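On CentOS 6 a typical installation looks roughly like this (a sketch; it assumes the fio package is available from EPEL):

# connect the EPEL repository and install fio from it
yum install epel-release
yum install fio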
Reading and writing were tested separately. The number of IOPS was chosen as the key metric.
The fio parameters were as follows.

Read test:

[readtest]
blocksize=128k
filename=/dev/vdb
rw=randread
direct=1
buffered=0
ioengine=libaio
iodepth=16

Write test:

[writetest]
blocksize=128k
size=100%
filename=/dev/vdb
rw=randwrite
direct=1
buffered=0
ioengine=libaio
iodepth=16

Each test was run three times. Before each run, the disk was filled with "zeros" using dd:
- dd if=/dev/zero of=/dev/vdb bs=4096 conv=noerror
and the operating system disk cache was flushed:
- echo 3 > /proc/sys/vm/drop_caches
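Put together, a single run could be scripted roughly as follows (a minimal sketch; the job file name readtest.fio is an assumption, the article only lists the job sections):

# prepare the disk: zero-fill it and flush the OS page cache
dd if=/dev/zero of=/dev/vdb bs=4096 conv=noerror
echo 3 > /proc/sys/vm/drop_caches
# run the read test job saved as readtest.fio
fio readtest.fio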
Since the testing was performed on the same physical array where the operating system is installed, system processes could affect the obtained values: in some runs the result differed by an order of magnitude. Therefore, the median was taken as the resulting value.
For testing, a virtual disk of 50000 MB was created. The size was not chosen by chance: it is one of the most common disk sizes for virtual servers. All disks were attached to the virtual machines via the virtio driver. Paravirtual (virtio) drivers are used for the main devices in a virtual environment, and using them generally improves the performance of the virtual disk subsystem. These drivers have been present in the Linux kernel since version 2.6.25.
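For illustration, attaching a disk to a guest over the virtio bus with libvirt can look like this (a sketch; the domain name testvm and the volume path are assumptions, not taken from the article):

# attach a block device to the guest as vdb using the virtio bus
virsh attach-disk testvm /dev/vg0/vm_disk vdb --targetbus virtio --persistent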
In the first test method, the sda4 partition was used (on which both the LVM volumes and the disks for the virtual machines were later created), with the parameter size=50G added to the fio job.
For the second method, an LVM volume group was created, and in this group an LVM logical volume of 50000 MB, the same size as for a virtual machine.
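Creating such a volume could look roughly like this (a sketch; the group name vg0 and the volume name testlv are assumptions):

# initialize the partition, create a volume group and a 50000 MB logical volume in it
pvcreate /dev/sda4
vgcreate vg0 /dev/sda4
lvcreate -L 50000M -n testlv vg0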
When testing the DIR storage with the RAW and Qcow2 formats, the disk images themselves were stored as files on an ext4 file system in the /vm directory.
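Such image files are usually created with qemu-img (a sketch; the file names are illustrative):

# create a 50000 MB RAW image and a Qcow2 image of the same size
qemu-img create -f raw /vm/disk.raw 50000M
qemu-img create -f qcow2 /vm/disk.qcow2 50000M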
In the fifth test method only one pass was made and the disk was not filled with "zeros", since we were interested in the performance right after a snapshot of the file system was created.
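An internal snapshot of a Qcow2 image can be created, for example, with qemu-img (a sketch; the snapshot name snap1 is an assumption):

# create an internal snapshot inside the Qcow2 image
qemu-img snapshot -c snap1 /vm/disk.qcow2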
Result table

| No | Storage type | Read test result, IOPS | Write test result, IOPS |
|----|--------------|------------------------|-------------------------|
| 1 | Read/write on the physical server | 843 | 331 |
| 2 | LVM partition on the physical server | 848 | 559 |
| 3 | Virtual machine, DIR (RAW) | 668 | 545 |
| 4 | Virtual machine, DIR (Qcow2) | 621 | 463 |
| 5 | Virtual machine, DIR (Qcow2) + snapshot | 615 | 56 |
| 6 | Virtual machine, LVM | 854 | 557 |
Based on the table data, the following conclusions can be drawn.
- The fastest option is LVM storage.
- At the same time, RAW is the easiest storage for various operations: resizing, changing the disk layout, moving to another virtualization platform without conversion, or converting to the Qcow2 format (a conversion sketch is given below).
- The advantage of Qcow2 is support for file system snapshots, but using them slows down writes dramatically, because each write has to wait until the affected blocks are copied relative to the snapshot.
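Converting between the formats is also done with qemu-img (a sketch; the file names are illustrative):

# convert a RAW image to the Qcow2 format
qemu-img convert -f raw -O qcow2 /vm/disk.raw /vm/disk.qcow2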