When different approaches to virtualization come up in discussion, Virtuozzo supporters (usually hosting providers running OpenVZ) often repeat a claim they once heard somewhere: "Xen is slow at disk I/O." This misconception has its roots in the radically different disk caching mechanisms of Xen virtual machines and Virtuozzo containers, which make disk performance look very different under different conditions. But once a misconception takes hold, it sticks around for a long time.
To put the "Xen disk" topic to rest and show that there is no slowdown, here are results from unixbench, bonnie++, and packaging of the Linux kernel sources, all run on the same machine and on the same disk partition.
Processor: Intel Core 2 Quad Q6600 @ 2.40GHz. The disk is an ordinary Samsung SATA drive.
Native - measured on the physical machine: 1 CPU, 256 MB RAM. Kernel: 2.6.18-164.6.1.el5
Xen PV - measured in a Xen virtual machine in paravirtualization mode: 1 CPU, 256 MB RAM. DomU kernel: 2.6.18-164.el5xen. Dom0 kernel: 2.6.18-164.el5xen. The disk is passed to the virtual machine as a phy device.
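For reference, passing a raw partition through to the domU is a one-line entry in the xm configuration file. The device names below are assumptions for illustration, not the exact ones used in this test:

# domU config: export the physical partition /dev/sda5 to the guest as xvda1, read-write
disk = [ 'phy:/dev/sda5,xvda1,w' ]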
Unixbench
A very synthetic test, especially where the disk is concerned, but people love to cite it in arguments. Here is just the disk-related portion of the output:
Native
File Copy 1024 bufsize 2000 maxblocks 3960.0 529094.5 1336.1
File Copy 256 bufsize 500 maxblocks 1655.0 153098.5 925.1
File Copy 4096 bufsize 8000 maxblocks 5800.0 1208281.0 2083.2
Xen PV
File Copy 1024 bufsize 2000 maxblocks 3960.0 542862.3 1370.9
File Copy 256 bufsize 500 maxblocks 1655.0 153684.5 928.6
File Copy 4096 bufsize 8000 maxblocks 5800.0 1212533.2 2090.6
The last column is the final index score - the higher, the better. On physical hardware and in the virtual machine the numbers are almost equal; in the virtual machine they are even slightly higher. This may look like a violation of the law of conservation of energy, but the explanation is simple: a small part of the load (about one percent) is handled by the I/O subsystem, which lives outside the virtual machine, in dom0, and runs on a different core.
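For anyone who wants to reproduce just these numbers, UnixBench lets you run the file copy tests by name. The invocation below is a sketch, assuming a UnixBench 5.x tree with its stock Run script:

# run only the three File Copy tests (1024-, 256- and 4096-byte buffers)
$ ./Run fstime fsbuffer fsdisk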
bonnie++
Native
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
dev.home 1G 575 99 64203 13 29238 5 1726 96 68316 6 144.5 1
Latency 14923us 1197ms 483ms 60674us 16858us 541ms
Version 1.96 ------Sequential Create------ --------Random Create--------
dev.home -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
256 47219 67 304464 100 23795 31 51813 73 378017 100 6970 9
Latency 575ms 846us 673ms 416ms 22us 1408ms
Xen PV
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
CentOS_5_4 1G 815 99 65675 4 29532 0 1739 92 68298 0 134.1 0
Latency 10028us 200ms 242ms 122ms 15356us 627ms
Version 1.96 ------Sequential Create------ --------Random Create--------
CentOS_5_4 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
256 53015 61 325596 99 25157 23 58020 68 404162 99 6050 5
Latency 605ms 771us 686ms 406ms 49us 2121ms
A more comprehensive assessment, but again a somewhat surprising result: in some cases Xen PV is, once more, faster.
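The bonnie++ runs above can be reproduced with something along these lines; the target directory and user are assumptions, and -n is given in multiples of 1024 files:

# 1 GB of test data, 256*1024 small files for the create/read/delete phases
$ bonnie++ -d /mnt/bench -s 1g -n 256 -u nobody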
Archiving
Finally, a look at a normal, real-world task. Packaging the Linux kernel sources* involves intensive disk reads: the total size is about 320 MB across almost 24 thousand files. Before packing, the disk cache was dropped via vm.drop_caches.
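Dropping the cache before each run looks roughly like this (the sysctl must be executed as root); the sync is there so that dirty pages are written back before the caches are discarded:

$ sync                          # flush dirty pages to disk first
$ sysctl -w vm.drop_caches=3    # drop the page cache, dentries and inodes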
Native
$ time (find linux-2.6.26 | cpio -o > /dev/null)
530862 blocks
real 0m30.247s
user 0m0.605s
sys 0m2.411s
Xen PV
$ time (find linux-2.6.26 | cpio -o > /dev/null)
530862 blocks
real 0m32.396s
user 0m0.052s
sys 0m0.120s
The difference in time is about 7%, a fairly ordinary overhead for virtualization. This is the kind of performance loss that applies to most disk access patterns. If your workload is disk-bound, plus or minus 7% will not significantly change the situation.
* cpio is used instead of tar because tar is clever enough to detect that its output is /dev/null, switch to a dry run, and not actually archive anything.
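If you do want to stay with tar, the usual workaround is to write to stdout and pipe through cat, so tar no longer sees /dev/null as its archive. This is a sketch of that approach, not what was measured above:

# force tar to actually read every file by hiding /dev/null behind a pipe
$ time (tar -cf - linux-2.6.26 | cat > /dev/null)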