
One of the goals of a hosting provider is to utilize its existing hardware as fully as possible while still providing quality service to end users. The resources of a physical server are always limited, while the number of client services hosted on it, VPS in our case, can vary significantly. How to have your cake and eat it too, read under the cut.
Packing VPS onto a node densely enough that clients do not notice it at all greatly improves the economics of any hosting provider. Of course, the node should not burst at the seams from being crammed with containers to the brim, with every spike in load immediately felt by all customers.
How many VPS can be placed on one node depends on many factors, the obvious ones being:
- The hardware characteristics of the node itself
- VPS size
- The nature of the load on the VPS
- Software technologies to help optimize density
Here we will share our experience of using the Pfcache technology in Virtuozzo.
We use the 6th branch, but everything said below also holds for the 7th.
Pfcache is a Virtuozzo mechanism that helps deduplicate IOPS and RAM across containers by moving identical files from containers into a separate shared area.
In fact, it consists of the following parts (a quick way to spot the user-space ones on a node is sketched after the list):
- Kernel code
- A user-space daemon
- User-space utilities
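For instance, the user-space parts are easy to check on a running node. This is only a sketch, assuming the pfcached daemon name and the pfcache utility that appear later in this article (output omitted):

# is the caching daemon running?
[root@pcs13 ~]# ps aux | grep [p]fcached
# the pfcache utility can dump per-container cache state
[root@pcs13 ~]# pfcache dump /vz/root/418 | head -3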
On the node we set aside a whole partition in which the shared files will be created; these files are used directly by all the VPS on the node. A ploop block device is mounted on this partition. Then, when a container is started, it receives a reference to this partition in its mount options:
[root@pcs13 ~]# cat /proc/mounts
...
/dev/ploop62124p1 /vz/pfcache ext4 rw,relatime,barrier=1,data=ordered,balloon_ino=12 0 0
...
/dev/ploop22927p1 /vz/root/418 ext4 rw,relatime,barrier=1,data=ordered,balloon_ino=12,pfcache_csum,pfcache=/vz/pfcache 0 0
/dev/ploop29642p1 /vz/root/264 ext4 rw,relatime,barrier=1,data=ordered,balloon_ino=12,pfcache_csum,pfcache=/vz/pfcache 0 0
...
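If you only want to see which containers on a node are mounted with the shared cache, a quick one-liner over /proc/mounts (a sketch, not an official tool) is enough:

# print the mount points whose options carry pfcache=/vz/pfcache
[root@pcs13 ~]# awk '$4 ~ /pfcache=\/vz\/pfcache/ {print $2}' /proc/mounts
/vz/root/418
/vz/root/264
...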
Here are the approximate statistics of the number of files on one of our nodes:
[root@pcs13 ~]# find /vz/pfcache -type f | wc -l
45851
[root@pcs13 ~]# du -sck -h /vz/pfcache
2.4G    /vz/pfcache
2.4G    total
The principle of pfcache is as follows:
- The user-space daemon pfcached writes the SHA-1 hash of a file into an extended (xattr) attribute of that file. Not all files are processed, only those in the directories /usr, /bin, /usr/sbin, /sbin, /lib, /lib64;
- Most likely, the files in these directories will be “shared” and will be used by several containers;
- Pfcached periodically collects file-read statistics from the kernel, analyzes them, and adds files to the cache if they are used frequently;
- These directories may vary and are set in the configuration files;
- When a file is read, the kernel checks whether it carries the expected hash in its extended (xattr) attributes. If it does, the "shared" file is opened instead of the container's own file. This substitution happens unnoticed by the container's code and is hidden in the kernel (the shell sketch after this list illustrates the lookup);
- When a file is written to, its hash is invalidated. The next time the file is opened, the container's own copy will be used, not its cached counterpart.
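The actual substitution lives in the kernel; the following bash sketch only mimics the lookup logic in user-space terms, assuming the /vz/pfcache/&lt;first two hash characters&gt;/&lt;rest of hash&gt; layout visible in the example further below:

# sketch: where would a read of this file be served from?
F=/vz/root/2388/bin/bash
HASH=$(getfattr -n trusted.pfcache --only-values "$F" 2>/dev/null)
if [ -n "$HASH" ] && [ -f "/vz/pfcache/${HASH:0:2}/${HASH:2}" ]; then
    echo "served from the shared copy /vz/pfcache/${HASH:0:2}/${HASH:2}"
else
    echo "no valid hash - the container's own copy is read"
fi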
By keeping the shared files from /vz/pfcache in the page cache, we save that cache as well as IOPS: instead of reading ten copies of a file from disk, we read one, and it lands in the page cache immediately. In the kernel this shows up in the inode and address_space structures, which tie a container file to its shared "peer" file:
struct inode {
        ...
        struct file             *i_peer_file;  /* the shared "peer" file in /vz/pfcache */
        ...
};

struct address_space {
        ...
        struct list_head        i_peer_list;   /* pfcache peer bookkeeping for this mapping */
        ...
};
The list of VMAs for the file stays the same (deduplicating memory), and the file is read from disk less often (saving IOPS). Our shared pool lives on an SSD, which gives an additional gain in speed.
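A rough back-of-the-envelope illustration: /bin/bash in the example below is about 1 MB. If a hundred containers each kept their own copy hot, up to roughly 100 MB of identical pages could end up in the page cache, each copy read from disk separately; with one shared copy it is about 1 MB, read once.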
An example of caching the /bin/bash file:
[root@pcs13 ~]# ls -li /vz/root/2388/bin/bash
524650 -rwxr-xr-x 1 root root 1021112 Oct  7  2018 /vz/root/2388/bin/bash
[root@pcs13 ~]# pfcache dump /vz/root/2388 | grep 524650
8e3aa19fdc42e87659746f6dc8ea3af74ab30362 i:524650 g:1357611108 f:CP
[root@pcs13 ~]# sha1sum /vz/root/2388/bin/bash
8e3aa19fdc42e87659746f6dc8ea3af74ab30362  /vz/root/2388/bin/bash
[root@pcs13 /]# getfattr -ntrusted.pfcache /vz/root/2388/bin/bash
# file: vz/root/2388/bin/bash
trusted.pfcache="8e3aa19fdc42e87659746f6dc8ea3af74ab30362"
[root@pcs13 ~]# sha1sum /vz/pfcache/8e/3aa19fdc42e87659746f6dc8ea3af74ab30362
8e3aa19fdc42e87659746f6dc8ea3af74ab30362  /vz/pfcache/8e/3aa19fdc42e87659746f6dc8ea3af74ab30362
Efficiency is calculated using a ready-made script. This script walks through all the containers on the node and calculates how much cached file data each container uses.
[root@pcs16 ~]# /pcs/distr/pfcache-examine.pl
...
Pfcache cache uses 831 MB of memory
Total use of pfcached files in containers is 39837 MB of memory
Pfcache effectiveness: 39006 MB
Thus, we save about 40 gigabytes of memory on files inside containers: they are served from the cache instead.
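We do not reproduce the script itself here, but the idea behind the number can be sketched roughly as follows. This is a simplified, hypothetical re-implementation that assumes the xattr and /vz/pfcache layout shown above, and it sums file sizes rather than actual resident pages, so it only approximates the script's memory-based figures:

# sum the sizes of container files whose trusted.pfcache hash exists in the
# shared area, then subtract the size of the shared area itself
total=0
for ct in /vz/root/*/; do
  while IFS= read -r -d '' f; do
    hash=$(getfattr -n trusted.pfcache --only-values "$f" 2>/dev/null)
    [ -n "$hash" ] || continue
    [ -f "/vz/pfcache/${hash:0:2}/${hash:2}" ] || continue
    total=$(( total + $(stat -c %s "$f") ))
  done < <(find "${ct}usr" "${ct}bin" "${ct}sbin" "${ct}lib" "${ct}lib64" -type f -print0 2>/dev/null)
done
cache=$(du -sb /vz/pfcache | cut -f1)
echo "Pfcache effectiveness: $(( (total - cache) / 1024 / 1024 )) MB"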
For this mechanism to work even better, it helps to place the most "identical" VPS on a node: for example, those where the user has no root access and whose environment is set up from the same deployment image.
You can tune how pfcache works through the config file (an example fragment is sketched after the parameter list)
/etc/vz/pfcache.conf
MINSIZE, MAXSIZE - minimum/maximum file size for caching
TIMEOUT - timeout between caching attempts
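For illustration only, a fragment of such a config might look like this; the parameter names are the ones listed above, the values are made up and not recommendations:

[root@pcs13 ~]# cat /etc/vz/pfcache.conf
# illustrative values only
MINSIZE=1024
MAXSIZE=16777216
TIMEOUT=3600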
A complete list of parameters can be found in the Virtuozzo documentation.