One of our chronic problems is that we spend too much time on the internal, invisible side of our work. Over the past months we have been working intensively, but clients saw almost none of the results: internal components of the cloud were being reworked and adapted to high load (thousands of operations per second).

We finally got around to the simple things: we added the ability to grow disks (including the system disk) to the interface.
The actual work amounted to a couple of hours of coding and a few more hours of checking that everything behaves as expected; what was lacking was time and hands. At last, the time was found.
How to do it?
The disk must not be in use: either detach it from the virtual machine, or power the virtual machine off. (If you don't see the "increase" button in the cloud control panel, refresh the page: the JS is cached.)
The resize option is in the "Disks" section for attached disks and in the "non-connected disks" section for detached ones. The disk size is specified in megabytes. The maximum disk size is 1.7 TB, and up to 15 disks can be attached to one machine.
Important: the disk grows only as a block device. We do not have access to clients' file systems and do not resize partitions for you.
After growing the disk you need to:
If the disk has a partition table:
- Resize the partition, for example with cfdisk. If you are changing the system partition, you will have to reboot. You may also need to delete the swap partition and recreate it afterwards.
- Run resize2fs on the device (for example, resize2fs /dev/sda1)
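The resize2fs step can be tried safely on a file-backed image, with no root and no real block device: grow the "disk" with truncate, then grow the filesystem to match. The filenames here are made up for the demo; on a real VM you would first grow the partition (e.g. with cfdisk) and then run resize2fs against the actual device, such as /dev/sda1.

```shell
#!/bin/sh
set -e

truncate -s 100M disk.img          # pretend this is the disk at its old size
mkfs.ext4 -q -F -b 4096 disk.img   # put an ext4 filesystem on it

truncate -s 200M disk.img          # the provider has grown the block device
e2fsck -f -p disk.img >/dev/null   # check the filesystem before an offline resize
resize2fs disk.img                 # grow the filesystem to fill the new size

dumpe2fs -h disk.img 2>/dev/null | grep 'Block count'
```

With 4096-byte blocks the block count doubles from 25600 to 51200, confirming the filesystem now spans the whole 200 MB device.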
In the case of LVM:
- pvresize on the PV (if you are running LVM on top of a partition table, resize the partition first)
- lvresize
- resize2fs
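The LVM steps above can be sketched as concrete commands. The device, VG and LV names are hypothetical, and the script only records the plan to a file (a dry run) because the real commands need root and an actual LVM setup; on a live system you would run them directly.

```shell
#!/bin/sh
# Dry-run sketch of the LVM growth sequence. PV/LV names are hypothetical.
set -e

PV=/dev/xvdb                 # the PV sitting directly on the grown disk
LV=/dev/mapper/vg0-data      # the logical volume holding the filesystem

plan=resize-plan.txt
: > "$plan"
run() { echo "$*" >> "$plan"; }   # swap 'echo' for real execution on a live system

run pvresize "$PV"                # let LVM see the new device size
run lvresize -l +100%FREE "$LV"   # extend the LV into the freed space
run resize2fs "$LV"               # grow the filesystem to match the LV

cat "$plan"
```

The `-l +100%FREE` form of lvresize takes all the newly available extents at once; you can pass an explicit size instead if you want to leave room for other volumes.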
And shrinking?
Unfortunately, no. There are several reasons, the main one being that it is dangerous. Very dangerous. A disk can only be cut from the end, with no regard for the file system on it, and with a real chance of losing data for good (due to the peculiarities of blktap, data on the cut-off piece is lost forever).
The second reason: shrinking a file system is not a task for the faint of heart (especially the root file system), and it puts a huge load on the disk (so much that making 2-3 full copies of the disk would be cheaper).
Fragmentation?
It occurs, but thanks to the LVM PE size we use it is very minor: the smallest piece is 4 MB long. Besides, from the point of view of our storage there are concurrent requests to different storage locations anyway, so there should be no difference in performance.
Money?
Resizing a device is a very simple operation, and we do not charge for it. The disk is simply billed at its new size from that moment on. Since we account for space in TB * hours (with actual per-second resolution), your disk-space consumption just "grows"; from the accounting point of view, practically nothing changes.