We have two hardware nodes (HN): node1.srv.my and node2.srv.my. On node1.srv.my there is a container (service.srv.my) that has outgrown the old node and needs room to keep growing.
We want to minimize the container's downtime during the move. How far can that be pushed?
Answer: depending on what the VE does, there may be no downtime at all.

For example, service.srv.my runs a web service (LNAMP) with a heavily visited site and a team of content editors who have write access to it. They are essentially the only category of users who will notice the move at all. Okay, let's get started.
We will use the vzmigrate utility.
1. Temporarily enable root access on the node2.srv.my server in the sshd config:
PermitRootLogin yes - performed on node2.srv.my
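A minimal sketch of this edit, assuming the stock config path /etc/ssh/sshd_config (the sed pattern is just one way to flip the option):

sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config - performed on node2.srv.my
/etc/init.d/sshd reload - performed on node2.srv.my, so sshd picks up the change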
2. Create keys on node1.srv.my:
ssh-keygen -t rsa -b 4096 - executed on node1.srv.my
Choose an empty passphrase so that key-based logins won't prompt us for anything.
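If you prefer this step fully non-interactive, the passphrase and key file can be given right on the command line (standard OpenSSH flags):

ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa - executed on node1.srv.my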
3. Copy the key to node2.srv.my:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2.srv.my - executed on node1.srv.my
You will be asked for the node2.srv.my root password once, for validation.
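Before migrating, it is worth checking that the key actually works (uptime here is just an arbitrary test command):

ssh root@node2.srv.my uptime - executed on node1.srv.my; it should print the result without any password prompt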
4. Update the DNS zone for service.srv.my if its IP address will change. (In this case, it is better to lower the TTL in advance, to about 15 minutes.)
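For illustration, a hypothetical zone-file line with the lowered TTL (the address is a placeholder from the documentation range, not a real one):

service.srv.my. 900 IN A 192.0.2.20 ; TTL of 900 seconds = 15 minutes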
5. Now we boldly start the migration:
vzmigrate -v --remove-area no --online --rsync="-v" host_target_ip veid_service.srv.my - run on node1.srv.my (a concrete example follows the option notes below)
Depending on the container's size and the network speed between the nodes, this can take a long time; it may be worth running it inside screen if the connection is likely to drop.
--online performs a live migration, so service.srv.my keeps serving requests during the move.
--rsync="-v" passes extra options through to rsync; here you can control whether to list the files being transferred, show per-file progress, and so on.
--remove-area no keeps the container's files on the source node regardless of whether the transfer succeeds. To my surprise, it still did not prevent the VE from being stopped on the source; I probably missed something in the docs. If you need to be able to cancel the migration quickly, use --remove-area no and rename the configuration file veid.conf.migrated back to veid.conf (see the rollback sketch after this list). If you want the files removed, set yes.
host_target_ip is the IP address of node2.srv.my
veid_service.srv.my - the container's VEID; in my case, 301.
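Putting it together with the VEID from my setup (the target IP here is a placeholder):

vzmigrate -v --remove-area no --online --rsync="-v" 192.0.2.20 301 - run on node1.srv.my

And a sketch of the quick rollback mentioned above, assuming the default OpenVZ config directory /etc/vz/conf:

mv /etc/vz/conf/301.conf.migrated /etc/vz/conf/301.conf - performed on node1.srv.my
vzctl start 301 - performed on node1.srv.my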
Sometimes problems arise because of differing iptables settings between the nodes (http://phpsuxx.blogspot.com/2010/08/openvz-error-most-probably-some.html).
6. vzlist on node2.srv.my shows that container 301 now runs there; on node1.srv.my, vzlist shows that the container is gone (but we kept the files just in case).
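The check itself is a one-liner on each node:

vzlist -a - on node2.srv.my container 301 should be listed as running; on node1.srv.my it should no longer appear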
7. We may need to change the IP address of service.srv.my. In that case, change the IP in the config and restart the VE (vzctl restart 301), or change it on the fly with vzctl set using --ipadd and --ipdel.
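A sketch of the on-the-fly variant (both addresses are placeholders):

vzctl set 301 --ipadd 192.0.2.20 --save - performed on node2.srv.my, adds the new address and saves it to the config
vzctl set 301 --ipdel 192.0.2.10 --save - performed on node2.srv.my, removes the old address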
The move is complete.
People write that network connections are not lost during the move: for example, a backup streaming from the VE to a backup server could reportedly keep going while the container smoothly moves from one node to another. I have not verified this myself.

I haven't written on Habr for a long time. Time to fix that.