Migration from one physical server to another

A typical situation: a project starts, the simplest possible server is taken for it, it runs for six months, then the project grows and starts asking for a bigger, meaner server.

Usually a new OS is installed on the new hardware, the software stack is brought up and configured, content and databases are transferred, DNS is changed, and two days later the old server is switched off. A simple procedure, it would seem; every sysadmin has done it hundreds of times. BUT, as practice shows, something always gets forgotten along the way, and you end up making fixes right on the live server, dragging the old crutches over and adapting them to the new place.
Sometimes this option is unavoidable, for example when the servers are in different data centers. But if the servers (new and old) are in neighboring racks, you can simply move the OS to the new hardware and decommission the old box right away. Below is a short checklist-style article on how to do that. So, let's go!

Given:
- The servers are in the same data center, with the same colocation / dedicated-server provider.
- You have agreed with the provider that the IP address will be moved from the old server to the new one. If this is not done, you may hit problems if the servers end up in different VLANs.
- You have IP-KVM access at least to the new server, and ideally to the old one too, in case you want to keep it reachable.
- I will demonstrate the whole ritual on CentOS 5.x.
- The site has a PXE server with an emergency (so-called rescue) image of CentOS 5.x for your platform.
- You know the root password of the source server.
- You have written down, on a clean sheet of paper, the old server's network configuration and disk partitioning (see the example commands right after this list).
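
For example, something like this on the old server captures most of what you will need later (a minimal set; adjust it to your setup):
 fdisk -l                  # disk partitioning
 cat /proc/mdstat          # software RAID layout, if any
 cat /etc/fstab            # mount points
 ifconfig -a               # current network settings
 cat /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/resolv.conf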

So, all the conditions are met; let's get to work!
We boot the new server over the network. On a Supermicro, for example, this means enabling PXE boot for the first network adapter in the BIOS, rebooting the server and pressing F12. If STP is enabled on the switch's access ports, then when the message about obtaining an IP via DHCP appears, press the Pause key and wait 30 seconds. Then press the space bar and boot into the CentOS 5.x x86_64 rescue image.
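
The 30-second wait is just STP walking through its listening and learning states. If you or the provider control the switch, an alternative is to enable portfast on the access port so it starts forwarding immediately; on a Cisco IOS switch, for instance, that looks roughly like this (the interface name is hypothetical):
 switch(config)# interface FastEthernet0/12
 switch(config-if)# spanning-tree portfast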

Run fdisk -l to see whether the disks are detected; if not, load the RAID controller driver with insmod. Once the disks are visible, partition them exactly as on the old server; if there is no hardware controller and there are several disks, assemble a software RAID with mdadm. Oh, and don't forget the swap.
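
As a sketch, assuming two identically partitioned disks sda and sdb and the md0–md5 layout used below, assembling the mirrors might look like this:
 mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
 mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
 mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
 # ...and likewise for md3, md4 and the swap array md5
 cat /proc/mdstat          # watch the arrays assemble and resync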

Create the file systems:
 mkswap /dev/md5
 mkfs.ext3 /dev/md0
 mkfs.ext3 /dev/md1
 mkfs.ext3 /dev/md2
 mkfs.ext3 /dev/md3
 mkfs.ext3 /dev/md4


Mount the root partition at /mnt/sysimage:
 mount /dev/md0 /mnt/sysimage


Create the directory structure in /mnt/sysimage/, like this:
 mkdir -p /mnt/sysimage/{var,usr,home,tmp}


We mount the partitions in strict accordance with the old server:
 mount /dev/md1 /mnt/sysimage/usr
 mount /dev/md2 /mnt/sysimage/var
 mount /dev/md3 /mnt/sysimage/home
 mount /dev/md4 /mnt/sysimage/tmp
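
Before copying anything, it doesn't hurt to check that the resulting layout matches the sheet of paper from the old server:
 df -h | grep sysimage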


Now we start synchronizing data from the old server; for this we need root access to it. Suppose the old server's IP is 1.1.1.1:
 rsync -avH --numeric-ids --progress 1.1.1.1:/ /mnt/sysimage/ --exclude=/dev --exclude=/proc --exclude=/sys
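
rsync here runs over SSH and will ask for the old server's root password. If sshd on the old server listens on a non-standard port (2222 below is purely an example), pass it through with -e:
 rsync -avH --numeric-ids --progress -e 'ssh -p 2222' 1.1.1.1:/ /mnt/sysimage/ --exclude=/dev --exclude=/proc --exclude=/sys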


As soon as the data is synchronized, we switch to the old server and stop all the services: mysql, httpd, nginx, proftpd and so on.
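
On CentOS 5 this is just a series of init-script calls; the exact list depends on what you actually run:
 /etc/init.d/mysqld stop
 /etc/init.d/httpd stop
 /etc/init.d/nginx stop
 /etc/init.d/proftpd stop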

Back on the new server, we synchronize the data once more, this time with the --delete option:
 rsync -avH --numeric-ids --progress 1.1.1.1:/ /mnt/sysimage/ --exclude=/dev --exclude=/proc --exclude=/sys --delete


Now we chroot into the "new server" and start making the changes needed for it to boot:
 mkdir /mnt/sysimage/{proc,sys,dev}
 mount --bind /dev /mnt/sysimage/dev
 mount -t proc none /mnt/sysimage/proc
 mount -t sysfs none /mnt/sysimage/sys
 chroot /mnt/sysimage


If the old server had sda/sdb/sdc and the new one has md0/md1/md2 (or vice versa), you need to make the corresponding edits in /etc/fstab and /boot/grub/grub.conf.
The fstab entries change from:
 /dev/sda1 /        ext3   defaults        1 1
 /dev/sda2 /home    ext3   defaults        1 2
 /dev/sda3 /tmp     ext3   defaults        1 2
 /dev/sda4 /var     ext3   defaults        1 2
 /dev/sda5 /usr     ext3   defaults        1 2
 tmpfs     /dev/shm tmpfs  defaults        0 0
 devpts    /dev/pts devpts gid=5,mode=620  0 0
 sysfs     /sys     sysfs  defaults        0 0
 proc      /proc    proc   defaults        0 0
 /dev/sda6 swap     swap   defaults        0 0


to:
 /dev/md0 /        ext3   defaults        1 1
 /dev/md3 /home    ext3   defaults        1 2
 /dev/md4 /tmp     ext3   defaults        1 2
 /dev/md2 /var     ext3   defaults        1 2
 /dev/md1 /usr     ext3   defaults        1 2
 tmpfs    /dev/shm tmpfs  defaults        0 0
 devpts   /dev/pts devpts gid=5,mode=620  0 0
 sysfs    /sys     sysfs  defaults        0 0
 proc     /proc    proc   defaults        0 0
 /dev/md5 swap     swap   defaults        0 0


And on to the grub.conf edits. From:
 title CentOS (2.6.18-238.9.1.el5)
         root (hd0,0)
         kernel /boot/vmlinuz-2.6.18-238.9.1.el5 ro root=/dev/sda1
         initrd /boot/initrd-2.6.18-238.9.1.el5.img


we bring it to the form:
 title CentOS (2.6.18-238.9.1.el5)
         root (hd0,0)
         kernel /boot/vmlinuz-2.6.18-238.9.1.el5 ro root=/dev/md0 panic=30
         initrd /boot/initrd-2.6.18-238.9.1.el5.img


Note the panic=30 on the kernel line: it is there in case you make a mistake somewhere and the server drops into a kernel panic. Without panic=30 the server would sit waiting for a hardware reset; with it, it reboots itself after 30 seconds.

Now we need to install grub:
 # grub
 grub> root (hd0,0)
 root (hd0,0)
  Filesystem type is ext2fs, partition type 0x83
 grub> setup (hd0)
 setup (hd0)
  Checking if "/boot/grub/stage1" exists ... yes
  Checking if "/boot/grub/stage2" exists ... yes
  Checking if "/boot/grub/e2fs_stage1_5" exists ... yes
  Running "embed /boot/grub/e2fs_stage1_5 (hd0)" ... 15 sectors are embedded.
 succeeded
  Running "install /boot/grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/boot/grub/stage2 /boot/grub/grub.conf" ... succeeded
 Done
 grub> quit
 quit
 #


If grub didn't complain during the installation, everything is fine. Just in case, verify:
 # dd if=/dev/sda count=10 | strings | grep stage
 Loading stage1.5
 /boot/grub/stage2 /boot/grub/grub.conf


Now we need to build a new initrd, because the old one may be missing, for example, the mdadm (software RAID) modules:
 gzip /boot/initrd-2.6.18-238.9.1.el5.img      # keeps the old initrd around as a .gz backup
 mkinitrd /boot/initrd-2.6.18-238.9.1.el5.img 2.6.18-238.9.1.el5
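
A CentOS 5 initrd is a gzipped cpio archive, so as a quick sanity check (not part of the original procedure) you can confirm that the RAID bits actually made it in:
 zcat /boot/initrd-2.6.18-238.9.1.el5.img | cpio -it | grep -i raid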


Enable firstboot with the command:
  chkconfig firstboot on 


Then we exit the chroot and the rescue shell by pressing Ctrl+D a couple of times, which reboots the server.

While the server reboots, let's go back to the old one and, depending on whether we still need it, either take it off the network or change its IP address. If it is no longer needed, then in /etc/sysconfig/network-scripts/ifcfg-ethX (where X is the adapter number) change ONBOOT=yes to ONBOOT=no for every network adapter and stop the network with /etc/init.d/network stop. If we do still need the old server, put the new network settings into the same configs and restart the network with /etc/init.d/network restart.
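
For the "old server stays offline" case the edit boils down to something like this (eth0 as an example):
 sed -i 's/^ONBOOT=yes/ONBOOT=no/' /etc/sysconfig/network-scripts/ifcfg-eth0
 /etc/init.d/network stop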

That's it for the old server; on to the new one. In the IP-KVM we can already see the blue ncurses window that firstboot has given us: go to the network configuration and enter the old network settings. Then reboot the server once more, for the purity of the experiment.

All of the above may look complicated, and your hand may already be reaching to write a comment along the lines of "well, why all this hassle?". Don't rush: in practice all these operations are done very quickly, and the downtime is minimal.

If you find an error in the text, please write to me in a PM; I will be extremely grateful.

If some points need clarification, don't hesitate to ask! The procedure above has been performed at least fifty times.

If migration topics interest you, I can also write about migrating an OpenVZ container off a server to which there is no root access, or about migrating a physical machine into an OpenVZ container.

Source: https://habr.com/ru/post/119972/

