There was a server in a faraway country. Its specifications were not bad for the time: an Intel Core 2 Quad Q6600 2.4 GHz, 8 GB RAM, an Intel DQ965GF motherboard, a 3ware 7xxx/8xxx RAID controller, and two 300 GB SATA drives in a RAID 1 array.
And one day one of the disks in that RAID decided to die — and once it decided, it died. It was natural to assume that where one disk had died, the second could follow, so it needed replacing. And expanding the disk space wouldn't hurt either, we thought.
Somehow, with considerable difficulty, we bought two new 2 TB disks — there was a crisis on, and even in distant capitalist countries hard disks were scarce. The server was critical, but it could be shut down and tinkered with for a while — we had duplicates.
We decided to upgrade the software ...
The server ran CentOS 4.5 x64, long past its prime, with VMware Server 2 installed, and on it lived three virtual machines: Windows Server 2003 with MS SQL and some applications, FreeBSD, and SUSE. Over the years only the Windows machine had remained important, and we planned to add a few more virtual machines in the future — so the platform had to change.
We settled on vSphere Hypervisor — plain ESXi. Searching the Internet, I found no reports of installing ESXi on this particular hardware: the Intel 82566DM network card and the RAID controller might not work, and a search of the official compatibility list gave no positive result. Installing an additional network card would have been inconvenient and slow, but we could live without the RAID.
In the end we decided to try. I wrote a letter to technical support asking them to install the new hard drives and install ESXi 5.0. After some time they replied that they had installed ESXi 4.1 U1 instead: it did not see the RAID, but it worked with the integrated ICH controller and both drives were visible. They gave us a password, so we went to look at this happiness.
We went to look. It seemed to work. Now for the second task: migrating the Windows Server 2003 virtual machine from VMware Server to the vSphere Hypervisor environment. We came up with several options:
first, upload a disk image or a ready-made CentOS/Linux virtual machine to the datastore, attach the old disks via Raw Device Mapping, install the converter inside, and convert to ESXi;
second, try simply mounting the ext3 partition on ESXi — but unfortunately that appears to be impossible;
third, copy the machine over SCP/FTP/HTTP somewhere else, convert it there, and then copy it back over SCP or via the vSphere Client;
fourth, install VMware Converter inside the old CentOS and convert in place;
fifth, bring up another virtual machine with Windows and install the converter there.
While we were thinking, technical support put the old disks into a USB-SATA adapter and connected them to the server — bare ESXi, of course, had no idea what to do with that. Then they wrote that they doubted the RAID array could be read from anywhere except its own operating system.
They tried to install the converter inside the CentOS machine — it installed, but for some reason we could not connect to it remotely, possibly because of the non-standard ports chosen during installation (the standard ports were occupied). On reflection, it seemed the whole virtual machine would first have to be downloaded via the client to my local machine and then uploaded back — 30 GB of traffic, and God forbid the Internet connection drops along the way — altogether unreliable.
Besides, the converter cannot simply convert a machine into a file — it always requires a connection to either a host or vCenter.
We did not even try mounting the ext3 partitions on ESXi — the mount command's options listed no ext3 filesystem type, so it looked hopeless.
In the end we did it like this: I packed up the virtual machine's files and downloaded them to my computer. I installed VMware Converter and converted the machine onto a local ESXi host. For reliability and compatibility, I then connected over SSH to that local ESXi and archived the powered-off virtual machine with tar — virtual machine directories live under /vmfs/volumes/datastore/, and the z flag gzip-compresses the archive.
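Concretely, the packing step looked roughly like this (a sketch: the datastore path and VM name are examples rather than the real ones, and the directory check just makes the snippet a harmless no-op off the host):

```shell
# Over SSH on the ESXi host. Datastore path and VM name are examples.
DS=/vmfs/volumes/datastore1
VM=win2003
if [ -d "$DS/$VM" ]; then
  cd "$DS"
  # -c create archive, -z gzip-compress, -f output file name
  tar -czf "$VM.tar.gz" "$VM"/
fi
```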
The result was a 7 GB file from 17 GB.
I wrote to technical support asking them to install the new drives and boot ESXi. Then I connected to the remote ESXi with WinSCP and started uploading the archive. The speed was simply killing — 30 kB/s, about three days of copying. It turned out that resuming is not supported, and tar on ESXi does not seem to cope with broken archives. Just in case, I decided to try uploading the archive to the datastore with the standard vSphere Client — the speed was about 10 times higher, and in 7 hours the archive was uploaded.
Then I connected over SSH to the remote ESXi, unpacked the virtual machine's tar archive, added the machine to the Inventory, and started it. I updated VMware Tools and the network card driver and restored the network settings.
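On the destination side the reverse step looked roughly like this (a sketch: paths and the .vmx name are examples, and `vim-cmd solo/registervm` exists only on the ESXi host itself, so this fragment cannot run anywhere else):

```shell
# Over SSH on the remote ESXi host. Paths and names are examples.
cd /vmfs/volumes/datastore1
tar -xzf win2003.tar.gz
# Register the unpacked VM so it appears in the vSphere Client Inventory:
vim-cmd solo/registervm /vmfs/volumes/datastore1/win2003/win2003.vmx
```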
To improve reliability, I created another virtual machine with Windows 2003 Server (so that VMware's memory compression kicks in and fewer resources are spent). I added a second network card with "gray" (private) IP addresses to both machines, created another vSwitch, added a VMkernel port to it, ticked Management traffic on it, and assigned an IP address from the same private range. I installed and configured Veeam Backup. To reduce paid Internet traffic and for security, Veeam connects to ESXi via the private IP address and copies the virtual machines from one storage hard drive to the other — so that in case of a failure we can recover quickly.
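The vSwitch and VMkernel part, done above through the vSphere Client, has a rough CLI equivalent on ESXi 4.x (a sketch: the switch name, port group name, and addresses are invented examples, and the Management traffic checkbox itself was still ticked in the client):

```shell
# All names and addresses below are examples, not the real configuration.
esxcfg-vswitch -a vSwitch1                # create a second vSwitch
esxcfg-vswitch -A "Backup Net" vSwitch1   # add a port group to it
# Add a VMkernel interface with a private ("gray") address on that port group:
esxcfg-vmknic -a -i 192.168.10.2 -n 255.255.255.0 "Backup Net"
```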
If it were not for the slow copying speed over the Internet, the downtime would have been 30 minutes at most.
Most likely we have missed some migration options — can anyone suggest clever ideas? We still have a couple of roughly identical migration tasks ahead.