
Recovering data and VMFS partitions: raising an EMC iomega from the dead

Hello! Lately I've been running into more and more admins who use cheap SOHO storage systems in production environments and rarely think about data availability or the fault tolerance of their solutions... Alas, few of them think about backups either...

And today an interesting specimen came to me "for treatment":


A wonderful specimen of an EMC (not yet even Lenovo) iomega StorCenter px4, which hangs at 25% while booting.
Details of the recovery are below the cut.

So let's get started.

We have two tasks:

1) recover the needed data from the disks;
2) restore the storage to working order.

First we need to understand what we are dealing with; the manufacturer's website and the storage system's spec sheet (PDF) will help here.

From that PDF you can see that the storage system is nothing more than a small server with an Intel processor and, obviously, some kind of Linux on board rather than Windows.

The documentation makes no mention of a RAID controller, so I had to open up the patient and make sure the hardware held no surprises.

So, here is what we know from the owners of the storage system and from the documentation:

1) the disks were in some RAID configuration (the customer had no idea at which level they had been assembled);
2) there is no hardware RAID in the box, so we are dealing with a software solution;
3) the storage runs some flavor of Linux (and since the system does not boot, we cannot see what exactly broke);
4) we know what we are looking for on the disks: a couple of VMFS partitions that were presented to ESXi for VMs, and a couple of file shares for general use.

Recovery Plan:

1) install an OS and connect the disks;
2) see what useful information can be pulled off the disks;
3) assemble the RAID and try to mount the partitions;
4) mount the VMFS partitions;
5) copy all the needed data to another storage;
6) figure out what to do with the EMC itself.

Step 1


Since it wasn't possible to "revive" the storage system itself (if anyone knows how to reflash these and can share the utilities — welcome to the comments or PM), we connect the disks to another machine:



In my case an old server with an AMD Phenom was at hand... The main thing was to find a motherboard that could take at least the 4 disks from the storage plus one more disk for the OS and utilities.

Debian 8 was chosen as the OS, since it gets along best with both vmfs-tools and an iSCSI target (Ubuntu proved glitchy).

Step 2


root@mephistos-GA-880GA-UD3H:~# fdisk -l

Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000c0a96

Device     Boot      Start        End    Sectors   Size Id Type
/dev/sda1  *          2048 1920251903 1920249856 915.7G 83 Linux
/dev/sda2       1920253950 1953523711   33269762  15.9G  5 Extended
/dev/sda5       1920253952 1953523711   33269760  15.9G 82 Linux swap / Solaris

Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: BFAE2033-B502-4C5B-9959-82E50F8E9920

Device        Start        End    Sectors  Size Type
/dev/sdb1        72   41961848   41961777   20G Microsoft basic data
/dev/sdb2  41961856 3907029106 3865067251  1.8T Microsoft basic data

Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B6515182-BF5B-4DED-AAB1-5AE489BF23B0

Device        Start        End    Sectors  Size Type
/dev/sdc1        72   41961848   41961777   20G Microsoft basic data
/dev/sdc2  41961856 3907029106 3865067251  1.8T Microsoft basic data

Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 378627A1-2696-4DAC-88D1-B90AFD2B1A98

Device        Start        End    Sectors  Size Type
/dev/sdd1        72   41961848   41961777   20G Microsoft basic data
/dev/sdd2  41961856 3907029106 3865067251  1.8T Microsoft basic data

Disk /dev/sde: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: BA871E29-DB67-4266-A8ED-E5A33D6C24D2

Device        Start        End    Sectors  Size Type
/dev/sde1        72   41961848   41961777   20G Microsoft basic data
/dev/sde2  41961856 3907029106 3865067251  1.8T Microsoft basic data

sda is the OS disk; the other four are the disks from the storage system.

As you can see, each storage disk carries two partitions:
20 GB — apparently for the storage system's own OS;
1.8 TB — for user data.

All four drives have an identical layout, from which we can conclude that they formed a single RAID array.

root@mephistos-GA-880GA-UD3H:~# lsblk -f
NAME    FSTYPE            LABEL             UUID                                 MOUNTPOINT
sdd
|-sdd2  linux_raid_member px4-300r-THXLON:1 27b7ba6d-6a41-dd56-ad4b-7652f461a3b6
`-sdd1  linux_raid_member px4-300r-THXLON:0 b8b8526a-37ef-2a9b-e9e7-c249645dacb0
sdb
|-sdb2  linux_raid_member px4-300r-THXLON:1 27b7ba6d-6a41-dd56-ad4b-7652f461a3b6
`-sdb1  linux_raid_member px4-300r-THXLON:0 b8b8526a-37ef-2a9b-e9e7-c249645dacb0
sde
|-sde2  linux_raid_member px4-300r-THXLON:1 27b7ba6d-6a41-dd56-ad4b-7652f461a3b6
`-sde1  linux_raid_member px4-300r-THXLON:0 b8b8526a-37ef-2a9b-e9e7-c249645dacb0
sdc
|-sdc2  linux_raid_member px4-300r-THXLON:1 27b7ba6d-6a41-dd56-ad4b-7652f461a3b6
`-sdc1  linux_raid_member px4-300r-THXLON:0 b8b8526a-37ef-2a9b-e9e7-c249645dacb0
sda
|-sda2
|-sda5  swap                                451578bf-ed6d-4ee7-ba91-0c176c433ac9 [SWAP]
`-sda1  ext4                                fdd535f2-4350-4227-bb5e-27e402c64f04 /

Step 3


The FSTYPE of the partitions is reported as linux_raid_member, so let's see what can be assembled from them.

 root@mephistos-GA-880GA-UD3H:~# apt-get install mdadm 
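Before assembling anything, it doesn't hurt to look at what the md superblocks on the member partitions say: the RAID level, the number of devices, and the array UUID. A minimal sketch (read-only, any member will do):

# Read-only inspection of an md superblock; shows RAID level,
# device count, and the array UUID recorded by the NAS.
mdadm --examine /dev/sdb1

# Or let mdadm print ready-made ARRAY lines for everything it sees:
mdadm --examine --scan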

Assembling the array:

root@mephistos-GA-880GA-UD3H:~# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: /dev/md0 has been started with 4 drives.
root@mephistos-GA-880GA-UD3H:~# mkdir /mnt/md0
root@mephistos-GA-880GA-UD3H:~# mount /dev/md0 /mnt/md0/
mount: unknown filesystem type 'LVM2_member'

The failed mount gave us a hint: filesystem type 'LVM2_member'.
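(A quick way to confirm what sits on top of the assembled array without a mount attempt — a small sketch:)

# blkid should report TYPE="LVM2_member" for the md device
blkid /dev/md0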

Install LVM2 and scan the disks:

 root@mephistos-GA-880GA-UD3H:~# apt-get install lvm2 

root@mephistos-GA-880GA-UD3H:~# lvmdiskscan
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /dev/ram0  [      64.00 MiB]
  /dev/md0   [      20.01 GiB] LVM physical volume
  /dev/ram1  [      64.00 MiB]
  /dev/sda1  [     915.65 GiB]
  /dev/ram2  [      64.00 MiB]
  /dev/ram3  [      64.00 MiB]
  /dev/ram4  [      64.00 MiB]
  /dev/ram5  [      64.00 MiB]
  /dev/sda5  [      15.86 GiB]
  /dev/ram6  [      64.00 MiB]
  /dev/ram7  [      64.00 MiB]
  /dev/ram8  [      64.00 MiB]
  /dev/ram9  [      64.00 MiB]
  /dev/ram10 [      64.00 MiB]
  /dev/ram11 [      64.00 MiB]
  /dev/ram12 [      64.00 MiB]
  /dev/ram13 [      64.00 MiB]
  /dev/ram14 [      64.00 MiB]
  /dev/ram15 [      64.00 MiB]
  0 disks
  18 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume

root@mephistos-GA-880GA-UD3H:~# lvdisplay
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  --- Logical volume ---
  LV Path                /dev/66e7945b_vg/vol1
  LV Name                vol1
  VG Name                66e7945b_vg
  LV UUID                No8Pga-YZaE-7ubV-NQ05-7fMh-DEa8-p1nP4c
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                20.01 GiB
  Current LE             5122
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

As you can see, a volume group has been found. It is logical to assume that the second set of partitions can be assembled the same way.

root@mephistos-GA-880GA-UD3H:~# mdadm --assemble /dev/md1 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2
mdadm: /dev/md1 has been started with 4 drives.

root@mephistos-GA-880GA-UD3H:~# lvmdiskscan
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /dev/ram0  [      64.00 MiB]
  /dev/md0   [      20.01 GiB] LVM physical volume
  /dev/ram1  [      64.00 MiB]
  /dev/sda1  [     915.65 GiB]
  /dev/md1   [       5.40 TiB] LVM physical volume
  /dev/ram2  [      64.00 MiB]
  /dev/ram3  [      64.00 MiB]
  /dev/ram4  [      64.00 MiB]
  /dev/ram5  [      64.00 MiB]
  /dev/sda5  [      15.86 GiB]
  /dev/ram6  [      64.00 MiB]
  /dev/ram7  [      64.00 MiB]
  /dev/ram8  [      64.00 MiB]
  /dev/ram9  [      64.00 MiB]
  /dev/ram10 [      64.00 MiB]
  /dev/ram11 [      64.00 MiB]
  /dev/ram12 [      64.00 MiB]
  /dev/ram13 [      64.00 MiB]
  /dev/ram14 [      64.00 MiB]
  /dev/ram15 [      64.00 MiB]
  0 disks
  18 partitions
  0 LVM physical volume whole disks
  2 LVM physical volumes

Now the scan finds two LVM physical volumes.

root@mephistos-GA-880GA-UD3H:~# lvdisplay
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  --- Logical volume ---
  LV Path                /dev/3b9b96bf_vg/lv6231c27b
  LV Name                lv6231c27b
  VG Name                3b9b96bf_vg
  LV UUID                wwRwuz-TjVh-aT6G-r5iR-7MpJ-tZ0P-wjbb6g
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                2.00 TiB
  Current LE             524288
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/3b9b96bf_vg/lv22a5c399
  LV Name                lv22a5c399
  VG Name                3b9b96bf_vg
  LV UUID                GHAUtd-qvjL-n8Fa-OuCo-sqtD-CBzg-M46Y9o
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                3.00 GiB
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/3b9b96bf_vg/lv13ed5d5e
  LV Name                lv13ed5d5e
  VG Name                3b9b96bf_vg
  LV UUID                iMQZFw-Xrmj-cTkq-E1NT-VwUa-X0E0-MpbCps
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                10.00 GiB
  Current LE             2560
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/3b9b96bf_vg/lv7a6430c7
  LV Name                lv7a6430c7
  VG Name                3b9b96bf_vg
  LV UUID                UlFd4y-huNe-Z501-EylQ-mOd6-kAGt-jmvlqa
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                1.00 TiB
  Current LE             262144
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/3b9b96bf_vg/lv47d612ce
  LV Name                lv47d612ce
  VG Name                3b9b96bf_vg
  LV UUID                pzlrpE-dikm-6Rtn-GU6O-SeEx-3QJJ-cK3cdR
  LV Write Access        read/write
  LV Creation host, time s-mars-stor-1, 2017-02-04 16:14:49 +0200
  LV Status              NOT available
  LV Size                1.32 TiB
  Current LE             345600
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/66e7945b_vg/vol1
  LV Name                vol1
  VG Name                66e7945b_vg
  LV UUID                No8Pga-YZaE-7ubV-NQ05-7fMh-DEa8-p1nP4c
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                20.01 GiB
  Current LE             5122
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

The output shows that the storage system gathered the arrays into LVM volume groups and then carved them into logical volumes of the required sizes.

root@mephistos-GA-880GA-UD3H:~# vgdisplay
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  --- Volume group ---
  VG Name               3b9b96bf_vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  49
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5.40 TiB
  PE Size               4.00 MiB
  Total PE              1415429
  Alloc PE / Size       1135360 / 4.33 TiB
  Free  PE / Size       280069 / 1.07 TiB
  VG UUID               Qg8rb2-rQpK-zMRL-qVzm-RU5n-YNv8-qOV6yZ

  --- Volume group ---
  VG Name               66e7945b_vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.01 GiB
  PE Size               4.00 MiB
  Total PE              5122
  Alloc PE / Size       5122 / 20.01 GiB
  Free  PE / Size       0 / 0
  VG UUID               Sy2RsX-h51a-vgKt-n1Sb-u1CA-HBUf-C9sUNT

root@mephistos-GA-880GA-UD3H:~# lvscan
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  inactive          '/dev/3b9b96bf_vg/lv6231c27b' [2.00 TiB] inherit
  inactive          '/dev/3b9b96bf_vg/lv22a5c399' [3.00 GiB] inherit
  inactive          '/dev/3b9b96bf_vg/lv13ed5d5e' [10.00 GiB] inherit
  inactive          '/dev/3b9b96bf_vg/lv7a6430c7' [1.00 TiB] inherit
  inactive          '/dev/3b9b96bf_vg/lv47d612ce' [1.32 TiB] inherit
  inactive          '/dev/66e7945b_vg/vol1' [20.01 GiB] inherit

Activate the logical volumes and try to mount them:

 root@mephistos-GA-880GA-UD3H:~# modprobe dm-mod 

root@mephistos-GA-880GA-UD3H:~# vgchange -ay
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  5 logical volume(s) in volume group "3b9b96bf_vg" now active
  1 logical volume(s) in volume group "66e7945b_vg" now active

root@mephistos-GA-880GA-UD3H:~# mkdir /mnt/1
root@mephistos-GA-880GA-UD3H:~# mkdir /mnt/2
root@mephistos-GA-880GA-UD3H:~# mkdir /mnt/3
root@mephistos-GA-880GA-UD3H:~# mkdir /mnt/4
root@mephistos-GA-880GA-UD3H:~# mkdir /mnt/5
root@mephistos-GA-880GA-UD3H:~# mkdir /mnt/6
root@mephistos-GA-880GA-UD3H:~# mount /dev/3b9b96bf_vg/lv6231c27b /mnt/1
root@mephistos-GA-880GA-UD3H:~# mount /dev/3b9b96bf_vg/lv22a5c399 /mnt/2
NTFS signature is missing.
Failed to mount '/dev/mapper/3b9b96bf_vg-lv22a5c399': Invalid argument
The device '/dev/mapper/3b9b96bf_vg-lv22a5c399' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (eg /dev/sda, not /dev/sda1)? Or the other way around?
root@mephistos-GA-880GA-UD3H:~# mount /dev/3b9b96bf_vg/lv13ed5d5e /mnt/3
root@mephistos-GA-880GA-UD3H:~# mount /dev/3b9b96bf_vg/lv7a6430c7 /mnt/4
NTFS signature is missing.
Failed to mount '/dev/mapper/3b9b96bf_vg-lv7a6430c7': Invalid argument
The device '/dev/mapper/3b9b96bf_vg-lv7a6430c7' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (eg /dev/sda, not /dev/sda1)? Or the other way around?
root@mephistos-GA-880GA-UD3H:~# mount /dev/3b9b96bf_vg/lv7a6430c7 /mnt/5
NTFS signature is missing.
Failed to mount '/dev/mapper/3b9b96bf_vg-lv7a6430c7': Invalid argument
The device '/dev/mapper/3b9b96bf_vg-lv7a6430c7' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (eg /dev/sda, not /dev/sda1)? Or the other way around?
root@mephistos-GA-880GA-UD3H:~# mount /dev/3b9b96bf_vg/lv47d612ce /mnt/5
NTFS signature is missing.
Failed to mount '/dev/mapper/3b9b96bf_vg-lv47d612ce': Invalid argument
The device '/dev/mapper/3b9b96bf_vg-lv47d612ce' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (eg /dev/sda, not /dev/sda1)? Or the other way around?
root@mephistos-GA-880GA-UD3H:~# mount /dev/66e7945b_vg/vol1 /mnt/6

As you can see, only some of the volumes could be mounted. The remaining ones are the VMFS partitions.
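If you want to confirm which LVs actually carry VMFS before reaching for vmfs-tools, you can check the signatures — a sketch (recent libblkid knows the VMFS magic; the hexdump route works everywhere):

# Either ask blkid (newer util-linux recognizes VMFS volumes) ...
blkid /dev/mapper/3b9b96bf_vg-lv7a6430c7
# ... or eyeball the first megabytes for the VMFS signature yourself:
dd if=/dev/mapper/3b9b96bf_vg-lv7a6430c7 bs=1M count=2 2>/dev/null | hexdump -C | less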

Step 4


 root@mephistos-GA-880GA-UD3H:/mnt/3# apt-get install vmfs-tools 

Alas, some dancing with a tambourine could not be avoided... VMFS refused to mount directly (my suspicion is a newer VMFS version versus the old vmfs-tools).
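(To see exactly what we are working with, check the packaged version — my guess being that the vmfs-tools shipped with Debian 8 simply predates the VMFS revision on these volumes:)

# Show the installed vmfs-tools version for comparison against
# the VMFS version ESXi wrote to the volumes
dpkg -s vmfs-tools | grep -i '^Version'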

root@mephistos-GA-880GA-UD3H:/mnt/3# vmfs-fuse /dev/3b9b96bf_vg/lv7a6430c7 /mnt/4
VMFS VolInfo: invalid magic number 0x00000000
VMFS: Unable to read volume information
Trying to find partitions
Unable to open device/file "/dev/3b9b96bf_vg/lv7a6430c7".
Unable to open filesystem

After digging through a bunch of forums, a solution was found.

Create a loop device:

 losetup -r /dev/loop0 /dev/mapper/3b9b96bf_vg-lv47d612ce 

A little about kpartx can be read here.

 root@triplesxi:~# apt-get install kpartx 

root@triplesxi:~# kpartx -a -v /dev/loop0
add map loop0p1 (253:6): 0 2831153119 linear /dev/loop0 2048
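As an aside, the same mapping can be had without kpartx by pointing losetup straight at the embedded partition's byte offset (a sketch; the offset follows from the kpartx output above, which shows the partition starting at sector 2048):

# 2048 sectors * 512 bytes = 1 MiB offset into the logical volume;
# -r keeps the loop device read-only, /dev/loop1 is just a free slot.
losetup -r -o $((2048 * 512)) /dev/loop1 /dev/mapper/3b9b96bf_vg-lv47d612ce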

We try to mount the resulting mapper:

root@triplesxi:~# vmfs-fuse /dev/mapper/loop0p1 /mnt/vmfs/
VMFS: Warning: Lun ID mismatch on /dev/mapper/loop0p1
ioctl: Invalid argument
ioctl: Invalid argument

Mounted successfully! (The Lun ID mismatch errors can safely be ignored.)

Here I will skip the string of unsuccessful attempts to copy the virtual machines themselves off the mounted datastore.

As it turned out, files a couple of hundred gigabytes in size simply would not copy:

root@triplesxi:~# cp /mnt/vmfs/tstst/tstst_1-flat.vmdk /root/
cp: error reading '/mnt/vmfs/tstst/tstst_1-flat.vmdk': Input/output error
cp: failed to extend '/root/tstst_1-flat.vmdk': Input/output error
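When cp dies like this, it's worth checking whether the errors come from the disks themselves or from the FUSE layer — a quick check:

# Kernel-side I/O errors (bad sectors, controller resets) would show
# up here; a clean dmesg points the finger at vmfs-fuse instead.
dmesg | tail -n 50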

Step 5


Since the virtual machines could not be copied off the datastores mounted this way... let's try presenting these partitions to a real ESXi host and copying the VMs through it (ESXi, after all, speaks VMFS natively).

Let's present our partitions to ESXi over iSCSI (described briefly):

1) install the iscsitarget package;
2) enter the required parameters in /etc/iet/ietd.conf (a sample is sketched below);
3) start the service: service iscsitarget start.
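A minimal ietd.conf sketch for one of our volumes (the IQN here is made up; the Path points at one of the LVs activated earlier):

# /etc/iet/ietd.conf — one target per exported volume
Target iqn.2017-02.local.recovery:lv47d612ce
        Lun 0 Path=/dev/mapper/3b9b96bf_vg-lv47d612ce,Type=blockio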

If everything is OK, we create a Software iSCSI adapter on the ESXi host and add our server with the exported partitions under Dynamic Discovery.
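The ESXi side can also be driven from the shell — a sketch, assuming the software iSCSI adapter comes up as vmhba33 and our Debian box answers at 192.168.1.10 (both are placeholders):

# Enable the software iSCSI initiator and point it at our target
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.10:3260
esxcli storage core adapter rescan --adapter=vmhba33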

As you can see, the partitions were picked up successfully.



Since the LUN ID of the presented partition does not match the one recorded in the metadata on the partition itself, we will use a VMware KB article to add the datastores to the host.
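The gist of that KB, as a sketch: ESXi treats such a LUN as a snapshot, so the volume has to be listed and then mounted while keeping its existing signature:

# List VMFS volumes that ESXi detected as snapshots/replicas
esxcli storage vmfs snapshot list
# Mount one by its label without resignaturing (label is a placeholder)
esxcli storage vmfs snapshot mount -l "datastore-label"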

The datastores have been restored, and the data can now be copied off them without any trouble.

P.S. I copy everything over scp, after enabling SSH access on the host — this is much faster than downloading the VMs through the web UI or the regular client.
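For reference, a copy straight off the host looks like this (the host name, datastore, and VM paths are placeholders):

# Enable SSH on the ESXi host first, then pull the VM directory
scp -r root@esxi-host:/vmfs/volumes/restored-ds/tstst /backup/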

Step 6


How to restore the storage system itself I never did find out. If anyone has the firmware files or information on how to connect to its console, I will gladly accept the help.

Source: https://habr.com/ru/post/322754/

