
Galvanizing a corpse: how a battered HDD was brought back to life to store stuff you won't miss

I recently came into possession of a broken external hard drive... How did it end up with me? I bought it on the cheap.

An ordinary-looking drive: a metal enclosure with a USB-to-SATA controller inside and a 1 TB Samsung laptop disk. According to the seller's description, the USB controller was flaky: at first, he said, it reads and writes fine, then it gradually starts to slow down and eventually drops off the bus altogether. That is common enough for external drives without extra power, so naturally I believed him. Well, it was cheap.

So, I happily take the box apart, pull out the disk and plug it into an adapter tested by time and adversity. The disk powers on, spins up, gets detected, and even mounts in Linux. It held an NTFS file system and a dozen movies. No, not of the erotic-adventure kind, quite the opposite: "Leviathan" and the like. Hurray, it would seem! But no, that was only the beginning.

A look at SMART painted a bleak picture: the Raw Read Error Rate attribute had dropped to 1 (with a threshold of 51), which can mean only one thing: the disk has something very, very wrong with reading from the platters. The remaining attributes were within reason, but that was little consolation.
An attempt to format the drive led to the expected result: a write error. I could, of course, have built a list of bad sectors with the standard badblocks utility and then passed that list when creating the file system. But I rejected the idea as impractical: I would have had to wait far too long for the result. And, as it turned out, such a list would have been useless anyway: in the damaged areas the sectors are unstable, so what reads fine once may produce a read error the next time.

Having played around with all sorts of utilities, I found out the following:
  1. There are many bad sectors, but they are not scattered randomly across the disk; they sit in dense groups. Between these groups lie fairly large areas where reading and writing work without any problems.
  2. An attempt to fix a bad sector by overwriting it (so that the controller remaps it to a spare) does not work: sometimes the sector reads afterwards, sometimes not. Worse, a write to a bad sector sometimes makes the disk drop off the system for a few seconds (apparently the drive's own controller resets itself). Reads cause no resets, but reading a bad sector takes half a second or more.
  3. The "broken areas" are fairly stable. The very first of them starts around the 45th gigabyte from the beginning of the disk and stretches quite far (how far, I could not tell at a glance). By trial and error I also managed to feel out the beginning of a second such area somewhere in the middle of the disk.

An idea immediately suggested itself: what if the disk were split into two or three partitions laid out so that the "broken fields" fall between them? The disk could then be used to store something not particularly valuable (films to watch once, for example). Naturally, the boundaries of the "good" and "broken" areas had to be mapped out first.

No sooner said than done. I knocked together a quick utility that read from the disk until it caught a bad sector. The utility then marked a whole region of a given length as failed (in its own table, of course), skipped the marked region (why probe it, it is already marked as bad), and kept reading. After a couple of experiments I settled on a bad-region size of 10 megabytes: large enough for the utility to work quickly, yet small enough that the loss of disk space does not become excessive.
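The original utility is not shown in the post; here is a minimal Python sketch of the scan strategy described above, assuming 512-byte sectors, with the actual device I/O abstracted into a `read_sector` callable (for a real drive that would be an `os.pread` on a descriptor opened with `O_DIRECT`):

```python
SECTOR = 512
REGION = 10 * 1024 * 1024 // SECTOR   # the 10-megabyte "bad region", in sectors

def scan(read_sector, total_sectors, region=REGION):
    """Probe sectors in order; on a read error, write off a whole region.

    read_sector(lba) must raise OSError when the sector cannot be read.
    Returns a list of (start, end) sector ranges marked as bad.
    """
    bad_regions = []
    lba = 0
    while lba < total_sectors:
        try:
            read_sector(lba)              # a good sector: move on
        except OSError:
            end = min(total_sectors, lba + region)
            bad_regions.append((lba, end))
            lba = end                     # skip the marked region entirely
            continue
        lba += 1
    return bad_regions
```

Skipping ahead by a whole region after each failure is exactly what makes the scan tolerably fast: the denser the bad cluster, the fewer half-second probes are wasted inside it.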

For clarity, the result was also recorded as a picture: white dots are good sectors, red ones are bad, and gray ones are the bad region marked around the bad sectors. After nearly a day of work, the list of broken areas and a clear picture of their layout were ready.
Here it is, this picture:



Interesting, isn't it? The damaged areas turned out to be far larger than I had imagined, yet the intact areas clearly make up more than half of the disk space. Losing that much space would be a shame, but I had no wish to juggle a dozen small partitions either.
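A map like that is easy to render without any graphics libraries. This toy version (my own code, not the author's) writes one pixel per scanned region into a plain PPM file, using the same colors as the picture:

```python
def write_map(path, statuses, width=64):
    """statuses: one of 'good', 'bad', 'pad' per scanned region."""
    color = {"good": (255, 255, 255),   # readable region
             "bad": (255, 0, 0),        # the failing sector itself
             "pad": (128, 128, 128)}    # area written off around it
    height = (len(statuses) + width - 1) // width
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (width, height))   # binary PPM header
        for i in range(width * height):
            rgb = color[statuses[i]] if i < len(statuses) else (0, 0, 0)
            f.write(bytes(rgb))
```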

But this is the 21st century after all, the age of new technologies and disk arrays! So these small partitions can be glued into a single array, a file system created on top of it, and grief forgotten.

Following the map of broken areas, a monster of a command was composed to create the partitions. I used GPT so as not to fuss over which partitions should be primary and which extended:

 # parted -s -a none /dev/sdc unit s \
     mkpart 1 20480 86466560 \
     mkpart 2 102686720 134410240 \
     mkpart 3 151347200 218193920 \
     mkpart 4 235274240 285306880 \
     mkpart 5 302489600 401612800 \
     mkpart 6 418078720 449617920 \
     mkpart 7 466206720 499712000 \
     mkpart 8 516157440 548966400 \
     mkpart 9 565186560 671539200 \
     mkpart 10 687595520 824811520 \
     mkpart 11 840089600 900280320 \
     mkpart 12 915640320 976035840 \
     mkpart 13 991354880 1078026240 \
     mkpart 14 1092689920 1190871040 \
     mkpart 15 1205288960 1353093120 \
     mkpart 16 1366794240 1419919360 \
     mkpart 17 1433600000 1485148160 \
     mkpart 18 1497927680 1585192960 \
     mkpart 19 1597624320 1620684800 \
     mkpart 20 1632808960 1757368320 \
     mkpart 21 1768263680 1790054400 \
     mkpart 22 1800908800 1862307840 \
     mkpart 23 1872199680 1927905280 \
     mkpart 24 1937203200 1953504688
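The boundaries in that command follow mechanically from the bad-region list: the partitions are simply the gaps between bad regions, pulled in by a safety margin. A sketch of that step (the function name and the margin value are my own inventions, not the author's):

```python
def mkpart_args(bad_regions, total_sectors, margin=2048):
    """Turn bad (start, end) sector ranges into parted 'mkpart N start end'
    arguments covering the good gaps between them, margin sectors apart."""
    parts, cursor = [], margin            # start clear of the GPT header
    for start, end in sorted(bad_regions):
        if start - margin > cursor:
            parts.append((cursor, start - margin))
        cursor = max(cursor, end + margin)
    if cursor < total_sectors - margin:
        parts.append((cursor, total_sectors - margin))
    return " ".join("mkpart %d %d %d" % (n, a, b)
                    for n, (a, b) in enumerate(parts, 1))
```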


The command ran for quite a while (several minutes). In total it produced 24 (!) partitions, each of its own size.

Partitions
 # parted /dev/sdc print
 Model: SAMSUNG HM100UI (scsi)
 Disk /dev/sdc: 1000GB
 Sector size (logical/physical): 512B/512B
 Partition Table: gpt

 Number  Start   End     Size    File system  Name  Flags
  1      10.5MB  44.3GB  44.3GB               1
  2      52.6GB  68.8GB  16.2GB               2
  3      77.5GB  112GB   34.2GB               3
  4      120GB   146GB   25.6GB               4
  5      155GB   206GB   50.8GB               5
  6      214GB   230GB   16.1GB               6
  7      239GB   256GB   17.2GB               7
  8      264GB   281GB   16.8GB               8
  9      289GB   344GB   54.5GB               9
 10      352GB   422GB   70.3GB               10
 11      430GB   461GB   30.8GB               11
 12      469GB   500GB   30.9GB               12
 13      508GB   552GB   44.4GB               13
 14      559GB   610GB   50.3GB               14
 15      617GB   693GB   75.7GB               15
 16      700GB   727GB   27.2GB               16
 17      734GB   760GB   26.4GB               17
 18      767GB   812GB   44.7GB               18
 19      818GB   830GB   11.8GB               19
 20      836GB   900GB   63.8GB               20
 21      905GB   917GB   11.2GB               21
 22      922GB   954GB   31.4GB               22
 23      959GB   987GB   28.5GB               23
 24      992GB   1000GB  8346MB               24


The next step was to assemble a single disk out of them. The perfectionist in me suggested that it would be only proper to build some failure-tolerant RAID6 array. The practitioner objected that if a partition departed for the astral plane there would be nothing to replace it with anyway, so a plain JBOD would do: why give up space for nothing? The practitioner won:

 # mdadm --create /dev/md0 --chunk=16 --level=linear --raid-devices=24 \
     /dev/sdc1 /dev/sdc2 /dev/sdc3 /dev/sdc4 /dev/sdc5 /dev/sdc6 \
     /dev/sdc7 /dev/sdc8 /dev/sdc9 /dev/sdc10 /dev/sdc11 /dev/sdc12 \
     /dev/sdc13 /dev/sdc14 /dev/sdc15 /dev/sdc16 /dev/sdc17 /dev/sdc18 \
     /dev/sdc19 /dev/sdc20 /dev/sdc21 /dev/sdc22 /dev/sdc23 /dev/sdc24

That's about it. All that remained was to create a file system and mount the reanimated disk:

 # mkfs.ext2 -m 0 /dev/md0
 # mount /dev/md0 /mnt/ext

The disk turned out to be quite roomy: 763 gigabytes, i.e. 83% of the drive's capacity was put to use. In other words, only 17% of the original terabyte went to waste:

 $ df -h
 Filesystem      Size  Used Avail Use% Mounted on
 rootfs          9.2G  5.6G  3.2G  64% /
 ...
 /dev/md0        763G  101G  662G  14% /mnt/ext
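As a sanity check, the 83% figure can be reproduced from the sector ranges in the parted command above (the drive's total sector count is my assumption; 1953525168 is the usual figure for a 1 TB disk):

```python
# (start, end) sector pairs taken from the parted command above
parts = [
    (20480, 86466560), (102686720, 134410240), (151347200, 218193920),
    (235274240, 285306880), (302489600, 401612800), (418078720, 449617920),
    (466206720, 499712000), (516157440, 548966400), (565186560, 671539200),
    (687595520, 824811520), (840089600, 900280320), (915640320, 976035840),
    (991354880, 1078026240), (1092689920, 1190871040), (1205288960, 1353093120),
    (1366794240, 1419919360), (1433600000, 1485148160), (1497927680, 1585192960),
    (1597624320, 1620684800), (1632808960, 1757368320), (1768263680, 1790054400),
    (1800908800, 1862307840), (1872199680, 1927905280), (1937203200, 1953504688),
]
TOTAL = 1953525168          # typical sector count of a 1 TB drive (assumed)
usable = sum(end - start for start, end in parts)
print("%.0f%% usable" % (100.0 * usable / TOTAL))   # -> 83% usable
```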

A test batch of throwaway movies poured onto the disk without a single error. Admittedly, the write speed was low and floated between 6 and 25 megabytes per second. Reading was stable at 25-30 MB/s, i.e. limited by the adapter hanging off USB 2.0.

Of course, such a perversion is no place to store anything important, but as entertainment it has its uses. So when the question is whether to strip a disk for its magnets right away or to torment it a little longer first, my answer is: torment it, of course!

Source: https://habr.com/ru/post/252211/
