A broken external hard drive recently came into my hands ... How did it end up with me? Simple: I bought it on the cheap.
The drive looked ordinary enough: a metal box with a USB-to-SATA controller and a 1 TB Samsung laptop drive inside. According to the seller's description, the USB controller was flaky: at first it reads and writes fine, then it gradually starts to slow down and eventually drops off the bus entirely. That's a common ailment for external drives without extra power, so naturally I believed him. And hey, it was cheap.
So I cheerfully took the box apart, pulled out the drive, and plugged it into an adapter tested by time and adversity. The drive powered on, spun up, was detected, and even mounted in Linux. On it sat an NTFS file system and a dozen movies. No, nothing of the erotic-adventure kind, quite the opposite: "Leviathan" and the like. Hooray, you'd think! But no, this was only the beginning.
A look at SMART painted a disappointing picture: the Raw Read Error Rate attribute had dropped to one (against a threshold of 51), which can mean only one thing: the drive has something very, very wrong with reading from the platters. The remaining attributes were within reason, but that made it no easier.
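(For reference, this is visible with smartmontools; the device name below is an assumption.)

```bash
# Print the SMART attribute table; compare VALUE against THRESH for
# Raw_Read_Error_Rate (attribute ID 1)
smartctl -A /dev/sdb
```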
An attempt to format the drive led to the expected result: a write error. I could, of course, have built a list of bad sectors with the standard badblocks utility and then fed that list in when creating the file system. But I rejected the idea as impractical: I would have had to wait far too long for the result. Besides, as it turned out, such a list would be useless anyway: sectors in the damaged areas are unstable, so what reads fine once may throw a read error the next time.
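For completeness, the rejected approach would have looked roughly like this (device and file names here are assumptions; the block sizes must match between the two commands):

```bash
# Scan the whole disk for bad blocks (very slow on a drive like this),
# then hand the resulting list to mke2fs when creating the file system
badblocks -b 4096 -sv -o bad.list /dev/sdb
mke2fs -b 4096 -l bad.list /dev/sdb
```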
After playing around with various utilities, I established the following:
- There are a lot of bad sectors, but they are not scattered randomly across the disk; they sit in dense groups. Between these groups lie fairly large areas where reads and writes work without any problems.
- Trying to fix a bad sector by overwriting it (so that the controller remaps it to a spare) doesn't work. Sometimes the sector reads afterwards, sometimes not. Worse, a write to a bad sector sometimes makes the disk drop off the system for a few seconds (apparently the drive's own controller resets). Reads cause no resets, but reading a bad sector takes half a second or more.
- The "broken areas" are fairly stable. The very first of them starts around the 45th gigabyte from the beginning of the disk and stretches quite far (how far, I couldn't tell at first pass). By trial and error I also managed to feel out the start of a second such area somewhere around the middle of the disk.
An idea immediately suggested itself: what if I split the disk into two or three partitions laid out so that the "broken fields" fall between them? Then the disk could be used to store something not particularly valuable (say, those same movies). Naturally, for that I first had to map out the boundaries of the good and broken areas.
No sooner said than done. I knocked together a quick-and-dirty utility that read from the disk until it caught a bad sector. It then marked a whole region of a given length as bad (in its own table, of course), skipped the marked region (why check it, it's already marked bad), and went on reading. After a couple of experiments I settled on marking 10-megabyte bad regions: big enough for the utility to run quickly, yet small enough that the loss of disk space wouldn't become excessive.
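The utility itself isn't reproduced here; below is a rough shell sketch of the same idea, with the device name and output file made up. For brevity it simply tests the disk in 10 MB chunks rather than sector by sector:

```bash
#!/bin/bash
# Walk the disk in 10 MB steps; any chunk that fails an O_DIRECT read
# is recorded as a bad region, everything else as good.
DEV=/dev/sdb                         # assumed device name
CHUNK=$((10 * 1024 * 1024))          # 10 MB regions, as described above
TOTAL=$(blockdev --getsize64 "$DEV")
pos=0
while [ "$pos" -lt "$TOTAL" ]; do
    if dd if="$DEV" of=/dev/null bs="$CHUNK" count=1 \
          skip=$((pos / CHUNK)) iflag=direct status=none; then
        echo "good $pos" >> map.txt
    else
        echo "bad $pos" >> map.txt
    fi
    pos=$((pos + CHUNK))
done
```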
For clarity, the result was recorded as a picture: white dots are good sectors, red ones are bad, and gray marks the padded area around the bad sectors. After nearly a day of work, the list of broken areas and a clear picture of their layout were ready.
Here it is. Interesting, isn't it? The damaged areas turned out to be much larger than I had imagined, but the intact areas still clearly add up to more than half of the disk. Losing that much space felt like a shame, yet I had no desire to juggle a dozen small partitions either.
But come on, it's the 21st century, the age of new technology and disk arrays! So why not glue these small partitions into a single disk array, create one file system on top, and forget our troubles?
From the map of broken areas I put together one mega-command to create the partitions. I used GPT, so as not to sweat over which partitions should be primary and which extended:
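The command itself isn't reproduced in this text; assuming the good regions were first merged into a list of byte offsets, it could have looked something like the loop below (file name and partition labels are made up):

```bash
# Hypothetical reconstruction: one GPT partition per good region.
# good_regions.txt is assumed to hold "start end" byte offsets, one per line.
parted -s /dev/sdb mklabel gpt
i=1
while read start end; do
    parted -s /dev/sdb mkpart "part$i" "${start}B" "${end}B"
    i=$((i + 1))
done < good_regions.txt
```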
The command ran for quite a while (several minutes). In total it produced 24 (!) partitions, each of a different size.
The next step was to assemble a single disk out of them. The perfectionist in me suggested that the proper thing would be to spin up some failure-tolerant RAID 6 array. The practitioner objected that there would be nothing to replace a failed partition with anyway, so plain JBOD would do: why give up space for nothing? The practitioner won:
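In mdadm terms that is a linear array; the partition names below are assumed:

```bash
# Concatenate all 24 partitions into one linear (JBOD) md device
mdadm --create /dev/md0 --level=linear --raid-devices=24 \
      $(seq -f '/dev/sdb%g' 1 24)
```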
Well, that's it. All that remained was to create a file system and mount the revived disk:
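Something along these lines (the choice of ext4 is my guess here; any file system would do):

```bash
mkfs.ext4 /dev/md0      # file system type assumed
mkdir -p /mnt/ext
mount /dev/md0 /mnt/ext
```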
The disk turned out quite roomy: 763 gigabytes, i.e. I managed to use 83% of the drive's capacity. In other words, only 17% of the original terabyte went to waste:
```
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          9.2G  5.6G  3.2G  64% /
...
/dev/md0        763G  101G  662G  14% /mnt/ext
```
A test batch of throwaway movies poured onto the disk without errors. True, the write speed was modest and floated between 6 and 25 megabytes per second. Reading was stable at 25-30 MB/s, i.e. limited by the adapter hanging off USB 2.0.
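For the curious, one crude way to check such throughput (the exact measurement method isn't described above, so this is just a sketch):

```bash
# Rough throughput check with O_DIRECT to bypass the page cache
dd if=/dev/zero of=/mnt/ext/test.bin bs=1M count=1024 oflag=direct   # write
dd if=/mnt/ext/test.bin of=/dev/null bs=1M iflag=direct              # read
```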
Of course, you can't use a perversion like this to store anything important, but as entertainment it has its uses. So when the question is whether to tear the disk down for its magnets right away or torment it a little first, my answer is: "Torment it, of course!"