
Monitoring and checking SSD status on Linux

Hello again. A translation of the following article has been prepared especially for students of the Linux Administrator course. Let's go!



What is SMART?


SMART (Self-Monitoring, Analysis, and Reporting Technology) is a technology built into storage devices such as hard drives and SSDs. Its main task is to monitor their health.
In practice, SMART tracks several parameters during normal disk operation, such as the number of read errors, drive spin-up time, and even environmental conditions. In addition, SMART can run tests on the drive itself.

Ideally, SMART will predict both foreseeable failures, such as those caused by mechanical wear or degradation of the disk surface, and unforeseeable failures caused by some unexpected defect. Because drives usually do not fail abruptly, SMART gives the operating system or the system administrator a chance to identify drives that are about to fail, so they can be replaced before data is lost.

What SMART is not


All this, of course, is great. However, SMART is not a crystal ball. It cannot predict a failure with absolute certainty, nor can it guarantee that a drive will not fail without warning. At best, SMART should be used to estimate the likelihood of a failure.

Given the statistical nature of failure prediction, SMART is of particular interest to companies that operate large numbers of storage devices. Dedicated studies have even been conducted to find out how accurately SMART can predict failures and signal the need to replace disks in data centers and server farms.

In 2016, Microsoft and the University of Pennsylvania conducted a study of SSDs.

According to this study, some SMART attributes are considered good indicators of imminent failure. In particular, the article mentions:

Reallocated sectors count:

Although the underlying technologies are radically different, this indicator remains relevant both for SSDs and for hard drives. It is worth noting that, because of the wear-leveling algorithms used in SSDs, once several sectors have failed there is a high probability that more will fail soon.

Program/Erase (P/E) cycle errors:

This is a sign of problems with the underlying flash hardware: the drive is unable to erase data from a block or to write data into it. Since the manufacturing process is imperfect, a few such errors are to be expected. However, flash memory has a limited number of program/erase cycles, so a sudden increase in the number of these events may indicate that the drive is reaching its limit and that other memory cells will soon start failing as well.

CRC and uncorrectable errors ("data errors"):

Events of this type can be caused by storage errors or by problems with the drive's internal communication channel. This indicator counts both corrected errors (reported to the host system without any issue) and uncorrected errors (which cause the drive to report to the host system that a block could not be read). In other words, corrected errors are invisible to the operating system, but they still affect drive performance and increase the likelihood that a sector will be reallocated.

SATA downshift count:

Because of transient interference, problems with the communication channel between the drive and the host, or internal drive problems, the SATA interface may fall back to a lower signaling rate. Dropping the link speed below the nominal level has an obvious impact on disk performance. This indicator is therefore most significant when it correlates with the presence of one or more of the previous indicators.

According to the study, 62% of the failed SSDs had shown at least one of the symptoms above. Looked at the other way, 38% of the studied drives failed without showing any of these symptoms. The study does not say whether those drives reported any other SMART "symptoms". For that reason, these figures cannot be directly compared with the 36% of drives that, according to Google's article, fail without warning.

The Microsoft and University of Pennsylvania study did not disclose the models of the drives tested; however, according to the authors, most of the drives came from the same vendor across several generations.

The study also noted significant reliability differences between models. For example, the "worst" model studied showed a 20 percent failure rate within 9 months after the first reallocation error and up to 36 percent of failures within 9 months after the first occurrence of data errors. The "worst" model also happened to be the older generation of drives considered in the article.

On the other hand, for the same symptoms, the newer generation of drives failed in 3% and 20% of cases respectively. It is hard to say whether these figures can be explained by improvements in drive design and manufacturing, or whether the age of the older drives simply plays a role here.

The most interesting point in the article (which I already mentioned earlier) is that a growing number of reported errors should be treated as a warning sign:

“Symptoms that precede SSD failure are likely to manifest actively and progress rapidly, sharply reducing the drive’s remaining lifetime to a few months.”

In other words, a single random error reported by SMART should definitely not be treated as a signal of imminent failure. However, when a previously healthy SSD starts reporting more and more errors, you should expect a failure in the short to medium term.

But how do you know what state your SSD is in right now? Whether to satisfy your curiosity or because you want to start keeping a close eye on your drives, you can use the smartctl monitoring tool.

Using smartctl to monitor the status of your SSD on Linux


To monitor the SMART status of your drives, I suggest using the smartctl tool, which is part of the smartmontools package (at least on Debian/Ubuntu):

 sudo apt install smartmontools 

smartctl is a command-line tool, which makes it especially useful when you need to automate data collection, for example from your servers.
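
For example, a tiny collection script could dump the health summary and the attribute table of each drive into a dated log file. This is only a rough sketch; the device list and log directory below are placeholders of mine and would need to be adapted to your machines:

 #!/bin/sh
 # Sketch: collect SMART health (-H) and attributes (-A) for a few drives.
 # DEVICES and LOGDIR are hypothetical values - adjust them for your setup.
 DEVICES="/dev/sda /dev/sdb"
 LOGDIR=/var/log/smart
 mkdir -p "$LOGDIR"
 for dev in $DEVICES; do
     name=$(basename "$dev")
     smartctl -H -A "$dev" > "$LOGDIR/$name-$(date +%F).log" 2>&1
 done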

The first step in using smartctl is to check whether your drive has SMART support and is supported by the tool:

 sh$ sudo smartctl -i /dev/sdb
 smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-6-amd64] (local build)
 Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
 
 === START OF INFORMATION SECTION ===
 Model Family:     Seagate Momentus 7200.4
 Device Model:     ST9500420AS
 Serial Number:    5VJAS7FL
 LU WWN Device Id: 5 000c50 02fa0b800
 Firmware Version: D005SDM1
 User Capacity:    500,107,862,016 bytes [500 GB]
 Sector Size:      512 bytes logical/physical
 Rotation Rate:    7200 rpm
 Device is:        In smartctl database [for details use: -P show]
 ATA Version is:   ATA8-ACS T13/1699-D revision 4
 SATA Version is:  SATA 2.6, 3.0 Gb/s
 Local Time is:    Mon Mar 12 15:54:43 2018 CET
 SMART support is: Available - device has SMART capability.
 SMART support is: Enabled

As you can see, my laptop’s internal hard drive does indeed support SMART, and it is enabled. So, how do we get the SMART status now? Are there any logged errors?

To report “all SMART information about the disk”, use the -a option:

 sh$ sudo smartctl -i -a /dev/sdb
 smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-6-amd64] (local build)
 Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
 
 === START OF INFORMATION SECTION ===
 Model Family:     Seagate Momentus 7200.4
 Device Model:     ST9500420AS
 Serial Number:    5VJAS7FL
 LU WWN Device Id: 5 000c50 02fa0b800
 Firmware Version: D005SDM1
 User Capacity:    500,107,862,016 bytes [500 GB]
 Sector Size:      512 bytes logical/physical
 Rotation Rate:    7200 rpm
 Device is:        In smartctl database [for details use: -P show]
 ATA Version is:   ATA8-ACS T13/1699-D revision 4
 SATA Version is:  SATA 2.6, 3.0 Gb/s
 Local Time is:    Mon Mar 12 15:56:58 2018 CET
 SMART support is: Available - device has SMART capability.
 SMART support is: Enabled
 
 === START OF READ SMART DATA SECTION ===
 SMART overall-health self-assessment test result: PASSED
 See vendor-specific Attribute list for marginal Attributes.
 
 General SMART Values:
 Offline data collection status:  (0x82) Offline data collection activity was completed without error.
                                         Auto Offline Data Collection: Enabled.
 Self-test execution status:      (   0) The previous self-test routine completed without error or no self-test has ever been run.
 Total time to complete Offline data collection: (   0) seconds.
 Offline data collection capabilities:    (0x7b) SMART execute Offline immediate.
                                         Auto Offline data collection on/off support.
                                         Suspend Offline collection upon new command.
                                         Offline surface scan supported.
                                         Self-test supported.
                                         Conveyance Self-test supported.
                                         Selective Self-test supported.
 SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                         Supports SMART auto save timer.
 Error logging capability:        (0x01) Error logging supported.
                                         General Purpose Logging supported.
 Short self-test routine recommended polling time:      (   2) minutes.
 Extended self-test routine recommended polling time:   ( 110) minutes.
 Conveyance self-test routine recommended polling time: (   3) minutes.
 SCT capabilities:              (0x103f) SCT Status supported.
                                         SCT Error Recovery Control supported.
                                         SCT Feature Control supported.
                                         SCT Data Table supported.
 
 SMART Attributes Data Structure revision number: 10
 Vendor Specific SMART Attributes with Thresholds:
 ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
   1 Raw_Read_Error_Rate     0x000f   111   099   006    Pre-fail  Always       -       29694249
   3 Spin_Up_Time            0x0003   100   098   085    Pre-fail  Always       -       0
   4 Start_Stop_Count        0x0032   095   095   020    Old_age   Always       -       5413
   5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       3
   7 Seek_Error_Rate         0x000f   071   060   030    Pre-fail  Always       -       51710773327
   9 Power_On_Hours          0x0032   070   070   000    Old_age   Always       -       26423
  10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
  12 Power_Cycle_Count       0x0032   096   037   020    Old_age   Always       -       4836
 184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
 187 Reported_Uncorrect      0x0032   072   072   000    Old_age   Always       -       28
 188 Command_Timeout         0x0032   100   096   000    Old_age   Always       -       4295033738
 189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
 190 Airflow_Temperature_Cel 0x0022   056   042   045    Old_age   Always   In_the_past 44 (Min/Max 21/44 #22)
 191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       184
 192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       104
 193 Load_Cycle_Count        0x0032   001   001   000    Old_age   Always       -       395415
 194 Temperature_Celsius     0x0022   044   058   000    Old_age   Always       -       44 (0 13 0 0 0)
 195 Hardware_ECC_Recovered  0x001a   050   045   000    Old_age   Always       -       29694249
 197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       1
 198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       1
 199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
 240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       25131 (246 202 0)
 241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       3028413736
 242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       1613088055
 254 Free_Fall_Sensor        0x0032   100   100   000    Old_age   Always       -       0
 
 SMART Error Log Version: 1
 ATA Error Count: 3
         CR = Command Register [HEX]
         FR = Features Register [HEX]
         SC = Sector Count Register [HEX]
         SN = Sector Number Register [HEX]
         CL = Cylinder Low Register [HEX]
         CH = Cylinder High Register [HEX]
         DH = Device/Head Register [HEX]
         DC = Device Command Register [HEX]
         ER = Error register [HEX]
         ST = Status register [HEX]
 Powered_Up_Time is measured from power on, and printed as
 DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
 SS=sec, and sss=millisec. It "wraps" after 49.710 days.
 
 Error 3 occurred at disk power-on lifetime: 21171 hours (882 days + 3 hours)
   When the command that caused the error occurred, the device was active or idle.
   After command completion occurred, registers were:
   ER ST SC SN CL CH DH
   -- -- -- -- -- -- --
   40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455
   Commands leading to the command that caused the error were:
   CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
   -- -- -- -- -- -- -- --  ----------------  --------------------
   60 00 08 ff ff ff 4f 00      00:45:12.580  READ FPDMA QUEUED
   60 00 08 ff ff ff 4f 00      00:45:12.580  READ FPDMA QUEUED
   60 00 08 ff ff ff 4f 00      00:45:12.579  READ FPDMA QUEUED
   60 00 08 ff ff ff 4f 00      00:45:12.571  READ FPDMA QUEUED
   60 00 20 ff ff ff 4f 00      00:45:12.543  READ FPDMA QUEUED
 
 Error 2 occurred at disk power-on lifetime: 21171 hours (882 days + 3 hours)
   When the command that caused the error occurred, the device was active or idle.
   After command completion occurred, registers were:
   ER ST SC SN CL CH DH
   -- -- -- -- -- -- --
   40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455
   Commands leading to the command that caused the error were:
   CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
   -- -- -- -- -- -- -- --  ----------------  --------------------
   60 00 00 ff ff ff 4f 00      00:45:09.456  READ FPDMA QUEUED
   60 00 00 ff ff ff 4f 00      00:45:09.451  READ FPDMA QUEUED
   61 00 08 ff ff ff 4f 00      00:45:09.450  WRITE FPDMA QUEUED
   60 00 00 ff ff ff 4f 00      00:45:08.878  READ FPDMA QUEUED
   60 00 00 ff ff ff 4f 00      00:45:08.856  READ FPDMA QUEUED
 
 Error 1 occurred at disk power-on lifetime: 21131 hours (880 days + 11 hours)
   When the command that caused the error occurred, the device was active or idle.
   After command completion occurred, registers were:
   ER ST SC SN CL CH DH
   -- -- -- -- -- -- --
   40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455
   Commands leading to the command that caused the error were:
   CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
   -- -- -- -- -- -- -- --  ----------------  --------------------
   60 00 00 ff ff ff 4f 00      05:52:18.809  READ FPDMA QUEUED
   61 00 00 7e fb 31 45 00      05:52:18.806  WRITE FPDMA QUEUED
   60 00 00 ff ff ff 4f 00      05:52:18.571  READ FPDMA QUEUED
   ea 00 00 00 00 00 a0 00      05:52:18.529  FLUSH CACHE EXT
   61 00 08 ff ff ff 4f 00      05:52:18.527  WRITE FPDMA QUEUED
 
 SMART Self-test log structure revision number 1
 Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
 # 1  Short offline       Completed without error       00%     10904         -
 # 2  Short offline       Completed without error       00%        12         -
 # 3  Short offline       Completed without error       00%         0         -
 
 SMART Selective self-test log data structure revision number 1
  SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
     1        0        0  Not_testing
     2        0        0  Not_testing
     3        0        0  Not_testing
     4        0        0  Not_testing
     5        0        0  Not_testing
 Selective self-test flags (0x0):
   After scanning selected spans, do NOT read-scan remainder of disk.
 If Selective self-test is pending on power-up, resume after 0 minute delay.

Understanding the output of smartctl commands


The output contains a lot of information that is not always easy to interpret. The most interesting part is probably the one labeled “Vendor Specific SMART Attributes with Thresholds”. It reports various statistics collected by the SMART device and lets you compare these values (current or worst of all time) with a vendor-defined threshold.

For example, here is what my disk reports for reallocated sectors:

 ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
   5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       3

You may notice the “Pre-fail” attribute type. It marks attributes whose degradation signals an impending failure: if the normalized value drops to or below the threshold, the probability of failure is high. The other category, “Old_age”, is used for attributes that correspond to “normal wear”.

The last field (here with a value of “3”) is the raw value reported by the drive. Usually this number has a physical meaning. Here, it is the actual number of reallocated sectors. For other attributes it may be a temperature in degrees Celsius, a time in hours or minutes, or the number of times a certain condition has occurred on the drive.

In addition to the raw value, a SMART-enabled drive must report “normalized” values (the value, worst and threshold fields). These values are normalized to the range 1-254 (0-255 for thresholds). The drive firmware performs this normalization using some internal algorithm, and different manufacturers may normalize the same attribute in different ways. Most values are expressed as a percentage, where higher is better, but this is not always the case. When a value is lower than or equal to the threshold specified by the manufacturer, the disk is considered failing with respect to that attribute. Keeping in mind everything from the first part of the article, when an attribute of the “Pre-fail” type has failed in this sense, the drive will most likely fail soon.
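
If you just want a quick overview of which attributes have reached their thresholds, you can filter the attribute table yourself. Here is a rough sketch; it assumes the usual column layout of smartctl -A output (VALUE in column 4, THRESH in column 6), which is not guaranteed for every drive:

 # List attributes whose normalized value has reached the vendor threshold.
 # Skips attributes with a threshold of 0, which can never "fail".
 sudo smartctl -A /dev/sdb | awk '$2 ~ /_/ && $6+0 > 0 && $4+0 <= $6+0 { print $2, "value=" $4, "thresh=" $6 }'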

As a second example, take the “seek error rate”:

 ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
   7 Seek_Error_Rate         0x000f   071   060   030    Pre-fail  Always       -       51710773327

In reality (and this is the main problem with SMART reporting), only the vendor knows the exact meaning of each attribute’s fields. In my case, Seagate uses a logarithmic scale to normalize the value. So “71” means roughly one error per 10 million seeks (10 to the power of 7.1). Amusingly, the worst value of all time was one error per 1 million seeks (10 to the power of 6).
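
As a quick sanity check of that reading (this is my assumption about Seagate's normalization, not something smartctl reports), the arithmetic fits in a one-liner:

 # Assuming normalized value ~= 10 * log10(seeks per error):
 awk 'BEGIN { printf "value 71 -> about 1 error per %.0f seeks\n", 10^(71/10) }'
 awk 'BEGIN { printf "value 60 -> about 1 error per %.0f seeks\n", 10^(60/10) }'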

If I understand this correctly, it means that my disk’s heads are now positioned more accurately than they used to be. I have not monitored this disk closely, so I am interpreting the data quite subjectively. Perhaps the drive just needed a little “breaking in” after being put into service? Or perhaps it is a consequence of mechanical wear and there is now less friction? In any case, whatever the reason, this value is more a measure of performance than an early warning of an error, so it does not worry me much.

Apart from that, and the three highly suspicious errors recorded about six months ago, this disk is in surprisingly good condition (according to SMART) for a stock laptop disk that has been powered on for more than 1,100 days (26,423 hours):

 ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
   9 Power_On_Hours          0x0032   070   070   000    Old_age   Always       -       26423

Out of curiosity, I ran the same check on a much newer laptop equipped with an SSD:

 sh$ sudo smartctl -i /dev/sdb
 smartctl 6.5 2016-01-24 r4214 [x86_64-linux-4.10.0-32-generic] (local build)
 Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
 
 === START OF INFORMATION SECTION ===
 Device Model:     TOSHIBA THNSNK256GVN8
 Serial Number:    17FS131LTNLV
 LU WWN Device Id: 5 00080d 9109b2ceb
 Firmware Version: K8XA4103
 User Capacity:    256 060 514 304 bytes [256 GB]
 Sector Sizes:     512 bytes logical, 4096 bytes physical
 Rotation Rate:    Solid State Device
 Form Factor:      M.2
 Device is:        Not in smartctl database [for details use: -P showall]
 ATA Version is:   ACS-3 (minor revision not indicated)
 SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
 Local Time is:    Tue Mar 13 01:03:23 2018 CET
 SMART support is: Available - device has SMART capability.
 SMART support is: Enabled

The first thing that catches the eye is that, although the drive has SMART support, it is not in the smartctl database. That does not prevent the tool from collecting data from the SSD, but it will not be able to report the exact meaning of the various vendor-specific attributes:

 sh$ sudo smartctl -a /dev/sdb
 smartctl 6.5 2016-01-24 r4214 [x86_64-linux-4.10.0-32-generic] (local build)
 Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
 
 === START OF READ SMART DATA SECTION ===
 SMART overall-health self-assessment test result: PASSED
 
 General SMART Values:
 Offline data collection status:  (0x00) Offline data collection activity was never started.
                                         Auto Offline Data Collection: Disabled.
 Self-test execution status:      (   0) The previous self-test routine completed without error or no self-test has ever been run.
 Total time to complete Offline data collection: ( 120) seconds.
 Offline data collection capabilities:    (0x5b) SMART execute Offline immediate.
                                         Auto Offline data collection on/off support.
                                         Suspend Offline collection upon new command.
                                         Offline surface scan supported.
                                         Self-test supported.
                                         No Conveyance Self-test supported.
                                         Selective Self-test supported.
 SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                         Supports SMART auto save timer.
 Error logging capability:        (0x01) Error logging supported.
                                         General Purpose Logging supported.
 Short self-test routine recommended polling time:    (   2) minutes.
 Extended self-test routine recommended polling time: (  11) minutes.
 SCT capabilities:              (0x003d) SCT Status supported.
                                         SCT Error Recovery Control supported.
                                         SCT Feature Control supported.
                                         SCT Data Table supported.
 
 SMART Attributes Data Structure revision number: 16
 Vendor Specific SMART Attributes with Thresholds:
 ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
   1 Raw_Read_Error_Rate     0x000a   100   100   000    Old_age   Always       -       0
   2 Throughput_Performance  0x0005   100   100   050    Pre-fail  Offline      -       0
   3 Spin_Up_Time            0x0007   100   100   050    Pre-fail  Always       -       0
   5 Reallocated_Sector_Ct   0x0013   100   100   050    Pre-fail  Always       -       0
   7 Unknown_SSD_Attribute   0x000b   100   100   050    Pre-fail  Always       -       0
   8 Unknown_SSD_Attribute   0x0005   100   100   050    Pre-fail  Offline      -       0
   9 Power_On_Hours          0x0012   100   100   000    Old_age   Always       -       171
  10 Unknown_SSD_Attribute   0x0013   100   100   050    Pre-fail  Always       -       0
  12 Power_Cycle_Count       0x0012   100   100   000    Old_age   Always       -       105
 166 Unknown_Attribute       0x0012   100   100   000    Old_age   Always       -       0
 167 Unknown_Attribute       0x0022   100   100   000    Old_age   Always       -       0
 168 Unknown_Attribute       0x0012   100   100   000    Old_age   Always       -       0
 169 Unknown_Attribute       0x0013   100   100   010    Pre-fail  Always       -       100
 170 Unknown_Attribute       0x0013   100   100   010    Pre-fail  Always       -       0
 173 Unknown_Attribute       0x0012   200   200   000    Old_age   Always       -       0
 175 Program_Fail_Count_Chip 0x0013   100   100   010    Pre-fail  Always       -       0
 192 Power-Off_Retract_Count 0x0012   100   100   000    Old_age   Always       -       18
 194 Temperature_Celsius     0x0023   063   032   020    Pre-fail  Always       -       37 (Min/Max 11/68)
 197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
 240 Unknown_SSD_Attribute   0x0013   100   100   050    Pre-fail  Always       -       0
 
 SMART Error Log Version: 1
 No Errors Logged
 
 SMART Self-test log structure revision number 1
 No self-tests have been logged. [To run self-tests, use: smartctl -t]
 
 SMART Selective self-test log data structure revision number 1
  SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
     1        0        0  Not_testing
     2        0        0  Not_testing
     3        0        0  Not_testing
     4        0        0  Not_testing
     5        0        0  Not_testing
 Selective self-test flags (0x0):
   After scanning selected spans, do NOT read-scan remainder of disk.
 If Selective self-test is pending on power-up, resume after 0 minute delay.

Above is the output for a practically brand-new SSD. The data is understandable even without normalization details or vendor-specific metadata, as in my case with “Unknown_SSD_Attribute”. I can only hope that a future version of the smartctl drive database will include data for this disk model, so that I can identify potential problems more accurately.
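
Depending on how your distribution packages smartmontools, you may be able to refresh the drive database without waiting for a new release: upstream ships an update-smart-drivedb helper script, but whether it is installed (or enabled) varies from distribution to distribution, so treat the following as an assumption to verify on your system:

 # May or may not be available, depending on the distribution's packaging:
 sudo update-smart-drivedb
 # Then check whether the model is now recognized:
 sudo smartctl -P show /dev/sdb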

Test your SSD on Linux with smartctl


So far, we have looked at the data collected during normal drive operation. However, the SMART protocol also supports several offline test commands, so you can run diagnostics on demand.

Unless explicitly requested otherwise, offline tests can run during normal disk operation. Since the test and the host's I/O requests compete for the drive, disk performance will drop while the test runs. The SMART specification defines several kinds of offline tests:

Short offline test ( -t short )
This test checks the electrical and mechanical performance as well as the read performance of the drive. A short offline test usually takes only a few minutes (typically 2 to 10).

Extended offline test ( -t long )
This test takes much longer. It is essentially a more thorough version of the short offline test; in addition, it scans the entire disk surface for data errors with no time limit. The duration of the test is proportional to the size of the disk.

Conveyance offline test ( -t conveyance )
This test is intended as a relatively quick way to check for damage that may have occurred while the device was being shipped.

Here are examples run on the same two drives as above. I will let you guess which output belongs to which drive:

 sh$ sudo smartctl -t short /dev/sdb
 smartctl 6.5 2016-01-24 r4214 [x86_64-linux-4.10.0-32-generic] (local build)
 Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
 
 === START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
 Sending command: "Execute SMART Short self-test routine immediately in off-line mode".
 Drive command "Execute SMART Short self-test routine immediately in off-line mode" successful.
 Testing has begun.
 Please wait 2 minutes for test to complete.
 Test will complete after Mon Mar 12 18:06:17 2018
 
 Use smartctl -X to abort test.

The test is now running. Let's wait for it to finish and look at the result:

 sh$ sudo sh -c 'sleep 120 && smartctl -l selftest /dev/sdb'
 smartctl 6.5 2016-01-24 r4214 [x86_64-linux-4.10.0-32-generic] (local build)
 Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
 
 === START OF READ SMART DATA SECTION ===
 SMART Self-test log structure revision number 1
 Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
 # 1  Short offline       Completed without error       00%       171         -

Let's run the same test on another drive:

 sh$ sudo smartctl -t short /dev/sdb
 smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-6-amd64] (local build)
 Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
 
 === START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
 Sending command: "Execute SMART Short self-test routine immediately in off-line mode".
 Drive command "Execute SMART Short self-test routine immediately in off-line mode" successful.
 Testing has begun.
 Please wait 2 minutes for test to complete.
 Test will complete after Mon Mar 12 21:59:39 2018
 
 Use smartctl -X to abort test.

Again, we sleep for two minutes before looking at the result:

 sh$ sudo sh -c 'sleep 120 && smartctl -l selftest /dev/sdb'
 smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-6-amd64] (local build)
 Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
 
 === START OF READ SMART DATA SECTION ===
 SMART Self-test log structure revision number 1
 Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
 # 1  Short offline       Completed without error       00%     26429         -
 # 2  Short offline       Completed without error       00%     10904         -
 # 3  Short offline       Completed without error       00%        12         -
 # 4  Short offline       Completed without error       00%         0         -

Interestingly, in this case we can see that the disk and computer manufacturers appear to have already tested the disk themselves (at 0 hours and 12 hours of lifetime). I was clearly much less concerned about the condition of this drive than they were. So, having already shown the quick tests, I will run the extended one too, to see how that goes.

 sh$ sudo smartctl -t long /dev/sdb
 smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-6-amd64] (local build)
 Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
 
 === START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
 Sending command: "Execute SMART Extended self-test routine immediately in off-line mode".
 Drive command "Execute SMART Extended self-test routine immediately in off-line mode" successful.
 Testing has begun.
 Please wait 110 minutes for test to complete.
 Test will complete after Tue Mar 13 00:09:08 2018
 
 Use smartctl -X to abort test.

Apparently, this time the wait will be much longer than for the short test. So let's see:

 sh$ sudo bash -c 'sleep $((110*60)) && smartctl -l selftest /dev/sdb'
 [sudo] password for sylvain:
 smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-6-amd64] (local build)
 Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
 
 === START OF READ SMART DATA SECTION ===
 SMART Self-test log structure revision number 1
 Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
 # 1  Extended offline    Completed: read failure       20%     26430         810665229
 # 2  Short offline       Completed without error       00%     26429         -
 # 3  Short offline       Completed without error       00%     10904         -
 # 4  Short offline       Completed without error       00%        12         -
 # 5  Short offline       Completed without error       00%         0         -

In this last run, note the difference between the results of the short and extended tests, even though they were performed one right after the other. Well, maybe this drive is not in such good shape after all! Note that the test stopped after the first read error, so if you want comprehensive information about all read errors, you will have to continue the test after each one. I encourage you to take a look at the very well written smartctl(8) manual page for more information on the -t select,N-max option and the related -t select forms that make this possible:

 sh$ sudo smartctl -t select,810665230-max /dev/sdb
 smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-6-amd64] (local build)
 Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
 
 === START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
 Sending command: "Execute SMART Selective self-test routine immediately in off-line mode".
 SPAN         STARTING_LBA           ENDING_LBA
    0            810665230            976773167
 Drive command "Execute SMART Selective self-test routine immediately in off-line mode" successful.
 Testing has begun.

 smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-6-amd64] (local build)
 Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
 
 === START OF READ SMART DATA SECTION ===
 SMART Self-test log structure revision number 1
 Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
 # 1  Selective offline   Completed without error       00%     26432         -
 # 2  Extended offline    Completed: read failure       20%     26430         810665229
 # 3  Short offline       Completed without error       00%     26429         -
 # 4  Short offline       Completed without error       00%     10904         -
 # 5  Short offline       Completed without error       00%        12         -
 # 6  Short offline       Completed without error       00%         0         -

Conclusion


SMART is definitely a technology you should add to your toolkit for monitoring the health of your server drives. You should also take a look at the SMART Disk Monitoring Daemon, smartd(8), which can help you automate monitoring with syslog-based reporting.
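
As a starting point, here is a minimal /etc/smartd.conf sketch. The self-test schedule and mail recipient below are placeholders of mine; check smartd.conf(5) for the exact directive syntax shipped with your version:

 # Scan for all SMART-capable devices and monitor all attributes (-a),
 # run a short self-test every day at 02:00 and a long one every Saturday at 03:00,
 # and mail warnings to root.
 DEVICESCAN -a -s (S/../.././02|L/../../6/03) -m root

After editing the file, restart the smartd service so the new directives take effect (the unit name varies between distributions).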

Given the statistical nature of failure prediction, I am not sure that aggressive SMART monitoring is of much use on personal computers. Remember that, whatever the drive, one day it will fail anyway, and as we saw earlier, in about one third of cases it will do so without warning. So nothing protects the integrity of your data better than RAID and backups!

See you on the course, friends!

Source: https://habr.com/ru/post/461929/

