
Does free disk space speed up your computer?

This is a translation of an answer to a question on superuser.com about the effect of free disk space on performance. (translator's note)



From the author: it looks like I accidentally wrote a whole book. Pour yourself a cup of coffee before reading.

Does free disk space speed up your computer?

Freeing up disk space does not speed up a computer, at least not by itself. This is a genuinely common myth, and it is so common because filling up your hard drive often happens at the same time as other processes that traditionally can slow a computer down. SSD performance can degrade as the drive fills up, but this is a relatively new problem specific to SSDs and, in practice, hardly noticeable to ordinary users. In general, a lack of free space is just a red herring (translator's note: i.e., something that merely distracts attention).

Author's note: "slowdown" is a term with a very broad interpretation. Here I use it for I/O-bound processes (i.e., if your computer is doing pure computation, the contents of the disk have no effect), or for CPU-bound processes competing with other processes that consume a lot of CPU time (e.g., antivirus software scanning a large number of files).


For example, phenomena such as:

    • file fragmentation;
    • search indexing services scanning an ever-growing amount of content;
    • antivirus software scanning an ever-growing number of files;
    • more and more installed programs loading and running.

What was written above illustrates another reason for the prevalence of this myth: although running out of free space does not directly cause slowdowns, uninstalling applications, deleting indexed and scanned content, and so on sometimes (but not always; such cases are outside the scope of this text) increases performance for reasons that have nothing to do with the amount of free space. Disk space is freed as a natural side effect, and so a false connection between "more free space" and "a faster computer" arises here as well.

Look at it this way: if your computer is slow because of a large number of installed programs and so on, and you clone your hard drive exactly onto a larger one and then expand the partitions to get more free space, the computer will not magically become faster. The same programs load, the same files are fragmented in the same way, the same indexing service runs; nothing changes, despite the increase in free space.

Is this somehow related to finding a place to put files?


No, it is not. There are two important points here:
  1. Your hard drive does not search for a place to put files. A hard drive is dumb; it is nothing but a large block of addressable storage that blindly obeys the operating system on questions of placement. Modern drives have sophisticated caching and buffering mechanisms designed to predict the operating system's requests based on accumulated experience (some drives are even aware of file systems), but essentially a drive should be thought of as a big, dumb brick that stores data, sometimes with performance-enhancing features.
  2. Your operating system does not search for a place either. There is no "searching" as such. A lot of effort has gone into solving this problem, because it is critical to file system performance. Where data ends up on your disk is determined by the file system, for example FAT32 (old DOS and Windows computers), NTFS (newer Windows systems), HFS+ (Mac), ext4 (some Linux systems) and many others. Even the concepts of a "file" and a "directory" ("folder" - translator's note) are just constructs of a typical file system: hard drives know nothing about such beasts as "files". The details are outside the scope of this text. In essence, though, all common file systems include a way to track free disk space, so that "searching" for free space is, under normal circumstances (i.e., with the file system in a healthy state), unnecessary. Examples:

    • NTFS contains a master file table, which includes special files (for example, $Bitmap) and plenty of metadata describing the disk. Among other things, it tracks which blocks are free, so files can be written to disk without scanning the disk every time.
    • As another example, ext4 has what is called the "bitmap allocator", an improvement over ext2 and ext3 that helps determine the position of free blocks directly, instead of scanning a list of free blocks. Ext4 also supports "delayed allocation", which is essentially the operating system buffering data in RAM before writing it to disk, in order to make a better placement decision and reduce fragmentation. A toy sketch of this bitmap idea follows the list below.
    • Many other examples.
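To make the bitmap idea concrete, here is a minimal Python sketch of bitmap-based free-block tracking. It is purely illustrative: the class and method names are invented for the example, and it does not reflect the actual on-disk format of NTFS's $Bitmap or ext4's block bitmaps.

```python
class BitmapAllocator:
    """Toy free-space tracker: one bit (here, one bool) per disk block."""

    def __init__(self, total_blocks: int):
        self.bitmap = [False] * total_blocks  # False = free, True = in use

    def allocate(self, count: int) -> list[int]:
        """Return `count` free block numbers and mark them used.

        No scan of the disk contents is needed: the bitmap alone
        already says where the free blocks are.
        """
        free = [i for i, used in enumerate(self.bitmap) if not used]
        if len(free) < count:
            raise OSError("No space left on device")
        chosen = free[:count]
        for block in chosen:
            self.bitmap[block] = True
        return chosen

    def release(self, blocks: list[int]) -> None:
        """Mark blocks free again, e.g. when a file is deleted."""
        for block in blocks:
            self.bitmap[block] = False


alloc = BitmapAllocator(total_blocks=16)
file_a = alloc.allocate(4)   # blocks [0, 1, 2, 3]
file_b = alloc.allocate(4)   # blocks [4, 5, 6, 7]
alloc.release(file_a)        # deleting file A frees its blocks
print(alloc.allocate(6))     # [0, 1, 2, 3, 8, 9]: found instantly, no disk scan
```

Note the last allocation: the six blocks come back non-contiguous, which is exactly how fragmentation arises, as the next section discusses.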



Maybe it's about moving files back and forth to make a sufficiently long contiguous space when saving?


No, this does not happen, at least not in any file system I am aware of. Files simply become fragmented.

The process of "moving files back and forth to carve out a long contiguous block" is called defragmentation. It does not happen when files are written. It happens when you run the disk defragmenter. On newer Windows systems this happens automatically on a schedule, but writing a file is never a trigger for this process.

Avoiding the need to move files like this is key to file system performance, and is the reason why fragmentation happens at all and why defragmentation exists as a separate step.
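As a hedged illustration of that separation, here is a small Python sketch (the names and behavior are invented for the example, not taken from any real file system). A write simply scatters across whatever free blocks exist; compaction only happens as a separate, explicit step:

```python
# Toy model of a disk: each slot holds the name of the file occupying that
# block, or None if the block is free.

def write_file(disk: list, name: str, size: int) -> None:
    """Write a file into whatever free blocks exist, in order.

    Note what does NOT happen: no existing file is moved to carve out
    a contiguous run. If the free blocks are scattered, the new file
    is simply stored fragmented.
    """
    free = [i for i, owner in enumerate(disk) if owner is None]
    if len(free) < size:
        raise OSError("No space left on device")
    for i in free[:size]:
        disk[i] = name

def defragment(disk: list) -> None:
    """A separate, explicit step: regroup each file's blocks contiguously."""
    seen = dict.fromkeys(o for o in disk if o is not None)  # keeps order
    packed = []
    for name in seen:
        packed.extend([name] * disk.count(name))
    disk[:] = packed + [None] * (len(disk) - len(packed))

disk = ["A", None, "B", None, "C", None, None, None]
write_file(disk, "D", 3)
print(disk)   # ['A', 'D', 'B', 'D', 'C', 'D', None, None]: D is fragmented
defragment(disk)
print(disk)   # ['A', 'D', 'D', 'D', 'B', 'C', None, None]: D is contiguous
```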

How much free space should be left on the disk?


This is a more complicated question, and I have already written too much as it is.

Basic rules to follow:


Personally, when I have about 20-25% free space left, I usually buy a new, larger disk. This has nothing to do with performance; it simply means the disk will soon be full, which means it is time to buy a new one.

More important than tracking free space is making sure scheduled defragmentation is enabled where appropriate (not on SSDs), so that fragmentation never reaches the point where it has a noticeable impact.
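If you do want to keep an eye on free space, a few lines of Python using the standard library's shutil.disk_usage are enough. This is a minimal sketch; the 20% threshold mirrors the rule of thumb above and is a matter of taste, not a performance cliff:

```python
import shutil

THRESHOLD = 0.20  # arbitrary example value, per the rule of thumb above
PATH = "/"        # on Windows, use a drive root such as "C:\\"

usage = shutil.disk_usage(PATH)          # named tuple: total, used, free (bytes)
free_fraction = usage.free / usage.total
print(f"{PATH}: {free_fraction:.0%} free "
      f"({usage.free / 2**30:.1f} GiB of {usage.total / 2**30:.1f} GiB)")
if free_fraction < THRESHOLD:
    print("Time to think about a bigger disk.")
```

Note that the call returns instantly: as discussed above, the file system tracks free space in its metadata, so no scan of the disk is needed to answer this question.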


Afterword


There is one more thing worth mentioning. Another answer to this question notes that SATA's half-duplex mode does not allow reading and writing at the same time. While this is true, it is a serious oversimplification and is mostly unrelated to the performance issues discussed here. What it actually means is that data cannot flow over the wire in both directions simultaneously. However, the SATA specification includes small maximum block sizes (I believe about 8 kB per block on the wire), queues of read and write operations, and so on, and nothing prevents data from being written to the buffer while reads and similar overlapping operations are in progress.

Any blocking that does occur would be due to contention for physical resources, which is usually mitigated by generous amounts of cache. The half-duplex nature of SATA has almost nothing to do with it.

Source: https://habr.com/ru/post/256231/

