
Why disk subsystem upgrade was chosen as the topic of this article
It is clear that, as a rule, you first need to:
- Increase RAM. This is such an obvious move that I didn’t even consider it necessary to write about it in the main article.
- Add a second processor (or processors), or replace the existing ones with the fastest versions supported by the server's sockets.
Memory and processors for older servers can usually be found at bargain prices.
At some point, every owner of their own server faces the question: upgrade or buy a new server?
Since the price of a new server can now be measured in millions of rubles, many follow the upgrade path.
For a successful upgrade, it is important to find sensible compromises, so that for a small outlay (relative to the price of a new server) we get a significant performance gain.
This article contains a list of server PCI-E 2.0 x8 SSDs, which have now become much cheaper, a list of RAID controllers with SSD caching support, and a test of a SATA III SSD on a SATA II interface.
The most obvious way to upgrade the disk subsystem is to move from HDD to SSD. This is true for both laptops and servers. On servers, perhaps the only difference is that SSDs can easily be assembled into a RAID array.
True, there are subtleties: the old server may have no SATA III ports at all, in which case you will have to replace or add a suitable controller.
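If the server ends up with plain SATA/HBA ports rather than a hardware RAID controller, a software RAID over the SSDs is one way to go. Here is a minimal sketch with mdadm; the device names, RAID level and config path are examples, not a recommendation for any specific setup:
# Assemble four SATA SSDs into a RAID10 array (destroys existing data on them)
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Save the array definition (path is distro-dependent) and create a filesystem
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
mkfs.ext4 /dev/md0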
There are, of course, intermediate methods.
SSD caching
In general, this method works well for databases, 1C, and any random-access workloads; performance genuinely improves. For huge video surveillance files it is useless.
LSI RAID controllers (IBM, DELL, Cisco, Fujitsu)
Starting with the 92xx series, LSI controllers have CacheCade 2.0 technology, which allows almost any SATA SSD to be used as a cache for the array, both for reads and writes, and even allows the caching SSDs to be mirrored.
With vendor-branded controllers things are more complicated, especially with IBM: keys and SSDs for CacheCade have to be bought from IBM for serious money, so it is easier to swap the controller for a plain LSI one and buy a hardware key cheaply. Software keys are significantly more expensive than hardware ones.
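As a rough illustration of what enabling CacheCade looks like from the command line, here is a sketch with storcli; the controller number, enclosure:slot addresses and virtual drive number are examples, and the exact syntax can differ between firmware and tool versions, so verify it against your controller's documentation:
# Check that the CacheCade feature key is active on controller 0
storcli /c0 show all
# Build a mirrored (RAID1) CacheCade device from two SSDs in enclosure 252, slots 4 and 5, with write-back caching
storcli /c0 add vd cc type=raid1 drives=252:4,5 WB
# Enable SSD caching for an existing virtual drive
storcli /c0/v0 set ssdcaching=on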
Adaptec RAID controllers
Adaptec controllers have MaxCache technology, which also lets you use an SSD as a cache. We are interested in controller models whose names end with the letter Q.
Q-versions can use almost any SSD, not just the SSDs supplied by Adaptec.
- Starting from the 5xxx series, all controllers support Hybrid RAID. The essence of this technology is that when a mirror is built from one HDD and one SSD, reads are always served from the SSD.
- 5xxxQ, for example the 5805ZQ. These controllers support MaxCache 1.0: read caching only.
- 6xxxQ, for example the 6805Q. MaxCache 2.0: read and write caching.
- 7xxxQ, for example the 7805Q. MaxCache 3.0: read and write caching.
- 8xxxQ controllers make little sense for an upgrade because of their high prices.
There is an article on Habr about SSD caching (controllers and OS).
Software SSD caching technologies
I will not cover these technologies in detail; practically every OS supports them now. I recall that btrfs, for example, automatically directs read requests to the device with the shortest queue, i.e. the SSD.
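For reference, here is a minimal lvmcache sketch for Linux; the volume group vg0, the logical volume data and the SSD device name are assumptions for illustration:
# Add the SSD to the volume group that holds the slow HDD-backed LV
vgextend vg0 /dev/nvme0n1
# Carve a cache pool out of the SSD (the size is just an example)
lvcreate --type cache-pool -L 100G -n datacache vg0 /dev/nvme0n1
# Attach the cache pool to the existing LV; writethrough is the safer mode, writeback is faster
lvconvert --type cache --cachepool vg0/datacache --cachemode writethrough vg0/data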
SATA III SSD on SATA II interface
Since a new controller is not always an option (or within budget), the question arises: how well does a SATA III SSD work on an outdated SATA II interface?
Let's run a small test. The test subject is an Intel S3710 400 GB SATA III SSD.
Commands used to test speed:
fio --name LinRead --eta-newline=5s --filename=/dev/sda --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
fio --name LinWrite --eta-newline=5s --filename=/dev/sda --rw=write --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
fio --name RandRead --eta-newline=5s --filename=/dev/sda --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=4 --runtime=60 --group_reporting
fio --name RandWrite --eta-newline=5s --filename=/dev/sda --rw=randwrite --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=4 --runtime=60 --group_reporting
As you can see, the difference in linear speed, IOPS, and latency is quite significant, so it makes sense to use the SATA III interface where possible, and if it is not available, to install a controller.
In fairness, I will say that in other experiments the difference in random read and write speed turned out to be insignificant. Perhaps such a big difference in IOPS between SATA II and SATA III occurred because I happened to have a particularly poor SATA II controller or a buggy driver.
In any case, the takeaway is that you should check your SATA II speed yourself: you may have a similarly slow controller, in which case moving to a SATA III controller is justified.
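Checking which speed the link actually negotiated takes a minute; the device name here is an example:
# Supported and currently negotiated SATA speed of the drive
smartctl -a /dev/sda | grep -i 'SATA Version'
# Typical output: SATA Version is: SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
# The kernel log also reports the negotiated link speed per port
dmesg | grep -i 'SATA link up'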
PCIe SSD on PCI-e 2.0 or 1.0 bus
As you know, the fastest SSDs are PCI-e NVMe drives, which are not limited by the SAS or SATA protocols.
However, when installing modern PCI-e SSDs, keep in mind that most of them use only 4 PCI-e lanes, usually PCI-e 3.0 or 3.1.
Now let's recall the PCI-e bus speeds: roughly 250 MB/s per lane for PCI-e 1.0, 500 MB/s per lane for PCI-e 2.0, and about 985 MB/s per lane for PCI-e 3.0, i.e. about 2 GB/s, 4 GB/s and 7.9 GB/s respectively for an x8 slot.
When a PCI-e 3.0 x4 SSD is installed in a PCI-e 2.0 slot, it will run with the same number of lanes but at a significantly lower per-lane speed. The problem is that the linear speeds of modern PCI-e SSDs exceed the throughput of the PCI-e 2.0 bus, let alone PCI-e 1.0.
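You can see which generation and how many lanes a PCI-e SSD actually negotiated with lspci; the device address 03:00.0 is an example:
# LnkCap shows what the card supports, LnkSta what was actually negotiated
lspci -vv -s 03:00.0 | grep -E 'LnkCap:|LnkSta:'
# e.g. 'LnkSta: Speed 5GT/s, Width x4' means PCI-e 2.0 with 4 lanes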
M.2 SSD and PCI-e adapter
There are decent upgrade options where we buy a $10 adapter and put an M.2 SSD into the server, but again, for fast SSDs the bus will be the bottleneck (especially on PCI-e 1.0), and M.2 SSDs are not always ready for server loads in terms of endurance, power-loss protection and sustained performance, since low-cost models slow down once their SLC cache fills.
So this method is really only suitable for a server with a PCI-e 2.0 bus that handles non-critical workloads.
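A rough way to check whether a particular M.2 drive collapses once its SLC cache fills is a long sustained write with a bandwidth log, similar to the fio commands above; the device name is an example and the test destroys data on it:
fio --name SustainedWrite --filename=/dev/nvme0n1 --rw=write --blocksize=1024k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=1 --time_based --runtime=300 --write_bw_log=sustained --log_avg_msec=1000 --group_reporting
If the logged bandwidth drops sharply after the first tens of gigabytes, the drive is a poor fit for sustained server workloads.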
PCI-E 2.0 x8 SSD
The most cost-effective upgrade is to use PCI-E 2.0 x8 SSDs in servers with a PCI-e 1.0 bus (bandwidth up to 2 GB/s) or PCI-e 2.0 (up to 4 GB/s).
Such server SSDs can now be bought quite cheaply on various marketplaces and online auctions, including in Russia.
I have compiled a table of such obsolete SSDs that will give your old server a real boost. At the end of the table I added several SSDs with a PCI-E 3.0 x8 interface, in case you get lucky and find one at a reasonable price.

Among these SSDs, the Fusion ioMemory cards stand out.
Fusion-io's chief scientist was Steve Wozniak, and the company was later bought by SanDisk for $1.2 billion. At one time these cards cost upwards of $50,000 apiece; now you can buy a new 1 TB or larger card for a few hundred dollars.
If you look at the table, you can see that they have quite high write IOPS, almost equal to their read IOPS. Given their current price, in my opinion these SSDs deserve attention.
That said, they have a few peculiarities:
- They cannot be used as a boot device.
- They need a driver. Drivers exist for practically every OS, but for recent Linux versions you will have to compile them.
- The optimal sector size is 4096 bytes (512 is also supported); see the formatting sketch after this list.
- In the worst case the driver can consume quite a lot of RAM (with a 512-byte sector size).
- Performance depends on CPU speed, so it is better to disable power-saving features. This is both a plus and a minus: with a powerful processor the device can run even faster than the specifications state.
- They need good cooling. For servers this should not be a problem.
- They are not recommended for ESXi, since ESXi prefers 512n-sector disks, and that sector size can lead to high memory consumption by the driver.
- Vendor-branded versions of these SSDs are generally not supported by the vendors up to the level of the latest SanDisk driver (March 2019).
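For reference, the SanDisk/Fusion-io driver package ships its own command-line utilities for formatting and attaching the card. A minimal sketch; the device name /dev/fct0 is an example, and the exact options should be checked against the iomemory-vsl documentation:
# Show card status, firmware version and attach state
fio-status -a
# Low-level format with a 4096-byte sector size (destroys all data on the card)
fio-format -b 4K /dev/fct0
# Attach the formatted card so it appears as a regular block device (e.g. /dev/fioa)
fio-attach /dev/fct0
Note that these fio-* utilities come with the ioMemory driver and have nothing to do with the fio benchmark used above.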
I tested a Fusion ioMemory card against a fairly modern server SSD, the Intel P3700 (PCI-E 3.0 x4); the latter is about 4 times more expensive than a Fusion card of similar capacity. The results also show how much speed is lost on an x4 link when the bus is only PCI-e 2.0.
Yes, the linear read speed of the Intel P3700 is clearly crippled: the spec promises 2800 MB/s, but we get 1469 MB/s. Still, on the whole, with a PCI-e 2.0 bus you can use PCI-E 3.0 x4 server SSDs if you can get them at a reasonable price.
Conclusions
The disk subsystem of an old server with a PCI-E 1.0 or 2.0 bus can be significantly sped up using SSDs that utilize 8 PCI-E lanes, giving bandwidth up to 4 GB/s (PCI-E 2.0) or 2 GB/s (PCI-E 1.0). The most cost-effective way to do this is with obsolete PCI-E 2.0 x8 SSDs.
Compromise options are also easy to implement: buying a CacheCade key for an LSI controller, or replacing an Adaptec controller with its Q-version.
Finally, the completely straightforward option is to buy a SATA III (RAID) controller so that the SSDs run at full speed, and move everything speed-sensitive onto them.