
Upgrading the disk subsystem of an old server with a PCIe 1.0 or 2.0 bus

Why a disk subsystem upgrade was chosen as the topic of this article
It is clear that, as a rule, you should first of all:

  1. Add RAM. This is such an obvious move that I did not even consider it necessary to write about it in the main article.
  2. Install additional processor(s) or replace both processors with the most powerful versions supported by the server's sockets.

For older servers, memory and processors can usually be found at bargain prices.

At some point, every owner of their own server faces the question: upgrade, or buy a new server?

Since the price of a new server can now be measured in millions of rubles, many follow the upgrade path.
For a successful upgrade, it is very important to make the right trade-offs, so that for a small fee (relative to the price of a new server) we get a significant performance gain.

The article contains a list of server PCI-E 2.0 x8 SSDs, which have now become much cheaper, a list of RAID controllers that support SSD caching, and a test of a SATA III SSD on a SATA II interface.

The most obvious way to upgrade the disk subsystem is to move from HDDs to SSDs. This is true for laptops as well as for servers. On servers, perhaps the only difference is that SSDs can easily be combined into a RAID array.
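For example, under Linux two SSDs can be mirrored with mdadm in a couple of commands (a minimal sketch; the device names /dev/sdb, /dev/sdc and the mount point are placeholders):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc   # RAID1 mirror from two SSDs
mkfs.ext4 /dev/md0                                                     # put a filesystem on the array
mount /dev/md0 /mnt/fast                                               # mount it where the fast data will live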

True, there are subtle points related to the fact that SATA III ports may simply not be present on an old server, in which case you will have to replace or install the corresponding controller.
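Before deciding, you can check what link speed the existing ports actually negotiate (a quick check under Linux; /dev/sda is a placeholder for your drive):

dmesg | grep -i 'SATA link up'                   # shows 1.5 / 3.0 / 6.0 Gbps per port
smartctl -i /dev/sda | grep -i 'SATA Version'    # shows the drive's maximum and currently negotiated link speed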

There are, of course, intermediate methods.

Caching on SSD.


In general, this method is well suited for databases, 1C, and any random-access workload; it really does speed things up. For huge video surveillance files this method is useless.

LSI RAID controllers (IBM, DELL, CISCO, Fujitsu)


Starting with the 92xx series, LSI controllers have CacheCade 2.0 technology, which allows almost any SATA SSD to be used as a RAID cache, both for reads and for writes, and even lets you mirror the caching SSDs.

With vendor-branded controllers things are more complicated, especially with IBM: keys and SSDs for CacheCade have to be bought from IBM for big money, so it is easier to swap the controller for an LSI one and buy a hardware key cheaply. Software keys are significantly more expensive than hardware ones.
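For reference, on an LSI controller with a CacheCade key the cache volume is created with storcli roughly like this (a sketch; the controller number, enclosure 252 and slots 4-5 are placeholders for your SSDs):

storcli /c0 add vd cachecade type=raid1 drives=252:4-5 wb   # build a mirrored write-back CacheCade volume from two SSDs
storcli /c0/v0 set ssdcaching=on                            # attach the cache to an existing virtual drive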

Adaptec RAID controllers


Adaptec controllers have maxCache technology, which also allows an SSD to be used as a cache. We are interested in controller models whose names end with the letter Q.

Q-series controllers can use almost any SSD, not only the SSDs supplied by Adaptec.


Article about caching on SSD on Habré (controllers and OS).

Software SSD caching technologies


I will not cover these technologies here; practically every OS supports them now. I recall that btrfs, for example, automatically directs read requests to the device with the shortest queue, i.e. the SSD.
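As an illustration, a hybrid volume with lvmcache can be assembled roughly like this (a sketch; /dev/sdb is the HDD, /dev/sdc the caching SSD, and the VG/LV names are made up):

pvcreate /dev/sdb /dev/sdc
vgcreate vg0 /dev/sdb /dev/sdc
lvcreate -n data -l 100%PVS vg0 /dev/sdb                        # data volume on the HDD
lvcreate --type cache-pool -n ssdcache -l 90%PVS vg0 /dev/sdc   # cache pool on the SSD
lvconvert --type cache --cachepool vg0/ssdcache vg0/data        # attach the SSD cache to the data volume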

SATA III SSD on SATA II interface


Since there is not always a slot or budget for a new controller, the question arises: how well does a SATA III SSD work on an outdated SATA II interface?

Let's do a little test. As a test subject, we will have an Intel S3710 400GB SATA III SSD.
|  | Random read, IOPS | Avg read latency, ms | Random write, IOPS | Avg write latency, ms | Linear read, MB/s | Linear write, MB/s |
|---|---|---|---|---|---|---|
| SATA II | 21241 | 2 | 13580 | 4 | 282 | 235 |
| SATA III | 68073 | 0.468 | 61392 | 0.52 | 514 | 462 |

Commands used to test speed
fio --name LinRead --eta-newline=5s --filename=/dev/sda --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
fio --name LinWrite --eta-newline=5s --filename=/dev/sda --rw=write --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
fio --name RandRead --eta-newline=5s --filename=/dev/sda --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=4 --runtime=60 --group_reporting
fio --name RandWrite --eta-newline=5s --filename=/dev/sda --rw=randwrite --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=4 --runtime=60 --group_reporting


As you can see, the differences in linear speed, IOPS, and latency are substantial, so it makes sense to use the SATA III interface whenever possible, and if it is absent, to install a SATA III controller.

In fairness, I should say that in other experiments the difference in random read and write speed turned out to be insignificant. Perhaps such a big difference in IOPS between SATA II and SATA III occurred because I had a particularly poor SATA II controller or a buggy driver.

In any case, you should check the speed of your SATA II ports yourself: you may well have a similarly slow controller, in which case switching to a SATA III controller is required.

PCIe SSD on PCI-e 2.0 or 1.0 bus


As you know, the fastest SSDs are PCI-e NVMe drives, which are not limited by the SAS or SATA protocols.

However, when installing modern PCI-e SSDs, you have to take into account that most of them use only 4 PCI-e lanes, usually PCI-e 3.0 or 3.1.

Now let's look at the PCI-e bus speed table.
PCI Express bandwidth, GB/s

| Release year | PCI Express version | Encoding | Transfer rate | ×4 | ×8 | ×16 |
|---|---|---|---|---|---|---|
| 2002 | 1.0 | 8b/10b | 0.50 GB/s | 1.0 GB/s | 2.0 GB/s | 4.0 GB/s |
| 2007 | 2.0 | 8b/10b | 1.0 GB/s | 2.0 GB/s | 4.0 GB/s | 8.0 GB/s |
| 2010 | 3.0 | 128b/130b | 1.97 GB/s | 3.94 GB/s | 7.88 GB/s | 15.8 GB/s |
When installing a PCIe 3.0 x4 SSD into a PCI-e 2.0 slot, it will run on the same number of lanes but at a significantly lower per-lane speed. The problem is that the linear speeds of modern PCI-e SSDs exceed the throughput of the PCI-e 2.0 bus, let alone PCI-e 1.0.
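You can check both what a card supports and what it actually negotiated in a given slot with lspci (a quick check; the PCI address 82:00.0 is a placeholder for your device):

lspci | grep -i -E 'non-volatile|nvme|flash'     # find the SSD's PCI address
lspci -s 82:00.0 -vv | grep -E 'LnkCap|LnkSta'   # LnkCap = what the card supports, LnkSta = what was actually negotiated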

M.2 SSD and PCI-e adapter

There are some good upgrade options where we buy a $10 adapter and install an M.2 SSD into the server, but again, good SSDs will be throttled (especially on PCI-e 1.0), and M.2 SSDs are not always ready for server loads: they often lack high endurance, power-loss protection, and stable performance (low-cost models slow down once the SLC cache fills up).

So this method is only suitable for a server with a PCI-e 2.0 bus that is busy with non-critical work.

PCI-E 2.0 x8 SSD


The most cost-effective upgrade is to use a PCI-E 2.0 x8 SSD, both for servers with a PCI-e 1.0 bus (bandwidth up to 2 GB/s on x8) and with PCI-e 2.0 (up to 4 GB/s).

Such server SSDs can now be bought quite inexpensively on various marketplaces and online auctions, including in Russia.

I have compiled a table of such obsolete SSDs that will give your old server a decent boost. At the end of the table I added several SSDs with a PCI-E 3.0 x8 interface, in case you get lucky and one turns up at a reasonable price.

| Model | TB | PBW | PCI-E | 4k read IOPS, K | 4k write IOPS, K | Read, MB/s | Write, MB/s |
|---|---|---|---|---|---|---|---|
| Fusion-io ioDrive II DUO MLC | 2.4 | 32.5 | 2.0 x8 | 480 | 490 | 3000 | 2500 |
| SanDisk Fusion ioMemory SX350-1300 | 1.3 | 4 | 2.0 x8 | 225 | 345 | 2800 | 1300 |
| SanDisk Fusion ioMemory PX600-1300 | 1.3 | 16 | 2.0 x8 | 235 | 375 | 2700 | 1700 |
| SanDisk Fusion ioMemory SX350-1600 | 1.6 | 5.5 | 2.0 x8 | 270 | 375 | 2800 | 1700 |
| SanDisk Fusion ioMemory SX300-3200 | 3.2 | 11 | 2.0 x8 | 345 | 385 | 2700 | 2200 |
| SanDisk Fusion ioMemory SX350-3200 | 3.2 | 11 | 2.0 x8 | 345 | 385 | 2800 | 2200 |
| SanDisk Fusion ioMemory PX600 | 2.6 | 32 | 2.0 x8 | 350 | 385 | 2700 | 2200 |
| Huawei ES3000 V2 | 1.6 | 8.76 | 2.0 x8 | 395 | 270 | 1550 | 1100 |
| Huawei ES3000 V2 | 3.2 | 17.52 | 2.0 x8 | 770 | 230 | 3100 | 2200 |
| EMC XtremSF | 2.2 | — | 2.0 x8 | 340 | 110 | 2700 | 1000 |
| HGST Virident FlashMAX II | 2.2 | 33 | 2.0 x8 | 350 | 103 | 2700 | 1000 |
| HGST Virident FlashMAX II | 4.8 | 10.1 | 2.0 x8 | 269 | 51 | 2600 | 900 |
| HGST Virident FlashMAX III | 2.2 | 7.1 | 2.0 x8 | 531 | 59 | 2700 | 1400 |
| Dell Micron P420M | 1.4 | 9.2 | 2.0 x8 | 750 | 95 | 3300 | 630 |
| Micron P420M | 1.4 | 9.2 | 2.0 x8 | 750 | 95 | 3300 | 630 |
| HGST SN260 | 1.6 | 25.10 | 3.0 x8 | 1200 | 200 | 6170 | 2200 |
| HGST SN260 | 3.2 | 17.52 | 3.0 x8 | 1200 | 200 | 6170 | 2200 |
| Intel P3608 | 3.2 | 17.5 | 3.0 x8 | 850 | 80 | 4500 | 2600 |
| Kingston DCP1000 | 3.2 | 2.78 | 3.0 x8 | 1000 | 180 | 6800 | 6000 |
| Oracle F320 | 3.2 | 29 | 3.0 x8 | 750 | 120 | 5500 | 1800 |
| Samsung PM1725 | 3.2 | 29 | 3.0 x8 | 1000 | 120 | 6000 | 2000 |
| Samsung PM1725a | 3.2 | 29 | 3.0 x8 | 1000 | 180 | 6200 | 2600 |
| Samsung PM1725b | 3.2 | 18 | 3.0 x8 | 980 | 180 | 6200 | 2600 |

Among these SSDs, the Fusion ioMemory drives stand out. Fusion-io's chief scientist was Steve Wozniak, and the company was later bought by SanDisk for $1.2 billion. At one time these drives cost upwards of $50,000 apiece; now a new drive of 1 TB or more can be bought for a few hundred dollars.

If you look at the table, you can see that they have rather high write IOPS, almost equal to their read IOPS. Given their current price, in my opinion these SSDs deserve attention.

True, they have several features:

  1. They cannot be used as boot devices.
  2. A driver is needed to use them. Drivers exist for practically everything, but for the latest Linux versions they have to be compiled.
  3. The optimal sector size is 4096 bytes (512 is also supported); a short sketch of preparing a drive is given after this list.
  4. In the worst case the driver can consume quite a lot of RAM (with a 512-byte sector size).
  5. Performance depends on CPU speed, so it is better to turn off power-saving technologies. This is both a plus and a minus, because with a powerful processor the device can work even faster than stated in the specifications.
  6. They need good cooling. For servers, this should not be a problem.
  7. Not recommended for ESXi, since ESXi prefers 512n-sector disks, and this may entail large memory consumption by the driver.
  8. Vendor-branded versions of these SSDs are generally not supported by the vendors up to the level of the latest SanDisk driver (March 2019).
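As an illustration of points 2, 3 and 5, preparing an ioMemory drive under Linux looks roughly like this (a sketch, assuming the SanDisk VSL driver and utilities are already installed; /dev/fct0 is the typical device name):

cpupower frequency-set -g performance   # disable CPU power saving for stable latency
fio-status -a                           # list ioMemory devices and their state
fio-detach /dev/fct0                    # the device must be detached before formatting
fio-format -b 4K /dev/fct0              # reformat to 4096-byte sectors to reduce driver RAM usage
fio-attach /dev/fct0                    # re-attach; the block device then appears (e.g. /dev/fioa)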

I tested a Fusion ioMemory against a fairly modern server SSD, the Intel P3700 PCI-E 3.0 x4 (the latter is four times more expensive than a Fusion of similar capacity). At the same time, you can see how much the speed suffers because of the x4 link.
[Test results: Fusion PX600 1.3TB PCI-E 2.0 x8 vs. Intel P3700 1.6TB PCI-E 3.0 x4]

Yes, the linear read speed of the Intel P3700 is definitely strangled here: the spec sheet promises 2800 MB/s, and we get 1469 MB/s. Still, on the whole it can be said that with a PCI-e 2.0 bus you can also use PCI-E 3.0 x4 server SSDs, if you can get them at a reasonable price.

Conclusions


The disk subsystem of an old server with a PCI-E 1.0 or 2.0 bus can be significantly accelerated using SSDs that utilize 8 PCI-E lanes, which gives bandwidth of up to 4 GB/s (PCI-E 2.0) or 2 GB/s (PCI-E 1.0). The most cost-effective way to do this is with obsolete PCI-E 2.0 x8 SSDs.

Compromise options are also easy to implement: buying a CacheCade key for an LSI controller, or replacing an Adaptec controller with its Q-version.

And the most mundane option is to buy a SATA III (RAID) controller so that SATA SSDs run at full speed, and move everything that needs speed onto them.

Source: https://habr.com/ru/post/454222/

