In the previous section, we started getting acquainted with the new Fluid Data technology, designed to make life easier for those who deal with really big data. We also looked at some, though not all, of the advantages of this approach, using Dell Compellent storage systems as an example. Without shelving the topic, we suggest continuing the acquaintance.
Sir, protect yourself! And back up!
Admins come in two kinds: those who don't make backups yet, and those who already do. The beard on this joke could probably be wrapped around the Earth a couple of times, but it hasn't lost its relevance. Today, the continuity of business processes is often critical for a company, which means a good storage system must offer a solution to the possible problems. What kind of "troubles" should the hardware expect? At a minimum: power outages, the human factor (user errors), viruses, and so on. The trouble is that traditional approaches to protecting and restoring data have accumulated a fair share of "atavisms" over time: they demand too much disk space while hardly shining in reliability or speed. For example, taking snapshots protects data quite effectively (provided, of course, that the interval between two consecutive snapshots is short enough). However, this often requires a full mirror copy and clones of the entire volume. There is also RAID, which does not make the task any easier. Add to this the inefficient allocation of capacity that we discussed in the previous article, and you get a stalemate: you can take snapshots, but there is simply nowhere to store a large number of them. So you either increase the interval between two consecutive recovery points, or accept being able to roll back only a short distance. Obviously, neither approach is a treat.
Therefore, the solution used in Dell Compellent systems, called Data Instant Replay, is quite natural and logical. Its principle is somewhat similar to how most online games behave: only information about changes in the game world is streamed to the server, not video/audio/chat/cursing... Applied to backups, this means giving up full mirror copies and subsequent clones of the volume: only the data that has changed since the last snapshot is recorded. Such an approach inevitably saves disk space, and in combination with Dynamic Capacity the profit is doubled.
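To make the idea more concrete, here is a toy sketch of block-level snapshots that keep only the blocks changed since the previous snapshot. It is purely illustrative: the block size, function names and hashing scheme are our own assumptions, not Dell's implementation.

```python
# Illustrative sketch: a toy block-level "replay" that keeps only the blocks
# changed since the previous snapshot. Not Dell's actual implementation.
import hashlib

BLOCK_SIZE = 4096  # assumed block size for the example

def split_blocks(volume: bytes):
    """Split a volume image into fixed-size blocks."""
    return [volume[i:i + BLOCK_SIZE] for i in range(0, len(volume), BLOCK_SIZE)]

def take_replay(volume: bytes, previous_hashes: dict):
    """Return (delta, hashes): only blocks whose content changed are stored."""
    delta, hashes = {}, {}
    for index, block in enumerate(split_blocks(volume)):
        digest = hashlib.sha256(block).hexdigest()
        hashes[index] = digest
        if previous_hashes.get(index) != digest:   # new or modified block
            delta[index] = block                   # store only the change
    return delta, hashes

# The first replay stores everything, later ones only the deltas.
base = b"A" * BLOCK_SIZE * 4
delta0, state = take_replay(base, {})                       # 4 blocks stored
changed = base[:BLOCK_SIZE] + b"B" * BLOCK_SIZE + base[2 * BLOCK_SIZE:]
delta1, state = take_replay(changed, state)                 # 1 block stored
print(len(delta0), len(delta1))                             # -> 4 1
```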

Well, having obtained such a profitable tool and shortened the interval between two snapshots to a minimum, we should take care of automating the process. You are not going to poke the same button every 15 seconds, are you? That, so to speak, smacks of Foxconn. Fortunately, this trivial task is solved by a no less trivial built-in scheduler that starts the "replay" process automatically.
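As a rough illustration of what such automation amounts to, here is a minimal scheduler loop; the interval, cycle count and the pluggable take_snapshot callable are hypothetical, standing in for whatever the built-in scheduler actually runs.

```python
# Minimal scheduler sketch (hypothetical): trigger a "replay" every N seconds
# instead of pressing the button by hand.
import time

def run_replay_schedule(take_snapshot, interval_seconds: int = 15, cycles: int = 3):
    """Call take_snapshot() every interval_seconds, cycles times."""
    for cycle in range(cycles):
        take_snapshot()
        print(f"replay #{cycle + 1} created, next one in {interval_seconds} s")
        time.sleep(interval_seconds)

# Example: plug in any callable that actually creates the snapshot.
run_replay_schedule(lambda: print("snapshot taken"), interval_seconds=15, cycles=3)
```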
It remains only to give some real-life example (the picture doesn't count) to prove the "usefulness" of the technology. Imagine that the task is to test new applications or services that the company may be planning to roll out in the future. Where is the guarantee that everything will go off without a hitch during the test? With Data Instant Replay this can be done without any risk of losing or corrupting data: take a snapshot before the experiment and, if something goes wrong, simply roll back to it.
The admins replicated and replicated, but never quite out-replicated
And now let's consider a situation typical of companies with several large branch offices that are geographically remote from each other but need access to the same information. That information must be up to date for everyone and be updated promptly whenever changes are made. There are several solutions. For example, you can run everything "in the cloud". But what if you need to back up data to a remote site?
Today this approach is, shall we say, not particularly popular because of its cost and the complexity of setting it up. For example, it sometimes requires identical equipment at both sites. On top of that, organizing a high-speed communication channel to keep all this "goodness" in sync brings additional expenses.
But who said these problems cannot be solved at a reasonable price? Certainly not Dell, which developed a "thin" replication technology (Thin Replication) called Remote Instant Replay. Its ideology is similar to the backup method described above: after the initial synchronization of the sites, only data changes travel over the communication channels.
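The scheme can be pictured with the following simplified sketch: a one-time full sync, after which only changed blocks cross the channel and the remote site merges them into its copy. The function names are illustrative assumptions, not Dell's API.

```python
# Sketch of the "thin replication" idea (simplified assumption): full sync once,
# then only changed blocks travel over the channel to the remote site.
def initial_sync(local_blocks: dict) -> dict:
    """One-time full copy: every block is shipped to the remote site."""
    return dict(local_blocks)

def send_delta(local_blocks: dict, remote_blocks: dict) -> dict:
    """Only blocks that differ from the remote copy go over the channel."""
    return {i: b for i, b in local_blocks.items() if remote_blocks.get(i) != b}

def apply_delta(remote_blocks: dict, delta: dict) -> dict:
    """The remote site merges the received changes into its copy."""
    updated = dict(remote_blocks)
    updated.update(delta)
    return updated

# Example: 1000 blocks synced once, then changes to only 3 of them.
local = {i: f"data-{i}" for i in range(1000)}
remote = initial_sync(local)               # heavy, but happens only once
for i in (5, 42, 777):
    local[i] = f"data-{i}-v2"
delta = send_delta(local, remote)          # only 3 blocks cross the channel
remote = apply_delta(remote, delta)
print(len(delta), remote == local)         # -> 3 True
```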
The benefits are obvious:
• lower equipment costs;
• lower costs and bandwidth requirements for communication channels;
• faster data recovery.
Another definite advantage is the relative "omnivorousness" of the technology: you can use inexpensive SAS and SATA drives at the backup sites and not go broke on channel bandwidth. Even more conveniently, the Fibre Channel-to-iSCSI converter built into Dell Compellent systems lets you replicate over an ordinary IP network without separate protocol-conversion hardware.
You can never have too much happiness
Finally, let's dwell on another pressing problem that sooner or later catches up with any successful business: the inevitable growth in the volume of information. For a storage system, this translates into a scalability requirement. The trouble is that solution vendors are no fools either, and they want to make money. A simple analogy: a manufacturer is able to produce 64 GB flash drives at a reasonable price. But why would it, if 8 GB drives are currently at the "peak of popularity"? And where would the 1, 2 and 4 GB drives go then? Obviously, the consumer has to be led into the "better life" gradually: this benefits both manufacturers and sellers (only the consumer loses out). So solutions for 16 GB appear first, then 32 GB, and only then the coveted 64 GB.
How does this relate to storage systems? Directly. In the "classic" case, manufacturers artificially limit the capabilities of their solutions, designing them in advance with rapid obsolescence in mind.
Dell, believe it or not, decided to move away from this principle. Dell Compellent storage systems are designed for a long life cycle. As "appetites" grow, the platform itself can scale from two terabytes to a thousand terabytes of capacity. Variable combinations are allowed, both of server interfaces (FC and iSCSI) and of the disks used (SSD, FC, SAS and SATA). You can even install SAS drives of different capacities and speeds in a single disk shelf.
Special attention has also been paid to fault tolerance. The clustered controllers, each equipped with redundant fans and power supplies, together deliver optimal system performance. At the same time, each controller is connected to the disk shelves and disks independently of the other, eliminating a single point of failure. And virtualization of the controller ports, combined with a duplicated I/O path between servers and disks, eliminates the need to buy additional software.
Afterword
You shouldn't put up with the "predatory" approaches of traditional storage vendors, whose solutions are deliberately doomed to obsolescence and limited in compatibility. Dell Compellent offers dynamic virtualized storage that adapts easily to constant change.
Its main features are worth listing:
• optimization of the data storage process, which allows efficient allocation of disk space;
• built-in intelligent data management functions and the ability to automate them;
• improved snapshot technology;
• a system designed from the start for the long term, without being tied to proprietary technologies or specific manufacturers.
What more is there to add?
If you want it done well, do it with Dell.