
HP StorageWorks P4000 G2 and SAN Cost Optimization

In light of the explosive growth of server virtualization, ever-increasing data volumes and rising business-continuity requirements, the usual patterns of using storage systems no longer work fully and need adjustment. HP has not stood aside from this trend and has promptly updated its StorageWorks P4000 mid-range storage system, previously known as LeftHand.

The new product received the G2 (second generation) index and several new features. There was, of course, no radical change of format: it is still the same cluster built from full-featured storage nodes. A successful product has simply been refined in line with changed market requirements.

Beyond the purely technical issues, there is also a strategic subtext here. Against the background of reduced IT spending in the world's largest corporations, not to mention medium-sized companies, customers need a solution that also saves money along the way. It does not have to be especially cheap at the initial purchase stage, but the total cost of ownership (TCO) and the cost of adding new services must be made attractive. It is this principle that HP has tried to implement in the new system.

The main selling points of the G2 are easy integration into the storage network and a reduced cost of operation. This is achieved in several ways. First, even the base configuration has rich functionality: synchronous and asynchronous replication, thin provisioning, snapshots, and so on. You can buy just that and, as you grow, add new nodes to the network. Meanwhile, all applications keep running while new hardware is added or software and firmware are updated. This matters both for business continuity and for the ease of expanding the system.
HP positions the P4000 G2 for server virtualization and for databases, e-mail and business applications. In principle, virtually any storage system can be used for these tasks, but the G2 stands out thanks to its remarkable scalability and dynamic allocation of disk space. After all, each node contains not only disks but also a processor, memory and network ports, so adding a node increases not just the capacity but also the performance and fault tolerance of the entire cluster.
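This scale-out property can be sketched with a toy model (purely illustrative; the class names and figures below are hypothetical and are not HP specifications or APIs):

```python
# Toy model of a scale-out storage cluster: each node brings its own
# disks, CPU and network ports, so adding a node grows capacity and
# aggregate throughput together (unlike a classic dual-controller
# array, where only capacity grows).

class StorageNode:
    def __init__(self, capacity_tb: int, throughput_mbps: int):
        self.capacity_tb = capacity_tb
        self.throughput_mbps = throughput_mbps

class Cluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, node: StorageNode):
        # In the real system applications keep running while nodes are
        # added; here we simply extend the pool.
        self.nodes.append(node)

    @property
    def capacity_tb(self) -> int:
        return sum(n.capacity_tb for n in self.nodes)

    @property
    def throughput_mbps(self) -> int:
        # Aggregate throughput grows with every node, because each node
        # serves I/O through its own network ports.
        return sum(n.throughput_mbps for n in self.nodes)

cluster = Cluster()
for _ in range(4):  # hypothetical nodes: 12 TB and 1000 MB/s each
    cluster.add_node(StorageNode(capacity_tb=12, throughput_mbps=1000))

print(cluster.capacity_tb)      # 48
print(cluster.throughput_mbps)  # 4000
```

Each added node raises both totals, which is the essence of the scale-out argument above.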

The experts we interviewed believed that serious savings from the G2 compared with earlier systems are quite possible, and that HP's claimed 25% saving in price per gigabyte of data is realistic. But, as usual, the devil is in the details. Nobody hides that such savings are achievable at corporate scale, where whole pools of P4000 G2 systems are deployed. Within a single company, even a fairly large one, the savings will be more modest.

Still, the system certainly looks interesting. Those familiar with the LeftHand legacy single out Network RAID as one of its main benefits: data is duplicated across the entire storage network, not just inside a single device, so access to data is not interrupted even if an entire node fails. And, of course, the ability to pack 120 terabytes into 10 rack units can also be useful, especially in tandem with a 10-gigabit network.
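The Network RAID behaviour described above can be illustrated with a minimal sketch (a simplified model of synchronous replication across nodes, written for this article; it is not HP's actual implementation):

```python
# Minimal model of Network RAID: every block is written synchronously
# to two different nodes, so losing any single node still leaves at
# least one readable replica of each block.

class NetworkRaidCluster:
    def __init__(self, node_count: int, replicas: int = 2):
        self.nodes = [dict() for _ in range(node_count)]
        self.replicas = replicas

    def write(self, block_id: int, data: str):
        # Place replicas on distinct nodes, chosen round-robin by block id.
        for r in range(self.replicas):
            node = self.nodes[(block_id + r) % len(self.nodes)]
            node[block_id] = data

    def read(self, block_id: int, failed=()):
        # Any surviving replica satisfies the read.
        for i, node in enumerate(self.nodes):
            if i not in failed and block_id in node:
                return node[block_id]
        raise IOError(f"block {block_id} lost")

cluster = NetworkRaidCluster(node_count=4)
for b in range(8):
    cluster.write(b, f"data-{b}")

# Node 1 fails: every block is still readable from its other replica.
assert all(cluster.read(b, failed={1}) == f"data-{b}" for b in range(8))
```

With a replica count of two, any single-node failure is survivable; raising the replica count trades capacity for tolerance of multiple failures, which is the design choice the cluster administrator makes per volume.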

HP's traditionally strong technical support, together with the ease of integrating new systems into a storage area network, is another cost-optimization factor. Even if an entire shelf fails, replacing it with a new one is not difficult, and the data will be restored from the SAN.

Overall, opinions agreed that the update is more than cosmetic and that the system is a very worthy option for building a large corporate storage network. It is at that scale that the lower total cost of ownership will show itself best.

Source: https://habr.com/ru/post/100416/
