
NetApp HCI: a new-generation hyperconverged system for working with data

Data storage and management systems have come a long way, and they remain critical for any corporate IT solution. Today the most advanced are hyperconverged systems, which offer a number of advantages over the outdated legacy systems still in use: they are cheaper, easier to manage, easy to scale, and allow resources to be matched precisely to the needs of the enterprise.



This article compares traditional, converged, and hyperconverged data storage and processing systems, considers scale-out options for such enterprise systems and the scale-out architecture, and presents a description and the characteristics of NetApp HCI, a new-generation hyperconverged system for working with data.

Converged and traditional systems
Converged systems are the result of natural progress, a departure from the traditional IT infrastructure, which has always involved creating separate and unrelated "silos" for data storage and processing.

In a legacy IT environment, as a rule, separate administrative groups (teams of specialists) were created for storage systems, for servers, and for network support. For example, a storage group handled the purchase, provisioning, and support of the storage infrastructure; it also maintained relationships with storage hardware vendors. The same was true for the server and network groups.

A converged system combines two or more of these IT infrastructure components as a pre-engineered solution. The best solutions of this class combine all three components, closely integrated by the corresponding software.

The clear advantage of this approach is a relatively simple design for a complex IT infrastructure. The idea is to have a single support team and a single vendor for all the necessary components.

Converged and hyperconverged systems

Hyperconverged systems (HCS) take the very concept of "convergence" to a new level. Converged systems typically consist of individual components designed to work well together, whereas HCSs are typically modular solutions designed to scale by adding modules to the system. In effect, they decompose a large storage system into small building blocks by means of the controller software layer.


Typical architectures of traditional and hyperconverged data storage and management systems

The more storage devices are added, the greater the overall capacity and performance. Instead of expanding by adding disks, memory, or processors, you simply add new stand-alone modules containing all the necessary resources. In addition to the simplified architecture, administration is simplified as well, since an HCS is managed through a single interface.

Vertical and horizontal scalability

Scalability is the ability of a system to continue to function properly as its size (or volume) changes to meet the needs of users. In some contexts, scalability means the ability to satisfy larger or smaller user requests; in the context of storage, it more often means meeting demand for a larger volume.


Schematic representation of vertical and horizontal scalability

Vertical scalability (scale-up) means increasing the capabilities of existing hardware or software by adding resources to the physical system, for example, adding computing power to a server to make it faster. For storage systems, this means adding controllers, disks, and input/output (I/O) modules to the existing system as needed.

Horizontal scalability (scale-out) involves connecting many autonomous units so that they work as a single logical unit. With horizontal scaling, for example, the nodes may even be geographically distant from one another.
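The difference between the two approaches can be sketched with a simplified capacity model. The node and shelf sizes below are illustrative assumptions for the example, not NetApp specifications:

```python
# Simplified model contrasting scale-up and scale-out growth.
# All node/shelf sizes and IOPS figures are illustrative assumptions.

def scale_up(base_tb: float, added_shelves: int, shelf_tb: float) -> float:
    """Vertical scaling: grow one system by adding disk shelves.
    The controller count stays fixed, so performance does not grow
    together with capacity."""
    return base_tb + added_shelves * shelf_tb

def scale_out(nodes: int, node_tb: float, node_iops: int) -> tuple[float, int]:
    """Horizontal scaling: every new node adds capacity AND performance."""
    return nodes * node_tb, nodes * node_iops

# One system with three extra 20 TB shelves (same two controllers):
print(scale_up(40.0, 3, 20.0))        # 100.0

# A four-node cluster of 25 TB / 50k-IOPS nodes:
print(scale_out(4, 25.0, 50_000))     # (100.0, 200000)
```

Both paths reach the same raw capacity, but only the scale-out cluster grows its aggregate performance along the way, which is the core argument for the scale-out storage architecture described next.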

Scale-out storage architecture

Under the scale-out architecture concept, new groups of devices can be added to the system almost without limit, as required. Each device (or node) has a certain storage capacity; it can, in turn, be populated with disk devices and has its own computing power and input/output (I/O) bandwidth.

Adding these resources increases not only capacity but also data-handling performance. The system grows as nodes are added to the cluster. For this, x86 servers with a specialized OS are often used, with storage connected over an external network.

Users administer the cluster as a single system and manage data in a global namespace or a distributed file system, so they do not have to worry about the actual physical location of the data.

NetApp Enterprise-Scale HCI: A New Generation of Hyper-Converged Systems

Of course, in some cases bespoke solutions (custom storage systems, networks, servers) remain the best choice. However, other options, such as "as-a-service" offerings, converged infrastructure (CI), and software-defined systems (SDS), are quickly capturing the IT infrastructure market, and this movement will dominate over the next few years.

The CI market is growing very fast as organizations seek to reduce operational complexity and speed up IT deployment. Hyperconverged infrastructure (HCI) platforms emerged as a natural next step, as organizations move on to building next-generation data centers.

It is also expected that by 2020, 70% of storage management functions will be automated and integrated into the infrastructure platform. NetApp HCI is a next-generation hyperconverged infrastructure and the first HCI platform designed for enterprise applications.

The first generation of HCI solutions was better suited to relatively small projects: customers found that they had quite a few architectural constraints affecting many aspects, such as performance, automation, mixed workloads, scaling, and configuration flexibility.

This, of course, contradicted the strategy of building the next-generation data center, where agility, scaling, automation, and predictability are mandatory requirements.

Introduction to NetApp HCI

NetApp HCI is the first hyperconverged infrastructure solution for the enterprise. It provides a cloud-like infrastructure (storage, compute, and network resources) in an agile, scalable, easy-to-manage standard four-node building block.

The solution is based on SolidFire all-flash storage. Simple centralized management through the VMware vCenter Plug-in gives complete control over the entire infrastructure through an intuitive user interface.

Integration with NetApp ONTAP Select opens up a new range of deployment options both for existing NetApp customers and for those who want to upgrade their data center. NetApp HCI addresses the limitations of the current generation of HCI offerings in four key ways:

Guaranteed performance. Dedicated platforms and heavy overprovisioning no longer seem an acceptable choice. NetApp HCI provides granular control of each application, which eliminates "noisy neighbors" even though all applications are deployed on a common platform. According to the company, this eliminates more than 90% of traditional performance problems.

Flexibility and scaling. Previous generations of HCI had fixed resources, limited to a few node configurations. NetApp HCI has independent storage and compute resources, which makes it well suited to configurations of any scale.

Automated infrastructure. The new NetApp Deployment Engine (NDE) utility eliminates most of the manual steps in deploying infrastructure, and the VMware vCenter Plug-in makes management easy and intuitive. A corresponding API allows integration with higher-level management systems and provides backup and disaster recovery. The time for the system to return to a working state after a failure does not exceed 30 minutes.

The NetApp Data Fabric. Early generations of HCI platforms required introducing new resource groups into the IT infrastructure, which is clearly an inefficient approach. NetApp HCI integrates into the NetApp Data Fabric, which increases data mobility, visibility, and protection, allowing the full potential of the data to be used on-premises, in a public cloud, or in a hybrid cloud.

NetApp Data Fabric


Data Fabric deployment model

NetApp HCI is an out-of-the-box solution that is immediately ready to work in a Data Fabric environment, so the user gets access to all of his data residing in a public or hybrid cloud.

NetApp Data Fabric is a software-defined approach to data management that allows enterprises to use otherwise incompatible storage resources and provides continuous management of data movement between on-premises and cloud storage.

The products and services that make up the NetApp Data Fabric are designed to give customers freedom: to move data to and from the cloud quickly and efficiently, to restore cloud data when necessary, and to move it from one provider's cloud to another's.

The foundation of the NetApp Data Fabric is the clustered Data ONTAP storage operating system. As part of the Data Fabric, NetApp has developed a special cloud version, ONTAP Cloud, which creates a NetApp virtual storage system within a public cloud environment.

This platform stores data in the same way as NetApp's on-premises systems do. That continuity allows administrators to move data where and when it is needed without any intermediate transformations, which in effect lets the enterprise data center be extended into the public cloud provider's infrastructure.

NetApp first unveiled the Data Fabric concept in 2014 at its annual Insight conference. According to NetApp, this was a response to customers' need for a unified view of enterprise data stored in many internal and external data centers. In particular, with the Data Fabric, enterprises gained easy access to their corporate data in the public clouds of Google Cloud Platform, Amazon Simple Storage Service (S3), Microsoft Azure, and IBM SoftLayer.

Enterprise Compliance

One of the biggest problems in any data center is delivering predictable performance when it is needed. This is particularly true for sprawling applications whose workloads can at times be very intense.

Every enterprise runs a large number of corporate applications on the same IT infrastructure, so there is always a danger that one application will interfere with the work of another.

In particular, important applications such as virtual desktop infrastructure (VDI) and databases have quite different input/output patterns and tend to affect each other. NetApp HCI eliminates this unpredictability by providing the necessary performance at every moment.
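The idea behind this kind of performance guarantee can be sketched as per-volume minimum/maximum IOPS limits, in the spirit of SolidFire-style QoS. The allocator below is a simplified model; the volume names, numbers, and field names are illustrative assumptions, not the actual NetApp/SolidFire API:

```python
# Sketch of per-volume QoS allocation with min/max IOPS limits.
# Names and numbers are illustrative assumptions, not a real API.

from dataclasses import dataclass

@dataclass
class QoS:
    min_iops: int  # guaranteed floor, honored even under contention
    max_iops: int  # hard ceiling, capping any "noisy neighbor"

def allocate(capacity: int, volumes: list[tuple[str, int, QoS]]) -> dict[str, int]:
    """Grant every volume up to its guaranteed minimum first, then share
    the remaining capacity, never exceeding any volume's maximum."""
    alloc = {name: min(demand, q.min_iops) for name, demand, q in volumes}
    remaining = capacity - sum(alloc.values())
    for name, demand, q in volumes:
        extra = min(min(demand, q.max_iops) - alloc[name], remaining)
        alloc[name] += extra
        remaining -= extra
    return alloc

# A 100k-IOPS cluster: the "noisy" volume demands 80k but is capped at
# its 20k maximum, so the database still receives its full allowance.
print(allocate(100_000, [
    ("vdi",   15_000, QoS(min_iops=20_000, max_iops=40_000)),
    ("db",    90_000, QoS(min_iops=30_000, max_iops=60_000)),
    ("noisy", 80_000, QoS(min_iops=5_000,  max_iops=20_000)),
]))
```

The cap on the runaway volume is what makes performance predictable for the others, which is the essence of the "noisy neighbor" elimination described above.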

NetApp HCI is available in small, medium, and large storage and compute configurations, and the system can be expanded in 1RU increments. As a result, enterprises can determine the resources they need very accurately and avoid unused, redundant hardware.

The main task of every IT department is to automate routine tasks, eliminating the risk of user errors associated with manual operations and freeing up resources for higher-priority, more complex business problems. The NetApp Deployment Engine (NDE) eliminates most manual infrastructure deployment operations, while vCenter software makes managing a VMware virtual environment simple and intuitive.

Finally, the API suite allows seamless integration of the storage and data processing subsystem into higher-level management systems and provides backup and disaster recovery.
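For context, SolidFire-based systems expose a JSON-RPC style management API. The sketch below only builds such a request body; the endpoint path, API version, and method name follow the SolidFire Element API convention and are stated here as assumptions rather than verified details of NetApp HCI itself:

```python
import json

def element_rpc_payload(method: str, params: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC request body in the style of the SolidFire
    Element API; it would be POSTed over HTTPS to an endpoint like
    https://<cluster-mvip>/json-rpc/<version>."""
    return json.dumps({"method": method, "params": params, "id": request_id})

# Example: a request asking the cluster for its capacity statistics.
print(element_rpc_payload("GetClusterCapacity", {}))
```

A higher-level orchestrator or backup tool would wrap calls like this to automate provisioning and disaster recovery workflows.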

NetApp HCI integrates with and supports the following technologies:




NetApp HCI minimal configuration: two chassis which together hold four flash storage modules, two compute modules, and two empty bays for additional modules


Rear view of a single NetApp HCI module

NetApp HCI Specs



Effective block capacity is determined by deduplication and depends on the type of data.
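As a rough illustration of how such a figure is derived, effective capacity can be estimated from usable capacity and data-reduction ratios. The ratios below are assumptions chosen for the example, not published NetApp figures:

```python
# Toy estimate of effective block capacity after data reduction.
# The ratios are illustrative assumptions; real savings depend on the data.

def effective_capacity_tb(usable_tb: float, dedup_ratio: float,
                          compression_ratio: float) -> float:
    """Effective capacity = usable capacity x combined reduction ratio."""
    return usable_tb * dedup_ratio * compression_ratio

# 20 TB usable, 2:1 dedup (e.g. many similar VDI images), 1.5:1 compression:
print(effective_capacity_tb(20.0, 2.0, 1.5))  # 60.0
```

This is why the same hardware shows very different effective capacities for highly redundant data (such as VDI clones) than for already-compressed data.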

The system will be available to order no earlier than this fall. For detailed information, please contact netapp@muk.ua.

Source: https://habr.com/ru/post/332012/
