
Veritas Access 7.3: pros, cons, pitfalls

Software-defined storage approach




In this article, we review and test Veritas Access 7.3, a new version of a software-defined storage (SDS) system: a multipurpose, scalable storage platform built on ordinary x86 servers, with file, block and object access. Our main goal is to get acquainted with the product, its functionality and its capabilities.

The Veritas name is synonymous with reliability and expertise in information management: the company has a long record of leadership in the backup market and produces solutions for information analytics and high availability. We see the future in the Veritas Access software-defined storage solution and are confident that over time the product will gain popularity and take a key position in the SDS market.
Access is built on InfoScale (formerly Veritas Storage Foundation), a product with a long history that, back before virtualization became widespread, was at the peak of high-availability (HA) solutions. From InfoScale's younger sibling, Veritas Access, we expect the HA success story to continue in software-defined storage form.
One of the advantages of Veritas Access is the price. The product is licensed by the number of active cores (roughly $1,000 per core on the global price list, GPL); the cost is completely independent of the number of disks and the total storage capacity. For the nodes, it therefore makes sense to use single-processor servers with a modest core count but high per-core performance. Roughly speaking, an Access license for a four-node cluster will cost $20,000-$30,000, which is very attractive compared with other SDS solutions, where GPL pricing starts at about $1,500 per terabyte of raw capacity.

The main advantage distinguishing Veritas Access from Ceph and similar SDS solutions is that it is a "boxed", complete enterprise-grade product. Implementing and supporting Access does not involve difficulties that only specialists with deep knowledge of *nix SDS systems can handle. Deploying any SDS solution is not particularly difficult in itself; the real difficulties come during operation: adding nodes, handling node failures, migrating data, upgrading, and so on. With Access all of these procedures are well understood, since the solutions are documented for Storage Foundation. With Ceph, you need qualified personnel who can provide specialized support. Some people are comfortable with a ready-made solution like Access; others prefer the Ceph construction kit with its more flexible, but more demanding, approach. We will not compare Access with other SDS solutions here, but will try to give a short overview of the new product. Perhaps it will help you decide which SDS solution you will be more comfortable working with.

For testing that is as transparent and as close to production as possible, together with colleagues from OCS, one of Veritas's distributors, we assembled a five-node test bed on physical servers: a two-node cluster in the Open Technologies data center and a three-node cluster in the OCS data center. Photos and diagrams are below.

File vs Block vs Object Storage


Veritas Access serves data over all three access types: file, block and object. The only difference is that file access is very easy to configure through the web interface, while configuration of the other two is currently poorly documented and more involved.



NFS and S3 run on the nodes in active-active mode. iSCSI and CIFS in the current 7.3 release run active-passive. Services connected via NFS or S3 will keep working if any node fails, while services connected via CIFS or iSCSI may not survive the loss of the node on which CIFS (Samba) or the iSCSI target is active. It takes some time for the cluster to realize the node has failed and to start the CIFS (Samba) service on another node, and likewise for iSCSI. Active-active for iSCSI and CIFS is promised in future releases.

iSCSI in 7.3 is presented as a tech preview; in Veritas Access 7.3.1, whose release is promised for December 17, 2017, there will be a full active-active iSCSI implementation.

More about modes:


Why SDS?


A bit of theory.

Traditional storage systems do an excellent job with current tasks, ensuring the necessary performance and availability of data. But the cost of storing and managing several generations of storage arrays is definitely not among the strengths of traditional solutions. The scale-up architecture, where performance is increased by replacing individual system components, cannot keep up with the growth of unstructured data in the modern IT world.

Alongside vertical scale-up architectures, scale-out solutions have appeared, where scaling is done by adding new nodes and distributing the load between them. With this approach, the problem of growing unstructured data is solved quickly and easily.

Disadvantages of traditional storage systems:

• High price
• Storage quickly becomes outdated
• Complicated management of storage systems of different generations and from different manufacturers
• Scale-up-only scaling architecture

Advantages of scale-out software-defined storage:

• No dependence on specific hardware: any x86 server can serve as a node
• Flexible, reliable solution with simple scaling and built-in redundancy
• Predictable, policy-based service levels for applications
• Low cost
• High performance

Disadvantages of software-defined storage system:

• Lack of a single point for technical support
• Requires more qualified engineers
• Requires N times more resources (redundancy overhead)

When trying to expand a traditional storage system after 3-5 years of use, the user faces dramatically increasing costs, which push them toward buying a new array. Once separate devices from different manufacturers accumulate, control over the quality and reliability of the storage subsystem as a whole is lost, which, as a rule, makes solving business problems much harder.

Software-defined storage lets you look at the problem of housing growing data from another angle: it preserves the performance and availability of information without breeding a zoo of arrays from various manufacturers and generations, which makes working with data easier in every respect.

Key Business Benefits







Main technical specifications





ARCHITECTURE


Veritas Access is flexible and scales easily by adding nodes. The solution is suitable, first of all, for working with unstructured data, but also for other storage tasks. Demand for software-defined storage dictates a move toward multipurpose, multiprotocol products combining reliability, high performance and affordable cost.

The Veritas Access cluster consists of connected server nodes. Together they form a unified cluster that shares all major resources. The minimal reference architecture consists of a two-node converged storage solution.



The foundation of Veritas Access: Veritas Storage Foundation

The lessons learned over the Veritas Storage Foundation life cycle have found a logical home in a modern SDS-class product, Veritas Access. In particular, the following components live on in Access:


Veritas Access 7.3 runs on x86 servers under Red Hat Enterprise Linux releases 6.6-6.8; starting with release 7.3.0.1, RHEL 7.3-7.4. Clustering is provided by VCS technology. Any available disks are used as the storage pool. To add a node to the cluster, the server must have at least four Ethernet ports: at least two for client access and at least two for the inter-node interconnect. Veritas recommends InfiniBand for the interconnect; the minimum requirement is 1 Gb Ethernet.

For production, the recommendation is 32 GB of RAM per node. Veritas Cluster File System (CFS) caches data in RAM, so any memory above 32 GB will not be wasted, but it is better not to go below that. For testing, 8 GB is enough; there are no software-imposed limits.

A three-node configuration is best avoided; we recommend two nodes, or going straight to four, five, and so on. The current limit is 20 nodes per cluster. The underlying InfoScale cluster supports 60, so 20 nodes is not a hard ceiling.

The VCS cluster provides fault-tolerant data access services over NFS, CIFS, S3, FTP and Oracle Direct NFS, plus the corresponding infrastructure on the Veritas Access nodes; in certain configurations the same data can be written and read via different protocols.

For file systems of up to 3 PB you can use the scale-out file system, which in particular allows external cloud storage to be attached within a single namespace. For distributed or disaster-recovery deployments, file replication can be configured between different Veritas Access clusters. This technology asynchronously replicates a file system from the source cluster to a remote cluster at a minimum interval of 15 minutes, with the remote file system remaining readable during replication. It supports load balancing across replication links, instant switching of the remote file system into write mode when the source becomes unavailable, and failover of the replication service from one node to another.

The number of parallel replication jobs is unlimited. Notably, the replication technology uses Veritas CFS/VxFS functionality (the File Change Log and file-system-level snapshots) to quickly identify changes, thus avoiding full file-system scans and significantly increasing replication performance.



DISK ARCHITECTURE





Diagram of the joint test bed with OCS:




Photo of the test bed at Open Technologies:



Download and install


You can get a 30-day trial of the Red Hat Enterprise Linux distribution via this link. After registering for the trial, Red Hat distributions become available for download, in particular the required releases 6.6-6.8.

You can get the Veritas Access distribution here; to do so, fill out the trial form. During installation the system itself will offer a 60-day key. The base license can be added later via the web administration panel, in the Settings → Licensing section.

All documentation for version 7.3 is available here.



Installation of Veritas Access can be divided into two stages:

1. Installing and configuring Red Hat Enterprise Linux
2. Installing Veritas Access

Installing Red Hat Enterprise Linux

A typical Linux installation, without any difficulties.




Red Hat Requirements:


With a Red Hat subscription, the Veritas Access installer will pull in the RPM packages it needs. As of the Access 7.3 release, all required packages are also included in the installer repository.

On the public interfaces, in addition to IP addresses, you must enable "Connect automatically" or set the ONBOOT=yes flag in the interface configuration file.
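As a minimal sketch (the interface name and the addressing are ours, not prescribed by the installer), an ifcfg file for a public interface might look like this:

 # /etc/sysconfig/network-scripts/ifcfg-eth0 (hypothetical public interface)
 DEVICE=eth0
 TYPE=Ethernet
 BOOTPROTO=none
 IPADDR=172.25.10.11
 NETMASK=255.255.255.0
 GATEWAY=172.25.10.1
 ONBOOT=yes  # required so the interface comes up at boot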



Installing Veritas Access:

The installation files must be placed on one of the nodes. The installation is started with the command:

# ./installaccess node1_ip node2_ip nodeN_ip 

where node1_ip, node2_ip, ... are the public-interface IP addresses of the nodes.

During the installation of Veritas Access, there are a few points worth paying attention to:

  1. The Veritas Access installer is designed for an ideal installation: any step to the left or right is critical for it and may cause the installation to fail. Enter addresses, masks and names carefully.
  2. Run the installation locally from one of the hosts, or through a server management mechanism if there is no physical access to them; do not run it over ssh.
  3. If Veritas Access failed to install the first time, I recommend reinstalling Red Hat and starting the Veritas Access installation from scratch. On error, the installer cannot roll the system back to its pre-installation state, which leads to re-installing over partially running Veritas Access services and may cause problems.

After installing Veritas Access, two management consoles become available:


• web: in our case, https://172.25.10.250:14161
• ssh: to 172.25.10.250 with the master login (default password: master) or as root

After installing Access, you must hand over to the cluster, on each node, all the disks you plan to use for data. This process is not automatic, since you may have other plans for some of the cluster's disks.
CLISH commands:

 # vxdisk list
 # vxdisk export diskname1-N
 # vxdisk scandisks

For "non-standard" server disks, you must specify the disk model and the address at which the disk's serial number resides. In my case, for SATA disks, the command is:

 # vxddladm addjbod vid=ATA pid="WDC WD1003FBYX-0" serialnum=18/131/16/12 

This is a small drawback of SDS systems with respect to hard drives that are not on the Hardware Compatibility List. Each vendor bakes its own conventions into its disks, seeing a terabyte as a different number of bytes and sectors and placing identifiers at different addresses; this situation is quite normal. If the SDS system does not detect the disks correctly, it needs a little help; for Access, the instructions are here.

This is where Access's pedigree shows: a non-standard situation arises, and there is a document with the solution that Google finds right away, plus detailed built-in help in the vxddladm utility itself. There is no need to crawl foreign forums or hack together improvised fixes with unpredictable results in production. And if you cannot solve a problem yourself, you can always contact technical support.

As a result, each disk must be accessible to each node in the cluster.



After installation, the folder with the installation files will be deleted.

How it all works


In our case, the cluster consists of two nodes:



Each node has physical and virtual IP addresses. Physical IPs are uniquely assigned to each node, while virtual IPs serve the whole cluster. The physical interfaces let you check whether a node is up and connect to it directly via ssh; if a node is down, its physical interfaces are unreachable. Each node's virtual IPs remain active as long as at least one cluster node is alive. Clients work only with the virtual addresses.

Each bubble in the interface shows information about how the corresponding IP is served.



At any moment, one node performs the master role; in the screenshot it is node va73_02. The master node handles administration tasks and load balancing. The master role can be handed over, or taken by another node if availability is lost or under a number of conditions built into the Veritas Access cluster logic. If the internal interconnect is lost, an unpleasant split-brain situation can occur in which each node becomes a master, so the reliability of the internal interconnect deserves special attention.

Management


Veritas Access has three management interfaces: CLISH over ssh (in master mode and root mode), the web console, and a REST API.

ssh master mode

The most complete console for managing the Veritas Access cluster; access is via ssh to the management IP with the master login (default password: master). It has an intuitive, simplified command set and detailed help.

ssh node mode

The usual Linux management console, accessed via ssh to the management IP with the root login.

WEB

The web console for cluster management; access is via the root or master login to the management IP on port 14161 over https.

The web console translates its operations into the same commands as ssh master mode.



Each release expands the web management capabilities.

Initial setup




Just a few words on launching Veritas Access as a storage system.

1. Combine the disks into a storage pool



2. Create a file system, choosing the layout based on disk type, data and the required protection level (analogous to RAID)



3. Share the file system over the desired protocols. A single file system can be shared via several protocols at once, giving access to the same files over both NFS and CIFS, temporarily or permanently.



4. You are great.
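For reference, a minimal CLISH sketch of these four steps (the pool, file-system and share names, the size and the mirrored layout are our assumptions; double-check the syntax against the command reference for your release):

 va73.Storage> pool create pool1 disk1,disk2,disk3,disk4
 va73.Storage> fs create mirrored fs1 100g 2 pool1
 va73.NFS> share add rw /vx/fs1
 va73.NFS> server start
 va73.CIFS> share add fs1 fs1_share rw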



External iSCSI storage and any other storage devices


Veritas Access supports any third-party iSCSI or other storage arrays as backing storage. Disks from third-party arrays should be presented to Access individually, in raw format; if that is impossible, a three-disk RAID 5 is used. Data protection is provided by the Veritas Access file system, as presented in the section above. Protection at the hardware-RAID level is unnecessary: the Access file-system settings specify the number of nodes across which your data is mirrored.

Volumes connected over iSCSI can be included in shared pools.
Adding an iSCSI disk looks like this:

1. Turn on iSCSI in the storage section:



2. Add an iSCSI device:



3. Connect an iSCSI disk:





The test iSCSI disk on win2012:



Serving volumes over iSCSI


The configuration process in the current release is somewhat different; it is described in detail on page 437 of the Command Reference Guide (iSCSI target service). For many commands in the current release there is no way to view the resulting settings, so it is better to record all parameters in a text file in advance.



An important note: version 7.3 has a bug because of which the iSCSI target service does not start!



It can be fixed as follows:

On the nodes, edit the file /opt/VRTSnas/pysnas/target/target_manager.py: in line 381, change [1] to [-1].
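To apply this one-character patch on a node without opening an editor, something like the following will do (a sketch; it keeps a backup copy, and it is worth eyeballing line 381 first to confirm it is the expected one):

 # patch line 381, keeping the original file as .bak
 sed -i.bak '381s/\[1\]/[-1]/' /opt/VRTSnas/pysnas/target/target_manager.py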



Starting the iSCSI target looks like this:

 va73.Target> iscsi service start
 ACCESS Target SUCCESS V-288-0 iSCSI Target service started
 va73.Target> iscsi target portal add 172.25.10.247
 ACCESS Target SUCCESS V-288-0 Portal add successful.
 va73.Target> iscsi target create iqn.2017-09.com.veritas:target
 ACCESS Target SUCCESS V-288-0 Target iqn.2017-09.com.veritas:target created successfull
 va73.Target> iscsi target store add testiscsi iqn.2017-09.com.veritas:target
 ACCESS Target SUCCESS V-288-0 FS testiscsi is added to iSCSI target iqn.2017-09.com.veritas:target.
 va73.Target> iscsi lun create lun3 3 iqn.2017-09.com.veritas:target 250g
 ACCESS Target SUCCESS V-288-0 Lun lun3 created successfully and added to target iqn.2017-09.com.veritas:target
 va73.Target> iscsi service stop
 ACCESS Target SUCCESS V-288-0 iSCSI Target service stopped
 va73.Target> iscsi service start
 ACCESS Target SUCCESS V-288-0 iSCSI Target service started


Another important note: new parameters do not take effect without restarting the iSCSI service!

If something goes wrong, the iSCSI target logs can be inspected here:

 /opt/VRTSnas/log/iscsi.target.log
 /opt/VRTSnas/log/api.log
 /var/log/messages

That is basically it. Now let's connect our iSCSI volume to VMware ESXi.
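On the ESXi side this can be done through the vSphere client or, as a sketch, from the ESXi shell (the adapter name vmhba65 is a placeholder; check yours with esxcli iscsi adapter list):

 # enable the software iSCSI initiator and point it at the Access portal
 esxcli iscsi software set --enabled=true
 esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=172.25.10.247:3260
 esxcli storage core adapter rescan --adapter=vmhba65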





A virtual machine installed on the iSCSI volume, running with no load:



This is what a simulated failure of a SLAVE node under load (copying 10 GB) looks like:





Veritas Access Integration with Veritas NetBackup


An important advantage of Veritas Access is its built-in integration with the Veritas NetBackup backup software, provided by the NetBackup Client agent installed by default on all nodes and configured through the Veritas Access CLISH command interface. The following types of backup operations are available:

• full;
• differential incremental;
• cumulative incremental;
• a snapshot at the VxFS checkpoint level.



Long-term backup storage for NetBackup

Veritas Access's integration with NetBackup lets it serve as a cheaper and simpler alternative to tape for long-term backup storage. In this case, the integration can use the free third-party software OpenDedup, which is installed on the NetBackup media server and attached as a logical device behind a NetBackup Storage Unit. OpenDedup creates a volume with its specialized SDFS file system, which lives inside an S3 bucket on the Veritas Access storage. When backups are written out for long-term retention, the NetBackup Storage Lifecycle Policy controls writing to the Storage Unit, and the data travels in deduplicated form over the S3 protocol to the Veritas Access storage. Notably, several media servers can write to the same S3 bucket simultaneously, which provides global deduplication for long-term copies, unlike tape media, which can only compress the data stream with far lower efficiency.
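As a rough sketch of the OpenDedup side (the volume name, capacity, bucket, endpoint and port are our placeholders, and flag names differ between OpenDedup versions, so treat this only as the general shape and verify against the OpenDedup documentation):

 # on the NetBackup media server: create an SDFS volume backed by an S3 bucket on Access
 mkfs.sdfs --volume-name=nbu-lts --volume-capacity=10TB \
     --aws-enabled=true --cloud-access-key=<access-key> --cloud-secret-key=<secret-key> \
     --cloud-bucket-name=nbu-lts --cloud-url=http://<access-vip>:8143
 # mount it and point a NetBackup Storage Unit at the mount point
 mkdir -p /media/nbu-lts
 mount.sdfs nbu-lts /media/nbu-lts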

Everything is configured quite simply, but starting with NetBackup 8.1 certificates are required.

Storage volumes:



NetBackup interface:



Upgrade Veritas Access to a higher version


We started studying and testing Veritas Access at version 7.2, but after a week of testing the new 7.3 came out.

This raised an interesting case: upgrading.

It is a topical one, which owners of SDS solutions will sooner or later face.

Upgrading is available in the admin panel under the master login.



Veritas Access as a storage system for VMware over NFS


The ease of using NFS is worth noting. It does not require building out a complex FC infrastructure, intricate zoning setup, or wrestling with iSCSI. Using NFS for a datastore is also simple because the unit of storage granularity is the VMDK file, not the whole datastore as with block protocols. An NFS datastore is an ordinary network share mounted on the host, holding the virtual machine disk files and their configs. That, in turn, simplifies backup and recovery, since the unit of copy and restore is a plain file: an individual virtual disk of an individual virtual machine. Nor should you discount that with NFS you automatically get thin provisioning, and deduplication frees space right at the datastore level, where it is available to the administrator and VM users, rather than at the array level as when using a LUN. That too looks extremely attractive from the virtual-infrastructure point of view.

Finally, with an NFS datastore you are not bound by the 2 TB limit. That is most welcome if, for example, you administer a large number of VMs with relatively light I/O load: they can all be placed on one big datastore, which is far easier to back up and manage than a dozen separate 2 TB VMFS LUNs.

In addition, you can freely both grow and shrink the datastore. This can be very useful in a dynamic infrastructure with many heterogeneous VMs, such as cloud-provider environments, where VMs are constantly created and deleted and the datastore hosting them needs to shrink as well as grow.

But there are also disadvantages:

First, there is no way to use RDM (raw device mapping), which may be needed, for example, to build an MS Cluster Service cluster if you want one. You cannot boot from NFS (at least not in a simple, customary way like boot-from-SAN). And NFS slightly increases the load on the network stack, since a number of operations that in a block SAN are handled on the host side are handled by the stack in the NFS case: all the locking, access control and so on.

Connecting VMware to Veritas Access over NFS looks like this; as you can see, it is very simple:
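The same mount can also be done from the ESXi shell (the VIP, the export path and the datastore name here are placeholders from our test bed):

 # on the ESXi host: mount the Access NFS export as a datastore
 esxcli storage nfs add --host=172.25.10.250 --share=/vx/VMware-Test --volume-name=access-nfs
 esxcli storage nfs list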




To check fault tolerance and performance, we set up a win 2008 R2 virtual machine on a share mirrored across two nodes.

This is what a simulated master-node failure without load looks like; at the moment the cable was pulled, latency rose from 0.7 to 7:



This is what a simulated master-node failure under load (copying 10 GB) looks like:




And this is a simulated SLAVE-node failure under load (copying 10 GB):




S3 AND OBJECT STORAGE


One of the main drawbacks of NFS, iSCSI and CIFS is the difficulty of using them over long distances. Extending NFS shares to a neighboring city is a task one could call interesting, to put it mildly, while with S3 it poses no difficulty. The popularity of object storage keeps growing, and more and more applications support object stores, S3 in particular.

A convenient and free tool for setting up and testing S3 is S3 Browser. Configuring S3 on Veritas Access is fairly simple, but has its quirks. To get S3 access you need a key pair: an Access Key and a Secret Key. Domain users can see their keys through the Access web interface; in the current release, keys for the root user are obtained by scripts via the CLISH console.
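Once the keys are in hand, any S3 client will do. For example, a quick sanity check with the AWS CLI (the endpoint address and port are placeholders for our setup):

 # list buckets on the Access S3 endpoint with the obtained key pair
 export AWS_ACCESS_KEY_ID=<access-key>
 export AWS_SECRET_ACCESS_KEY=<secret-key>
 aws s3 ls --endpoint-url http://172.25.10.250:8143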



NetBackup connected via S3:



REPLICATION


Veritas Access supports synchronous and asynchronous replication. Replication is part of the basic functionality, requires no additional licenses, and is quite simple to configure. Asynchronous replication works at the file-system level, synchronous at the volume level. To test replication, we linked our Veritas Access cluster with the cluster at the distributor, OCS, using file-system-level replication. An IPsec tunnel was set up for communication between the sites.

Once again, the test-bed layout:



Setting up replication via a web browser:



After successful authorization, the replication link appears:



We then verified that it works.

We mounted two shares, one from each cluster, into two folders:

 [root@vrts-nbu03 mnt]# showmount -e vrts-access.veritas.lab
 Export list for vrts-access.veritas.lab:
 /vx/VMware-Test *
 /vx/test_rpl_in *
 [root@vrts-nbu03 mnt]# showmount -e 192.168.3.210
 Export list for 192.168.3.210:
 /vx/NAS *
 /vx/test_rpl *
 [root@vrts-nbu03 mnt]# mount -t nfs 192.168.3.210:/vx/test_rpl /mnt/repl_out/
 [root@vrts-nbu03 mnt]# mount -t nfs vrts-access.veritas.lab:/vx/test_rpl_in /mnt/repl_in/
 [root@vrts-nbu03 mnt]#


We copied data into the source folder:



We started the replication job manually so as not to wait for the timer:
 va73> replication job sync test_rpl_job
 Please wait...
 ACCESS replication SUCCESS V-288-0 Replication job sync 'test_rpl_job'.
 va73>

Files appeared in the recipient's folder:




CONCLUSIONS


Veritas Access is an interesting Software-Defined Storage solution that one is not ashamed to offer a customer. It is a genuinely affordable, scalable storage system with support for file, block and object access. Access makes it possible to build high-performance, cost-effective storage for unstructured data. Integration with OpenStack, cloud providers and other Veritas technologies lets the solution be applied in the following areas:

• Media holdings: storage of photo and video content;
• Public sector: storage of video archives from "safe city"-type systems;
• Sports: storage of video archives and other important information at stadiums and other sports facilities;
• Telecommunications: storage of primary billing data, CDRs (Call Detail Records);
• Financial sector: storage of statements, payment documents, passport scans, etc.;
• Insurance companies: storage of documentation, passport scans, photos, etc.;
• Medical sector: storage of X-rays, MRI scans, test results, etc.;
• Cloud providers: storage for OpenStack;
• An alternative to tape storage systems.



Strengths:

• Easy scalability;
• Any x86 server;
• Relatively low cost.

Weaknesses:

• In the current release, the product requires increased attention from engineers;
• Weak documentation;
• CIFS, iSCSI work in active-passive mode.

The Veritas Access team regularly ships new releases on schedule (per the roadmap), fixing bugs and adding new features. Among the interesting things expected in the Veritas Access 7.3.1 release of December 17, 2017: a full iSCSI implementation, erasure coding, and up to 32 nodes per cluster.

If you have questions about how it works, its functionality or its configuration, write or call; we are always ready to help and cooperate.

Dmitry Smirnov,
Design engineer,
Open Technologies
Tel: +7 495 787-70-27
dsmirnov@ot.ru

Source: https://habr.com/ru/post/344628/

