Starting with Data ONTAP 8.3.1, a new feature called SnapMirror for SVM was introduced. SnapMirror for SVM is the ability to replicate all of the data and settings on a storage system, or only part of them, to a standby site (Disaster Recovery).
To be able to run all of your services on the backup system, the primary and backup systems should logically be roughly equal in performance. If the system at the backup site is weaker, it is worth deciding in advance which of the most critical services will need to be started and which will not be running. You can replicate the entire SVM with all of its volumes, or exclude some of the volumes and network interfaces from the replica (starting with ONTAP 9).
There are two modes of SnapMirror for SVM: Identity Preserve and Identity Discard.
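As a minimal sketch (the cluster and SVM names `vs1`, `vs1_dr`, and `dest_cluster` are hypothetical), an SVM DR relationship is created on the destination cluster roughly like this, with the `-identity-preserve` option selecting between the two modes:

```
dest_cluster::> vserver create -vserver vs1_dr -subtype dr-destination
dest_cluster::> snapmirror create -source-path vs1: -destination-path vs1_dr: -identity-preserve true
dest_cluster::> snapmirror initialize -destination-path vs1_dr:
```

The trailing colon in the paths denotes an SVM-level (rather than volume-level) relationship; `-identity-preserve false` would give Identity Discard mode instead.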

What is NetApp SVM?
An SVM is something like a virtual machine on a server, but designed for storage. Do not let this analogy mislead you: you will not be able to run your own virtual machines with Windows, Linux, and so on directly on the storage system. An SVM is a virtual machine (very often the only one, though you can deploy many if you wish) in the storage cluster. A storage cluster running ONTAP software (firmware) can consist of one or more nodes, currently a maximum of 24 nodes per cluster. Each SVM is a logical entity that the administrator sees as a single unit. The SVM lives on the whole cluster at once, but physically, "under the hood" of the storage system, it is a set of virtual machines, one on each node of the cluster, which are combined in a special way and presented to the administrator as a single point of control.
The point of an SVM in an ONTAP cluster is, on the one hand, that it gives the storage administrator a single management point for the entire cluster and, when necessary, migrates objects (volumes, LUNs, network addresses) across the cluster nodes without any special configuration (the SVM takes care of the migration itself).
On the other hand, the point of an SVM is to elegantly deceive the hosts, so that 2, 4, 8, or even 24 cluster nodes appear to the end host as a single device, and the migration of data or network addresses from one node to another is transparent to the hosts.
Together, these SVM features in a cluster are called a "Single Namespace".
Identity Preserve for NAS (IP)
Identity Preserve mode is designed for NAS; it preserves network settings and addresses and can be used in several schemes:
- When there is a stretched L2 domain between the sites
- When there is L3 connectivity between the sites (routing)
- When there is no need to replicate the network IP address settings, in which case they will need to be configured manually after switching to the backup site.

Identity Preserve: L2 domain for NAS
An L2 domain between sites requires the appropriate network equipment and channels. Imagine two sites with two storage systems replicating data from the first to the second. In case of an accident, the administrator switches over to the backup site, and the same network IP addresses that were on the primary site, together with everything else configured on the first storage system, move as well. When the migrated servers (with their old storage connection settings) that previously used the first (primary) storage come up at the second (standby, DR) site, they see the same addresses they connected to before. In fact this is the backup site, but they simply do not know it and connect to the second storage system as if it were the primary one, which greatly simplifies and speeds up the process of switching to the backup site.
Large companies can afford the required equipment and channels; in return, this mode significantly speeds up switching to the backup site.
Identity Preserve: L3 domain for NAS
In the absence of a stretched L2 network between the primary and backup sites, you will need several different IP subnets and routing. If the data were published at the old addresses on the second (backup, DR) site, applications would not be able to reach them, because the backup site uses different subnets. Here, too, the Identity Preserve function comes to the rescue: you can pre-specify new network IP addresses for the DR site (which will come up on the DR site at the moment of switchover to the secondary storage system), at which the data will be available on the backup storage system. If you simply migrate the hosts, their network addresses will also need to be reconfigured, manually or using scripts, at the backup site, so that they can reach their data, connecting from their new IP addresses to the new IP addresses of the storage system.
This mode of operation will be of more interest to small companies that can afford a longer switchover time in the event of a catastrophe or accident, without spending money on expensive equipment and channels.
Identity Discard for SAN or NAS
Sometimes there is a need to completely abandon the old settings when switching to the backup site: for example, to discard the NFS export settings, the CIFS server settings, DNS, and so on. It may also be necessary to read data at the remote site, or to replicate LUNs for a SAN environment. Identity Discard (Identity Preserve = false) comes to the rescue in all such situations.
As with the Identity Preserve L3 configuration, after switchover you will need to configure at the remote site the network IP or FC addresses (and the other settings that, in accordance with Identity Discard mode, were not replicated) through which the old data on the secondary storage system will be accessible. If you simply migrate the hosts, their network addresses will also need to be reconfigured, manually or using scripts, at the backup site, so that they can see their data. This mode of operation will be of more interest to customers who need to replicate LUNs for a SAN infrastructure, or who want to read data at the backup site (for example, for cataloging). This mode is also useful for verifying that a backup can actually be restored, as well as for all kinds of testers and developers.
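In either mode, switching over to the DR site boils down to breaking the replication relationship and starting the DR SVM. A hedged sketch, with hypothetical cluster and SVM names:

```
dest_cluster::> snapmirror quiesce -destination-path vs1_dr:
dest_cluster::> snapmirror break -destination-path vs1_dr:
dest_cluster::> vserver start -vserver vs1_dr
```

After the `break`, the destination volumes become writable and the DR SVM begins serving data; a later `snapmirror resync` can re-establish replication in either direction.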

SnapMirror Toolkit
The Clustered Data ONTAP SnapMirror Toolkit is a free set of Perl scripts that speeds up and streamlines the automation of validating, preparing, configuring, initializing, and updating SnapMirror replication, as well as switching to the backup site and back.
The SnapMirror Toolkit is available for download from NetApp.
NetApp PowerShell Cmdlets
For Windows machines, the NetApp PowerShell Toolkit is available, which allows you to create NetApp management scripts.
Workflow Automation
Workflow Automation is a free graphical utility that allows you to create sets or bundles of tasks to automate ONTAP management processes. For example, it can be used to configure new permissions for a file share or an igroup, add replicated volumes and new initiator hosts from the DR site, bring up new LIF interfaces, and much more (create a Broadcast Domain, Failover Groups, Firewall Policies, Routes, DNS, and so on). All of this can be automated so that it happens immediately after the replication relationship is broken, with almost a single mouse click. Workflow Automation is most useful in the Identity Preserve L3 and Identity Discard modes, since in these modes, after switching to the backup site, you will need to perform additional configuration of the storage system and servers. Workflow Automation will also be extremely useful for testers and developers, who can clone huge data sets on the storage system in seconds and automate their preparation for work.
Snap-to-cloud
Data replication can be performed both on physical FAS platforms and on their virtual siblings: Data ONTAP Edge, ONTAP Select, or Cloud ONTAP in a public cloud. The last option is called Snap-to-Cloud. To be more precise, Snap-to-Cloud is a bundle of certain FAS platform models plus Cloud ONTAP, with the replication licenses for backup to the cloud installed.

Disaster Recovery is not High Availability
To provide zero switchover time, you will need even higher costs for channels and even more expensive network equipment. Therefore, it is often more appropriate to use DR rather than HA. In the case of DR, downtime when switching to the backup site is unavoidable; RPO and RTO may be quite small, but they are not equal to 0, as they are with HA.
Excluding a Volume from the DR Replica
To exclude volumes/LUNs from the DR replica of the entire SVM, run the following on the source:
source_cluster::> volume modify -vserver vs1 -volume test_vol1 -vserver-dr-protection unprotected
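Starting with ONTAP 9, network configuration (LIFs, routes, and so on) can likewise be excluded from the SVM DR replica as a whole, via a SnapMirror policy with the `-discard-configs network` option. A sketch with hypothetical cluster, SVM, and policy names:

```
dest_cluster::> snapmirror policy create -vserver vs1_dr -policy exclude_net -type async-mirror -discard-configs network
dest_cluster::> snapmirror modify -destination-path vs1_dr: -policy exclude_net
```

With such a policy applied, the destination SVM keeps its own network settings instead of inheriting them from the source.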
A second application of SnapMirror: Test/Dev
Using the standby site as a development environment with thin cloning (of the replicated DP volumes) reduces the load on the primary storage system, while testers and developers get fresher data for their work (compared with the traditional Full Backup approach, because snapshots are taken and replicated much faster and therefore, as a rule, more often), replicated from the primary storage system. Thin cloning requires a FlexClone license at the corresponding site.
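With a FlexClone license at the DR site, a writable thin clone of a replicated volume can be created in seconds. A sketch, reusing the hypothetical names from above:

```
dest_cluster::> volume clone create -vserver vs1_dr -flexclone test_vol1_clone -parent-volume test_vol1
```

The clone initially shares all blocks with its parent, so it consumes almost no additional space until testers start writing to it.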
NetApp snapshots, by architectural design, have virtually no impact on the performance of the system as a whole. Because of this, snapshots are convenient for replication: only the changes are transferred. This is more efficient than Full Backup/Restore, since during backup operations only those changes are read and written, rather than everything from scratch each time. Hardware replication by the storage systems is also more efficient because the host's CPU and network ports are not used during backup. This allows more frequent backups, and the ability to take snapshots instantly makes it possible to take them right in the middle of the day.
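In practice, frequent lightweight replicas are achieved either with manual incremental updates or with a schedule on the relationship; a sketch with the same hypothetical names:

```
dest_cluster::> snapmirror update -destination-path vs1_dr:
dest_cluster::> snapmirror modify -destination-path vs1_dr: -schedule hourly
```

Each update transfers only the blocks changed since the last common snapshot, which is what keeps mid-day replicas cheap.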
I want to note that replication based on NetApp snapshots, even though it loads the storage system much less than a traditional backup scheme or traditional CoW snapshots, still adds some load. First, replication generates additional read operations for the actual changes, creating extra work for the disk subsystem. Second, those read operations go through the storage CPU. There is no magic here: we can significantly reduce and optimize the load from the backup process, but we cannot eliminate it completely.
Conclusions
SnapMirror technology replicates and restores data thinly, using snapshots. This reduces the load on the network and disk subsystem compared with Full Backup/Restore and makes it possible to run replicas even in the middle of the working day, thereby increasing the number of backups and significantly shrinking the backup window. The SnapMirror for SVM functionality provides a convenient way to build a DR scheme for the entire storage system. In addition to DR, the second site can be used for Test/Dev, offloading these tasks from the primary storage system.
This is a translation of the article "ONTAP: SnapMirror for SVM"; it may contain links to Habr articles that will be published later.
Please report errors in the text via private message. Comments, additions, and questions about the article, on the other hand, are welcome in the comments.