SnapManager is a set of NetApp utilities that automate taking so-called Application-Consistent Backups (ACB) and Crash-Consistent Snapshots (CCS) without stopping applications, using NetApp FAS series storage systems: backing them up, archiving, testing copies and archives, cloning, mapping cloned data to other hosts, recovery and other functions, all through a GUI, by a single application operator, without involving specialists in servers, networks and storage systems.

SnapManager for Oracle on Windows, Cloning Operation
Why bother backing up data with snapshots and by means of the storage system? The point is that most modern approaches to backup imply a lengthy, resource-intensive process: load on the host, load on the channels, space consumption and, as a result, degradation of services. The same applies to cloning large amounts of data for Dev/Test units. This increases the "time gap" between the live data and the backed-up data, which increases the likelihood that a backup "cannot be restored". NetApp "hardware" snapshots do not affect performance and occupy not 100% of the backup size but only the "difference" (a kind of incremental backup, or rather a reverse incremental backup, which needs no time to be taken or assembled). Together with the ability to ship data for backup and archiving in the form of snapshots, this allows today's high business requirements for such tasks to be met more elegantly, reducing both the transfer time and the load on the hosts.
The utility consists of several components: a server, on a dedicated host or virtual machine, and agents installed on the hosts with the DB. Full SnapManager functionality requires features that are also licensed "per controller": FlexClone and SnapRestore. In addition to the installed SnapManager agents, an installed instance of the SnapDrive utility (its license is included with SnapManager) is required in the OS of the host with the DB. SnapDrive helps create CCS by interacting with the OS, while SMO interacts with the application to create ACB; thus they complement each other.
The "scissors" between ACB and CCS
Without purchased SnapRestore and FlexClone licenses, the corresponding functionality will not be available: instant recovery, and instant cloning and cataloging, respectively.

SMO integration with Oracle
SnapManager is licensed "per controller". The SnapManager license also includes the other managers: for MS SQL, MS Exchange, MS SharePoint, MS Hyper-V, VMware vSphere, SAP, Lotus Domino and Citrix XenServer.

SnapManager for MS SQL Moving Operation
If you go down the path of "cost optimization" and decline to buy a SnapManager license, you need to understand that management will require additional time and interaction of the DBA with the server, network and storage administrators for the DBA to get the desired DB manipulation done. With SnapManager this can be done by pressing a couple of buttons in the GUI, without engaging different specialists and consuming their time.
Many of the functions performed by SnapManager can be performed using the free SnapCreator utility, which also integrates with DBs (as well as a large number of other applications) to take consistent ACB snapshots by means of the storage system. But this utility lacks many other convenient DB management functions, such as DB cloning, in-place recovery, attaching a DB clone to another host, automated testing of performance and of recovery from the archive, etc.

SnapCreator component interaction scheme
Most of the functionality missing from SnapCreator can be compensated for with scripting, which is now easier thanks to PowerShell cmdlets available in the DataOntap Toolkit, SnapCreator, OnCommand Unified Manager (OCUM) and many other useful utilities. This, of course, takes time to debug for your business processes.
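As an illustration only, here is a minimal sketch of such scripting with the DataONTAP PowerShell Toolkit. The controller, credentials, volume and snapshot names are hypothetical, and cmdlet signatures can vary between toolkit versions:

```powershell
# Minimal sketch: take a storage-side snapshot of the volume holding DB files.
# All names here are illustrative.
Import-Module DataONTAP                        # NetApp DataONTAP PowerShell Toolkit

$cred = Get-Credential                         # storage administrator credentials
Connect-NaController fas1 -Credential $cred    # connect to the controller

# Quiesce the application first (outside the scope of this sketch), then
# take a snapshot of the volume with the data files:
New-NaSnapshot ora_data "manual_backup_$(Get-Date -Format yyyyMMdd_HHmmss)"

# Verify the snapshot was created:
Get-NaSnapshot ora_data
```

A tool like SMO performs essentially these steps for you, plus the application-side quiescing, cataloging and restore logic.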
SMO integrates with Oracle 10gR2, 11gR1/R2 and 12cR1 (12.1.0.1), including the RAC, RMAN, ASM and Direct NFS technologies. Everything written below can, as a rule, also be applied to SnapManager for MS SQL (SMSQL) and SnapCreator.

SnapManager for Oracle on Linux, Backing-up Operation
SMO backs up only the following data:
- Data Files
- Control Files
- Archived Redo Logs (Archive Logs)
(see TR-3761 NetApp SnapManager 3.3.1 for Oracle, page 12, Table 1).
Online Redo Logs are not backed up; they can be protected using SnapMirror, see below. When SMO backs up the Archive Logs, the logs are neither overwritten nor restored; RMAN is used to manage the Redo Logs and Archive Logs. How SMO works with Archive Logs.
The recommendations for dividing the space into FlexVols and LUNs for SMO, as a rule, coincide with the recommendations of "Oracle DB on NetApp". First of all, you need to turn off automatic snapshots on the FlexVols: from now on they will be initiated by the SMO Server. It is imperative to separate the Temp Files onto a dedicated FlexVol.
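For example, with the DataONTAP PowerShell Toolkit this might look as follows; a hedged sketch, where the volume names are illustrative and an existing Connect-NaController session is assumed:

```powershell
# Sketch: disable controller-scheduled snapshots on the DB volumes,
# since SMO Server will initiate snapshots from now on.
Set-NaVolOption ora_data nosnap on    # equivalent to: vol options ora_data nosnap on
Set-NaVolOption ora_logs nosnap on
```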
RAW devices with LVM are not supported (August 2011 | TR-3633 Best Practices for Oracle Databases on NetApp Storage, page 11).
Some files of the same type from different DBs can be grouped and stored in one FlexVol; other files of different types must be separated (a minimal layout sketch follows this list):
- It is desirable to put each LUN in a separate Qtree*. This is convenient in the case of SnapMirror QSM, SnapVault or NDMPcopy replication: data is archived per Qtree* and, importantly, always restored into a Qtree*. Data that was backed up from outside a qtree (non-qtree data) can likewise only be restored into a Qtree*. It is therefore convenient to always store data in a Qtree*, so that after a recovery you do not have to reconfigure access to the data in its new location. Read more.
- By placing each LUN in a separate Qtree*, you can assign a quota to its size and generate alerts not for the entire FlexVol but for an individual Qtree* when a certain fullness threshold is reached. This is convenient for tracking LUN status by email without any additional utilities.
- Temp Files must be separated from all other data, since they change heavily during DB operation and, accordingly, snapshots taken of such data occupy valuable space on the storage system.
- Snapshots of the Temp Files volume should be turned off.
- Temp Files from all DBs can be placed on one FlexVol.
- In the case of SAN, the DB Archive Logs, Redo Logs and the data itself must each be kept on a dedicated LUN. Do not mix different types of DB files on one LUN; for example, do not keep Archive Logs and Redo Logs on the same LUN.
- The LUNs containing the Archive Logs, Redo Logs and the data itself must each be kept on a dedicated FlexVol. That is, for such LUNs the rule applies: one FlexVol, one LUN (and do not forget about Qtrees*).
- If, for example, a DB generates two Archive Log files and each of them lies on a separate LUN, then such LUNs, as an exception to the rule, can be stored in one FlexVol, i.e. more than one LUN on one FlexVol. The same applies to other DB files, including Data Files and Redo Logs.
- It is desirable to store a copy of the Control Files from each DB together with the corresponding Redo Logs.
- Redo Logs from all instances need to be separated onto a dedicated FlexVol.
- Archive Logs from all DB instances need to be separated onto a dedicated FlexVol.
- Oracle Cluster Registry (OCR) and Voting Disk files from all instances can be placed on one FlexVol, each in a separate Qtree* on that FlexVol. Oracle strongly recommends storing OCR and Voting Disks on disk groups that do not store database files.
- The Undo Tablespace needs to be stored together with the Data Files.
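A hedged sketch of such a layout with DataONTAP PowerShell Toolkit cmdlets; the aggregate, volume, qtree and LUN names and the sizes are all illustrative, and an existing Connect-NaController session is assumed:

```powershell
# Sketch: one FlexVol - one qtree - one LUN, separated by file type.
New-NaVol ora_data aggr1 400g                     # dedicated FlexVol for Data Files
New-NaQtree /vol/ora_data/q_data                  # qtree eases QSM/SnapVault restores
New-NaLun /vol/ora_data/q_data/data.lun 300g -Type linux

New-NaVol ora_redo aggr1 50g                      # dedicated FlexVol for Redo Logs
New-NaQtree /vol/ora_redo/q_redo
New-NaLun /vol/ora_redo/q_redo/redo.lun 30g -Type linux

New-NaVol ora_arch aggr1 200g                     # dedicated FlexVol for Archive Logs
New-NaQtree /vol/ora_arch/q_arch
New-NaLun /vol/ora_arch/q_arch/arch.lun 150g -Type linux
```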

All of these requirements for the separation of Redo Logs, Archive Logs, Data Files and Temp Files follow from the following:
- Temp Files are not needed for backup and recovery, but because they change drastically, snapshots that include them will mercilessly eat space on the storage system, occupying it with useless data.
- If you store those same Temp Files together with other types of DB files that are backed up by snapshots, they eat up the space (see the previous point) and, as a result, exhausting all the space can lead to the LUNs in the FlexVol that ran out of space going offline.
- In the case of SnapMirror VSM replication, snapshots are used, and if DB data is "mixed into one pile", see the two previous points about Temp Files. On top of that, useless data will constantly be sent to the remote system, loading the communication channel.
- In the case of SnapMirror QSM or SnapVault replication, the channel-load issue can be avoided by placing the data to be replicated into a separate Qtree* in the FlexVol and replicating only that Qtree; but the snapshot issue cannot be avoided (see the first two points), since snapshots are taken of the entire FlexVol "as a whole".
- On the other hand, the "one LUN, one FlexVol" logic comes from the possible need to restore not all the DBs but only one or several. For recovery you can use the SnapRestore functionality with one of two approaches: SFSR or VBSR (see the sketch after this list). VBSR is faster because it does not need to walk the WAFL structure: VBSR works at the block level with the FlexVol, so all the data inside it is restored. Thus, if one DB of several, or parts of it, is stored in one FlexVol, you can "accidentally" restore an older version of (parts of) another DB at the same time. To prevent this, we have the recommendation: one FlexVol, one LUN, for all data that may need to be restored.
- In the case of file access, such as NFS or Direct NFS, instead of block access, the essence of the above regarding the separation of DB files across different FlexVols is preserved, with the only difference that NFS exports are performed at the Qtree* or Volume level instead of the LUN level, and more granular recovery of DB files is possible using SFSR.
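To make the SFSR/VBSR difference concrete, here is a hedged console sketch using the toolkit's Invoke-NaSsh cmdlet; the controller, volume, snapshot and file names are illustrative, while the two `snap restore` forms are the standard 7-Mode commands:

```powershell
# Sketch: VBSR vs SFSR; all names are illustrative.
$cred = Get-Credential

# VBSR: roll the ENTIRE FlexVol back to a snapshot - fast, block-level,
# but everything inside the volume (all LUNs/DBs in it) is reverted:
Invoke-NaSsh -Name fas1 -Credential $cred -Command "snap restore -t vol -s hourly.0 ora_data"

# SFSR: restore a single file from a snapshot - granular, but slower,
# since it has to walk the WAFL structure:
Invoke-NaSsh -Name fas1 -Credential $cred -Command "snap restore -t file -s hourly.0 /vol/ora_data/q_data/data.lun"
```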
Taking snapshots of several volumes as a single consistent set is called Consistency Groups at the snapshot creation level, and it is supported by Data ONTAP 7.2 and higher. But it is required only in the case of ASM. In the case without ASM, Oracle itself copes with the consistency of a backed-up database that is spread across different controllers (FlexVols).
In view of our case using SAN, I want to draw your attention to the following nuances:
In the case of Thin Provisioning with several LUNs in one FlexVol, you can run out of space for all the LUNs in that FlexVol and, as a result, all the LUNs in it will fall off: Data ONTAP will take them offline so that they are not damaged. To avoid this situation, it is recommended to use RedHat Enterprise Linux 6.2 (or another modern OS) or higher with support for Logical Block Provisioning as defined in the SCSI SBC-3 standard (often called SCSI Thin Provisioning), which "explains" to the OS that the LUN is in fact "thin" and that the space behind it has "actually run out", prohibiting further write operations. The OS should then stop writing to such a LUN; it will not be taken offline and will remain available to the OS read-only (how the application reacts to this is another question). This functionality also enables the use of Space Reclamation. Thus, modern OSs now work much more adequately with thin-provisioned LUNs.
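For reference, a thin LUN on the NetApp side is simply a LUN created without a space reservation. A hedged toolkit sketch; the names and size are illustrative, and the -Unreserved switch is an assumption about the toolkit version at hand:

```powershell
# Sketch: create a thin (space-unreserved) LUN; names and size illustrative.
New-NaLun /vol/ora_data/q_data/data.lun 300g -Type linux -Unreserved

# Keep an eye on the actual free space in the containing FlexVol:
Get-NaVol ora_data
```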
In the case of using snapshots (and SMO will use them) and storing several LUNs in one FlexVol, we can again run into a space problem, even if the LUNs are "thick". Namely: if snapshots do not have enough room in the allocated reserve, they begin to occupy space in the active file system as the LUNs change. In other words, you need to allocate free space for snapshots of your LUN(s) correctly. If too little is allocated, the LUN snapshots will eat up the space in the snapshot reserve and then in the active file system, and see the previous paragraph for what happens next. The situation is partially addressed by an empirically sized snapshot reserve, by settings for deleting older snapshots (snap autodelete) and by automatic FlexVol growth (volume autogrow). But I want to draw your attention to the fact that space is freed only after the LUN(s) have already gone offline in the FlexVol that ran out of space. And the more LUNs in one FlexVol, the higher, so to speak, the likelihood that one fine day one of the LUNs will start "growing" not by the day but by the hour, not as planned, eating up all the space not only in the snapshot reserve but also in the FlexVol's active file system. Hence the recommendation: either have one LUN per FlexVol, or store several LUNs in one FlexVol but make sure that all these LUNs are of the same type and grow in the same proportion. The second option implies mandatory monitoring with OCUM (free software, and a useful thing in general: monitoring never hurts in any situation) to watch what is happening. OCUM can monitor all the indicators of the Data ONTAP storage OS that the latter is able to provide. In addition, OCUM can send alerts by mail, which, accordingly, must be read. A visual explanation of snapshots, LUNs and Fractional Reserve, and of why LUNs usually only grow, can be found here.
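A hedged sketch of those two safety nets, snapshot autodelete and volume autogrow, configured from the console through the toolkit's Invoke-NaSsh; the thresholds and names are illustrative, while `snap autodelete` and `vol autosize` are the standard 7-Mode commands:

```powershell
# Sketch: delete old snapshots and let the FlexVol grow BEFORE LUNs go offline.
# Volume name, sizes and trigger policy are illustrative.
$cred = Get-Credential
Invoke-NaSsh -Name fas1 -Credential $cred -Command "snap autodelete ora_data on"
Invoke-NaSsh -Name fas1 -Credential $cred -Command "snap autodelete ora_data trigger volume"
Invoke-NaSsh -Name fas1 -Credential $cred -Command "vol autosize ora_data -m 600g -i 20g on"
```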
In the case of SAN with snapshots, OS support for Space Reclamation greatly improves the situation; see the previous paragraph. Space Reclamation allows thin LUNs to shrink on the storage side as the host deletes data on them, solving the "problem of the constant growth of Thin LUNs". Without Space Reclamation there is only "growth" in the case of Thin LUNs, and even for "thick" LUNs the absence of Space Reclamation results in "overgrown snapshots" that hold on to data blocks that have not been needed for a long time. So Space Reclamation is a must-have.
Backups created by SMO can be cataloged in RMAN; this configuration is optional. It makes it possible to use block-level recovery (see the example in Appendix E) and tablespace point-in-time recovery (see the example in Appendix F). When cataloging SMO backups in RMAN, the RMAN catalog must be placed in a DB other than the one being backed up. To register SMO backups in RMAN, you need to enable RMAN in the SMO profiles (RMAN-enabled profiles); see TR-3761 NetApp SnapManager 3.3.1 for Oracle, page 11. It is recommended to use one or the other: either RMAN or SMO.
Redo Logs and Archive Logs are backed up using SnapMirror replication in synchronous, semi-synchronous or asynchronous mode. SnapMirror itself relies on snapshots (without interacting with SMO). All other data is usually replicated asynchronously. SnapMirror is licensed "per controller" on both sides: the backup (Secondary) controller and the primary (Primary) NetApp FAS controller that contains the data to be protected (TR-3455 Database recovery using SnapMirror Async and Sync, Chapter 12, page 17).
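A hedged sketch of initializing such a mirror for the logs volume; the commands run against the destination controller via the toolkit's Invoke-NaSsh, all controller and volume names are illustrative, and `snapmirror initialize` is shown in its standard 7-Mode form:

```powershell
# Sketch: initialize an async SnapMirror of the logs volume to the Secondary.
# Run against the DESTINATION controller; all names are illustrative.
$cred = Get-Credential
Invoke-NaSsh -Name fas2 -Credential $cred -Command "vol create ora_logs_sm aggr1 60g"
Invoke-NaSsh -Name fas2 -Credential $cred -Command "vol restrict ora_logs_sm"
Invoke-NaSsh -Name fas2 -Credential $cred -Command "snapmirror initialize -S fas1:ora_logs fas2:ora_logs_sm"

# Later, check the relationship state from the destination:
Connect-NaController fas2 -Credential $cred
Get-NaSnapmirror
```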
The Undo Tablespace needs to be stored together with the Data Files in order to be backed up with them; keep it on the same LUN as the Data Files.
What is the difference between Archive, Redo and Undo Tablespace.
Some pages may require a NetApp NOW ID to access. If you take a NetApp storage system for a trial, your distributor/integrator will help you download them.
* Qtree: on NetApp FAS systems with the Data ONTAP Cluster-Mode OS (Clustered ONTAP), using qtrees for replication is no longer required, as SnapVault and SnapMirror QSM are now able to replicate and restore data at the Volume level.
I express my deep gratitude to shane54 for help in advising on the workings of the Oracle database and for constructive criticism. Please send messages about errors in the text by PM; notes and additions, on the contrary, are welcome in the comments.