Now that we have a virtual machine in the Cloud with SQL Server installed on it and can connect to it from the client side just as we would to a local SQL Server, it remains to fill it with data. Suppose that, in a hybrid scenario, part of the databases is planned to be moved to the Azure SQL VM. This article considers the scenario in which a database is turned into a self-contained file (or several files) by taking a backup, detaching, exporting a data-tier application, etc.; the file is delivered to the Azure SQL VM and converted back into a database by restoring from backup, attaching, or deploying/importing the data-tier application. The first and last steps raise no questions for a DBA. It remains to work out the best way to deliver the detached database file (.bak, .mdf, .bacpac, ...) to the cloud virtual machine running SQL Server.
For example, let's transfer your favorite AdventureWorks database as a backup copy:
backup database AdventureWorks2012 to disk = 'c:\Temp\AdventureWorks2012.bak' with init, compression, stats = 10
Script 1
Small files like this one can simply be transferred with ordinary Copy/Paste onto the remote desktop of the SQL Server virtual machine. Other options that come to mind are to share a folder on the virtual machine and copy the file using advanced copy tools that support parallelism and resume after failures, or to transfer the file via FTP. These methods are obvious. In this post we will use a different method: upload the backup file from the local machine to Azure Storage as a blob and download it from there onto the cloud virtual machine. We already have one Storage Account, created automatically along with the virtual machine, which automatically contains a container named vhds in which the virtual disk of our virtual machine is stored as a blob. For the purity of the experiment, we will create a new Storage Account named tststorage in the same data center as the cloud virtual machine to reduce overhead.
Inside Azure Storage, data can be stored as blobs or tables - see Azure Data Management and Business Analytics in the Windows Azure documentation. Tables are not tables in the strict relational sense; they are just loosely structured sets of key-value pairs, similar to what was once called SQL Data Services - see Introduction to SQL Azure. Unlike SDS, current tables can be partitioned by key. Different partitions are stored on different machines in the Cloud, which provides horizontal scaling, much as sharding does in the case of SQL Azure Database. Blobs come in two kinds: block and page. The structure of block blobs is optimized for streaming access; page blobs are optimized for random read/write, and their page structure allows writing a range of bytes into the blob. The difference between them is explained in detail, for example, here - blogs.msdn.com/b/windowsazurestorage/archive/2010/04/11/using-windows-azure-page-blobs-and-how-to-efficiently-upload-and-download-page-blobs.aspx. Virtual disks are stored as page blobs. Blobs are stored inside containers, which are created within a Storage Account. Let's create a container named container1 in tststorage to store AdventureWorks2012.bak.
A public container allows anyone to list and read any blobs it contains. The public blob level allows anonymous read access to individual blobs, but the contents of the container cannot be listed. Finally, a private container means that the Storage Account key is required to access a blob. You can subsequently change the access level of the container using the Edit Container button.
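In the portal this takes a couple of clicks, but the same container can also be created through the REST API. Below is a minimal sketch, assuming the same tststorage account and one of its access keys; the x-ms-blob-public-access: container header is what makes the container fully public. The canonicalization rules are the same ones Script 2 will use: the Content-Length of a bodiless PUT is signed as 0, and the restype query parameter joins the canonicalized resource.

using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;
using System.Globalization;

class CreateContainer
{
    static void Main()
    {
        string storageAccount = "tststorage";
        string containerName = "container1";
        string accessKey = "<primary or secondary Storage Account key>";   // placeholder

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(String.Format(
            "https://{0}.blob.core.windows.net/{1}?restype=container",
            storageAccount, containerName));
        req.Method = "PUT";
        req.ContentLength = 0;
        req.Headers.Add("x-ms-blob-public-access", "container"); // public container
        req.Headers.Add("x-ms-date", DateTime.UtcNow.ToString("R", CultureInfo.InvariantCulture));
        req.Headers.Add("x-ms-version", "2011-08-18");

        // VERB, eleven standard headers (only Content-Length is non-empty, signed as 0),
        // x-ms-* headers in lexicographic order, then the canonicalized resource.
        StringBuilder sb = new StringBuilder();
        sb.Append("PUT\n\n\n0\n\n\n\n\n\n\n\n\n");
        sb.Append("x-ms-blob-public-access:container\n");
        sb.Append("x-ms-date:" + req.Headers["x-ms-date"] + "\n");
        sb.Append("x-ms-version:" + req.Headers["x-ms-version"] + "\n");
        sb.Append(String.Format("/{0}/{1}\nrestype:container", storageAccount, containerName));

        HMACSHA256 hmac = new HMACSHA256(Convert.FromBase64String(accessKey));
        string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString())));
        req.Headers["Authorization"] = "SharedKey " + storageAccount + ":" + signature;

        req.GetResponse().Close(); // HTTP 201 Created on success
    }
}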
For simplicity, the database backup made in Script 1 will be uploaded to Azure Storage as a block blob. For blob operations in the Cloud (as well as for tables and queues), you can use REST, which lets you work directly over the Internet (HTTP Request/Response) from a wide range of development tools. The REST API for working with blobs is described here - msdn.microsoft.com/en-us/library/dd135733.aspx. For example, this is how you can see which blobs a public container holds: tststorage.blob.core.windows.net/container1?restype=container&comp=list. Container1 is empty for now. To upload AdventureWorks2012.bak into it, you need to use the PUT method:
using System;
using System.Net;
using System.IO;
using System.Security.Cryptography;
using System.Text;
using System.Globalization;

class Program
{
    static void Main(string[] args)
    {
        string fileFullName = @"c:\Temp\AdventureWorks2012.bak";
        string storageAccount = "tststorage";
        string containerName = "container1";
        string accessKey = "xws7rilyLjqdw8t75EHZbsIjbtwYDvpZw790lda0L1PgzEqKHxGNIDdCdQlPEvW5LdGWK/qOZFTs5xE4P93A5A==";

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(String.Format(
            "https://{0}.blob.core.windows.net/{1}/{2}",
            storageAccount, containerName, Path.GetFileName(fileFullName)));

        // Read the backup into memory in one piece (fine for a 45 MB file;
        // larger files should go block by block, as discussed at the end of the article).
        FileStream fs = File.OpenRead(fileFullName);
        byte[] fileContent = new byte[fs.Length];
        fs.Read(fileContent, 0, fileContent.Length);
        fs.Close();

        req.Method = "PUT";
        req.ContentLength = fileContent.Length;
        req.Headers.Add("x-ms-blob-type", "BlockBlob");
        req.Headers.Add("x-ms-date", DateTime.UtcNow.ToString("R", CultureInfo.InvariantCulture));
        req.Headers.Add("x-ms-version", "2011-08-18");
        string canonicalizedString = BuildCanonicalizedString(req, String.Format(
            "/{0}/{1}/{2}", storageAccount, containerName, Path.GetFileName(fileFullName)));
        req.Headers["Authorization"] = CreateAuthorizationHeader(canonicalizedString, storageAccount, accessKey);
        req.Timeout = 100 * 60 * 1000; // 100 minutes

        Stream s = req.GetRequestStream();
        s.Write(fileContent, 0, fileContent.Length);
        s.Close();

        DateTime dt = DateTime.Now;
        req.GetResponse().Close();
        System.Diagnostics.Debug.WriteLine(DateTime.Now - dt);
    }

    static string CreateAuthorizationHeader(string canonicalizedString, string storageAccount, string accessKey)
    {
        // The signature is an HMAC-SHA256 of the canonicalized string, keyed by the access key.
        HMACSHA256 hmacSha256 = new HMACSHA256(Convert.FromBase64String(accessKey));
        byte[] dataToHMAC = Encoding.UTF8.GetBytes(canonicalizedString);
        string signature = Convert.ToBase64String(hmacSha256.ComputeHash(dataToHMAC));
        return "SharedKey " + storageAccount + ":" + signature;
    }

    static string BuildCanonicalizedString(HttpWebRequest req, string canonicalizedResource)
    {
        // VERB, then the eleven standard headers (all empty here except Content-Length),
        // then the x-ms-* headers in lexicographic order, then the canonicalized resource.
        StringBuilder sb = new StringBuilder();
        sb.Append(req.Method + "\n\n\n");
        sb.Append(String.Format("{0}\n\n\n\n\n\n\n\n\n", req.ContentLength));
        sb.Append("x-ms-blob-type:" + req.Headers["x-ms-blob-type"] + '\n');
        sb.Append("x-ms-date:" + req.Headers["x-ms-date"] + '\n');
        sb.Append("x-ms-version:" + req.Headers["x-ms-version"] + '\n');
        sb.Append(canonicalizedResource);
        return sb.ToString();
    }
}
Script 2
Everything in this code is fairly obvious, except perhaps for one point. Even though container1 was created as a public container, writing a blob requires authorization. Which operations can be performed on blobs and containers at each access level is described here - msdn.microsoft.com/en-us/library/dd179354.aspx. Regardless of the access level, the owner retains the right to write. To authenticate as the owner, the HTTP request must carry the Authorization header. In accordance with the authentication scheme, the string written to this header contains a signature - a Hash-based Message Authentication Code (HMAC) of the canonicalized string in UTF-8 encoding, where the hash is computed with the SHA256 algorithm from the access key. The canonicalized string is composed of the REST verb, the size of the file being uploaded, the blob type (x-ms-blob-type = BlockBlob or PageBlob), the date/time of the HTTP request in UTC (x-ms-date), the version of the Azure Blob service handling the request (x-ms-version), etc. No great programming artistry is required here, just painstaking attention to detail, because the slightest inaccuracy in forming the canonicalized string inexorably results in an HTTP 403 Forbidden error.
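By contrast, since container1 is public, reading requires no signature at all; for instance, its contents can be listed anonymously. A minimal sketch (this is the same request as the browser URL shown earlier):

using System;
using System.IO;
using System.Net;

class ListBlobs
{
    static void Main()
    {
        // Anonymous GET against a public container: no Authorization header needed.
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(
            "https://tststorage.blob.core.windows.net/container1?restype=container&comp=list");
        using (WebResponse resp = req.GetResponse())
        using (StreamReader rdr = new StreamReader(resp.GetResponseStream()))
            Console.WriteLine(rdr.ReadToEnd()); // XML: <EnumerationResults>...<Blob><Name>...
    }
}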
The access keys (primary and secondary) are generated when the Storage Account is created; they can be viewed in the Storage Account properties (Manage Keys). Either of them can be used as the accessKey for creating the digital signature during authorization: HMACSHA256 hmacSha256 = new HMACSHA256(Convert.FromBase64String(accessKey));
For more granular rights management, you can use a Shared Access Signature. A shared access signature lets you create a policy permitting a specific operation, for example writing within a specific container during an allotted time period. Whoever is given that signature can act within the bounds of that policy. Another signature might, for example, authorize reading from another container over a different period.
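Constructing the SAS itself is a separate exercise; the sketch below only shows the consuming side and assumes the sasToken query string (st/se/sr/sp/sig parameters, all placeholders here) was already issued by the account owner with write permission on container1. With a SAS, the Authorization header is not set; the rights travel in the URL.

using System;
using System.IO;
using System.Net;

class PutWithSas
{
    static void Main()
    {
        string sasToken = "st=...&se=...&sr=c&sp=w&sig=..."; // placeholder, issued elsewhere
        byte[] content = File.ReadAllBytes(@"c:\Temp\AdventureWorks2012.bak");

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(
            "https://tststorage.blob.core.windows.net/container1/AdventureWorks2012.bak?" + sasToken);
        req.Method = "PUT";
        req.ContentLength = content.Length;
        req.Headers.Add("x-ms-blob-type", "BlockBlob");
        using (Stream s = req.GetRequestStream())
            s.Write(content, 0, content.Length);
        req.GetResponse().Close();
    }
}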
A few other remarks:
• If a blob with the same name already exists in the container, it is silently overwritten.
• Container names are case-sensitive.
• Upload time obviously depends on network speed. For example, from the office this 45 MB backup flew up in 00:01:07; from home it took many times longer.
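For completeness, the reverse leg - pulling the backup down onto the Azure VM - is symmetric. A minimal sketch; container1 is public here, so a plain GET suffices (a private container would need the same SharedKey signature as in Script 2, or a SAS):

using System;
using System.IO;
using System.Net;

class DownloadBlob
{
    static void Main()
    {
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(
            "https://tststorage.blob.core.windows.net/container1/AdventureWorks2012.bak");
        using (WebResponse resp = req.GetResponse())
        using (Stream src = resp.GetResponseStream())
        using (FileStream dst = File.Create(@"c:\Temp\AdventureWorks2012.bak"))
        {
            // Stream in 4 MB chunks rather than buffering the whole blob in memory.
            byte[] buf = new byte[4 * 1024 * 1024];
            int n;
            while ((n = src.Read(buf, 0, buf.Length)) > 0)
                dst.Write(buf, 0, n);
        }
    }
}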
In this demo, the backup had a rather "childish" size. Block blobs are limited to 200 GB. A block blob smaller than 64 MB can be uploaded in a single write operation, as we saw in Script 2. Larger files must be split into pieces and uploaded block by block using the Put Block / Put Block List methods. Page blobs should be used when uploading very large files to Azure Storage: a page blob consists of 512-byte pages, and its maximum size is 1 TB. An example of writing/reading a range of pages in a page blob is given here - blogs.msdn.com/b/windowsazurestorage/archive/2010/04/11/using-windows-azure-page-blobs-and-how-to-efficiently-upload-and-download-page-blobs.aspx.
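To illustrate the block-by-block path for block blobs over 64 MB, here is a hedged sketch: each 4 MB chunk goes up via Put Block (comp=block&blockid=...), then Put Block List commits them (4 MB blocks × 50,000 blocks gives the 200 GB ceiling). SignAndSend is a helper of my own in the spirit of Script 2's BuildCanonicalizedString / CreateAuthorizationHeader pair; note that the query parameters join the canonicalized resource in lexicographic order, per the Shared Key rules.

using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Net;
using System.Security.Cryptography;
using System.Text;

class BlockUpload
{
    const string Account = "tststorage", Container = "container1", Key = "<access key>"; // placeholder

    static void Main()
    {
        string file = @"c:\Temp\AdventureWorks2012.bak";
        string blobUrl = String.Format("https://{0}.blob.core.windows.net/{1}/{2}",
                                       Account, Container, Path.GetFileName(file));
        var ids = new List<string>();
        byte[] buf = new byte[4 * 1024 * 1024]; // 4 MB: the maximum block size
        using (FileStream fs = File.OpenRead(file))
        {
            int n, i = 0;
            while ((n = fs.Read(buf, 0, buf.Length)) > 0)
            {
                // Block ids must be Base64 and of equal length within the blob;
                // Base64 of "000000".."999999" is URL-safe, so no escaping is needed.
                string id = Convert.ToBase64String(Encoding.UTF8.GetBytes(i++.ToString("d6")));
                byte[] block = new byte[n];
                Array.Copy(buf, block, n);
                SignAndSend(blobUrl + "?comp=block&blockid=" + id, block,
                            String.Format("/{0}/{1}/{2}\nblockid:{3}\ncomp:block",
                                          Account, Container, Path.GetFileName(file), id));
                ids.Add(id);
            }
        }
        // Commit: the order of <Latest> elements defines the final blob contents.
        var xml = new StringBuilder("<?xml version=\"1.0\" encoding=\"utf-8\"?><BlockList>");
        foreach (string id in ids) xml.Append("<Latest>" + id + "</Latest>");
        xml.Append("</BlockList>");
        SignAndSend(blobUrl + "?comp=blocklist", Encoding.UTF8.GetBytes(xml.ToString()),
                    String.Format("/{0}/{1}/{2}\ncomp:blocklist",
                                  Account, Container, Path.GetFileName(file)));
    }

    static void SignAndSend(string url, byte[] body, string canonicalizedResource)
    {
        var req = (HttpWebRequest)WebRequest.Create(url);
        req.Method = "PUT";
        req.ContentLength = body.Length;
        req.Headers.Add("x-ms-date", DateTime.UtcNow.ToString("R", CultureInfo.InvariantCulture));
        req.Headers.Add("x-ms-version", "2011-08-18");
        // Same canonicalization as Script 2, minus x-ms-blob-type (not sent for Put Block).
        string toSign = String.Format("PUT\n\n\n{0}\n\n\n\n\n\n\n\n\n", body.Length) +
                        "x-ms-date:" + req.Headers["x-ms-date"] + "\n" +
                        "x-ms-version:" + req.Headers["x-ms-version"] + "\n" +
                        canonicalizedResource;
        using (var hmac = new HMACSHA256(Convert.FromBase64String(Key)))
            req.Headers["Authorization"] = "SharedKey " + Account + ":" +
                Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(toSign)));
        using (Stream s = req.GetRequestStream()) s.Write(body, 0, body.Length);
        req.GetResponse().Close();
    }
}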