
Backup to "Cloud Storage"


No serious project can do without regular backups. In addition to choosing and setting up a data archiving system, you need to decide where to store the data: preferably not on the same server where the backup is created, but in some independent, safe place.

This is exactly what our "cloud storage" is designed for. Storing 1 GB of data costs only 3 rubles per month.

Where to begin?

To start using the "cloud storage" you need to register (full registration takes about 5 minutes). Every new account is credited with 10 bonus rubles, which is enough to fully test the service. If you are already our client and want to try it, 10 bonus rubles will be credited to you on request through the ticket system. Now everything is ready for work.

In the control panel, the section "Cloud Storage" → "Files" contains the web interface of the file manager. Create a private container for storing backups in it, for example "backups" (access to a private container is possible only after authorization, which is more secure for important data). To upload files to the storage, it is better to create an additional user with a minimal set of rights: this protects the primary user, who always has full access rights.

An additional user is created in the tab "Cloud Storage" → "Access settings". Enter any name for the user and click "Create"; the user settings dialog will appear.
User settings dialog
In the user settings, generate a new password. Storing the password is optional, but if you do not store it, you will not be able to look it up later in the user settings, only generate a new one. Be sure to tick the containers to which the user will have access, and do not forget to save the settings by clicking "Save access changes".

Now everything is ready to configure the backup process on the server.

Simple option

If you have a medium-sized site with a MySQL database that you want to back up regularly, you just need to download two specially prepared scripts and specify the necessary settings.

The first thing you need is the "supload" utility, which makes it convenient to upload files to the storage. It is installed as follows (assuming your server runs Debian):

 $ wget https://raw.github.com/selectel/supload/master/supload.sh
 $ mv supload.sh /usr/local/bin/supload
 $ chmod +x /usr/local/bin/supload

Next you need to download and configure the script to perform backup:

 $ wget https://raw.github.com/selectel/storage/master/utils/sbackup.sh
 $ chmod +x sbackup.sh

Open the "sbackup.sh" script in your favorite text editor and change the following values:
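
As an illustration only, the values to edit typically include the storage credentials, the target container and what to archive; the variable names below are assumptions, not necessarily the ones used in "sbackup.sh":

 # Hypothetical settings; the real variable names in sbackup.sh may differ
 SS_USER="USERNAME"          # additional storage user created earlier
 SS_KEY="USERKEY"            # password generated for that user
 SS_CONTAINER="backups"      # private container for backups
 SITE_DIR="/var/www/site"    # site files to archive
 MYSQL_USER="root"           # MySQL credentials for the dump
 MYSQL_PASSWORD="secret"
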
To check that everything works, the backup script can be run manually:

 $ ./sbackup.sh 

The result of the execution will be displayed in the console.

Now you need to set how often the backup runs; you can do this with cron. Simply move the script to a special directory:

 $ mv sbackup.sh /etc/cron.daily/50_sbackup

After that, cron will automatically run the archive script once a day.
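
If you prefer an explicit schedule to the cron.daily directory, a regular crontab entry works as well; the path and time below are only an example (assuming the script was left in /root):

 # Example crontab entry (edit with "crontab -e"): run the backup every day at 04:10
 10 4 * * * /root/sbackup.sh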

How to recover data?

If you need to restore data from a backup, you can do it as follows.

Most likely you uploaded the backup files into a private container. From there, a file can easily be downloaded through the file manager's web interface, but as a rule it is more convenient to download it directly to the server or give access to it to another person. This can be done with special links: they let you safely download the file on the server or pass it to someone else without making the container public.

To do this, find the file you need in the web interface of the file manager, click the operations icon to the right of it (it looks like a gear) and select "Open access":

The "Open access" item
For a link you can limit its lifetime and the number of downloads, and optionally set a password:

Link functionality

After creating the link you will receive a URL that can be used to download the file. The link itself is stored in the "links" container, where you can look up the download URL again later.
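
On the server itself such a link can be fetched with any HTTP client; the URL below is a placeholder for the real link from the "links" container:

 $ wget -O backupname_2013-01-26_08h40m.tar.bz2 "https://.../backupname_2013-01-26_08h40m.tar.bz2"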

After downloading the backup file to the server, unpack the data:

 $ mkdir backup_files                                               # directory for the unpacked site files
 $ tar xvf backupname_2013-01-26_08h40m.tar.bz2 -C backup_files/   # unpack the site archive
 $ bzcat mysql_backupname_ALL_2013-01-26_08h40m.bz2 | mysql        # restore the MySQL dump

More complex backup scripts

The "sbackup" script has rather limited functionality and in some cases it may not be enough, but it can always be modified to fit your needs.

Often servers already use some automated backup system, and some CMSes or control panels let you create and configure data archiving. You can take these ready-made systems and "teach" them to upload the archived data to the cloud storage: if the system can run an external script after archiving is completed, that hook can call the "supload" utility to upload the data.
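
As a sketch, such a post-archive hook might look like the following; the paths and container name are assumptions for illustration:

 #!/bin/sh
 # Hypothetical post-archive hook: upload the newest archive to the storage.
 # Adjust the credentials, container and backup directory to your setup.
 BACKUP_DIR=/var/backups/site
 LATEST=$(ls -t "$BACKUP_DIR"/*.tar.bz2 | head -n 1)
 supload -u USERNAME -k USERKEY backups "$LATEST"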

Using "supload"

Supload (GitHub) is a utility designed specifically to simplify uploading files to the Selectel storage. It is written in bash and uses only "standard" utilities available on almost any basic Linux system, so it is enough to download the script and it will work right away.

Utility features:

- uploading a single local file into a container or a folder inside it;
- recursive uploading of all files from a directory;
- MD5 checksum verification of every uploaded file;
- skipping files that are already in the storage and have not changed;
- setting a storage time after which a file is deleted automatically.

Once again, the installation:

 $ wget https://raw.github.com/selectel/supload/master/supload.sh
 $ mv supload.sh /usr/local/bin/supload
 $ chmod +x /usr/local/bin/supload

Uploading a single local file "my.doc" into the "files" container in the storage (the container must be created in advance):

 $ supload -u USERNAME -k USERKEY files my.doc 

You can also upload to the desired folder inside the container:

 $ supload -u USERNAME -k USERKEY files/docs/ my.doc 

In this case, the file's checksum (MD5) is calculated before uploading, and the upload is considered successful only if the checksums match.

To upload all files from a specific local folder, use the -r option:

 $ supload -u USERNAME -k USERKEY -r files local/docs/

For each uploaded file, checksum checks will also be performed.

Checksum verification gives another useful property: if you run the utility again and the data is already in the storage with matching checksums, the upload of that file is skipped. In other words, only new or changed files are uploaded.
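
For example, re-running the same recursive command from the example above will transfer only the files that appeared or changed since the previous run:

 $ supload -u USERNAME -k USERKEY -r files local/docs/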

The storage supports automatic deletion of files, and "supload" allows you to specify how long a file should be stored:

 $ supload -u USERNAME -k USERKEY -d 7d files my.doc 

The -d option specifies after what time, in minutes (m), hours (h) or days (d), the storage will automatically delete the file. The option also works when uploading files recursively. If a file has already been uploaded, re-running the command does not change the retention period that was set (or not set) earlier.
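
For instance, to keep a file for 12 hours instead of days (the syntax follows the example above):

 $ supload -u USERNAME -k USERKEY -d 12h files my.doc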

This property can be put to interesting use. Suppose your archiving system puts backup files into the /var/backups/site/ folder and itself deletes files after a certain period of time. You can configure a periodic launch of "supload" to upload all files with a limited storage time, for example:

 $ supload -u USERNAME -k USERKEY -d 31d -r backups /var/backups/sites 

Then each newly uploaded backup file will be kept in the storage for 31 days, while the previously uploaded files will be deleted automatically 31 days after the moment they were uploaded. For this scheme to work correctly, your archiving system must delete local files after a period shorter than the one specified in "supload", otherwise old files may be uploaded again.
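
The periodic launch itself can again be set up with cron; the schedule below is only an example:

 # Example crontab entry: upload the backup directory every night at 03:30
 30 3 * * * /usr/local/bin/supload -u USERNAME -k USERKEY -d 31d -r backups /var/backups/sites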

The "supload" utility is well suited both for manual file uploads and for use in archiving scripts. The only limitation is the maximum size of a single uploaded file: 5 GB.

Uploading large files

To upload files larger than 5 GB to the storage, you need a special method: segmented upload. The file is split into virtual parts that are uploaded separately. Downloading such a file back works "transparently", as a single whole file; the segments are glued together unnoticed on the storage side.

Python-swiftclient is one of the utilities that supports segmented uploads. You can upload a file as follows:

 $ swift upload container -S 1073741824 large_file 

In this case, the file is split "on the fly" into segments of 1 GB each and uploaded to the storage. The -S option specifies the size of one segment in bytes; the maximum segment size is 5 GB (5368709120 bytes).
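
The swift client also needs authentication parameters. A minimal sketch using the legacy v1 auth options might look like this; the auth URL is an assumption and should be replaced with the one given for your account:

 # The auth endpoint below is an assumption; substitute your storage's auth URL
 $ swift -A https://auth.selcdn.ru/ -U USERNAME -K USERKEY upload container -S 1073741824 large_file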

Source: https://habr.com/ru/post/168249/

