
Writing a client-server backup system for *NIX

Good day, everyone.
As the saying goes, system administrators are divided into those who already make backups and those who don't yet.
So, once again about backups.
A situation came up where I always needed a fresh backup on hand from a large number of remote hosts, and the system had to be client-server.
Of course, there is plenty of free software that provides more functionality than you could ever need, but it is either too sophisticated or not quite right in the end. So it was decided to build our own backup system on top of fsbackup.
So, we have a server and client hosts.
The OS on all of them is FreeBSD 8.2, but that isn't important; this will work on any version.

Let's proceed with the installation (on the client hosts only; we don't touch the server):

[root@server /usr/ports/sysutils/fsbackup]# make install clean
Next, go to the directory where fsbackup was installed and create the file fsbackup.conf with the following content:
$cfg_backup_name = "host1";
$cfg_cache_dir = "/usr/local/fsbackup/cache";
$prog_md5sum = "md5sum -b";
$prog_tar = "/usr/bin/tar";
$prog_ssh = "/usr/bin/ssh";
$prog_rm = "/bin/rm";
$prog_gzip = "/usr/bin/gzip";
$prog_pgp = "";
$cfg_checksum = "timesize";
$cfg_backup_style = "sync";
# keep a copy of the previous backup
$cfg_save_old_backup = 1;
$cfg_type = "local";
$cfg_local_path = "/home/back/backup_data";  # where the backup is stored
$cfg_time_limit = 0;
$cfg_size_limit = 0;
$cfg_maximum_archive_size = 0;
$cfg_root_path = "/";
$cfg_verbose = 2;
$cfg_stopdir_prune = 0;
1;
__DATA__
# what to back up
/usr/local/etc
/etc
!/usr/local/etc/share
# what to exclude
f!\.core$
f!^core$
f!\.o$
f!\.log$

What each option means is described in detail in the stock config.
Next you need to edit the file create_backup.sh,
adding these lines at the beginning:
#!/bin/sh
HASH=`find /usr/home/back/backup_data -name "*hash" -mtime -1`

and these lines at the very end of the file:
# archive the tree only when it changed within the last day
if [ -n "${HASH}" ]; then
    tar -cf /usr/home/back/backup_tar/arcamart.tar /usr/home/back/backup_data
else
    printf "===>>> no new backup\t" > backup_err.log
fi
exit 0

Also, don't forget to correct the line config_files="fsbackup.conf" in this file, specifying the name of your config.
Then add this script to cron (necessarily root's cron).
The script will thus synchronize the tree and, whenever anything changes, additionally create an archive.
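As a sketch, a possible root crontab entry on the client (the nightly 02:30 schedule is an assumption; adjust the path to wherever you installed fsbackup):

```shell
# root's crontab on each client host (edit with: crontab -e):
# run the backup script every night at 02:30
30 2 * * * /usr/local/fsbackup/create_backup.sh > /dev/null 2>&1
```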
Now we need to add a user back:
sudo pw useradd back -m -G back

Go to back's home directory and create the necessary folders:
mkdir backup_tar # for synchronized tree archive
mkdir .ssh # for key authorization (more on that later)

That completes the client-side setup, apart from creating the key for authorization via ssh.
Now for setting up the server side.
We need the server to connect in turn to each client, check whether there is a new backup, and copy it over if there is.
In addition, the server side should check host availability and write logs.
Let's start:
We create a user back here as well. (A small clarification: on my setup the server's home directory has a different path than on the hosts; for example, on a host it is /usr/home/back, while on the server it is /usr/local/home/back. Be careful with the paths.)
[root@server /usr/local/home/back]# mkdir .ssh

Now we generate the keys for authorization (a DSA key, since the server script below references id_dsa):
[root@server /usr/local/home/back/.ssh]# ssh-keygen -t dsa

Now, for the server to log in to the hosts without entering a password, you just need to append the public key to the file $HOME/.ssh/authorized_keys on each of your client hosts.
Also, do not forget to set permissions and owner on the file authorized_keys
sudo chmod 600 authorized_keys
sudo chown back:back authorized_keys
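Installing the key on a client can be sketched like this (run as the back user on each client; the file name id_dsa.pub assumes it has already been copied over, e.g. with scp):

```shell
# append the server's public key to the back user's authorized_keys,
# then lock down the permissions so sshd will accept the file
mkdir -p "$HOME/.ssh"
cat id_dsa.pub >> "$HOME/.ssh/authorized_keys"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/authorized_keys"
```

After this, ssh from the server to the client as user back should log in without a password prompt.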

Now let's write the script that does what we need.
First, create a file if_routers containing the list of your hosts.

Example (if_routers, edited with ee):
octet="192.168"
test="${octet}.0.1"
test2="${octet}.0.2"
test3="${octet}.0.3"
test4="${octet}.0.4"
test5="${octet}.0.5"

Create a file with any name, let it be sbackup.sh
Content:
#!/bin/sh

s_copy="/usr/local/bin/rsync -azv -e \"ssh -l back -o StrictHostKeyChecking=no\""
DST="/usr/local/home/back/backup_data"    # where backups are collected on the server
dir_1="/usr/home/back"                    # home directory on the clients
dir_2="/usr/local/home/back"              # home directory on the server
dir_3="/usr/home/back/backup_tar"         # archive directory on the clients
HOME_DIR="/usr/local/home"

# load the list of hosts
if [ -f ${dir_2}/if_routers ]; then
    . ${dir_2}/if_routers
else
    echo "Procedure ${dir_2}/if_routers is not installed" > ${dir_2}/backup.log 2>&1
    exit 1
fi

TIMESTAMP=`date +"%Y-%m-%d %R"`

# check host availability with fping
get_alive () {
    check_host="/usr/local/sbin/fping -a"
    eval ${check_host} $1 > /dev/null 2>&1
}

err_msg () {
    printf "DATE: $TIMESTAMP.\n" >> ${dir_2}/backup_err.log
}

# for each host: check that it is up, look for a fresh archive,
# copy it over if there is one, otherwise log the failure
get_alive $test
if [ $? -eq 0 ]; then
    BACKTEST=`ssh -i ${dir_2}/.ssh/id_dsa back@${test} "find ${dir_3} -name '*.tar' -mtime -1 | sed -E 's/.*\///g'"`
    if [ -n "${BACKTEST}" ]; then
        (
            printf "===>>> Start remote backup: ${TIMESTAMP}\n"
            printf "===================================\n"
            printf "===>>> ${dir_1}\n"
            eval ${s_copy} back@${test}:${dir_3}/${BACKTEST} ${DST}
        ) > ${dir_2}/backup.log 2>&1
    else
        printf "===>>> Host: $test has no new backup\t" > ${dir_2}/backup_err.log
        err_msg
    fi
else
    printf "===>>> Host: $test is down\t" > ${dir_2}/backup_err.log
    err_msg
    exit 1
fi

# the same check-and-copy block is repeated for the remaining hosts
# (test2, test3, and so on)

exit 0

Save it and create the folder /usr/local/home/back/backup_data.
Now you can run the script to test that it works. Don't forget to chmod +x it.
One more clarification: for the script to connect to the remote hosts properly, it must be run as the user back:
sudo su - back
./sbackup.sh
And add to cron.
sudo crontab -u back -e
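A sketch of the entry itself (the 03:00 schedule and script location are assumptions; schedule it after the clients have finished their own backups):

```shell
# crontab of the back user on the server:
# pull fresh archives from the clients every night at 03:00
0 3 * * * /usr/local/home/back/sbackup.sh > /dev/null 2>&1
```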
Done.
The result: the server always has a fresh backup, without unnecessary copying, i.e. copying happens only when files actually change.
To restore, you just need to unpack the desired archive on the host you want to restore.
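A restore is then an ordinary tar extraction; a minimal sketch (the archive name matches what create_backup.sh produces above, the scratch path is an assumption):

```shell
# on the host being restored: unpack the archive into a scratch directory,
# then copy the needed files back into place by hand
mkdir -p /tmp/restore
tar -xpf arcamart.tar -C /tmp/restore
# the synchronized tree ends up under /tmp/restore, preserving the
# original /usr/home/back/backup_data layout
```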

Thank you all for your attention; I hope I haven't forgotten anything.

Source: https://habr.com/ru/post/133051/

