
Migrating HipChat Server data from self-hosted VM to Amazon EC2



The company I currently work for has been using HipChat Server for internal communication among a team of developers and admins for about a year. It runs as a self-hosted KVM virtual machine with a qcow2 disk image. Recently the need arose to move internal services to AWS, so I started looking at the options. There were really only a few:
- Convert qcow2 → raw → AMI.
- The official approach: export chats, users, rooms and files (but not passwords, integrations or API keys).
- The third option, which I describe below.

I decided to set the first two options aside. I did not want to go with the official method, because I really did not want to deal with restoring passwords and keys, even though in my case that would not have taken much time. And converting the image to an AMI is simply less interesting.
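
For completeness, the first option would look roughly like this. It is only a sketch of the path I did not take: the bucket name, file names and region are placeholders, and VM Import additionally needs an S3 bucket and the vmimport IAM role set up beforehand.

    # qcow2 -> raw -> AMI, the option I set aside (names are placeholders)
    qemu-img convert -f qcow2 -O raw hipchat.qcow2 hipchat.raw
    aws s3 cp hipchat.raw s3://my-import-bucket/hipchat.raw
    aws ec2 import-image \
        --description "HipChat Server" \
        --disk-containers "Format=raw,UserBucket={S3Bucket=my-import-bucket,S3Key=hipchat.raw}"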

The starting point: HipChat Server 1.4.1 on both the source and the destination. The source is the KVM/qcow2 virtual machine. The destination is a clean HipChat Server EC2 instance (no users apart from me), but with the initial setup done and a trial license. The configuration is identical and the disk sizes match (1 CPU, 2 GB RAM, 50 GB HDD).

    # VARS
    SOURCE=hipchat.example.com
    DEST=new_hipchat.example.com


So I decided to dig around in the guts of the server and figure out which files and which services' settings need to be transferred. A quick look revealed the following technology stack:
- nginx,
- php5-fpm,
- mysql,
- elasticsearch,
- redis,
- gearman-job-server,
- memcached,
- and several related services.
Plus a number of Python, Ruby and chef-solo scripts.
Familiar?
I did not dig any deeper; I only needed to figure out what to transfer and how.

I could have dumped the databases (MySQL, Elasticsearch, Redis), but since I was going to completely stop all services on both the source and the destination anyway, I decided to simply copy the files from the source to the destination.
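
For reference, the dump-based alternative I decided against would have looked roughly like this; a sketch only, with an arbitrary output path, credentials assumed to be readable from debian.cnf, and Elasticsearch needing its own snapshot mechanism on top:

    # Rough sketch of the dump approach I skipped (output path is arbitrary)
    mysqldump --defaults-file=/etc/mysql/debian.cnf --all-databases --single-transaction > /tmp/hipchat-all.sql
    redis-cli save    # forces an RDB snapshot into the configured Redis data dir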
Before you begin, you need to warn the team that HipChat will be unavailable for n hours (it makes more sense to do this during off-hours), then stop all the services from the list below (on both the source and the destination):

    # ON $DEST AND $SOURCE
    /etc/init.d/cron stop
    /etc/init.d/monit stop
    /etc/init.d/nginx stop
    /etc/init.d/php5-fpm stop
    /etc/init.d/hipchat stop
    /etc/init.d/integrations-0 stop
    /etc/init.d/tetra-proxy stop
    /etc/init.d/tetra-proxy-0 stop
    /etc/init.d/tetra-app-0 stop
    /etc/init.d/barb-0 stop
    /etc/init.d/coral-0 stop
    /etc/init.d/crowd stop
    /etc/init.d/cumulus stop
    /etc/init.d/curler stop
    /etc/init.d/elasticsearch stop
    /etc/init.d/gearman-job-server stop
    /etc/init.d/memcached stop
    /etc/init.d/redisserver stop
    /etc/init.d/mysql stop

I did not think of cron right away: it turned out there is a cron job that periodically checks monit, and monit in turn checks the other services (all quite logical, nicely done). Not all services from this list are actually running, by the way: tetra-proxy, for example, and one other I do not remember.
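
If you want to see that cron-to-monit chain for yourself, something along these lines should do; the exact paths are my assumption, not something the appliance documents:

    # Look for the cron job that pokes monit (paths are a guess)
    grep -R monit /etc/cron* /var/spool/cron 2>/dev/null
    # Ask monit what it is watching and in what state
    monit summary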

After scrolling through the directory contents, I put together a list of what needs to be copied (not on the first try; some of it was found empirically):

    # Copy all files from $SOURCE to $DEST
    rsync -avz /etc/nginx/ $DEST:/etc/nginx/
    rsync -avz /etc/mariadb_grants $DEST:/etc/mariadb_grants
    rsync -avz /etc/chef/ $DEST:/etc/chef/
    rsync -avz /etc/crowd/ $DEST:/etc/crowd/
    rsync -avz /etc/mysql/debian.cnf $DEST:/etc/mysql/debian.cnf
    rsync -avz --del /chat_history/ $DEST:/chat_history/
    rsync -avz --del /data_bags/ $DEST:/data_bags/
    rsync -avz --exclude='file_store/archive/pool' /file_store/ $DEST:/file_store/
    rsync -avz --del /hipchat/ $DEST:/hipchat/
    rsync -avz --del /hipchat-scm/ $DEST:/hipchat-scm/
    rsync -avz --del --exclude='home/admin/.ssh' /home/ $DEST:/home/
    rsync -avz --del /ops/ $DEST:/ops/
    rsync -avz --del /opt/ $DEST:/opt/
    rsync -avz --del /var/lib/mysql/ $DEST:/var/lib/mysql/
    rsync -avz --del /var/lib/redis/ $DEST:/var/lib/redis/
    rsync -avz --del /var/lib/cloud/ $DEST:/var/lib/cloud/

It is important not to overwrite .ssh/authorized_keys, so that you do not end up struggling to restore access to the instance!

Before you can copy files from the source to the destination, you first need to set up key-based access and the SSH server. The default config has a user whitelist and forbids root login, so I had to temporarily make the following changes:

    editor /etc/ssh/sshd_config
    ...
    PermitRootLogin without-password
    ...
    # Whitelist to HipChat admin
    DenyUsers ubuntu hipchat
    AllowUsers root admin nessus

and restart ssh:
    /etc/init.d/ssh restart

After copying is completed and hipchat starts, these changes will be automatically rolled back.

Once everything has been copied successfully, you can try starting all the services back up on the destination (it is better not to touch the source until you are sure everything is fine), but rebooting is better; in fact, I had to reboot to get it working.

    # ON DEST
    /etc/init.d/memcached start
    /etc/init.d/redisserver start
    /etc/init.d/mysql start
    /etc/init.d/cron start
    /etc/init.d/monit start
    /etc/init.d/nginx start
    /etc/init.d/php5-fpm start
    /etc/init.d/hipchat start
    /etc/init.d/integrations-0 start
    /etc/init.d/tetra-proxy start
    /etc/init.d/tetra-proxy-0 start
    /etc/init.d/tetra-app-0 start
    /etc/init.d/barb-0 start
    /etc/init.d/coral-0 start
    /etc/init.d/crowd start
    /etc/init.d/cumulus start
    /etc/init.d/curler start
    /etc/init.d/elasticsearch start
    /etc/init.d/gearman-job-server start

Another important note: HipChat is tied to DNS records and to entries in the hosts file, and editing the latter will not help (it is overwritten at startup). So for everything to come up, the server must be able to resolve itself by the hostname that is registered in DNS and that you specified during the initial configuration (such as hipchat.example.com).

    cat /etc/hosts
    127.0.0.1 localhost localhost.localdom
    # Network nodes
    192.168.0.10 hipchat.example.com
    # Services
    192.168.0.10 graphite.hipchat.com
    192.168.0.10 mysql.hipchat.com
    192.168.0.10 redis-master.hipchat.com
    192.168.0.10 redis-slave.hipchat.com
    # IPv6
    ::1 ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    ff02::3 ip6-allhosts

To check that the destination actually works, you will have to change the DNS record for your HipChat server. Or, in the case of AWS, you can use internal DNS by creating a private hosted zone with your domain in Route 53 and pointing a record at the instance's private IP.
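
As a sketch of the Route 53 route (the region, VPC ID, zone ID and private IP below are placeholders I made up), creating the private zone and the A record from the CLI looks roughly like this:

    # Private hosted zone attached to the VPC where the instance lives
    aws route53 create-hosted-zone \
        --name example.com \
        --caller-reference "hipchat-migration-$(date +%s)" \
        --vpc VPCRegion=eu-west-1,VPCId=vpc-0123456789abcdef0

    # Point hipchat.example.com at the instance's private IP
    aws route53 change-resource-record-sets \
        --hosted-zone-id ZEXAMPLE123 \
        --change-batch '{
          "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
              "Name": "hipchat.example.com",
              "Type": "A",
              "TTL": 300,
              "ResourceRecords": [{"Value": "10.0.0.10"}]
            }
          }]
        }'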
Only after that should you run service hipchat restart (or better yet, reboot, to be sure everything comes back up cleanly after a restart), and everything will work.
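
A few quick sanity checks I find useful after the reboot; this is just my own checklist, not anything the appliance prescribes:

    # Is everything monit watches actually up?
    monit summary
    # Does the web UI answer on the hostname the server thinks it has?
    curl -kI https://hipchat.example.com/
    # Is MySQL back and serving the HipChat databases?
    mysql --defaults-file=/etc/mysql/debian.cnf -e 'show databases;'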

I deliberately tried not to interfere with the logic built into the server, because any changes of mine would disappear with the next update and could potentially break things.

In the end, the experiment was a success: in the morning the guys did not even notice anything had happened and carried on using the team chat without any problems.

I do not recommend using this article as a step-by-step guide if you are planning such a migration. It may contain errors and inaccuracies, so do everything at your own risk. Proceed carefully and make sure you clearly understand what you are doing and how it could backfire.

Make backups and check them.
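
In this scenario that means, for example, a cold copy of the qcow2 image on the source side and an EBS snapshot on the AWS side before you start; a minimal sketch, with the domain name, image path and volume ID as placeholders:

    # Source side: cold copy of the VM disk image (path and domain are placeholders)
    virsh shutdown hipchat
    cp /var/lib/libvirt/images/hipchat.qcow2 /backup/hipchat-$(date +%F).qcow2

    # AWS side: snapshot the instance's EBS volume (volume ID is a placeholder)
    aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
        --description "hipchat pre-migration"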

Long uptime and good mood to all.

Source: https://habr.com/ru/post/302460/

