Good day. Out of the box, PostgreSQL 8.x has no replication system (and the bright future of the 9.x branch in stable has not yet arrived). For this purpose an external tool was written: Slony-I.
How it works
It works quite simply: the slon daemon connects to the PostgreSQL server, hooks into triggers on the replicated tables, and waits for any event that changes the contents of the database. There is also slonik, a configuration utility that reads its commands from STDIN, plus several other control utilities for various purposes.
When it comes to replication in PostgreSQL, the following terms are commonly used:
- Node: a server-database pair. So, for example, if you have several replicated databases on one server, that is not one node but several.
- Cluster: a group of nodes. Simple enough, I suppose: the master and the slave(s).
- Tables: even simpler here. :)
- Sequences: counters for tables.
- Set: a group of tables and sequences. A separate set is created for each database.
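To make the terms concrete, here is a minimal slonik preamble describing a two-node cluster (the cluster name, addresses, and database name are invented for illustration):

```
cluster name = my_cluster;
node 1 admin conninfo = 'dbname=mydb host=192.168.1.1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=192.168.1.2 user=slony';
```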
After a successful configuration, Slony creates a schema named _cluster in the replicated database (where "cluster" is the cluster name specified in the config), containing several of its own tables. Their purpose is described in the official documentation, or you can inspect them with psql. So if you run into difficulties during setup and, after starting over, get a message like "this node / cluster / table is already in the list", a leftover schema is probably tripping you up.
Also, keep in mind that only tables that have a primary key can be replicated.
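You can find the offenders ahead of time with a query like this (a sketch, assuming the database is called mydb; any tables it lists will be skipped by replication):

```shell
# List user tables that have no primary key
psql -U slony -d mydb -c "
  SELECT n.nspname || '.' || c.relname
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE c.relkind = 'r'
    AND n.nspname NOT IN ('pg_catalog', 'information_schema')
    AND NOT EXISTS (SELECT 1 FROM pg_constraint p
                    WHERE p.conrelid = c.oid AND p.contype = 'p');"
```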
The most unpleasant feature of Slony, for me, was that it cannot replicate all tables in a database at once: you must list them one by one. When your database has more than 50 tables, writing the configs becomes extremely tedious. For this purpose I wrote the script posted below.
The most inquisitive of you have probably already seen manuals like this and may think this post is a rehash. It is not. All the manuals I have seen suggest feeding the config directly to slonik via STDIN. With that approach the system will work, but (at least on Debian/Ubuntu) there will be two problems:
- The init script will not work.
- The auxiliary Slony utilities will not work.
The correct approach is to store the Perl config in /etc/slony1/slon_tools.conf (a sample can be found at /usr/share/doc/slony1-bin/examples/slon_tools.conf-sample.gz), and to list in /etc/default/slony1 the numbers of the nodes living on this machine that the init script should start. As I said, the first config is a Perl script, and it is included by slon_start. Manual configuration is not covered in this post; I only point out where to find examples.
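For reference, such a config looks roughly like this (a trimmed sketch; the cluster name, addresses, and table names are invented):

```perl
# /etc/slony1/slon_tools.conf (fragment)
$CLUSTER_NAME = 'my_cluster';
$LOGDIR       = '/var/log/slony1';
$MASTERNODE   = 1;

add_node(node => 1, host => '192.168.1.1', dbname => 'mydb', port => 5432, user => 'slony');
add_node(node => 2, host => '192.168.1.2', dbname => 'mydb', port => 5432, user => 'slony');

$SLONY_SETS = {
    "set1" => {
        "set_id"       => 1,
        "table_id"     => 1,
        "sequence_id"  => 1,
        "pkeyedtables" => [ 'public.users', 'public.orders' ],
        "sequences"    => [ 'public.users_id_seq' ],
    },
};
```

In /etc/default/slony1 you then list the nodes to start on this machine (on Debian the variable is, if memory serves, SLON_TOOLS_START_NODES).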
I assume you already have a database and a configured PostgreSQL installation, that its tables have primary keys, and that you are ready to begin the configuration. I will note in advance that everything was done on various versions of Ubuntu Server; on Debian things should be about the same, but I cannot say what will happen on other distros.
So we need the postgresql-8.3-slony1 and slony1-bin packages. It is desirable that both packages come from the 2.x branch (it is more stable).
First, in postgresql.conf you need to uncomment the line containing listen_addresses and, instead of localhost, specify the IP address of the interface facing the slave. You can also put '*' instead of an address; then the server will accept connections on all interfaces.
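That is, the line ends up looking something like this (the address here is an example):

```
listen_addresses = '192.168.1.1'        # or '*' to listen on all interfaces
```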
Next, in pg_hba.conf (also under /etc/postgresql) you need to add a line like
host all all 192.168.1.0/24 md5
where instead of 192.168.1.0/24 you specify the slave's network or IP. Then do the same on the slave, specifying the master's IP.
Then restart PostgreSQL on both the master and the slave so they pick up the new settings.
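On Debian/Ubuntu this is done roughly like so (the cluster name main is the default and an assumption here):

```shell
# on both the master and the slave
/etc/init.d/postgresql-8.3 restart
# or per cluster:
pg_ctlcluster 8.3 main restart
```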
We also need to create a PostgreSQL superuser on both machines:
createuser -sP slony
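It makes sense to check right away that the machines can reach each other as this user (the address is an example):

```shell
# from the slave: should ask for the password and list the master's databases
psql -h 192.168.1.1 -U slony -l
```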
So as not to inflate the post several times over, the script is here. I will say right away that I do not claim any prizes for the beauty of the code; on the contrary, it was written on my knee in my spare time. The important thing is that it works. Now let me describe what it does.
For normal operation you will need to run it as root or via sudo (it does su to the postgres user, writes to /etc, and runs the init scripts). It is also desirable (for the last two stages) that the slave let root in over ssh (better with a key, since entering the password will quickly get tiresome). The libdbd-pg-perl package is also required.
The current version does not take parameters on the command line (although it should), so after launching it you enter a dialogue with the script. The algorithm, step by step:
1. Ask the user for the cluster name and the master/slave connection details.
If you make a mistake, no matter: on a failed database connection the script will offer to re-enter the data. The cluster name is arbitrary.
2. Database selection dialogue.
Simple: the script queries the server for the available databases and offers to choose the ones to replicate. To select (or deselect) a database, enter its number. An asterisk selects everything. Press Enter when done.
3. Adding tables and sequences.
With the list of selected databases in hand, the script collects their tables. All tables and sequences in the selected databases are added to the list.
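The essence of this step fits in a few lines of shell: take the collected table names and format them as entries for the Perl config. In this sketch the list is hard-coded (the script itself gets it from the server), so it runs without a database:

```shell
# Table names as the script would have collected them
# (hard-coded here; the real script queries the server for them)
tables="public.users public.orders public.items"

# Format them as a "pkeyedtables" list in slon_tools.conf syntax
entries=$(for t in $tables; do printf "            '%s',\n" "$t"; done)
printf '"pkeyedtables" => [\n%s\n],\n' "$entries"
```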
4. Transferring the database structure from master to slave.
createdb and createlang are run in sequence, then a local pg_dump -s, after which the structure is sent to the slave over ssh.
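In commands, this step amounts to roughly the following (the hosts, database name, and ssh user are assumptions for the sketch):

```shell
# on the slave (via ssh from the master): create an empty database
# and install the procedural language Slony needs
ssh root@192.168.1.2 "su - postgres -c 'createdb mydb'"
ssh root@192.168.1.2 "su - postgres -c 'createlang plpgsql mydb'"

# on the master: dump the schema only and pour it into the slave
pg_dump -s -U slony mydb | ssh root@192.168.1.2 "su - postgres -c 'psql mydb'"
```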
5. Generating the config and sending it out.
All the necessary data is now at hand: the config is generated and, if you are root, placed into /etc/slony1 (on the slave too), and /etc/default/slony1 is edited (indicating which nodes should be started).
6. Starting the daemon.
Well, that's all. It only remains to run it. :)
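With slon_tools.conf in place, the standard slony1-bin wrappers finish the job; in outline (set 1, master node 1, slave node 2, as in the examples above):

```shell
# register the cluster and the set described in slon_tools.conf
slonik_init_cluster | slonik
slonik_create_set 1 | slonik

# start the daemons (the init script does the same for the nodes
# listed in /etc/default/slony1)
slon_start 1
slon_start 2

# subscribe the slave to the set; replication starts from here
slonik_subscribe_set 1 2 | slonik
```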
I hope this will be useful to someone. If you have questions, or something suddenly goes wrong, welcome to the comments.