First, set up passwordless SSH access between the machines: generate a key pair for the postgres user and distribute the public key to the postgres users on each of the servers. Likewise, generate a key pair for root on the scaling node and send its public key to the postgres users on the master and slave servers.
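A minimal sketch of that key exchange, assuming the addresses used throughout this article (192.168.100.2 is the master, 192.168.100.3 the slave, 192.168.100.4 the scaling node):

# as postgres on each database server: generate a key and hand it to the peer
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id postgres@192.168.100.3    # and postgres@192.168.100.2 from the other side

# as root on the scaling node: the same, copied to both database servers
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id postgres@192.168.100.2
ssh-copy-id postgres@192.168.100.3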
users.$PGDATA/postgresql.conf
configuration file as follows: listen_addresses = '*' wal_level = hot_standby max_wal_senders = 2 wal_keep_segments = 32 #hot_standby = on
Allow replication connections in $PGDATA/pg_hba.conf:

host replication postgres 192.168.100.2/32 trust
host replication postgres 192.168.100.3/32 trust
Restart PostgreSQL on the master:

# service postgresql restart

and stop it on the slave:

# service postgresql stop
Then, as the postgres user on the master, we create a backup of the database and send it to the slave server:

$ psql -c "SELECT pg_start_backup('stream');"
$ rsync -a /var/lib/pgsql/data/ 192.168.100.3:/var/lib/pgsql/data/ --exclude postmaster.pid
$ psql -c "SELECT pg_stop_backup();"
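As an aside, on PostgreSQL 9.1 and later the same copy can be made with pg_basebackup instead of the pg_start_backup/rsync pair; a sketch, run as postgres on the slave against an empty data directory:

$ pg_basebackup -h 192.168.100.2 -U postgres -D /var/lib/pgsql/data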
On the slave, create $PGDATA/recovery.conf:

standby_mode = 'on'
primary_conninfo = 'host=192.168.100.2 port=5432 user=postgres'
trigger_file = 'failover'
The trigger_file parameter defines where PostgreSQL looks for a file whose appearance makes it switch to master mode. In this case, PostgreSQL looks for the file at the path $PGDATA/failover.
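This means a standby can also be promoted by hand at any moment simply by creating that file, which is exactly what the failover script below does over SSH:

$ touch /var/lib/pgsql/data/failover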
Next, enable hot_standby on the slave:

$ sed -i 's/#hot_standby = on/hot_standby = on/' /var/lib/pgsql/data/postgresql.conf
and start PostgreSQL there:

# service postgresql start
To verify that replication is working, look for the WAL sender process on the master:

$ ps aux | grep sender
2561 ?  Ss  0:00 postgres: wal sender process postgres 192.168.100.3(33341) streaming 0/2031D28

and the WAL receiver process on the slave:

$ ps aux | grep receiver
1524 ?  Ss  0:00 postgres: wal receiver process streaming 0/2031D28
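The same check can be made in SQL on the master; the state column should read streaming. A sketch using the pg_stat_replication view (column names as in the 9.x series this article targets):

$ psql -c "SELECT client_addr, state, sent_location FROM pg_stat_replication;"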
Now move to the scaling node and describe both backends in /etc/pgpool-II/pgpool.conf:

# listen on all addresses
listen_addresses = '*'

# the slave server
backend_hostname0 = '192.168.100.3'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/var/lib/pgsql/data'

# the master server
backend_hostname1 = '192.168.100.2'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/var/lib/pgsql/data'

# authenticate clients against pool_hba.conf
enable_pool_hba = true
In /etc/pgpool-II/pool_hba.conf, add the client authorization entries:

host all all 127.0.0.1/32 trust
host all all 192.168.100.2/32 trust
host all all 192.168.100.3/32 trust
and restart pgpool:

# service pgpool restart
To tell the servers apart, pgpool runs the pgpool-walrecrunning() stored procedure, which determines which server is the master and which is the slave. Configure failover in /etc/pgpool-II/pgpool.conf:

# script to execute on failover; %d and %H are substituted by pgpool
failover_command = '/etc/pgpool-II/failover.sh %d %H /var/lib/pgsql/data/failover'

# user for health checks
health_check_user = 'postgres'

# enable "master-slave" mode
master_slave_mode = true

# replication is done by streaming
master_slave_sub_mode = 'stream'

# pgpool's own replication is not used
replication_mode = false

# balance read queries across the backends
load_balance_mode = true
Pay attention to the failover_command line. The values substituted into it are: %d, the identifier of the failed node (its backend number in pgpool.conf), and %H, the IP of the new master server. The failover.sh script itself:
#!/bin/bash
# ID of the failed node
FAILED_NODE=$1
# IP of the new master
NEW_MASTER=$2
# trigger file that switches a standby into master mode
TRIGGER_FILE=$3

# node 0 is the slave (192.168.100.3 in pgpool.conf): if it failed,
# no promotion is needed
if [ $FAILED_NODE = 0 ]; then
    echo "the slave has failed, nothing to do"
    exit 1
fi
echo "the master has failed"
echo "new master: $NEW_MASTER"
ssh -T postgres@$NEW_MASTER touch $TRIGGER_FILE
exit 0
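So, for example, if the master (node 1, 192.168.100.2) goes down and pgpool promotes 192.168.100.3, the configured failover_command expands to roughly:

/etc/pgpool-II/failover.sh 1 192.168.100.3 /var/lib/pgsql/data/failover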
Put failover.sh into the /etc/pgpool-II/ directory and give it permissions 755. The sql/pgpool-walrecrunning path of the pgpool source package contains the source code of the procedure we need.
To compile it you need the PostgreSQL header files; after that, run make to get pgpool-walrecrunning.so and pgpool-walrecrunning.sql, the SQL script that loads the procedure. Copy the library to /usr/lib64/pgsql/ (the directory PostgreSQL refers to as $libdir) and the SQL file to /usr/share/pgsql/ on the database servers, then load the procedure:

$ psql -f /usr/share/pgsql/pgpool-walrecrunning.sql -d postgres
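To confirm the procedure is in place you can call it directly; assuming the SQL script registers it under the name pgpool_walrecrunning, it should presumably return false on the master and true on a server that is replaying WAL:

$ psql -d postgres -c "SELECT pgpool_walrecrunning();"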
You can now check the state of the cluster by running SHOW pool_nodes; in the psql client, having previously connected to the scaling node:

 hostname      | port | status | lb_weight
---------------+------+--------+-----------
 192.168.100.3 | 5432 | 2      | 0.500000
 192.168.100.2 | 5432 | 2      | 0.500000
(2 rows)

Status 2 means the backend is up and pgpool is connected to it; 3 would mean the node is down.
Returning a failed node to the cluster is done online, with the help of the pg_ctl utility. First add the recovery settings to /etc/pgpool-II/pgpool.conf:

# user that performs the recovery
recovery_user = 'postgres'

# its password
recovery_password = '123456'

# recovery script, which must be located in $PGDATA on the master
recovery_1st_stage_command = 'basebackup.sh'
Next, create a hash of the postgres user's password:

# pg_md5 123456 >> /etc/pgpool-II/pcp.conf

where 123456 is the postgres password. In addition to the password hash, you need to specify the user name this hash belongs to: the file must contain a line of the form postgres:encrypted_password.
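Alternatively, since pg_md5 prints the hash to stdout, the whole entry can be written in the required user:hash form in one step:

# echo "postgres:$(pg_md5 123456)" >> /etc/pgpool-II/pcp.conf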
Then, in the $PGDATA directory on the master, create the basebackup.sh script with the following content:

#!/bin/bash
# $PGDATA on the master
PRIMARY_DATA=$1
# IP of the slave being recovered
SLAVE_IP=$2
# $PGDATA on the slave
SLAVE_DATA=$3
# master IP to put into the generated recovery.conf
PRIMARY_IP=$(ifconfig eth0 | sed -n '2 {s/^.*inet addr:\([0-9.]*\) .*/\1/;p}')
# temporary directory
TMP_DIR=/var/lib/pgsql/tmp

# remove leftover recovery files (this master may be a former slave)
cd $PRIMARY_DATA
rm -f recovery.* failover

# check whether hot_standby is commented out;
# if it is not (this server used to be a slave), comment it out and restart
grep -q '#hot_standby = on' postgresql.conf
if [ $? = 1 ]
then
    sed -i 's/hot_standby = on/#hot_standby = on/' postgresql.conf
    /usr/bin/pg_ctl restart -D $PRIMARY_DATA
fi

# stop PostgreSQL on the slave
ssh -T postgres@$SLAVE_IP "/usr/bin/pg_ctl stop -D $SLAVE_DATA"
# start the backup
psql -c "SELECT pg_start_backup('Streaming Replication', true)" postgres
# copy the data to the slave
rsync -a $PRIMARY_DATA/ $SLAVE_IP:$SLAVE_DATA/ --exclude postmaster.pid --exclude postmaster.opts

# prepare the slave's configuration in a temporary directory
mkdir $TMP_DIR
cd $TMP_DIR
# enable hot_standby in the slave's postgresql.conf
cp $PRIMARY_DATA/postgresql.conf $TMP_DIR/
sed -i 's/#hot_standby = on/hot_standby = on/' postgresql.conf
# generate recovery.conf
echo "standby_mode = 'on'" > recovery.conf
echo "primary_conninfo = 'host=$PRIMARY_IP port=5432 user=postgres'" >> recovery.conf
echo "trigger_file = 'failover'" >> recovery.conf
# remove stale recovery files on the slave
ssh -T postgres@$SLAVE_IP rm -f $SLAVE_DATA/recovery.*
# ship the configuration to the slave
scp postgresql.conf postgres@$SLAVE_IP:$SLAVE_DATA/postgresql.conf
scp recovery.conf postgres@$SLAVE_IP:$SLAVE_DATA/recovery.conf
# finish the backup
psql -c "SELECT pg_stop_backup()" postgres
# clean up
cd ..
rm -fr $TMP_DIR
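pgpool runs this script on the current master when recovery is initiated, passing the master's $PGDATA, the address of the backend being recovered, and that backend's $PGDATA; for the node-1 recovery shown below, the call is effectively:

basebackup.sh /var/lib/pgsql/data 192.168.100.2 /var/lib/pgsql/data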
Give the script permissions 755. In the same $PGDATA directory, create a script named pgpool_remote_start (exactly this name!) with the following content:

#!/bin/bash
# pgpool passes the address of the node to start and its data directory
if [ $# -ne 2 ]
then
    echo "wrong number of arguments: expected the slave IP and its data directory"
    exit 1
fi
SLAVE_IP=$1
SLAVE_DIR=$2
PGCTL=/usr/bin/pg_ctl

# start PostgreSQL on the recovered node
ssh -T $SLAVE_IP $PGCTL -w -D $SLAVE_DIR start 2>/dev/null 1>/dev/null < /dev/null &
One more stored procedure is required, pgpool-recovery.so, whose source is located along the sql/pgpool-recovery path of the pgpool source package. Build it the same way, send the files to the working servers, and load the procedure into the database:

$ psql -f /usr/share/pgsql/pgpool-recovery.sql -d template1
A failed node can now be returned to the cluster from the scaling node:

$ pcp_recovery_node 20 192.168.100.4 9898 postgres 123456 1

The pcp_recovery_node command performs the recovery of a server back into the cluster. Here 20 is the timeout in seconds for the PCP connection, 192.168.100.4 is the IP of the scaling node, 9898 is the port on which the scaling node accepts PCP commands, postgres is the name of the recovery user, 123456 is its password, and 1 is the ID of the node being restored.

Source: https://habr.com/ru/post/188096/