
Deploying a Postgres-XL Cluster for Dummies

Hello. I want to share with Habr readers my experience deploying a Postgres-XL cluster, in the form of a mini-guide for dummies. There are not many articles and manuals on deploying a Postgres-XL cluster, though there are enough. But they all share a couple of significant flaws from the point of view of someone like me, who had never worked with clustering and, moreover, had never worked with Linux-like operating systems before. All articles of this kind are written for people already more or less familiar with Linux and with deploying PostgreSQL / Postgres-XL in such an environment.

That is why I wanted to share my results with everyone else. Below, I will describe the entire deployment process step by step, from downloading and compiling the Postgres-XL sources to configuring the cluster.

Since plenty of articles "for the experienced" have already been written, including on Habr, I will omit the description of Postgres-XL itself, its components, and their roles.

Part 1. Preparing the environment


For the test cluster, a configuration of 4 nodes was chosen: GTM, GTM-Standby, and 2 nodes each running GTM-Proxy, Coordinator, and Datanode.

All nodes are virtual machines with 1024 MB of RAM and a 2.1 GHz processor. For the OS distribution I settled on the then-latest CentOS 7.0 (Minimal install); I will omit its installation as well.

Part 2. Installing dependencies


So, we have 4 clean machines with CentOS installed. Before downloading the sources from SourceForge, we first install the packages required to compile them.

# yum install -y wget vim gcc make kernel-devel perl-ExtUtils-MakeMaker perl-ExtUtils-Embed readline-devel zlib-devel openssl-devel pam-devel libxml2-devel openldap-devel tcl-devel python-devel flex bison docbook-style-dsssl libxslt 

Since we have a clean CentOS installation, I added to this step the installation of wget (a downloader) and vim (a text editor). After installing these packages, it also won't hurt to update the rest of the system:

 # yum update -y 

Once the update finishes, we proceed to the next part of the process.

Part 3. Downloading, compiling, and installing the source code


To download the source code, execute the command:

 # wget http://sourceforge.net/projects/postgres-xl/files/latest/download
 # mv download pgxl-9.2.src.tar.gz

Or so:

 # wget http://sourceforge.net/projects/postgres-xl/files/latest/download -O pgxl-9.2.src.tar.gz 

Copy the downloaded archive into the desired folder and unpack:

 # cp pgxl-9.2.src.tar.gz /usr/local/src/
 # cd /usr/local/src/
 # tar -xzvf pgxl-9.2.src.tar.gz

The archive unpacks into the postgres-xl folder; check it with the command:

 # ls 

To compile the sources and later install and run them, we need a non-root user account, for example:

 # useradd postgres
 # passwd postgres

Next, enter and confirm the password, then give this user ownership of the entire source folder:

 # chown -R postgres.postgres postgres-xl
 # cd postgres-xl

Now, before compiling, you need to configure the source tree with ./configure. I ran it with the following options:

 # ./configure --with-tcl --with-perl --with-python --with-pam --with-ldap --with-openssl --with-libxml 

More information about these options can be found on the official documentation page.

If you do not need some module, you can skip its package at the dependency-installation stage, or simply use the default configuration:

 # ./configure 

To make the compiled binaries portable (so that you do not have to repeat all the previous steps on each cluster node), add two more parameters: --prefix and --disable-rpath. The resulting configure command with otherwise default parameters looks like this:

 # ./configure --prefix=/usr/local/pgsql --disable-rpath 

The --prefix parameter sets the installation path; it is '/usr/local/pgsql' by default.
The --disable-rpath parameter makes the compiled binaries portable.

Now you can proceed to the compilation itself; it must be run as the user created earlier:

 # su postgres
 $ gmake world

or

 # su postgres -c 'gmake world' 

If the compilation was successful, the last line in the log should look like this:

 Postgres-XL, contrib and HTML documentation successfully made.  Ready to install.


Done! Everything is compiled; you can now copy the /usr/local/src/postgres-xl folder to the remaining cluster nodes and install.

Installation is performed with the command:

 # gmake install-world 

Repeat this command on all nodes of the cluster and proceed to the configuration.

Part 4. Configuration


First, a few post-installation settings. Declare the environment variables:

 # echo 'export PGUSER=postgres' >> /etc/profile
 # echo 'export PGHOME=/usr/local/pgsql' >> /etc/profile
 # echo 'export PATH=$PATH:$PGHOME/bin' >> /etc/profile
 # echo 'export LD_LIBRARY_PATH=$PGHOME/lib' >> /etc/profile

Then you need to log in again. Log out with the command:

 # exit 

Now we proceed to setting up the cluster nodes. To begin, create the data directories and initialize them according to each server's role.

GTM1 / GTM2 :

 # mkdir $PGHOME/gtm_data
 # chown -R postgres.postgres $PGHOME/gtm_data
 # su - postgres -c "initgtm -Z gtm -D $PGHOME/gtm_data"

NODE1 :

 # mkdir -p $PGHOME/data/data_gtm_proxy1
 # mkdir -p $PGHOME/data/data_coord1
 # mkdir -p $PGHOME/data/data_datanode1
 # chown -R postgres.postgres $PGHOME/data/
 # su - postgres -c "initdb -D $PGHOME/data/data_coord1/ --nodename coord1"
 # su - postgres -c "initdb -D $PGHOME/data/data_datanode1/ --nodename datanode1"
 # su - postgres -c "initgtm -D $PGHOME/data/data_gtm_proxy1/ -Z gtm_proxy"

NODE2 :

 # mkdir -p $PGHOME/data/data_gtm_proxy2
 # mkdir -p $PGHOME/data/data_coord2
 # mkdir -p $PGHOME/data/data_datanode2
 # chown -R postgres.postgres $PGHOME/data/
 # su - postgres -c "initdb -D $PGHOME/data/data_coord2/ --nodename coord2"
 # su - postgres -c "initdb -D $PGHOME/data/data_datanode2/ --nodename datanode2"
 # su - postgres -c "initgtm -D $PGHOME/data/data_gtm_proxy2/ -Z gtm_proxy"

Next, edit the configuration files on the cluster nodes.

GTM1 :

gtm.conf
 # vi $PGHOME/gtm_data/gtm.conf
 nodename = 'gtm_master'
 listen_addresses = '*'
 port = 6666
 startup = ACT
 log_file = 'gtm.log'
 log_min_messages = WARNING

GTM2 :

gtm.conf
 # vi $PGHOME/gtm_data/gtm.conf
 nodename = 'gtm_slave'
 listen_addresses = '*'
 port = 6666
 startup = STANDBY
 active_host = 'GTM1' # hostname or IP of the GTM master, e.g. '192.168.1.100'
 active_port = 6666
 log_file = 'gtm.log'
 log_min_messages = WARNING

NODE1 :

GTM_PROXY:
gtm_proxy.conf
 # vi $PGHOME/data/data_gtm_proxy1/gtm_proxy.conf
 nodename = 'gtm_proxy1'
 listen_addresses = '*'
 port = 6666
 gtm_host = 'GTM1'
 gtm_port = 6666
 log_file = 'gtm_proxy1.log'
 log_min_messages = WARNING



COORDINATOR1
postgresql.conf
 # vi $PGHOME/data/data_coord1/postgresql.conf
 listen_addresses = '*'
 port = 5432
 pooler_port = 6667
 gtm_host = 'localhost' # host/IP of the gtm_proxy; it runs locally, hence localhost
 gtm_port = 6666
 pgxc_node_name = 'coord1'


pg_hba.conf
 # vi $PGHOME/data/data_coord1/pg_hba.conf
 host all all 192.168.1.0/24 trust


DATANODE1
postgresql.conf
 # vi $PGHOME/data/data_datanode1/postgresql.conf
 listen_addresses = '*'
 port = 15432
 pooler_port = 6668
 gtm_host = 'localhost'
 gtm_port = 6666
 pgxc_node_name = 'datanode1'



pg_hba.conf
 # vi $PGHOME/data/data_datanode1/pg_hba.conf
 host all all 192.168.1.0/24 trust

NODE2 :

GTM_PROXY:
gtm_proxy.conf
 # vi $PGHOME/data/data_gtm_proxy2/gtm_proxy.conf
 nodename = 'gtm_proxy2'
 listen_addresses = '*'
 port = 6666
 gtm_host = 'GTM1'
 gtm_port = 6666
 log_file = 'gtm_proxy2.log'
 log_min_messages = WARNING

COORDINATOR2
postgresql.conf
 # vi $PGHOME/data/data_coord2/postgresql.conf
 listen_addresses = '*'
 port = 5432
 pooler_port = 6667
 gtm_host = 'localhost'
 gtm_port = 6666
 pgxc_node_name = 'coord2'

pg_hba.conf
 # vi $PGHOME/data/data_coord2/pg_hba.conf
 host all all 192.168.1.0/24 trust


DATANODE2
postgresql.conf
 # vi $PGHOME/data/data_datanode2/postgresql.conf
 listen_addresses = '*'
 port = 15432
 pooler_port = 6668
 gtm_host = 'localhost'
 gtm_port = 6666
 pgxc_node_name = 'datanode2'



pg_hba.conf
 # vi $PGHOME/data/data_datanode2/pg_hba.conf
 host all all 192.168.1.0/24 trust



That completes the work with the configs. The next step is to add exceptions to the CentOS firewall on all hosts:

 # firewall-cmd --zone=public --add-port=5432/tcp --permanent
 # firewall-cmd --zone=public --add-port=15432/tcp --permanent
 # firewall-cmd --zone=public --add-port=6666/tcp --permanent
 # firewall-cmd --zone=public --add-port=6667/tcp --permanent
 # firewall-cmd --zone=public --add-port=6668/tcp --permanent
 # firewall-cmd --reload

However, on the GTM1/GTM2 machines it is enough to open only port 6666.
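So on the GTM hosts the firewall setup reduces to something like this (a sketch, assuming the same firewalld public zone as above):

```shell
# GTM1 / GTM2: only the GTM port (6666 in this setup) needs to be open.
firewall-cmd --zone=public --add-port=6666/tcp --permanent
firewall-cmd --reload
```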

Part 5. Running Cluster Nodes


Now we come to launching the cluster nodes themselves. To start them, run the following commands on the appropriate nodes as the postgres user:

 # su - postgres
 $ gtm_ctl start -Z gtm -D $PGHOME/{data_dir}
 $ gtm_ctl start -Z gtm_proxy -D $PGHOME/{data_dir}
 $ pg_ctl start -Z datanode -D $PGHOME/{data_dir}
 $ pg_ctl start -Z coordinator -D $PGHOME/{data_dir}

Here '{data_dir}' is the corresponding data directory: for GTM it is 'gtm_data', for datanode1 it is 'data/data_datanode1/', and so on.
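For example, on NODE1 with the directories created earlier, the substituted start commands would look like this (a sketch for this particular layout):

```shell
# Run as the postgres user on NODE1:
gtm_ctl start -Z gtm_proxy   -D $PGHOME/data/data_gtm_proxy1
pg_ctl  start -Z datanode    -D $PGHOME/data/data_datanode1
pg_ctl  start -Z coordinator -D $PGHOME/data/data_coord1
```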

But I want to show you a different, more convenient way to control start/stop/autostart.
The source tree contains a SysV init script for "elegant" PostgreSQL control. Our task is to adapt it for each node role in the cluster. Let's look at the script itself:

src/postgres-xl/contrib/start-scripts/linux
 # cat /usr/local/src/postgres-xl/contrib/start-scripts/linux
#! /bin/sh

# chkconfig: 2345 98 02
# description: PostgreSQL RDBMS

# This is an example of a start/stop script for SysV-style init, such
# as is used on Linux systems.  You should edit some of the variables
# and maybe the 'echo' commands.
#
# Place this file at /etc/init.d/postgresql (or
# /etc/rc.d/init.d/postgresql) and make symlinks to
#   /etc/rc.d/rc0.d/K02postgresql
#   /etc/rc.d/rc1.d/K02postgresql
#   /etc/rc.d/rc2.d/K02postgresql
#   /etc/rc.d/rc3.d/S98postgresql
#   /etc/rc.d/rc4.d/S98postgresql
#   /etc/rc.d/rc5.d/S98postgresql
# Or, if you have chkconfig, simply:
#   chkconfig --add postgresql
#
# Proper init scripts on Linux systems normally require setting lock
# and pid files under /var/run as well as reacting to network
# settings, so you should treat this with care.

# Original author: Ryan Kirkpatrick <pgsql@rkirkpat.net>

# contrib/start-scripts/linux

## EDIT FROM HERE

# Installation prefix
prefix=/usr/local/pgsql

# Data directory
PGDATA="/usr/local/pgsql/data"

# Who to run the postmaster as, usually "postgres". (NOT "root")
PGUSER=postgres

# Where to keep a log file
PGLOG="$PGDATA/serverlog"

# It's often a good idea to protect the postmaster from being killed by the
# OOM killer (which will tend to preferentially kill the postmaster because
# of the way it accounts for shared memory). Setting the OOM_SCORE_ADJ value
# to -1000 will disable OOM kill altogether. If you enable this, you probably
# want to compile PostgreSQL with "-DLINUX_OOM_SCORE_ADJ=0", so that
# individual backends can still be killed by the OOM killer.
#OOM_SCORE_ADJ=-1000

# Older Linux kernels may not have /proc/self/oom_score_adj, but instead
# /proc/self/oom_adj, which works similarly except the disable value is -17.
# For such a system, enable this and compile with "-DLINUX_OOM_ADJ=0".
#OOM_ADJ=-17

## STOP EDITING HERE

# The path that is to be used for the script
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# What to use to start up the postmaster. (If you want the script to wait
# until the server has started, you could use "pg_ctl start -w" here.
# But without -w, pg_ctl adds no value.)
DAEMON="$prefix/bin/postmaster"

# What to use to shut down the postmaster
PGCTL="$prefix/bin/pg_ctl"

set -e

# Only start if we can find the postmaster.
test -x $DAEMON ||
{
	echo "$DAEMON not found"
	if [ "$1" = "stop" ]
	then exit 0
	else exit 5
	fi
}

# Parse command line parameters.
case $1 in
  start)
	echo -n "Starting PostgreSQL: "
	test x"$OOM_SCORE_ADJ" != x && echo "$OOM_SCORE_ADJ" > /proc/self/oom_score_adj
	test x"$OOM_ADJ" != x && echo "$OOM_ADJ" > /proc/self/oom_adj
	su - $PGUSER -c "$DAEMON -D '$PGDATA' &" >>$PGLOG 2>&1
	echo "ok"
	;;
  stop)
	echo -n "Stopping PostgreSQL: "
	su - $PGUSER -c "$PGCTL stop -D '$PGDATA' -s -m fast"
	echo "ok"
	;;
  restart)
	echo -n "Restarting PostgreSQL: "
	su - $PGUSER -c "$PGCTL stop -D '$PGDATA' -s -m fast -w"
	test x"$OOM_SCORE_ADJ" != x && echo "$OOM_SCORE_ADJ" > /proc/self/oom_score_adj
	test x"$OOM_ADJ" != x && echo "$OOM_ADJ" > /proc/self/oom_adj
	su - $PGUSER -c "$DAEMON -D '$PGDATA' &" >>$PGLOG 2>&1
	echo "ok"
	;;
  reload)
	echo -n "Reload PostgreSQL: "
	su - $PGUSER -c "$PGCTL reload -D '$PGDATA' -s"
	echo "ok"
	;;
  status)
	su - $PGUSER -c "$PGCTL status -D '$PGDATA'"
	;;
  *)
	# Print help
	echo "Usage: $0 {start|stop|restart|reload|status}" 1>&2
	exit 1
	;;
esac

exit 0

For each role, copy this script into the directory '/etc/rc.d/init.d/' under a distinct name.
I did it like this:

 # cp /usr/local/src/postgres-xl/contrib/start-scripts/linux /etc/rc.d/init.d/pgxl_gtm
 # cp /usr/local/src/postgres-xl/contrib/start-scripts/linux /etc/rc.d/init.d/pgxl_gtm_prx
 # cp /usr/local/src/postgres-xl/contrib/start-scripts/linux /etc/rc.d/init.d/pgxl_dn
 # cp /usr/local/src/postgres-xl/contrib/start-scripts/linux /etc/rc.d/init.d/pgxl_crd

Next, we adapt the script for each specific instance on each node. After some minor modifications, the GTM script looked like this (for brevity, I removed the comments and unimportant sections):

pgxl_gtm
 # vi /etc/rc.d/init.d/pgxl_gtm
#! /bin/sh

# chkconfig: 2345 98 02
# description: PostgreSQL RDBMS

# Installation prefix
prefix=/usr/local/pgsql

# Data directory
PGDATA="$prefix/gtm_data"

# Who to run the postmaster as, usually "postgres". (NOT "root")
PGUSER=postgres

# Where to keep a log file
PGLOG="$PGDATA/serverlog"

# The path that is to be used for the script
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:$prefix/bin

# What to use to shut down the postmaster
PGCTL="$prefix/bin/gtm_ctl"

# Which cluster role
PGROLE="gtm"

set -e

# Only start if we can find the postmaster.
test -x $PGCTL ||
{
	echo "$PGCTL not found"
	if [ "$1" = "stop" ]
	then exit 0
	else exit 5
	fi
}

# Parse command line parameters.
case $1 in
  start)
	echo -n "Starting PostgreSQL: "
	test x"$OOM_SCORE_ADJ" != x && echo "$OOM_SCORE_ADJ" > /proc/self/oom_score_adj
	test x"$OOM_ADJ" != x && echo "$OOM_ADJ" > /proc/self/oom_adj
	su - $PGUSER -c "$PGCTL start -Z $PGROLE -D '$PGDATA' &" >>$PGLOG 2>&1
	echo "ok"
	;;
  stop)
	echo -n "Stopping PostgreSQL: "
	su - $PGUSER -c "$PGCTL stop -Z $PGROLE -D '$PGDATA' -m fast"
	echo "ok"
	;;
  restart)
	echo -n "Restarting PostgreSQL: "
	su - $PGUSER -c "$PGCTL stop -Z $PGROLE -D '$PGDATA' -m fast -w"
	test x"$OOM_SCORE_ADJ" != x && echo "$OOM_SCORE_ADJ" > /proc/self/oom_score_adj
	test x"$OOM_ADJ" != x && echo "$OOM_ADJ" > /proc/self/oom_adj
	su - $PGUSER -c "$PGCTL start -Z $PGROLE -D '$PGDATA' &" >>$PGLOG 2>&1
	echo "ok"
	;;
  reload)
	echo -n "Reload PostgreSQL: "
	su - $PGUSER -c "$PGCTL restart -Z $PGROLE -D '$PGDATA'"
	echo "ok"
	;;
  status)
	su - $PGUSER -c "$PGCTL status -Z $PGROLE -D '$PGDATA'"
	;;
  *)
	# Print help
	echo "Usage: $0 {start|stop|restart|reload|status}" 1>&2
	exit 1
	;;
esac

exit 0

As you can see, I added '$PGHOME/bin' to the PATH variable, removed DAEMON, set PGCTL to the path of the gtm_ctl utility in '$PGHOME/bin' (which manages the GTM and GTM_PROXY roles), and added the PGROLE variable needed to start the cluster nodes.

To use this script for the remaining roles in the cluster, you only need to edit 3 variables: PGDATA, PGROLE, and PGCTL.

PGDATA is the path to the data directory for the node role.
PGROLE is the instance's role in the cluster: gtm, gtm_proxy, coordinator, or datanode.
PGCTL is the server control utility: 'gtm_ctl' for gtm and gtm_proxy, 'pg_ctl' for coordinator and datanode.
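If you prefer not to edit the three variables by hand, the substitution can also be scripted with sed. This is a minimal sketch; the adapt_script helper name is my own, and the demonstration runs on a scratch copy rather than the real init script:

```shell
# Rewrite the three role-specific variables in a copy of the init script.
# $1 = script file, $2 = PGDATA value, $3 = PGCTL value, $4 = PGROLE value
adapt_script() {
  sed -i \
    -e "s|^PGDATA=.*|PGDATA=\"$2\"|" \
    -e "s|^PGCTL=.*|PGCTL=\"$3\"|" \
    -e "s|^PGROLE=.*|PGROLE=\"$4\"|" \
    "$1"
}

# Demonstration on a scratch file (the real target would be a copy such as
# /etc/rc.d/init.d/pgxl_dn, made from the GTM script above):
tmp=$(mktemp)
printf 'PGDATA="%s"\nPGCTL="%s"\nPGROLE="%s"\n' \
  '$prefix/gtm_data' '$prefix/bin/gtm_ctl' 'gtm' > "$tmp"
adapt_script "$tmp" '$prefix/data/data_datanode1' '$prefix/bin/pg_ctl' 'datanode'
cat "$tmp"
# Prints:
#   PGDATA="$prefix/data/data_datanode1"
#   PGCTL="$prefix/bin/pg_ctl"
#   PGROLE="datanode"
rm -f "$tmp"
```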

Here are the complete changes for the remaining nodes in our test cluster:

GTM_PROXY1 :
pgxl_gtm_prx
 # vi /etc/rc.d/init.d/pgxl_gtm_prx
 PGDATA="$prefix/data/data_gtm_proxy1"
 PGCTL="$prefix/bin/gtm_ctl"
 PGROLE="gtm_proxy"


GTM_PROXY2 :
pgxl_gtm_prx
 # vi /etc/rc.d/init.d/pgxl_gtm_prx
 PGDATA="$prefix/data/data_gtm_proxy2"
 PGCTL="$prefix/bin/gtm_ctl"
 PGROLE="gtm_proxy"


COORDINATOR1 :
pgxl_crd
 # vi /etc/rc.d/init.d/pgxl_crd
 PGDATA="$prefix/data/data_coord1"
 PGCTL="$prefix/bin/pg_ctl"
 PGROLE="coordinator"


COORDINATOR2 :
pgxl_crd
 # vi /etc/rc.d/init.d/pgxl_crd
 PGDATA="$prefix/data/data_coord2"
 PGCTL="$prefix/bin/pg_ctl"
 PGROLE="coordinator"


DATANODE1 :
pgxl_dn
 # vi /etc/rc.d/init.d/pgxl_dn
 PGDATA="$prefix/data/data_datanode1"
 PGCTL="$prefix/bin/pg_ctl"
 PGROLE="datanode"


DATANODE2 :
pgxl_dn
 # vi /etc/rc.d/init.d/pgxl_dn
 PGDATA="$prefix/data/data_datanode2"
 PGCTL="$prefix/bin/pg_ctl"
 PGROLE="datanode"


Almost done! Now we need to make these scripts executable by running the corresponding command on each node:

 # chmod a+x /etc/rc.d/init.d/pgxl_gtm
 # chmod a+x /etc/rc.d/init.d/pgxl_gtm_prx
 # chmod a+x /etc/rc.d/init.d/pgxl_crd
 # chmod a+x /etc/rc.d/init.d/pgxl_dn

Now add the scripts to autostart:

 # chkconfig --add pgxl_gtm
 # chkconfig --add pgxl_gtm_prx
 # chkconfig --add pgxl_crd
 # chkconfig --add pgxl_dn

And run:

 # service pgxl_gtm start
 # service pgxl_gtm_prx start
 # service pgxl_crd start
 # service pgxl_dn start

You can see how the launch went in the log file in the data directory, or by running:

 # service pgxl_gtm status
 # service pgxl_gtm_prx status
 # service pgxl_crd status
 # service pgxl_dn status

If everything went well, proceed to configuring the nodes.

Part 6. Configuring Cluster Nodes


Perform the configuration of the cluster nodes in accordance with the manual:

NODE1
 # su - postgres
 $ psql -p 5432 -c "DELETE FROM pgxc_node"
 $ psql -p 5432 -c "CREATE NODE coord1 WITH (TYPE='coordinator',HOST='192.168.1.102',PORT=5432)"
 $ psql -p 5432 -c "CREATE NODE coord2 WITH (TYPE='coordinator',HOST='192.168.1.103',PORT=5432)"
 $ psql -p 5432 -c "CREATE NODE datanode1 WITH (TYPE='datanode',HOST='192.168.1.102',PORT=15432)"
 $ psql -p 5432 -c "CREATE NODE datanode2 WITH (TYPE='datanode',HOST='192.168.1.103',PORT=15432)"

Check what happened with the command:

 $ psql -p 5432 -c "select * from pgxc_node" 

If everything is fine, restart the pool:

 $ psql -p 5432 -c "select pgxc_pool_reload()" 

If the configuration is successful, the command will return 't', that is, true.

After this step, most manuals start creating test tables and running test queries, but I can tell you with 99.9% certainty that when you try an INSERT, you will see entries like these in the logs:
 STATEMENT: insert into test select 112233445566, 0123456789;
 ERROR: Invalid Datanode number

or these:
 STATEMENT: SET global_session TO coord2_21495; SET datestyle TO iso; SET client_min_messages TO notice; SET client_encoding TO UNICODE; SET bytea_output TO escape;
 ERROR: Invalid Datanode number
 STATEMENT: Remote Subplan
 ERROR: node "coord2_21580" does not exist
 STATEMENT: SET global_session TO coord2_21580; SET datestyle TO iso; SET client_min_messages TO notice; SET client_encoding TO UNICODE; SET bytea_output TO escape;
 ERROR: Invalid Datanode number
 STATEMENT: Remote Subplan
 ERROR: Invalid Datanode number
 STATEMENT: Remote Subplan
 ERROR: Invalid Datanode number
 STATEMENT: Remote Subplan
 LOG: Will fall back to local snapshot for XID = 96184, source = 0, gxmin = 0, autovac launch = 0, autovac = 0, normProcMode = 0, postEnv = 1
 ERROR: node "coord2_22428" does not exist
 STATEMENT: SET global_session TO coord2_22428;
 ERROR: Invalid Datanode number

And this is because the clever manuals "for the experienced", where everything is supposedly trivial, skip an important step: registering the other nodes on the DATANODEs themselves. This is done quite simply; in our configuration, we run the following for both data nodes:

 $ psql -p 5432 -c "EXECUTE DIRECT ON (datanode1) 'DELETE FROM pgxc_node'"
 $ psql -p 5432 -c "EXECUTE DIRECT ON (datanode1) 'create NODE coord1 WITH (TYPE=''coordinator'',HOST=''192.168.1.102'',PORT=5432)'"
 $ psql -p 5432 -c "EXECUTE DIRECT ON (datanode1) 'create NODE coord2 WITH (TYPE=''coordinator'',HOST=''192.168.1.103'',PORT=5432)'"
 $ psql -p 5432 -c "EXECUTE DIRECT ON (datanode1) 'create NODE datanode1 WITH (TYPE=''datanode'',HOST=''192.168.1.102'',PORT=15432)'"
 $ psql -p 5432 -c "EXECUTE DIRECT ON (datanode1) 'create NODE datanode2 WITH (TYPE=''datanode'',HOST=''192.168.1.103'',PORT=15432)'"
 $ psql -p 5432 -c "EXECUTE DIRECT ON (datanode1) 'SELECT pgxc_pool_reload()'"

Accordingly, for node number 2, change the line
 EXECUTE DIRECT ON (datanode1)

to
 EXECUTE DIRECT ON (datanode2)
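The same registration can also be done for both datanodes in one loop (a sketch; the IPs and ports are the ones used throughout this setup):

```shell
# Register all cluster nodes on each datanode, then reload its pool.
for dn in datanode1 datanode2; do
  psql -p 5432 -c "EXECUTE DIRECT ON ($dn) 'DELETE FROM pgxc_node'"
  psql -p 5432 -c "EXECUTE DIRECT ON ($dn) 'CREATE NODE coord1 WITH (TYPE=''coordinator'',HOST=''192.168.1.102'',PORT=5432)'"
  psql -p 5432 -c "EXECUTE DIRECT ON ($dn) 'CREATE NODE coord2 WITH (TYPE=''coordinator'',HOST=''192.168.1.103'',PORT=5432)'"
  psql -p 5432 -c "EXECUTE DIRECT ON ($dn) 'CREATE NODE datanode1 WITH (TYPE=''datanode'',HOST=''192.168.1.102'',PORT=15432)'"
  psql -p 5432 -c "EXECUTE DIRECT ON ($dn) 'CREATE NODE datanode2 WITH (TYPE=''datanode'',HOST=''192.168.1.103'',PORT=15432)'"
  psql -p 5432 -c "EXECUTE DIRECT ON ($dn) 'SELECT pgxc_pool_reload()'"
done
```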

And voila! Now you can safely create tables and test our cluster. But that's another story ...
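Still, as a quick smoke test (the table name and columns below are made up for illustration), you can create a hash-distributed table through a coordinator and check that the rows are split between the datanodes:

```shell
# Run against a coordinator (port 5432) as the postgres user:
psql -p 5432 -c "CREATE TABLE smoke_test (id bigint, val bigint) DISTRIBUTE BY HASH (id)"
psql -p 5432 -c "INSERT INTO smoke_test SELECT g, g * 2 FROM generate_series(1, 100) g"
# The two counts below should sum to 100, roughly half on each datanode:
psql -p 5432 -c "EXECUTE DIRECT ON (datanode1) 'SELECT count(*) FROM smoke_test'"
psql -p 5432 -c "EXECUTE DIRECT ON (datanode2) 'SELECT count(*) FROM smoke_test'"
```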

Conclusion


That's all: everything is set up and everything works. It would seem there is nothing complicated here, yet this article hides a whole week of searching and poring over manuals. The download/compile/install stage looks the most innocuous now, but in fact there were problems there too (due, of course, to my inexperience with this environment); for example, the code stubbornly refused to compile, throwing this error:

 '/usr/bin/perl' /bin/collateindex.pl -f -g -i 'bookindex' -o bookindex.sgml HTML.index
 Can't open perl script "/bin/collateindex.pl": No such file or directory
 make[4]: *** [bookindex.sgml] Error 2
 make[4]: Leaving directory `/usr/local/src/postgres-xl/doc-xc/src/sgml'
 make[3]: *** [sql_help.h] Error 2
 make[3]: Leaving directory `/usr/local/src/postgres-xl/src/bin/psql'
 make[2]: *** [all-psql-recurse] Error 2
 make[2]: Leaving directory `/usr/local/src/postgres-xl/src/bin'
 make[1]: *** [all-bin-recurse] Error 2
 make[1]: Leaving directory `/usr/local/src/postgres-xl/src'
 make: *** [all-src-recurse] Error 2

Later, on some Chinese forum, I found the answer: the docbook-style-dsssl package needed to be installed. And so it went; every new surprise brought me to a standstill due to my lack of experience and the absence of complete manuals (for dummies like me).

Still, after a week of searching for information and hundreds of trials and errors, everything worked out and the cluster came up.
I hope this publication makes life at least a little easier for someone, or proves useful.

Next, I plan to set up load balancing, migrate a database from regular PostgreSQL 9.4 on Windows to the assembled Postgres-XL 9.2 cluster on CentOS 7.0, test our project's heaviest queries in the cluster, compare the results against standalone PostgreSQL, tune the cluster settings, play with PostGIS in the cluster, and so on. So if this article, or any of the things listed, turns out useful to Habr readers, I will be happy to share.

Thank you for your attention.

Source: https://habr.com/ru/post/261457/

