
Failover Cluster for Java Applications

Some time has passed since the project was launched, and it is time to add computing power for the application. We decided to build a cluster for this, one that can later be scaled easily. So, we need to set up a cluster that distributes requests between servers.

For this we will use four servers running Linux CentOS 5.5, along with Apache, Tomcat 7, mod_jk and Heartbeat.
The web1 and web2 servers distribute requests via Apache and provide fault tolerance via Heartbeat; the tomcat1 and tomcat2 servers run Tomcat with the Java application.
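For reference, a minimal /etc/hosts sketch for all four machines: the tomcat addresses match the workers.properties file further below, while the web1/web2 addresses are placeholders for your own network.

# /etc/hosts (example; replace the addresses with your own)
192.168.0.11   web1      # placeholder address
192.168.0.12   web2      # placeholder address
192.168.1.1    tomcat1   # AJP backend, see workers.properties
192.168.1.2    tomcat2   # AJP backend, see workers.properties

The virtual IP 192.168.0.1 configured in haresources below is shared between web1 and web2 and is the address clients should use.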

Software installation:

Install Apache and Heartbeat:
[root@web1 opt]$ yum -y install httpd heartbeat
[root@web2 opt]$ yum -y install httpd heartbeat
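
Since Heartbeat itself will start and stop Apache on the active node (see the haresources file below), it usually makes sense to let Heartbeat come up at boot and keep httpd out of the normal init sequence; a quick sketch using the standard chkconfig tool:

[root@web1 opt]$ chkconfig heartbeat on   # start Heartbeat at boot
[root@web1 opt]$ chkconfig httpd off      # httpd will be started by Heartbeat
[root@web2 opt]$ chkconfig heartbeat on
[root@web2 opt]$ chkconfig httpd off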


Since the repository does not have the latest stable version of Tomcat, I prefer to download it from a mirror:

[root@tomcat1 opt]$ wget apache.vc.ukrtel.net/tomcat/tomcat-7/v7.0.21/bin/apache-tomcat-7.0.21.tar.gz
[root@tomcat1 opt]$ tar xvfz apache-tomcat-7.0.21.tar.gz
[root@tomcat1 opt]$ mkdir tomcat && mv apache-tomcat-7.0.21/* tomcat
[root@tomcat1 opt]$ rmdir apache-tomcat-7.0.21
[root@tomcat1 opt]$ ln -s /opt/tomcat/bin/catalina.sh /etc/init.d/tomcat

[root@tomcat2 opt]$ wget apache.vc.ukrtel.net/tomcat/tomcat-7/v7.0.21/bin/apache-tomcat-7.0.21.tar.gz
[root@tomcat2 opt]$ tar xvfz apache-tomcat-7.0.21.tar.gz
[root@tomcat2 opt]$ mkdir tomcat && mv apache-tomcat-7.0.21/* tomcat
[root@tomcat2 opt]$ rmdir apache-tomcat-7.0.21
[root@tomcat2 opt]$ ln -s /opt/tomcat/bin/catalina.sh /etc/init.d/tomcat
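
At this point each Tomcat instance can be started once through the symlinked script to verify that the unpacked distribution works (this assumes a JDK is installed and JAVA_HOME is visible to the root shell):

[root@tomcat1 opt]$ /etc/init.d/tomcat start
[root@tomcat1 opt]$ tail -f /opt/tomcat/logs/catalina.out   # wait for the "Server startup" message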


To teach Apache on the web1 and web2 servers to distribute the load between the tomcat1 and tomcat2 servers, you need to hook the mod_jk module into Apache.
Download mod_jk for your version of Apache, rename it and move it to the /etc/httpd/modules directory.

[root@web1 opt]$ wget archive.apache.org/dist/tomcat/tomcat-connectors/jk/binaries/linux/jk-1.2.31/i386/mod_jk-1.2.31-httpd-2.2.x.so
[root@web1 opt]$ mv mod_jk-1.2.31-httpd-2.2.x.so /etc/httpd/modules/mod_jk.so

[root@web2 opt]$ wget archive.apache.org/dist/tomcat/tomcat-connectors/jk/binaries/linux/jk-1.2.31/i386/mod_jk-1.2.31-httpd-2.2.x.so
[root@web2 opt]$ mv mod_jk-1.2.31-httpd-2.2.x.so /etc/httpd/modules/mod_jk.so



Heartbeat setup:

[root@web1 opt]$ touch /etc/ha.d/authkeys
[root@web1 opt]$ touch /etc/ha.d/ha.cf
[root@web1 opt]$ touch /etc/ha.d/haresources


Set the permissions on /etc/ha.d/authkeys so that only the root user can read it, otherwise Heartbeat will not start.

[root@web1 ha.d]$ chmod 600 /etc/ha.d/authkeys

Then add these two lines to it. The file must be identical on both nodes.

[root@web1 ha.d]$ nano authkeys

auth 2
2 sha1 your-password
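
The shared secret can be any string; one common way to generate a random one (assuming openssl is installed) is:

[root@web1 ha.d]$ dd if=/dev/urandom count=4 2>/dev/null | openssl sha1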


Do the same on the web2 server.

Edit the /etc/ha.d/ha.cf file. The file must be identical on both nodes.

[root@web1 ha.d]$ nano ha.cf

logfacility local0
keepalive 2
deadtime 10
initdead 120
bcast eth0
udpport 694
auto_failback on
node web1
node web2
respawn hacluster /usr/lib/heartbeat/ipfail
use_logd yes
logfile /var/log/ha.log
debugfile /var/log/ha-debug.log


Do the same on the web2 server.

Edit the /etc/ha.d/haresources file: it lists the preferred node, the virtual (shared) IP address that clients will use, and the service Heartbeat manages. The file must be identical on both nodes.

[root@web1 ha.d]$ nano haresources

web1 192.168.0.1 httpd


Do the same on the web2 server.
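
With all three files in place on both nodes, Heartbeat can be started and the virtual IP checked on the active node (the interface name eth0 comes from ha.cf):

[root@web1 ha.d]$ service heartbeat start
[root@web2 ha.d]$ service heartbeat start
[root@web1 ha.d]$ ip addr show eth0 | grep 192.168.0.1   # the virtual IP should appear on the active node
[root@web1 ha.d]$ tail /var/log/ha.log                   # log file configured in ha.cf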

Load balancing setup:

Add the following lines to the /etc/httpd/conf/httpd.conf file on both web servers:

LoadModule jk_module modules/mod_jk.so

JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkRequestLogFormat "%w %V %T"


In the section that contains the DocumentRoot directive, add the following two lines:

JkMount /*.jsp loadbalancer
JkMount /servlet/* loadbalancer


In the /etc/httpd/conf folder of both web servers we create the file workers.properties.

[root@web1 conf]$ touch workers.properties
[root@web2 conf]$ touch workers.properties


Add the following lines to them:

worker.list=tomcat1, tomcat2, loadbalancer

worker.tomcat1.port=10010
worker.tomcat1.host=192.168.1.1
worker.tomcat1.type=ajp13
worker.tomcat1.lbfactor=1

worker.tomcat2.port=10020
worker.tomcat2.host=192.168.1.2
worker.tomcat2.type=ajp13
worker.tomcat2.lbfactor=1

worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=tomcat1, tomcat2
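
Once the mod_jk directives and workers.properties are in place, the Apache configuration can be checked and the service restarted on both web servers with the standard httpd tools:

[root@web1 conf]$ httpd -t                            # syntax check of httpd.conf
[root@web1 conf]$ service httpd restart
[root@web1 conf]$ tail /etc/httpd/logs/mod_jk.log     # mod_jk logs its startup messages here (JkLogFile)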


In /opt/tomcat/conf/server.xml of both Tomcat instances we configure the ports (all ports must be different):

[root@tomcat1 conf]$ nano server.xml

<Server port="8005" ...>

<!-- the HTTP connector is commented out, since requests come in over AJP only
<Connector port="8080" protocol="HTTP/1.1" ... />
-->

<Connector port="10010" protocol="AJP/1.3" ... />

[root@tomcat2 conf]$ nano server.xml

<Server port="8006" ...>

<!-- the HTTP connector is commented out, since requests come in over AJP only
<Connector port="8080" protocol="HTTP/1.1" ... />
-->

<Connector port="10020" protocol="AJP/1.3" ... />
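
After the ports are changed, restart both instances and check that the AJP connectors are listening where workers.properties expects them (netstat is part of the base system):

[root@tomcat1 conf]$ /etc/init.d/tomcat stop
[root@tomcat1 conf]$ /etc/init.d/tomcat start
[root@tomcat1 conf]$ netstat -tlnp | grep 10010   # a java process should be listening here
[root@tomcat2 conf]$ netstat -tlnp | grep 10020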



Configuring session replication:

So that user sessions are not lost when one of the Tomcat servers goes down, it makes sense to set up session replication between the Tomcat instances. To do this, add the following lines to /opt/tomcat/conf/server.xml, inside the <Engine name="Catalina" defaultHost="localhost"> section, on all Tomcat instances:

[root@tomcat1 conf]$ nano server.xml

<Engine name="Catalina" defaultHost="localhost" debug="0" jvmRoute="tomcat1">

<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster">
    <Membership
        className="org.apache.catalina.cluster.mcast.McastService"
        mcastAddr="228.0.0.4"
        mcastBindAddress="127.0.0.1"
        mcastPort="45564"
        mcastFrequency="500"
        mcastDropTime="3000"/>
    <Receiver
        className="org.apache.catalina.cluster.tcp.ReplicationListener"
        tcpListenAddress="auto"
        tcpListenPort="4001"
        tcpSelectorTimeout="100"
        tcpThreadCount="6"/>
</Cluster>

[root@tomcat2 conf]$ nano server.xml

<Engine name="Catalina" defaultHost="localhost" debug="0" jvmRoute="tomcat2">

<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster">
    <Membership
        className="org.apache.catalina.cluster.mcast.McastService"
        mcastAddr="228.0.0.4"
        mcastBindAddress="127.0.0.1"
        mcastPort="45564"
        mcastFrequency="500"
        mcastDropTime="3000"/>
    <Receiver
        className="org.apache.catalina.cluster.tcp.ReplicationListener"
        tcpListenAddress="auto"
        tcpListenPort="4002"
        tcpSelectorTimeout="100"
        tcpThreadCount="6"/>
</Cluster>
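
Note that the Cluster, Membership and Receiver class names above come from the clustering implementation of the Tomcat 5.x line. The Tomcat 7.0.21 distribution that we downloaded ships the newer Tribes-based implementation in the org.apache.catalina.ha and org.apache.catalina.tribes packages, so on Tomcat 7 the equivalent block inside the same <Engine> element would look roughly like this (attribute values mirrored from the snippet above; treat it as a sketch and check it against your server.xml):

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <!-- multicast group used by the nodes to discover each other -->
        <Membership className="org.apache.catalina.tribes.membership.McastService"
                    address="228.0.0.4" port="45564"
                    frequency="500" dropTime="3000"/>
        <!-- TCP receiver for incoming session replication traffic (use 4002 on tomcat2) -->
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="auto" port="4001"
                  selectorTimeout="100" maxThreads="6"/>
    </Channel>
</Cluster>

In either case, session replication only applies to web applications whose web.xml contains the <distributable/> element, and the jvmRoute values must match the worker names from workers.properties so that sticky sessions keep working.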




This completes the configuration of a fault-tolerant, load-balanced cluster for Java servlets. We have achieved fault tolerance and scalable performance, and if capacity runs short we can easily add new nodes to the cluster.

Source: https://habr.com/ru/post/129377/

