What's an admin to do during the New Year holidays, if not set up servers!
This article describes a general approach that lets you:
- build a cluster on iptables
- configure the cluster via a GUI using fwbuilder
- preserve user connections during failover using conntrack-tools
The general environment in which such a cluster works for me:
- Internal network of backend and frontend servers
- Block of external IP addresses
- 2 servers for the linux-based cluster (in my case, Fedora 13 x86_64): fw1 and fw2 in Master/Backup mode
Cluster tasks:
- gateway for the local network
- publishing services on the external block of IP addresses
In general, it works like this:
- the state of the cluster is monitored by the ucarp service, which triggers the necessary scripts on failover
- the conntrackd service synchronizes connection information between the servers
- fwbuilder compiles the necessary scripts for iptables
Below the cut: assembly instructions, with a bit of filing required.
Server Preparation
Install Linux with a minimal set of packages on fw1 and fw2; iptables is already there.
Add:
- yum install ucarp (heartbeat for the cluster)
- yum install conntrack-tools (connection tracker)
- yum install pssh (for the scp utility)
Go to /etc/sysconfig/network-scripts/ and configure the interfaces.
Assign only one permanent IP address per interface, for example:
eth0 - internal
eth3 - the interface for synchronizing connection information between the cluster servers.
For security reasons it is recommended to connect the cluster servers directly (back to back) on this interface, since the conntrackd protocol is not secure.
The external interface will be configured from scripts.
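For reference, a minimal ifcfg file for the sync interface might look roughly like this (the address and device name here are assumptions, pick your own):

# /etc/sysconfig/network-scripts/ifcfg-eth3 on fw1 (example address)
DEVICE=eth3
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.200.1
NETMASK=255.255.255.0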
Ucarp setup
While the ucarp process is running, each server sends VRRP packets to the multicast address 224.0.0.18.
If a server does not receive packets from its partner, it assumes it is alone and runs the upscript registered in the /etc/init.d/ucarp file:
UPSCRIPT=/usr/libexec/ucarp/vip-up
If a server is active and receives packets from a partner with higher priority, it runs the downscript and switches to the backup state:
DOWNSCRIPT=/usr/libexec/ucarp/vip-down
We will modify the upscript/downscript later.
Next, configure the VIP address that will move from one server to the other on failover.
The VIP address will serve as the internal network gateway address, and the internal network will also carry the VRRP exchange.
Settings files:
/etc/ucarp/vip-common.conf
/etc/ucarp/vip-001.conf
(in theory there can be many VIP addresses, but one is enough for us)
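As a rough sketch, the configuration might look like this; the exact variable names come from the Fedora ucarp package, so double-check them against the comments in the shipped files, and the values below are only examples:

# /etc/ucarp/vip-common.conf - settings shared by all VIPs
PASSWORD="secret"
BIND_INTERFACE="eth0"
OPTIONS="--shutdown --preempt"

# /etc/ucarp/vip-001.conf - one file per VIP; the 001 in the name becomes the VHID
VIP_ADDRESS="192.168.1.1"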
Thus, ucarp will manage the transition of the ip-address of the gateway during failover.
Unfortunately, ucarp is not at all the same thing as carp on OpenBSD, and two problems need to be solved:
- on failover, update the ARP entry for the gateway IP address on all clients of the local network, or use a common MAC address for the servers in the cluster
- minimize the risk of split brain, i.e. as far as possible avoid the situation where both servers think their partner is dead and try to become the master.
The arping utility helps with the first problem.
To reduce the likelihood of split brain, I can recommend first combining all the working interfaces into a bond and then carving out VLANs for the internal and external networks, as sketched below.
This helps if something goes wrong at the physical layer on one of the interfaces.
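A minimal sketch of such a bond-plus-VLAN setup in /etc/sysconfig/network-scripts/ (device names, VLAN IDs and addresses are assumptions):

# ifcfg-eth0 (and similarly eth1) - physical interfaces become bond slaves
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

# ifcfg-bond0 - the bond itself, no address
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"

# ifcfg-bond0.10 - VLAN 10 for the internal network
DEVICE=bond0.10
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.2
NETMASK=255.255.255.0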
Configuring iptables with fwbuilder
The fwbuilder website has fairly detailed documentation on using fwbuilder itself as a convenient tool for visual presentation of the rules.
But fwbuilder does not eliminate the need to know and understand how iptables works.
The order of use is as follows (a sketch of the copy-and-run step follows the list):
- drawing up the rules
- compiling the script
- copying the script to the cluster server via scp
- running the script via ssh
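For example, a manual deploy of the compiled policy could look like this (paths and host names are purely illustrative):

# copy the compiled script to the firewall and load it
scp fw1.fw root@fw1:/etc/fw/fw1.fw
ssh root@fw1 'chmod +x /etc/fw/fw1.fw && /etc/fw/fw1.fw start'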
fwbuilder compiles the rules correctly: it splits them into separate chains and makes sure the rules do not shadow each other.
To create a cluster, read the corresponding section of the documentation.
Create a cluster named, for example, "fw-cluster", containing two objects of type "iptables firewall", for example fw1 and fw2 (it is important that each name matches the output of "hostname -s" on the corresponding firewall server, since this is used later in the scripts).
In the State Sync Group properties, specify the type conntrack so that fwbuilder adds access rules for the conntrack packets.
Cluster all the interfaces.
This creates cluster objects, for example:
fw-cluster:eth0:members (internal network interface)
fw-cluster:eth1:members
fw-cluster:eth3:members
For the fw-cluster:eth0:members object, specify the VRRP type (again, so that the access rules are generated).
For the other objects, the type is not specified.
On the Script tab of the firewall object settings, turn off all items except "Load iptables modules".
This is because the compiled script can itself configure interfaces and VLANs, but we ran into bugs while using that feature.
By default, fwbuilder will add rules for related and established connections.
After compiling, the fw1.fw and fw2.fw scripts appear.
For fwbuilder to install and run the scripts on a remote server, one of the interfaces of the firewall object must be marked as the management interface.
Conntrack setup
The documentation contains an example configuration for exactly our case.
We take the primary-backup.sh script as is.
Configuring conntrackd.conf:
- in the "Multicast" section, specify the synchronization interface
- in the "Address Ignore" section you can filter out all connections to the cluster servers' own IP addresses, which will not move anywhere on failover
fwbuilder itself will add access rules for the multicast IP address, so there is no need to change it in conntrackd.conf.
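For orientation, the relevant fragments of conntrackd.conf might look roughly like this (only the Multicast and Address Ignore parts are shown, and the addresses are examples):

Sync {
    Mode FTFW {
    }
    Multicast {
        # dedicated sync link between the firewalls
        IPv4_address 225.0.0.50
        Group 3780
        IPv4_interface 192.168.200.1
        Interface eth3
        Checksum on
    }
}
General {
    Filter From Userspace {
        Address Ignore {
            # connections terminated on the firewalls themselves never fail over
            IPv4_address 127.0.0.1
            IPv4_address 192.168.200.1
        }
    }
}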
Configuring the external block of IP addresses
Failing over the external block of IP addresses could be done in the same way as for the gateway address.
However, if the block of external addresses is large, firstly arping would have to be run for every address, which can take a long time, and secondly arping may not work at all if you do not control the equipment acting as the gateway for the external block.
There is a solution: use a common MAC address for the external interface on both firewalls.
Unfortunately, I could not google any working solutions other than the CLUSTERIP module for iptables; I will be glad if someone suggests another way.
As an unpleasant consequence, you have to deal with multicast MAC addresses.
Now it is time to modify the upscript and downscript.
Next to the scripts, for convenience, create text file(s) listing the IP addresses, one per line and without a mask, for example:
eth1.addr.external
where eth1 is the name of the external network interface.
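Such a file might look like this (the addresses are purely illustrative):

# /etc/fw/eth1.addr.external - one external IP address per line, no mask
203.0.113.10
203.0.113.11
203.0.113.12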
To make the CLUSTERIP rules apply before the rest of the rules for the external address block, we write them into the mangle table.
upscript
#!/bin/sh
# ROOT - directory with the fwbuilder-compiled scripts
ROOT="/etc/fw"
# switch conntrackd to the primary role
/etc/conntrackd/primary-backup.sh primary
# load the fwbuilder rule set
$ROOT/$(hostname -s).fw start
# bring up the gateway VIP address
# ucarp calls this script with the interface as $1 and the VIP address as $2
/sbin/ip address add "$2"/24 dev "$1"
# drop invalid packets early, otherwise clusterip litters /var/log/messages
iptables -t mangle -I PREROUTING -m state --state INVALID -j DROP
# walk over all the *.addr.external files
for ADDRFILE in $(ls $ROOT/*.addr.external)
do
    DEV=$(basename "$ADDRFILE" | awk -F "." '{print $1}')
    for ADDR in $(cat $ROOT/$DEV.addr.external | grep -v ^#)
    do
        # bring up the external ip-address
        /sbin/ip addr add $ADDR/24 dev $DEV
        # add a CLUSTERIP rule with a shared multicast MAC for this ip-address
        iptables -t mangle -A PREROUTING -d "$ADDR" -i "$DEV" -j CLUSTERIP --new --hashmode sourceip --clustermac 01:00:5E:00:01:01 --total-nodes 1 --local-node 1 --hash-init 0
    done
done
# send 2 gratuitous ARP replies so that clients update the gateway MAC address
arping -A -c 2 -I "$1" "$2"
downscript
In essence, it removes all the cluster IP addresses and flushes the mangle table:
#!/bin/sh
# ROOT - directory with the fwbuilder-compiled scripts
ROOT="/etc/fw"
# switch conntrackd to the backup role
/etc/conntrackd/primary-backup.sh backup
# remove the gateway VIP address ($1 - interface, $2 - VIP, passed by ucarp)
/sbin/ip address del "$2"/24 dev "$1"
# remove every address listed in the *.addr.* files
for ADDRFILE in $(ls $ROOT/*.addr.*)
do
    DEV=$(basename "$ADDRFILE" | awk -F "." '{print $1}')
    for ADDR in $(cat $ADDRFILE | grep -v ^#)
    do
        /sbin/ip addr del $ADDR/24 dev $DEV
    done
done
# flush the mangle table, removing the CLUSTERIP rules
iptables -t mangle -F
# reload the base fwbuilder rule set
$ROOT/$(hostname -s).fw start
After the steps above, start the ucarp service on both servers.
One of the servers becomes active and runs the upscript, which in turn brings up all the cluster IP addresses on the internal and external networks and loads the iptables rule set.
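Something along these lines should do it, assuming the stock init script shipped with the ucarp package:

# on both fw1 and fw2
chkconfig ucarp on
service ucarp start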