
Merge Proxmox nodes into a cluster using OpenVPN

Using the Proxmox virtualization environment, and OpenVZ containers in particular, to build a virtual hosting service is nothing new. A server leased from Hetzner had been coping with its duties for quite a long time.

But as time went on, the amount of data grew, clients multiplied, and the load average climbed. A new server was rented, Proxmox was installed and configured, and the administrator rushed to set up a cluster in order to migrate the loaded containers to the new machine. Google turns up heaps of instructions, and the Proxmox project wiki itself has the necessary information.

The servers are in different subnets. Proxmox uses corosync to synchronize the settings of cluster nodes, and adding a node to the cluster fails with an error:

  Waiting for quorum ... Timed-out waiting for cluster [FAILED] 

The admin is in a panic.


Task:


Configure synchronization between Proxmox nodes located in any data center, each having an external IP address; in other words, organize a "cluster" as Proxmox understands it.

Given:


So, what we have is:


It turns out that synchronization fails because multicast requests, although sent, are filtered by the data-center equipment: the nodes simply do not see each other. Also, for synchronization we can only use the IP addresses of the available network interfaces, i.e. either the external IP or an IP from the VM subnet.
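A quick way to confirm such a multicast problem is omping, the tool suggested in the Proxmox Multicast notes (linked at the end of the article). A sketch, wrapped in a guard so it degrades gracefully when the tool is absent; run the same command on both nodes at roughly the same time:

```shell
# check multicast connectivity between the nodes; hostnames must resolve on each side
check_multicast() {
    if command -v omping >/dev/null 2>&1; then
        # if multicast is filtered, the multicast lines report 100% loss
        # while the unicast replies still arrive
        omping -c 10 -i 1 node1 node2
    else
        echo "omping not installed: apt-get install omping"
    fi
}
check_multicast
```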

Solution:


We will make the multicast requests sent by corosync travel inside a single network shared by all nodes of the cluster: we will bring up our own private subnet with OpenVPN and set up routing.

0. Cleanup

First, roll back all the changes made by the unsuccessful attempt to add the node to the cluster. It is assumed that nothing has been configured on "node2" yet and there were no VMs on it.
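The rollback steps commonly cited for the Proxmox VE 2.x/3.x generation can be collected into a small script. The service names and paths below are assumptions for that era of PVE; verify them against your version before running anything on node2:

```shell
# write the rollback script to a temp file (review it before executing on node2!)
cat > /tmp/pve-cluster-rollback.sh <<'EOF'
#!/bin/sh
# stop the cluster services on the node whose join attempt failed
service cman stop
service pve-cluster stop
# drop the cluster configuration pulled in by the failed join
rm -f /etc/cluster/cluster.conf
rm -rf /var/lib/pve-cluster/*
# start the local (non-clustered) config filesystem again
service pve-cluster start
EOF
chmod +x /tmp/pve-cluster-rollback.sh
```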


1. Network settings within the cluster

To keep the settings uniform, we agree on the following network parameters within our future cluster:


2. Set up the “master” node

2.1 OpenVPN

I will not go deep into OpenVPN configuration, since a lot has already been written about it, including on Habr. I will describe only the essential features and settings:

  1. Install:

     apt-get install openvpn 

  2. Create a settings file /etc/openvpn/node1.conf and allow it to be started in /etc/default/openvpn

  3. In the settings file you need to enter the following parameters:

     # layer-2 tunnel over UDP
     dev tap
     proto udp
     # enlarged UDP buffers
     sndbuf 393216
     rcvbuf 393216
     # VPN subnet of the cluster
     server 10.0.0.0 255.255.255.0
     # multicast from corosync is routed via vmbr0
     route 224.0.0.0 240.0.0.0 10.1.0.1
     # routes to the VM subnets of the nodes behind the VPN
     route 10.2.0.0 255.255.255.0 10.0.0.2
     route 10.3.0.0 255.255.255.0 10.0.0.3
     # and so on for each node...
     # per-client settings
     client-config-dir clients
     client-to-client

  4. In the /etc/openvpn/clients directory, create files with the settings for the client nodes:

     /etc/openvpn/clients/node2:
     # VM subnet of node 1 (the master)
     push "route 10.1.0.0 255.255.0.0"
     # similarly, if needed, node 3
     # push "route 10.3.0.0 255.255.0.0"
     # multicast: through the VPN to the master
     push "route 224.0.0.0 240.0.0.0"
     push "dhcp-option DNS 10.0.0.1"
     push "dhcp-option DOMAIN hosting.lan"
     push "sndbuf 393216"
     push "rcvbuf 393216"
     # fixed tap interface address: IP + netmask
     ifconfig-push 10.0.0.2 255.255.0.0

  5. Start the VPN:

     service openvpn restart 

  6. Go to the node “node2”, install openvpn there as well, and enable the “master” config file in /etc/default/openvpn.

    You will also need to install the resolvconf package (unlike on the master); otherwise the magic with domains for the internal network may not work. I also had to copy the original resolv.conf into the tail file inside the /etc/resolvconf/resolv.conf.d/ directory, otherwise the name servers from Hetzner were lost.

    To match the server settings, we create a client settings file, which should include the following parameters:

     /etc/openvpn/master.conf:
     client
     dev tap
     proto udp
     remote <external IP of the master>

  7. Start the VPN:

     service openvpn restart 
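Every additional node gets its own file in /etc/openvpn/clients following the same pattern. A hypothetical file for a third node; the 10.3.0.0 subnet and the .3 address are assumptions extrapolated from the article's numbering scheme:

```
/etc/openvpn/clients/node3:
# VM subnet of node 1 (the master)
push "route 10.1.0.0 255.255.0.0"
# VM subnet of node 2
push "route 10.2.0.0 255.255.0.0"
# multicast: through the VPN to the master
push "route 224.0.0.0 240.0.0.0"
push "dhcp-option DNS 10.0.0.1"
push "dhcp-option DOMAIN hosting.lan"
push "sndbuf 393216"
push "rcvbuf 393216"
# fixed tap interface address: IP + netmask
ifconfig-push 10.0.0.3 255.255.0.0
```

The server config must also gain a matching `route 10.3.0.0 255.255.255.0 10.0.0.3` line, as sketched in step 3.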


2.2 Host and service settings for the cluster

  1. On each node, edit the /etc/hosts file and bring it to the following form:
    # IPv4
    127.0.0.1 localhost.localdomain localhost
    # external address and domain host
    144.76.ab node1.example.com
    #
    # IPv6
    ::1 ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    ff02::3 ip6-allhosts

    xxxx:xxx:xxx:xxxx::2 ipv6.node1.example.com ipv6.node1

    #
    # VPN

    10.0.0.1 node1 master cluster
    10.0.0.2 node2
    # and so for each new node ...

    By listing the VPN-subnet IP addresses for the nodes separately, we force their use, because the Proxmox services address nodes by short domain names.

  2. On the "master", edit the file /etc/pve/cluster.conf, adding the multicast line:

     <cman keyfile="/var/lib/pve-cluster/corosync.authkey">
       <multicast addr="224.0.2.1"/>
     </cman>

    If the file cannot be saved, try restarting the service:

     cd /etc
     service pve-cluster restart

    and try editing again.
    After editing:

     cd /etc
     service pve-cluster restart
     service cman restart

  3. Check the status of "master":

     pvecm status 

    As a result, the following should be seen:
    ...
    Node ID: 1
    Multicast addresses: 224.0.2.1
    Node addresses: 10.0.0.1
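The per-node VPN entries in /etc/hosts from step 1 follow a simple pattern. A small sketch that prints them for a given node count (the 10.0.0.x numbering is the article's scheme; adjust if yours differs):

```shell
# print /etc/hosts VPN entries for nodes 1..N
vpn_hosts() {
    n=$1
    i=1
    while [ "$i" -le "$n" ]; do
        if [ "$i" -eq 1 ]; then
            # node1 doubles as "master" and "cluster"
            printf '10.0.0.%d node%d master cluster\n' "$i" "$i"
        else
            printf '10.0.0.%d node%d\n' "$i" "$i"
        fi
        i=$((i + 1))
    done
}
vpn_hosts 3
# prints:
# 10.0.0.1 node1 master cluster
# 10.0.0.2 node2
# 10.0.0.3 node3
```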

3. Add a node to the cluster

These settings should already be enough for the cluster to work. Add a node to the cluster according to the instructions from the wiki:

  1. Go to the node "node2"
  2. Enter:

     pvecm add master 

    We answer the questions and wait. We see that quorum is reached.

     pvecm status 

    ...
    Node ID: 2
    Multicast addresses: 224.0.2.1
    Node addresses: 10.0.0.2
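After the join, the cluster-wide /etc/pve filesystem should contain a directory for every node. A quick sanity check, guarded so it only does real work on a Proxmox host:

```shell
# verify cluster membership and config replication after the join
check_cluster() {
    if command -v pvecm >/dev/null 2>&1; then
        pvecm nodes        # membership table: node1 and node2 should both be listed
        ls /etc/pve/nodes  # one directory per cluster node
    else
        echo "pvecm not found: run this on a Proxmox VE node"
    fi
}
check_cluster
```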

Result


Positive



Negative

Alas, not everything is as good as we would like.



Links


habrahabr.ru/post/233971 - Installation and configuration guide for OpenVPN
pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster
pve.proxmox.com/wiki/Multicast_notes
www.nedproductions.biz/wiki/configuring-a-proxmox-ve-2.x-cluster-running-over-an-openvpn-intranet

Source: https://habr.com/ru/post/251541/

