- Part One: Introductory
- Part Two: Configuring Firewall and NAT Rules
- Part Three: DHCP Setup
- Part Four: Routing Setup

Last time we talked about the capabilities of the NSX Edge in terms of static and dynamic routing, and today we will deal with the load balancer.
Before setting up, I would like to briefly recall the main types of balancing.
Theory
Most of today's load balancing solutions fall into two categories: balancing at the fourth (transport) and seventh (application) layers of the
OSI model. The OSI model is not the best reference point for describing balancing methods. For example, if an L4 balancer also supports TLS termination, does that make it an L7 balancer? But this is the terminology we have.
- The L4 balancer is most often a middle proxy between the client and the set of available backends: it terminates TCP connections (that is, it responds to SYN itself), selects a backend, and initiates a new TCP session toward it by sending its own SYN. This is one of the basic types; other variants are possible.
- The L7 balancer distributes traffic to the available backends in a more sophisticated way than an L4 balancer does. It may choose a backend based on, for example, the content of the HTTP message (URL, cookie, etc.).
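The difference between the two levels can be sketched in a few lines of Python. This is only an illustration, not NSX code: the backend addresses, the cookie name, and the `/api/` routing rule are all hypothetical.

```python
import hashlib

# Hypothetical backend addresses for illustration
BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def pick_l4(src_ip: str, dst_ip: str) -> str:
    """L4-style choice: based only on packet headers (here, a hash of
    source and destination IP), without looking at the payload."""
    digest = hashlib.md5(f"{src_ip}-{dst_ip}".encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

def pick_l7(url_path: str, cookies: dict) -> str:
    """L7-style choice: the HTTP message content itself drives the decision."""
    if "backend" in cookies:              # sticky session via a cookie
        return cookies["backend"]
    if url_path.startswith("/api/"):      # URL-based routing
        return BACKENDS[0]
    digest = hashlib.md5(url_path.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]
```

The L4 function never sees the request body, so all it can key on is addressing; the L7 function can implement stickiness and content routing.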
Regardless of the type, the balancer can support the following functions:
- Service discovery is the process of determining the set of available backends (Static, DNS, Consul, Etcd, etc.).
- Health checking of the discovered backends (active probing of a backend with an HTTP request, passive detection of problems such as TCP connection errors or repeated HTTP 503 responses, etc.).
- Balancing itself (round robin, random selection, source IP hash, URI).
- TLS termination and certificate verification.
- Security-related options (authentication, DoS attack prevention, rate limiting) and more.
NSX Edge offers support for two balancer deployment modes:
Proxy mode, or one-arm. In this mode, NSX Edge uses its own IP address as the source address when sending a request to one of the backends, so the balancer performs the Source NAT and Destination NAT functions simultaneously. The backend sees all traffic as coming from the balancer and replies directly to it. In this scheme, the balancer must be in the same network segment as the internal servers.
Here’s how it happens:
- The user sends a request to the VIP address (balancer address), which is configured on the Edge.
- Edge selects one of the backends and performs destination NAT, replacing the VIP address with the address of the selected backend.
- Edge performs source NAT, replacing the address of the sending user with its own.
- The packet is sent to the selected backend.
- The backend does not respond directly to the user, but to Edge, since the user's original address was changed to the balancer address.
- Edge sends the server response to the user.
The diagram below.
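The steps above can be sketched as a pair of address rewrites. All addresses here are placeholders chosen for illustration, not values from the lab.

```python
# Hypothetical addresses for illustration
VIP = "203.0.113.10"       # balancer (virtual server) address on the Edge
EDGE_IP = "192.168.1.1"    # Edge's own address in the server segment
BACKEND = "192.168.1.11"   # the backend the Edge selected

def one_arm_forward(packet: dict) -> dict:
    """Steps 2-4: destination NAT (VIP -> backend) plus source NAT
    (client -> Edge), so the backend replies to the Edge, not the client."""
    return {**packet, "dst": BACKEND, "src": EDGE_IP}

def one_arm_reply(packet: dict, client_ip: str) -> dict:
    """Step 6: the Edge relays the backend's reply to the original client,
    presenting the VIP as the source (it keeps the mapping in its NAT table)."""
    return {**packet, "src": VIP, "dst": client_ip}

request = {"src": "198.51.100.7", "dst": VIP}
forwarded = one_arm_forward(request)
# The backend sees src == EDGE_IP, so it answers the Edge directly (step 5)
```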
Transparent, or inline, mode. In this scenario, the balancer has interfaces in both the internal and external networks, and there is no direct access to the internal network from the external one. The built-in load balancer acts as a NAT gateway for virtual machines on the internal network.
The mechanism is as follows:
- The user sends a request to the VIP address (balancer address), which is configured on the Edge.
- Edge selects one of the backends and performs destination NAT, replacing the VIP address with the address of the selected backend.
- The packet is sent to the selected backend.
- The backend receives a request with the user's initial address (source NAT was not executed) and responds directly to it.
- Traffic is again received by the load balancer, since in the inline scheme it usually acts as the default gateway for the server farm.
- Edge performs source NAT to send traffic to the user, using its VIP as the source IP address.
The diagram below.
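For comparison with one-arm mode, the inline flow can be sketched the same way. Again, the addresses are hypothetical placeholders.

```python
# Hypothetical addresses for illustration
VIP = "203.0.113.10"      # external-facing balancer address
BACKEND = "10.10.10.11"   # server on the internal network

def inline_forward(packet: dict) -> dict:
    """Step 2: destination NAT only; the client's source address is kept."""
    return {**packet, "dst": BACKEND}

def inline_reply(packet: dict) -> dict:
    """Steps 5-6: the reply travels back through the Edge (the servers'
    default gateway), which rewrites the source back to the VIP."""
    return {**packet, "src": VIP}

request = inline_forward({"src": "198.51.100.7", "dst": VIP})
# Unlike one-arm mode, the backend sees the real client address here
```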

Practice
On my test bench I configured three servers running Apache over HTTPS. Edge will balance HTTPS requests round-robin, proxying each new request to the next server.
Let's get started.
We generate an SSL certificate that NSX Edge will use
You can import a valid CA certificate or use a self-signed one. In this test I will use self-signed.
- In the vCloud Director interface, go to the Edge services settings.

- Go to the Certificates tab. From the list of actions, choose to add a new CSR.

- Fill in the required fields and click Keep.

- Select the newly created CSR and choose the Self-sign CSR option.

- Select the validity period of the certificate and click Keep.

- The self-signed certificate now appears in the list of available certificates.

Customize Application Profile
Application profiles give you more control over network traffic and make managing it simple and effective. With their help, you can determine the behavior for specific types of traffic.
- Go to the Load Balancer tab and turn on the balancer. The Acceleration enabled option here allows the balancer to use faster L4 balancing instead of L7.

- Go to the Application profile tab to set the application profile. Click +.

- Specify the profile name and select the type of traffic for which the profile will be applied. I will explain some parameters.
Persistence - saves and tracks session data, for example: which particular server from the pool serves a user request. This ensures that user requests are sent to the same member of the pool throughout the life of the session or subsequent sessions.
Enable SSL passthrough — if you select this option, NSX Edge stops terminating SSL. Instead, termination occurs directly on the servers for which balancing is performed.
Insert X-Forwarded-For HTTP header - allows the web server behind the balancer to determine the client's original source IP address.
Enable Pool Side SSL - allows you to specify that the selected pool consists of HTTPS servers.

- Since I will be balancing HTTPS traffic, I need to enable Pool Side SSL and select the previously generated certificate in the Virtual Server Certificates tab -> Service Certificate.

- Similarly for Pool Certificates -> Service Certificate.

Create a pool of servers (Pools) to which traffic will be balanced
- Go to the Pools tab. Click +.

- Set the pool name, select the balancing algorithm (I will use round robin) and the type of monitoring for the backend health check. The Transparent option controls whether the original client source IP is visible to the internal servers.
- If the option is disabled, traffic reaches the internal servers with the balancer's IP as the source.
- If the option is enabled, the internal servers see the clients' source IP. In this configuration, the NSX Edge must act as the default gateway to ensure that return packets pass through it.
NSX supports the following balancing algorithms:
- IP_HASH — server selection based on the results of a hash function for the source and destination IP of each packet.
- LEASTCONN - balancing incoming connections, depending on the number of existing connections on a particular server. New connections will be directed to the server with the least number of connections.
- ROUND_ROBIN - new connections are sent to each server in turn, in accordance with the weight given to it.
- URI - the left side of the URI (before the question mark) is hashed and divided by the total weight of the servers in the pool. The result indicates which server receives the request, ensuring that the request is always sent to the same server, as long as all servers remain available.
- HTTPHEADER — balancing based on a specific HTTP header, which can be specified as a parameter. If the header is missing or has no value, the ROUND_ROBIN algorithm is applied.
- URL — Each HTTP GET request is searched by the URL parameter specified as an argument. If the parameter is followed by an equal sign and a value, then the value is hashed and divided by the total weight of the running servers. The result indicates which server receives the request. This process is used to track user IDs in requests and ensure that the same user id is always sent to the same server, as long as all servers are available.
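Two of these algorithms are easy to sketch in Python: weighted round robin and URI hashing. This is a minimal illustration of the logic described above, not NSX's actual implementation; the pool names and weights are invented.

```python
import hashlib
from itertools import cycle

# Hypothetical pool: (server name, weight)
POOL = [("srv1", 1), ("srv2", 1), ("srv3", 2)]

# ROUND_ROBIN: each server appears in the rotation as many times as its weight
_slots = [name for name, weight in POOL for _ in range(weight)]
_rotation = cycle(_slots)

def round_robin() -> str:
    """Hand out servers in turn, in proportion to their weights."""
    return next(_rotation)

def uri_pick(uri: str) -> str:
    """URI: hash the part before '?' modulo the total server weight, so a
    given URI keeps landing on the same server while the pool is unchanged."""
    path = uri.split("?", 1)[0]
    h = int(hashlib.md5(path.encode()).hexdigest(), 16)
    return _slots[h % len(_slots)]
```

With these weights, srv3 receives twice as many round-robin connections as srv1 or srv2, and any query-string variations of the same path hash to the same server.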

- In the Members block, click + to add servers to the pool.

Here you need to specify:
- server name;
- Server IP address;
- port on which the server will receive traffic;
- port for health check (Monitor healthcheck);
- Weight - this parameter adjusts the proportional share of traffic sent to a specific pool member;
- Max Connections - the maximum number of connections to the server;
- Min Connections - the minimum number of connections that the server must process before traffic is redirected to the next member of the pool.

Here is the final pool of three servers.

Add Virtual Server
- Go to the Virtual Servers tab. Click +.

- We activate the virtual server using Enable Virtual Server.
We give it a name, select the Application Profile, Pool created earlier, and specify the IP address to which Virtual Server will accept requests from outside. Specify the HTTPS protocol and port 443.
Optional parameters here:
Connection Limit - the maximum number of simultaneous connections that a virtual server can handle;
Connection Rate Limit (CPS) - the maximum number of new incoming requests per second.

This completes the balancer configuration; let's check that it works. The servers have the simplest possible configuration, which makes it clear which server from the pool processed a given request. During setup we chose the Round Robin balancing algorithm, and the Weight parameter of each server is one, so each successive request is processed by the next server in the pool.
Enter the external address of the balancer in the browser and see:

After refreshing the page, the request is processed by the next server:

And once more, to check the third server in the pool:

When checking, you can see that the certificate that Edge sends us is the same one that we generated at the very beginning.
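The same check can be done without a browser. A minimal sketch using Python's standard `ssl` module follows; the VIP address and the certificate file name are placeholders, not values from this lab.

```python
import ssl

def fetch_cert_pem(host: str, port: int = 443) -> str:
    """Fetch the PEM certificate a server presents. ssl.get_server_certificate
    does not validate the chain, which suits a self-signed certificate."""
    return ssl.get_server_certificate((host, port))

def same_cert(pem_a: str, pem_b: str) -> bool:
    """Compare two PEM blobs, ignoring line-wrapping differences."""
    return "".join(pem_a.split()) == "".join(pem_b.split())

# Usage (placeholder VIP address and file name):
# presented = fetch_cert_pem("203.0.113.10")
# print(same_cert(presented, open("edge-selfsigned.pem").read()))
```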
You can check the balancer status from the Edge gateway console by running `show service loadbalancer pool`.

Configure Service Monitor to check the status of servers in the pool
Using Service Monitor, we can monitor the status of the servers in the backend pool. If a response does not match the expected one, the server can be taken out of the pool so that it does not receive new requests.
By default, three validation methods are configured:
- TCP monitor,
- HTTP monitor,
- HTTPS monitor.
Create a new one.
- Go to the Service Monitoring tab, click +.

- Choose:
- name for the new method;
- the interval at which requests will be sent,
- response timeout,
- the monitoring type: an HTTPS request using the GET method, with expected status code 200 (OK), and the request URL.
- This completes the setting of the new Service Monitor, now we can use it when creating a pool.
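The logic of such a monitor can be sketched in a few lines. This is an illustration of the probe described above, not NSX code; the timeout, path, and the decision to skip certificate verification (because the lab uses a self-signed certificate) are assumptions.

```python
import ssl
import urllib.request

EXPECTED_STATUS = 200   # the expected status code configured above
TIMEOUT = 5             # seconds; mirrors the "response timeout" field (assumed)
URL_PATH = "/"          # the "request URL" field (assumed)

def probe(host: str) -> int:
    """One HTTPS GET probe against a backend, returning its status code.
    Certificate verification is disabled: the lab cert is self-signed."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(f"https://{host}{URL_PATH}",
                                timeout=TIMEOUT, context=ctx) as resp:
        return resp.status

def is_healthy(status: int) -> bool:
    """The monitor's pass/fail decision for a single probe."""
    return status == EXPECTED_STATUS
```

A real monitor runs `probe` on the configured interval and takes a server out of rotation after enough failed checks.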

Customize Application Rules
Application Rules are a way to manipulate traffic based on certain triggers. With this tool, we can create advanced load balancing rules that cannot be configured through Application profiles or through the other services available on the Edge Gateway.
- To create a rule, go to the Application Rules tab of the balancer.

- Specify the name and the script that the rule will use, and click Keep.

- After the rule is created, we need to edit the already configured Virtual Server.

- In the Advanced tab, we add the rule we created.

In the example above, we enabled support for TLSv1.
A couple more examples:
Redirect traffic to another pool.
Using this script, we can redirect traffic to another balancing pool if the main pool is down. For the rule to work, several pools must be configured on the balancer, and all members of the main pool must be down. Note that you must specify the pool name, not its ID.
acl pool_down nbsrv(PRIMARY_POOL_NAME) eq 0
use_backend SECONDARY_POOL_NAME if pool_down
Redirect traffic to an external resource.
Here we redirect traffic to an external website if all members of the main pool are down.
acl pool_down nbsrv(NAME_OF_POOL) eq 0
redirect location http://www.example.com if pool_down
More examples here.
That's all I have on the balancer. If you have questions, ask, and I will be happy to answer.