
FreeBSD Netgraph: counting traffic

Continuing the topic of the FreeBSD kernel graph subsystem Netgraph, using the Ethernet tunnel from the previous article as an example, we will try to account for traffic using the Cisco netflow protocol.

In the last article we met the ng_bridge, ng_ether, and ng_ksocket modules and used them to build an Ethernet tunnel over the Internet; today I will show how to use additional netgraph modules to account for the traffic passing through this tunnel.

We will use the ng_netflow module for traffic accounting.

Wikipedia says:
Netflow is a protocol developed by Cisco and designed to collect information about IP traffic within a network.
Cisco routers generate a netflow stream that is sent to a special node known as the netflow collector.

Our ng_netflow pretends to be a cisco router and delivers real cisco netflow to the collector. The collector gathers the information, groups traffic by flows and IP addresses, draws graphs, and so on, depending on the implementation. I used the trial version of NetFlow Analyzer 7.5.

The building blocks again


We will need:

ng_netflow.gif
ng_netflow - a Netgraph kernel subsystem module implementing the cisco netflow protocol, version 5. ng_netflow accepts incoming traffic, identifies it, and maintains counters for active traffic flows. Flows are distinguished by protocol, port numbers, ToS, and interface. Completed flows are sent as UDP datagrams to the netflow collector. A flow is considered complete when a TCP RST or FIN packet is seen. There are also timeouts after which a flow is terminated and exported to the collector: the active flow timeout is 1800 seconds by default, and the inactive flow timeout is 15 seconds by default.

ng_netflow hooks are named iface0, iface1, iface2, ..., ifaceN, with corresponding out0, out1, out2, ..., outN, plus an export hook for exporting statistics.

Traffic arriving on ifaceN is processed by the accounting engine. If the corresponding outN hook is connected, the traffic then passes to it unchanged; if it is not connected, the traffic goes nowhere. Traffic entering an outN hook passes unchanged to the ifaceN hook without being processed by the accounting engine. In other words, only traffic arriving on ifaceN hits the counters. The accounting behavior can be tuned with the settings described below. The resulting netflow UDP datagrams come out via the export hook, which is usually connected to the inet/dgram/udp hook of an ng_ksocket node.
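For contrast with the hub-based scheme used below, here is a minimal sketch of the inline wiring just described; the interface name em0 and the node name are illustrative:

```shell
# Illustrative inline setup (em0 is a hypothetical interface):
# em0 "lower" -> netflow "iface0" (accounted) -> "out0" -> em0 "upper"
ngctl mkpeer em0: netflow lower iface0
ngctl name em0:lower netflow
ngctl connect em0: netflow: upper out0
```

With this wiring all traffic entering the machine through em0 is accounted and passed on to the network stack unchanged.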

The module accepts the usual kind of control messages: info, ifinfo, setdlt, setifindex, settimeouts, setconfig, show. I will describe some of them; for the rest, read man ng_netflow.

Setdlt sets the data-link type of the interface connected to ifaceN. Of all the possible options (/usr/src/sys/net/bpf.h):

/*
* Data-link level type codes.
*/
#define DLT_NULL 0 /* BSD loopback encapsulation */
#define DLT_EN10MB 1 /* Ethernet (10Mb) */
#define DLT_EN3MB 2 /* Experimental Ethernet (3Mb) */
#define DLT_AX25 3 /* Amateur Radio AX.25 */
#define DLT_PRONET 4 /* Proteon ProNET Token Ring */
#define DLT_CHAOS 5 /* Chaos */
#define DLT_IEEE802 6 /* IEEE 802 Networks */
#define DLT_ARCNET 7 /* ARCNET */
#define DLT_SLIP 8 /* Serial Line IP */
#define DLT_PPP 9 /* Point-to-point Protocol */
#define DLT_FDDI 10 /* FDDI */
#define DLT_ATM_RFC1483 11 /* LLC/SNAP encapsulated atm */
#define DLT_RAW 12 /* raw IP */

only Ethernet and raw IP are supported, i.e. options 1 and 12. The first is the default. Syntax: "setdlt { iface=0 dlt=12 }"

Settimeouts sets the timeouts for active and inactive flows, after which statistics are sent to the collector. Syntax: "settimeouts { inactive=15 active=1800 }"
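Assuming the node is named "netflow", these control messages could be sent like this (the values are illustrative, not recommendations):

```shell
# Illustrative: switch iface0 to raw IP framing (dlt=12)
ngctl msg netflow: setdlt { iface=0 dlt=12 }
# Illustrative: expire inactive flows after 30 s and active flows after 600 s
ngctl msg netflow: settimeouts { inactive=30 active=600 }
# Query the accounting counters and the current timeouts
ngctl msg netflow: info
```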


ng_hub.gif
ng_hub - the name comes from networking terminology. Ethernet hubs have long since fallen out of use; unlike modern smart Ethernet switches, they could do only two simple things: accept a packet on any interface, and send that packet out all interfaces.
This module works exactly the same way: it receives data on any connected hook (hook names are arbitrary) and sends that data unchanged to all other connected hooks. It accepts no control messages.
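As a minimal sketch (hook and node names are arbitrary, "myhub" is illustrative), a hub can be hung off ngctl's own socket node. Note that an ng_hub node shuts down once its last hook is disconnected, so this must run as a single ngctl session rather than separate one-shot commands:

```shell
# Illustrative: build a hub off ngctl's own node ("." is ngctl itself).
# Anything written into one hook is replicated to all other hooks.
ngctl -f- <<-SEQ
	mkpeer . hub tmp a
	name .:tmp myhub
	connect . myhub: tmp2 b
SEQ
```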


We will not pass traffic through ng_netflow inline using the outN hook; instead we will use the ng_hub module to copy the traffic passing through the tunnel in both directions.

The graph looks like this:


ethernet_over_udp_netflow_scheme.gif


Compared to the old scheme, the new one has the following changes:

1. A new ng_hub module is inserted into the gap between the link2 hook of the ng_bridge module and the inet/dgram/udp hook of the ng_ksocket module.
2. A new ng_netflow node is connected to ng_hub.
3. ng_netflow is connected to a new instance of ng_ksocket, which connects to the netflow collector.

We assemble the graph on the system.


On the bsd2 server side, no changes are needed.
On the bsd1 server we will assemble everything from scratch.

Create an ng_bridge node and connect the "lower" hook of the em1 network interface to its "link0" hook.
ngctl mkpeer em1: bridge lower link0
Name the newly created node "switch"; it can be found at the path "em1:lower".
ngctl name em1:lower switch
Connect the "upper" hook of the em1 network interface to "link1" of our "switch".
ngctl connect switch: em1: link1 upper
Create an ng_hub node and connect its "hublink0" hook to "link2" of our "switch".
ngctl mkpeer switch: hub link2 hublink0
Name the newly created node "hub"; it can be found at the path "switch:link2".
ngctl name switch:link2 hub
Create an ng_ksocket node and connect its "inet/dgram/udp" hook to "hublink1" of our "hub".
ngctl mkpeer hub: ksocket hublink1 inet/dgram/udp
Name the newly created node "hub_socket"; it can be found at the path "hub:hublink1".
ngctl name hub:hublink1 hub_socket
Create an ng_netflow node and connect its "iface0" hook to "hublink2" of our "hub".
ngctl mkpeer hub: netflow hublink2 iface0
Name the newly created node "netflow"; it can be found at the path "hub:hublink2".
ngctl name hub:hublink2 netflow
Create another ng_ksocket node and connect its "inet/dgram/udp" hook to the "export" hook of the newly created "netflow".
ngctl mkpeer netflow: ksocket export inet/dgram/udp
Name the newly created node "netflow_socket"; it can be found at the path "netflow:export".
ngctl name netflow:export netflow_socket
Send the "bind" command with parameters to our "hub_socket": ksocket will take port 7777 on IP 1.1.1.1.
ngctl msg hub_socket: bind inet/1.1.1.1:7777
Send the "connect" command with parameters to our "hub_socket": ksocket will connect to port 7777 at IP address 2.2.2.2.
ngctl msg hub_socket: connect inet/2.2.2.2:7777
Send the "connect" command with parameters to our "netflow_socket": ksocket will connect to port 9996 at IP address 3.3.3.3, where the netflow collector should live.
ngctl msg netflow_socket: connect inet/3.3.3.3:9996
Tell the ng_ether node of the em1 interface to switch to promiscuous mode, listening for packets not addressed to it: we now need to accept packets for the devices in our virtual network.
ngctl msg em1: setpromisc 1
ngctl msg em1: setautosrc 0

The final graph build script:
#!/bin/sh
self=1.1.1.1
peer=2.2.2.2
collector=3.3.3.3:9996
port=7777
if=em1

case "$1" in
start)
echo "Starting netgraph switch."
ngctl mkpeer ${if}: bridge lower link0
ngctl name ${if}:lower switch
ngctl connect switch: ${if}: link1 upper
ngctl mkpeer switch: hub link2 hublink0
ngctl name switch:link2 hub
ngctl mkpeer hub: ksocket hublink1 inet/dgram/udp
ngctl name hub:hublink1 hub_socket
ngctl mkpeer hub: netflow hublink2 iface0
ngctl name hub:hublink2 netflow
ngctl mkpeer netflow: ksocket export inet/dgram/udp
ngctl name netflow:export netflow_socket
ngctl msg hub_socket: bind inet/${self}:${port}
ngctl msg hub_socket: connect inet/${peer}:${port}
ngctl msg netflow_socket: connect inet/${collector}
ngctl msg ${if}: setpromisc 1
ngctl msg ${if}: setautosrc 0
echo "Ok."
exit 0
;;
stop)
echo "Stopping netgraph switch."
ngctl shutdown netflow_socket:
ngctl shutdown netflow:
ngctl shutdown hub_socket:
ngctl shutdown hub:
ngctl shutdown switch:
ngctl shutdown ${if}:
echo "Ok."
exit 0
;;
restart)
sh $0 stop
sh $0 start
;;
*)
echo "Usage: `basename $0` { start | stop | restart }"
exit 64
;;
esac


Let's check the result. There are 8 nodes in total:

[root@bsd1] /root/> ngctl list
Name: em0 Type: ether ID: 00000001 Num hooks: 0
Name: em1 Type: ether ID: 00000002 Num hooks: 2
Name: switch Type: bridge ID: 000002c7 Num hooks: 3
Name: ngctl56729 Type: socket ID: 000002e1 Num hooks: 0
Name: hub_socket Type: ksocket ID: 000002ce Num hooks: 1
Name: hub Type: hub ID: 000002cb Num hooks: 3
Name: netflow_socket Type: ksocket ID: 000002d4 Num hooks: 1
Name: netflow Type: netflow ID: 000002d1 Num hooks: 2


After pushing some traffic through the tunnel, let's take a look:
[root@bsd1] /root/> ngctl msg netflow: info
Rec'd response "info" (805306369) from "[2d1]:":
Args: { Bytes=1722435 Packets=13683 Records used=27 Active expiries=203 Inactive expiries=5566 Inactive timeout=15 Active timeout=1800 }


And looking at the collector's statistics, you can see the traffic passing through the tunnel.

netflow_analyzer.png

the end


The practical value of traffic accounting inside a tunnel is questionable, so this graph should be treated as an example for further understanding of how the subsystem works and for adapting ready-made schemes to your own needs.

In the following articles I will describe how the netgraph subsystem interacts with ipfw, along with a more practical way of accounting traffic via ng_netflow.

Until next time.

Source: https://habr.com/ru/post/87407/
