
TCP/IP proxy in Go

I have once again returned to my favorite exercise for learning new languages. After the blog engine in Go, I wanted to stretch my fingers again, so my usual TCP/IP proxy/debugger is now written in Go as well.

In short, a TCP/IP proxy is a program that accepts connections and forwards them to a specified address, logging the transmitted data along the way. This is very useful when debugging home-grown network protocols.
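If you strip away all the logging, the forwarding core of such a proxy is tiny. Below is a minimal sketch of the idea only, not the program discussed in this post; the listen port and the target address are hard-coded placeholders:

package main

import (
    "io"
    "net"
)

// Minimal forwarding core: accept a connection, dial the target,
// and copy bytes in both directions until either side closes.
func main() {
    ln, err := net.Listen("tcp", ":8080") // placeholder listen port
    if err != nil {
        panic(err)
    }
    for {
        local, err := ln.Accept()
        if err != nil {
            continue
        }
        go func(local net.Conn) {
            remote, err := net.Dial("tcp", "example.com:80") // placeholder target
            if err != nil {
                local.Close()
                return
            }
            go io.Copy(remote, local) // client -> server
            io.Copy(local, remote)    // server -> client
            local.Close()
            remote.Close()
        }(local)
    }
}

The real program below does essentially this, plus logging of everything that passes through.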

In terms of functionality, the Go version, like the Erlang one, keeps three logs: a bi-directional hex dump plus binary logs of the raw data in both directions, "from" and "to" the remote host. The Python version does not keep binary logs.
Of course, everything is concurrent. And since concurrent programming in Go is so simple (and safe), the number of parallel activities per connection is even larger than in the Erlang version.

In the Erlang version, four processes worked on each connection.

In the Go version it is slightly different. Per connection there are:

- two forwarding goroutines, one for each direction;
- one goroutine for the bi-directional hex-dump log;
- two goroutines for the binary logs, "from" and "to" the remote host.

Five in total.

In both cases the reading tasks log data by sending messages to the logger tasks. Naturally, there is no nonsense like mutexes or condition variables: the synchronization problems are solved elegantly with Go channels.

Below is the source. It differs from the one in the repository only by the abundance of comments. For people not very familiar with Go, some of the moments may be interesting.

package main

import (
    "encoding/hex"
    "flag"
    "fmt"
    "net"
    "os"
    "runtime"
    "strings"
    "time"
)

var (
    host        *string = flag.String("host", "", "target host or address")
    port        *string = flag.String("port", "0", "target port")
    listen_port *string = flag.String("listen_port", "0", "listen port")
)

func die(format string, v ...interface{}) {
    os.Stderr.WriteString(fmt.Sprintf(format+"\n", v...))
    os.Exit(1)
}

// Logger goroutine for the bi-directional hex-dump log.
func connection_logger(data chan []byte, conn_n int, local_info, remote_info string) {
    log_name := fmt.Sprintf("log-%s-%04d-%s-%s.log", format_time(time.Now()),
        conn_n, local_info, remote_info)
    logger_loop(data, log_name)
}

// Logger goroutine for a one-directional binary log.
func binary_logger(data chan []byte, conn_n int, peer string) {
    log_name := fmt.Sprintf("log-binary-%s-%04d-%s.log", format_time(time.Now()),
        conn_n, peer)
    logger_loop(data, log_name)
}

// Common logger loop: receives byte slices from the channel and writes them
// to the file. An empty slice is the signal to stop.
func logger_loop(data chan []byte, log_name string) {
    f, err := os.Create(log_name)
    if err != nil {
        die("Unable to create file %s, %v\n", log_name, err)
    }
    defer f.Close()
    for {
        b := <-data
        if len(b) == 0 {
            break
        }
        f.Write(b)
        f.Sync() // Flush to disk right away.
    }
}

func format_time(t time.Time) string {
    return t.Format("2006.01.02-15.04.05")
}

func printable_addr(a net.Addr) string {
    return strings.Replace(a.String(), ":", "-", -1)
}

// One direction of a proxied connection: where to read from, where to write
// to, which loggers to feed, and where to report completion.
type Channel struct {
    from, to              net.Conn
    logger, binary_logger chan []byte
    ack                   chan bool
}

// Pumps data in one direction, logging everything that passes through.
// Each connection gets two of these goroutines, one per direction.
func pass_through(c *Channel) {
    from_peer := printable_addr(c.from.LocalAddr())
    to_peer := printable_addr(c.to.LocalAddr())
    b := make([]byte, 10240)
    offset := 0
    packet_n := 0
    for {
        n, err := c.from.Read(b)
        if err != nil {
            c.logger <- []byte(fmt.Sprintf("Disconnected from %s\n", from_peer))
            break
        }
        if n > 0 {
            // Log the packet header, then the hex dump of the data itself.
            c.logger <- []byte(fmt.Sprintf("Received (#%d, %08X) %d bytes from %s\n",
                packet_n, offset, n, from_peer))
            c.logger <- []byte(hex.Dump(b[:n]))
            c.binary_logger <- b[:n]
            c.to.Write(b[:n])
            c.logger <- []byte(fmt.Sprintf("Sent (#%d) to %s\n", packet_n, to_peer))
            offset += n
            packet_n += 1
        }
    }
    c.from.Close()
    c.to.Close()
    c.ack <- true // Tell the parent goroutine that this direction is done.
}

// Serves one accepted connection: connects to the target, starts the loggers
// and the two forwarding goroutines, then waits for both directions to finish.
func process_connection(local net.Conn, conn_n int, target string) {
    remote, err := net.Dial("tcp", target)
    if err != nil {
        fmt.Printf("Unable to connect to %s, %v\n", target, err)
        local.Close()
        return
    }
    local_info := printable_addr(remote.LocalAddr())
    remote_info := printable_addr(remote.RemoteAddr())
    started := time.Now()
    // Channels feeding the three logger goroutines.
    logger := make(chan []byte)
    from_logger := make(chan []byte)
    to_logger := make(chan []byte)
    // Channel used by the forwarding goroutines to report completion.
    ack := make(chan bool)
    // Start the loggers.
    go connection_logger(logger, conn_n, local_info, remote_info)
    go binary_logger(from_logger, conn_n, local_info)
    go binary_logger(to_logger, conn_n, remote_info)
    logger <- []byte(fmt.Sprintf("Connected to %s at %s\n", target, format_time(started)))
    // Start the two forwarding goroutines, one per direction.
    go pass_through(&Channel{remote, local, logger, to_logger, ack})
    go pass_through(&Channel{local, remote, logger, from_logger, ack})
    // Wait for both directions to finish.
    <-ack
    <-ack
    finished := time.Now()
    duration := finished.Sub(started)
    logger <- []byte(fmt.Sprintf("Finished at %s, duration %s\n",
        format_time(started), duration.String()))
    // Ask the loggers to stop: an empty slice is the shutdown signal.
    logger <- []byte{}
    from_logger <- []byte{}
    to_logger <- []byte{}
}

func main() {
    // Let the Go runtime use all available cores.
    runtime.GOMAXPROCS(runtime.NumCPU())
    flag.Parse()
    if flag.NFlag() != 3 {
        fmt.Printf("usage: gotcpspy -host target_host -port target_port -listen_port=local_port\n")
        flag.PrintDefaults()
        os.Exit(1)
    }
    target := net.JoinHostPort(*host, *port)
    fmt.Printf("Start listening on port %s and forwarding data to %s\n", *listen_port, target)
    ln, err := net.Listen("tcp", ":"+*listen_port)
    if err != nil {
        fmt.Printf("Unable to start listener, %v\n", err)
        os.Exit(1)
    }
    conn_n := 1
    for {
        // Accept connections and serve each one in its own goroutine.
        if conn, err := ln.Accept(); err == nil {
            go process_connection(conn, conn_n, target)
            conn_n += 1
        } else {
            fmt.Printf("Accept failed, %v\n", err)
        }
    }
}
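A small side note that is not part of the code above: instead of sending an empty slice as a stop sentinel, a slightly more idiomatic variant would be to close the channel and let the logger range over it. A sketch of that alternative logger_loop (an assumption of mine, not the repository code):

// Alternative variant: the producer calls close(data) when it is done,
// and the loop below exits once the channel is drained.
func logger_loop(data chan []byte, log_name string) {
    f, err := os.Create(log_name)
    if err != nil {
        die("Unable to create file %s, %v\n", log_name, err)
    }
    defer f.Close()
    for b := range data {
        f.Write(b)
        f.Sync()
    }
}

With that variant, process_connection would end with close(logger), close(from_logger), close(to_logger) instead of the three empty-slice sends.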

Again, each connection is served by five goroutines, and I did not do that just for fun. It simply seemed to me that there are clearly independent subtasks here that it is logical to run in parallel. If I were writing this in C++/boost, I would most likely have settled for a single thread per connection (or maybe the whole program would have been purely single-threaded on top of some sophisticated multiplexing library), and it is quite possible that the C++ version would even be faster despite the single thread. But that is not my point. Go pushes you toward concurrent programming (rather than pushing you away from it, like C++, even on the steroids of the new standard). And sooner or later there will be tasks where convenient concurrency becomes the key factor.

You can run it like this (at least Go release 1 is required):

 go run gotcpspy.go -host pop.yandex.ru -port 110 -listen_port 8080 

Output:

 Start listening on port 8080 and forwarding data to pop.yandex.ru:110 
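If you prefer not to recompile on every start, you can of course build a standalone binary first with the standard Go tooling and run that instead:

 go build gotcpspy.go
 ./gotcpspy -host pop.yandex.ru -port 110 -listen_port 8080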

Then, if you run in another window:

 telnet localhost 8080 

and type, for example, "USER test", Enter, then "PASS none", Enter, three logs will be created (the date in the names will, of course, be different).

The general log, log-2012.04.20-19.55.17-0001-192.168.1.41-49544-213.180.204.37-110.log:

Connected to pop.yandex.ru:110 at 2012.04.20-19.55.17
Received (#0, 00000000) 38 bytes from 192.168.1.41-49544
00000000  2b 4f 4b 20 50 4f 50 20  59 61 21 20 76 31 2e 30  |+OK POP Ya! v1.0|
00000010  2e 30 6e 61 40 32 36 20  48 74 6a 4a 69 74 63 50  |.0na@26 HtjJitcP|
00000020  52 75 51 31 0d 0a                                 |RuQ1..|
Sent (#0) to [--1]-8080
Received (#0, 00000000) 11 bytes from [--1]-8080
00000000  55 53 45 52 20 74 65 73  74 0d 0a                 |USER test..|
Sent (#0) to 192.168.1.41-49544
Received (#1, 00000026) 23 bytes from 192.168.1.41-49544
00000000  2b 4f 4b 20 70 61 73 73  77 6f 72 64 2c 20 70 6c  |+OK password, pl|
00000010  65 61 73 65 2e 0d 0a                              |ease...|
Sent (#1) to [--1]-8080
Received (#1, 0000000B) 11 bytes from [--1]-8080
00000000  50 41 53 53 20 6e 6f 6e  65 0d 0a                 |PASS none..|
Sent (#1) to 192.168.1.41-49544
Received (#2, 0000003D) 72 bytes from 192.168.1.41-49544
00000000  2d 45 52 52 20 5b 41 55  54 48 5d 20 6c 6f 67 69  |-ERR [AUTH] logi|
00000010  6e 20 66 61 69 6c 75 72  65 20 6f 72 20 50 4f 50  |n failure or POP|
00000020  33 20 64 69 73 61 62 6c  65 64 2c 20 74 72 79 20  |3 disabled, try |
00000030  6c 61 74 65 72 2e 20 73  63 3d 48 74 6a 4a 69 74  |later. sc=HtjJit|
00000040  63 50 52 75 51 31 0d 0a                           |cPRuQ1..|
Sent (#2) to [--1]-8080
Disconnected from 192.168.1.41-49544
Disconnected from [--1]-8080
Finished at 2012.04.20-19.55.17, duration 5.253979s

The binary log of the outgoing data, log-binary-2012.04.20-19.55.17-0001-192.168.1.41-49544.log:

USER test
PASS none

The binary log of the incoming data, log-binary-2012.04.20-19.55.17-0001-213.180.204.37-110.log:

+OK POP Ya! v1.0.0na@26 HtjJitcPRuQ1
+OK password, please.
-ERR [AUTH] login failure or POP3 disabled, try later. sc=HtjJitcPRuQ1

Now let's measure the performance. We will download a file directly, and then through the program.

Downloading directly (the file is about 72 MB):

time wget http://www.erlang.org/download/otp_src_R15B01.tar.gz
...
Saving to: `otp_src_R15B01.tar.gz'
...
real 1m2.819s

Now let's download it through the program, after starting it:

  go run gotcpspy.go -host=www.erlang.org -port=80 -listen_port=8080 

Downloading:

time wget http://localhost:8080/download/otp_src_R15B01.tar.gz
...
Saving to: `otp_src_R15B01.tar.gz.1'
...
real 0m56.209s

Just in case, you can compare the results:

  diff otp_src_R15B01.tar.gz otp_src_R15B01.tar.gz.1

In my case the files are identical, so everything works correctly.

Now the timings. I repeated the experiment several times (on a Mac Air) and, surprisingly, downloading through the program was consistently not slower but even slightly faster: for example, 1m2.819s directly versus 0m56.209s through the program. The only explanation I can offer is that wget probably works in a single thread, while the program receives data from the local and the remote socket in two goroutines, which can give a slight speedup. Still, the difference is minimal, and on another machine or network it may well disappear; the main thing is that it works at least no slower than a direct connection, despite writing rather massive logs during the transfer.

So, of the three versions of this program, in Python, Erlang, and Go, the last one is the one I like the most so far.

All in all, it seemed to me a good experiment with concurrency in Go.

Posts on the topic



Repository links



PS


By the way, if some Java programmer were to knock together a similar program (ideally one that does not require Eclipse/IDEA/ant/maven/spring/log4j/ivy, etc. to build), it would be very interesting to compare. Not in terms of efficiency and speed, but in terms of beauty and elegance.

Source: https://habr.com/ru/post/142527/

