Earlier, I published a couple of short posts about the potential role of Spring Boot 2 in reactive programming. After that, I received a series of questions about how asynchronous operations work in general. Today I want to figure out what non-blocking I/O is and how to apply this knowledge to build a small TCP server in Python that can handle many open and heavy (long-lived) connections in a single thread. Knowledge of Python is not required: everything will be extremely simple, with plenty of comments. I invite everyone!
Like many other developers, I really enjoy experiments, so the rest of this article consists of a series of experiments and the conclusions drawn from them. It is assumed that you are not sufficiently familiar with the subject and will be willing to experiment with me. The sample sources can be found on GitHub.
Let's start by writing a very simple TCP server. Its task is to receive and print data from a socket and return the string "Hello from server!". It looks like this:
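The exact listing lives in the GitHub repository; what follows is a minimal sketch of such a blocking server, with the address and messages chosen to match the console output shown later:

```python
import socket

SERVER_ADDRESS = ('localhost', 8686)

# Create a TCP socket and start listening for incoming connections.
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server_socket.bind(SERVER_ADDRESS)
server_socket.listen(10)

print('server is running, please, press ctrl+c to stop')

while True:
    # accept() blocks until a new client connects.
    connection, address = server_socket.accept()
    print('new connection from {address}'.format(address=address))

    # recv() blocks until the client sends some data.
    data = connection.recv(1024)
    print(data)

    connection.send(bytes('Hello from server!', encoding='UTF-8'))
    connection.close()
```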
Everything here is pretty simple. If you are not familiar with the concept of a socket, this article is a very simple and practical introduction. We create a socket, accept incoming connections, and process them according to the given logic. It is worth paying attention to the messages: when a new connection with a client is created, we write about it to the console.
I want to note right away that you should not dig too deeply into the program listings until you have read the whole article. It is perfectly normal if something is not entirely clear at the very beginning. Just keep reading.
There is not much point in a server without a client. Therefore, the next step is to write a small client that uses the server:
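Again a sketch rather than the original listing (the connection count here is my assumption; the real sources are on GitHub):

```python
import socket

SERVER_ADDRESS = ('localhost', 8686)
MAX_CONNECTIONS = 10

# First, establish all the connections...
connections = [socket.create_connection(SERVER_ADDRESS)
               for _ in range(MAX_CONNECTIONS)]

# ...and only then use them to transfer data.
for number, connection in enumerate(connections):
    message = 'hello from client number {number}'.format(number=number)
    connection.send(bytes(message, encoding='UTF-8'))

for connection in connections:
    print(connection.recv(1024))
    connection.close()
```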
An important feature here is that we first establish the maximum possible number of connections, and only then use them to transfer data.
Let's start the server. The first thing we see:
```
server is running, please, press ctrl+c to stop
```
This means that we have successfully started our server and it is ready to accept incoming requests. Start the client, and we immediately see in the server console (your ports may differ):
```
server is running, please, press ctrl+c to stop
new connection from ('127.0.0.1', 39196)
b'hello from client number 0'
new connection from ('127.0.0.1', 39198)
b'hello from client number 1'
...
```
This was to be expected: in an infinite loop, we accept a new connection and immediately process its data. So what is the problem? Earlier, we configured the server with the server_socket.listen(10) option. It sets the maximum size of the queue of connections that have not yet been accepted. But that hardly helps, because we accept connections one at a time. What can be done in this situation? There are, in fact, several ways out.
- Parallelize using threads/processes (for example, with fork or a pool). Read more here.
- Process requests not as they connect to the server, but as those connections are filled with the required amount of data. Simply put, we can immediately open the maximum number of resources and read from as many of them as we can (spending, in the ideal case, only as much CPU time as this requires).
The second idea looks tempting: just one thread handling multiple connections. Let's see how it will look. Do not be afraid of the abundance of code; if something is not immediately clear, that is quite normal. You can try to run it and debug it yourself:
Asynchronous server:

```python
import select
import socket

SERVER_ADDRESS = ('localhost', 8686)
```
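The listing begins as above; the full sources are on GitHub. A self-contained sketch of a select-based server that matches the output below (the helper names here are my own) could look like this:

```python
import select
import socket

SERVER_ADDRESS = ('localhost', 8686)
MAX_CONNECTIONS = 10

# Sockets we want to read from and write to.
INPUTS = list()
OUTPUTS = list()


def get_non_blocking_server_socket():
    # A non-blocking socket: accept()/recv() never freeze the thread.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setblocking(0)
    server.bind(SERVER_ADDRESS)
    server.listen(MAX_CONNECTIONS)
    return server


def clear_resource(resource):
    # Forget the socket and release it.
    if resource in OUTPUTS:
        OUTPUTS.remove(resource)
    if resource in INPUTS:
        INPUTS.remove(resource)
    resource.close()


def handle_readables(readables, server):
    for resource in readables:
        if resource is server:
            # A readable server socket means a new incoming connection.
            connection, address = resource.accept()
            connection.setblocking(0)
            INPUTS.append(connection)
            print('new connection from {address}'.format(address=address))
        else:
            # A readable client socket means its data has arrived.
            data = resource.recv(1024)
            if data:
                print('getting data: {data}'.format(data=str(data)))
                if resource not in OUTPUTS:
                    OUTPUTS.append(resource)
            else:
                # Empty data means the client closed the connection.
                clear_resource(resource)


def handle_writables(writables):
    for resource in writables:
        try:
            resource.send(bytes('Hello from server!', encoding='UTF-8'))
        except OSError:
            clear_resource(resource)
        else:
            OUTPUTS.remove(resource)


if __name__ == '__main__':
    server_socket = get_non_blocking_server_socket()
    INPUTS.append(server_socket)
    print('server is running, please, press ctrl+c to stop')
    try:
        while INPUTS:
            # select() blocks until at least one socket is ready.
            readables, writables, _ = select.select(INPUTS, OUTPUTS, INPUTS)
            handle_readables(readables, server_socket)
            handle_writables(writables)
    except KeyboardInterrupt:
        clear_resource(server_socket)
```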
Let's run our new server and look at the console:
Asynchronous server output:

```
server is running, please, press ctrl+c to stop
new connection from ('127.0.0.1', 56608)
new connection from ('127.0.0.1', 56610)
new connection from ('127.0.0.1', 56612)
new connection from ('127.0.0.1', 56614)
new connection from ('127.0.0.1', 56616)
new connection from ('127.0.0.1', 56618)
new connection from ('127.0.0.1', 56620)
new connection from ('127.0.0.1', 56622)
new connection from ('127.0.0.1', 56624)
getting data: b'hello from client number 0'
new connection from ('127.0.0.1', 56626)
getting data: b'hello from client number 1'
getting data: b'hello from client number 2'
```
As you can see from the output, we accept new connections and data almost in parallel. Moreover, we do not wait for data from a new connection; instead, we establish the next one.
How does it work?
The point is that all our operations with resources (and access to a socket belongs to this category) go through system calls of the operating system. In short, a system call is a call into the API of the operating system.
Consider what happens in the first case and in the second.
Synchronous call
Let's look at a diagram:

The first arrow indicates that our application asks the operating system to retrieve data from a resource. After that, our program is blocked until the desired event occurs. The downside is obvious: if we have a single thread, then other users have to wait until the current one has been processed.
Asynchronous call
Now let's look at a diagram that illustrates an asynchronous call:

The first arrow, as in the first case, makes a request to the OS (operating system) to receive data from a resource. But look at what happens next: we do not wait for data from the resource and keep working. So what should we do? We have placed an order with the OS and are not waiting for the result right away. The simplest answer is to poll the system for the data ourselves. This way we utilize resources without blocking our thread.
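Purely as an illustration, here is a sketch of such polling with a non-blocking socket (assuming the server above is running on the same address):

```python
import socket

# Connect to the server and switch the socket to non-blocking mode.
sock = socket.create_connection(('localhost', 8686))
sock.setblocking(0)
sock.send(bytes('hello from client', encoding='UTF-8'))

while True:
    try:
        # Ask the OS for data; a non-blocking recv() returns immediately.
        print(sock.recv(1024))
        break
    except BlockingIOError:
        # No data yet -- we burn CPU spinning around the loop again.
        pass
```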
In practice, however, such a system is impractical. The state in which we constantly check the data and wait for some event is called busy waiting (active waiting). The downside is obvious: we waste CPU time when the information has not been updated. A better solution is to keep the blocking but make it "smart". Our thread does not just wait for one particular event; instead, it waits for any change of data in our program. This is exactly how the select function works in our asynchronous server:
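In Python, this "smart" blocking is a single select.select call: we hand the OS every socket we are interested in and go to sleep until one of them is ready. A tiny self-contained illustration (the port and timeout are arbitrary):

```python
import select
import socket

# A listening socket that nobody connects to.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('localhost', 8687))
server.listen(1)

# The thread sleeps inside select() instead of spinning in a loop:
# it wakes up either when `server` becomes readable (a client is
# waiting to be accepted) or when the one-second timeout expires.
readables, _, _ = select.select([server], [], [], 1.0)
print('ready sockets:', readables)  # [] -- nothing happened within 1 s
```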

Now you can return to the implementation of our asynchronous server and look at it with this new knowledge. The first thing that catches the eye is the manner of operation: while in the first case our program was executed "from top to bottom", in the second case we operate on events. This approach to software development is called event-driven programming.
It should be noted right away that this approach is not a silver bullet. It has plenty of drawbacks. First, such code is harder to maintain and change. Second, we have, and always will have, blocking calls that spoil everything. For example, in the program above we used the print function. The fact is that this function also accesses the OS, so our execution thread is blocked and the other data sources wait patiently.
Conclusion
The choice of approach depends on the problem we are solving; let the task itself pick the most productive approach. For example, the popular Java web server Tomcat uses threads, the equally popular Nginx server uses the asynchronous approach, and the creators of the popular Python web server Gunicorn went down the prefork path.
Thank you for reading the article to the end! Next time (soon), I will talk about other possible non-blocking situations in the life of our programs. I will be glad to see you in my following posts.