
The easiest way to step on a rake is to use asynchrony. I know programmers who have proven themselves strong professionals yet still trip over multithreading. To start, let me tell my favorite deadlock story (apologies if you have heard it before, but it is painfully good). About ten years ago, the
Associated Press reported how a pilot tried to land a passenger plane at the airport of the Swedish city of Kristianstad, but no dispatcher answered his request. It turned out that the air traffic controller had not yet returned from vacation. As a result, the plane circled over the airport until a substitute controller was urgently called in, who
landed the plane half an hour later. The debriefing showed that the cause was the late arrival of an aircraft: the very one carrying that same controller, hurrying back to work from vacation.
So when we confront asynchrony, we have to break the familiar picture in our heads: subjectively, the world around us is single-threaded. If we send a letter and receive an answer a week later, for us everything happens within one thread; we are not responsible for the actions of the respondent or the postman. Our code, however, is.
To simplify the programmer's life, you can use the
Reactor pattern. The best implementation of it for Ruby, in my opinion, is
EventMachine. But it has some non-obvious aspects, and I plan to briefly describe one of them.
EventMachine
gem install eventmachine
The
EventMachine
class is more or less documented, and it's easy to figure out the simple cases. Usually it looks something like this (
EM
is an alias for
EventMachine
):
begin EM.run do …
You can hang a hook on reactor shutdown (
EventMachine.add_shutdown_hook { puts "Exiting ..." } ). And, of course, you can create
asynchronous connections on the fly. The documentation, again, exists. And is mostly even intelligible.
But enough preamble.
Collecting the results
As long as everything fits the "request → response handling" model, there are no problems. But what if we need to send the next request depending on the result of the previous one? To keep this note from getting too long, and to avoid chewing over trivial points, I'll go straight to the task:
find the component we need via the discovery server and establish communication with it
In words it goes something like this: we send a request to discovery, get back a list of components, and poll each component for its capabilities. If the feature we need is in the list, we perform initialization.
Here’s what it looks like with
EventMachine
(I’ve removed everything that isn’t directly related to working with
EM
):
@stream.write_with_handler(disco) do |result| …
All the work is done for us by iterators and their magic
map function. The lambda passed as the last argument (the one marked with the "yielding" comment) is executed only when the disco#info results for every discovered component have been collected.
I apologize if this seems obvious to some, but Google didn’t give me a quick solution, and fiddling with
Fiber
s here turns into pure hell.