In this article I want to present my solution to the problem of asynchronous functionality in JavaScript: the introduction of a completely asynchronous model of computation. I will describe the concept itself and give a link to the implementation. If you are interested, read on.
Let's start with the main question: synchronous or asynchronous? When a synchronous computation process depends on asynchronous functionality it cannot do without, that is a problem. And since asynchrony is usually the more advantageous option and is generally considered best practice, this problem needs to be solved.
What already exists to solve it? There are callbacks, and there are promises. But promises are just repackaged callbacks; they simplify the problem rather than solve it. To really solve the problem, everything has to be reduced to a single model: either completely synchronous or completely asynchronous.
Recent standards brought native promises and then async / await, which together allow the computation process to be reduced to a completely synchronous model. It would seem that the problem has been solved, but I have a number of complaints about this solution.
Let's forget about async / await, forget about promises: what should it look like? Convenient, uniform, with a minimum of extra code? Something like an ordinary function, only asynchronous.
That is, it would be good to turn an ordinary synchronous JS function into an asynchronous one. Let's list the steps needed to accomplish this:
1. Instead of the call stack, a call tree is used.
2. The tree node of a branch is kept alive as long as work on that branch continues, so that the flow of execution is guaranteed to be able to return to it after leaving.
3. Special wrapper methods call and back are introduced for the asynchronous call and the asynchronous return.
Let's try to draw all of this. By our own requirement, the definition of an asynchronous function should look exactly the same as a synchronous one:
function A () { }
Let call go forward through the call tree, creating a new vertex for the callee.
Let back go one node back through the call tree, to the caller.
Here the first two points are realized: the function receives its call tree vertex, top, as a parameter:
function A ( top ) { }
Point three is the special wrapper method back which, as already described above, goes one node back through the call tree when invoked; this is how the asynchronous return is performed. Let's agree right away that the synchronous return (return) is no longer taken into account: the asynchronous call continues to exist even after the function has returned synchronously.
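To make that last point concrete, here is a minimal illustration in the spirit of the model (F and someAsyncOperation are invented names for this sketch, and back is the wrapper described above, not shown yet):

function someAsyncOperation(callback) {
    // a stand-in for any real asynchronous operation
    setTimeout(function () { callback('done'); }, 10);
}

function F(top) {
    someAsyncOperation(function (result) {
        // the real, asynchronous return: one node back through the call tree
        back(top, result);
    });
    // F returns synchronously right here, but that return is ignored by the model:
    // the asynchronous call represented by the vertex `top` is still alive
}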
From the above it becomes clear that a synchronous function turned asynchronous will be entered more than once: it runs once at the initial call and once more every time one of its own calls comes back via back.
We need a way to isolate the piece of code needed for each particular entry and to transfer the flow of execution to exactly that place. In addition, we need a way to keep data between sub-call returns, since the execution context disappears after each synchronous return. We introduce the following:
- a mark field is introduced in top;
- call sets mark to '#';
- back sets mark to the function name taken from the current vertex (which is deleted);
- a switch over top.mark handles the flow of execution;
- data that has to survive between returns (back) is written directly to top: top.x = 1;

function A ( top ) {
    switch ( top.mark ) {
        case '#': break;
        case 'B': break;
        case 'C': break;
    }
}
Putting it all together:
- call puts the passed arguments in top.arg;
- back puts the returned result in top.ret.

function A ( top ) {
    switch ( top.mark ) {
        case '#':
            call(top, 'B', 'some_data');
            break;
        case 'B':
            call(top, 'C', top.ret + 1);
            break;
        case 'C':
            back(top, {x: top.ret});
            break;
    }
}
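For clarity, here is one possible minimal implementation of call and back that follows the rules listed so far. This is only a sketch: the functions registry, the setTimeout scheduling, the toy callees B and C and the handling of the root vertex are my assumptions, not the author's code; A is the function from the listing above.

// registry of asynchronous functions, so call() can find a callee by name
var functions = { A: A, B: B, C: C };

function call(top, name, arg) {
    // a new vertex of the call tree for the callee, one step forward
    var child = { parent: top, name: name, mark: '#', arg: arg };
    setTimeout(function () { functions[name](child); }, 0);
}

function back(top, ret) {
    var parent = top.parent;            // one node back; the vertex `top` is dropped
    if (!parent) {                      // we are back at the root:
        console.log('finished:', ret);  // the whole asynchronous call is over
        return;
    }
    parent.mark = top.name;             // which sub-call has just returned
    parent.ret  = ret;                  // the transferred result
    setTimeout(function () { functions[parent.name](parent); }, 0);
}

// toy callees written in the same style
function B(top) { back(top, top.arg + '+B'); }
function C(top) { back(top, top.arg + '+C'); }

// starting the asynchronous call of A from scratch
A({ parent: null, name: 'A', mark: '#' });

Run together with the listing above, this prints the object built in case 'C' after A has been entered three times: with mark '#', then 'B', then 'C'.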
The example above shows that the result is still an ordinary synchronous function, only stretched out, and with the ability to wait for an asynchronous operation before returning. You can also see the split into steps and their sequential execution. Let's take this into account, together with the fact that a call tree is used rather than a stack, and add parallelism:
- a size field is introduced in top;
- call increases the size of the current vertex by one;
- back decreases the size of the previous vertex by one (the current one is destroyed).

function A ( top ) {
    switch ( top.mark ) {
        case '#':
            top.buf = [];
            call(top, 'B', 'some_data1');
            call(top, 'B', 'some_data2');
            call(top, 'B', 'some_data3');
            break;
        case 'B':
            top.buf.push(top.ret);
            if ( !top.size ) {
                call(top, 'C', top.buf);
            }
            break;
        case 'C':
            back(top, top.ret);
            break;
    }
}
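In the sketch from above, the size rule would amount to two extra lines (again an illustration under the same assumptions, not the author's implementation):

function call(top, name, arg) {
    top.size = (top.size || 0) + 1;     // one more sub-call is now pending
    var child = { parent: top, name: name, mark: '#', arg: arg };
    setTimeout(function () { functions[name](child); }, 0);
}

function back(top, ret) {
    var parent = top.parent;
    if (!parent) { console.log('finished:', ret); return; }
    parent.size--;                      // this sub-call has finished, its vertex is destroyed
    parent.mark = top.name;
    parent.ret  = ret;
    setTimeout(function () { functions[parent.name](parent); }, 0);
}

With this bookkeeping, the check if ( !top.size ) in case 'B' fires only on the return of the last of the three parallel calls.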
It turns out that a massively parallel task can be started at any sequential step of the overall asynchronous execution of the function, and that it is possible to wait for that task to finish while accumulating its results. Let's improve on this: the result top.ret of a specific function is not always needed, and it would be nice to be able to run different functions in parallel within one task:
- a group field is introduced in top;
- mark is replaced by group['#name'];
- size is replaced by group['#size'];
- call takes a group mark as an additional argument;
- call increases top.group[groupMark] of the current vertex by one;
- back decreases top.group[groupMark] of the previous vertex by one (the current one is destroyed);
- call and back maintain the special entries top.group['#name'] and top.group['#size'], containing the name and size of the current group.

function A ( top ) {
    switch ( top.group['#name'] ) {
        case '#':
            top.buf = [];
            call(top, '#group1', 'B1', 'some_data1');
            call(top, '#group1', 'B2', 'some_data2');
            call(top, '#group1', 'B3', 'some_data3');
            break;
        case '#group1':
            top.buf.push(top.ret);
            if ( !top.group['#size'] ) {
                call(top, '#group2', 'C', top.buf);
            }
            break;
        case '#group2':
            back(top, top.ret);
            break;
    }
}
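Under the same assumptions as before, the group bookkeeping in the sketch could look roughly like this (the author's implementation may of course differ):

function call(top, groupMark, name, arg) {
    top.group = top.group || { '#name': '#' };
    top.group[groupMark] = (top.group[groupMark] || 0) + 1;   // the size of this group grows
    var child = {
        parent: top,
        name: name,
        groupMark: groupMark,             // remembered so back() knows which group to shrink
        group: { '#name': '#' },          // the callee starts from its own initial step
        arg: arg
    };
    setTimeout(function () { functions[name](child); }, 0);
}

function back(top, ret) {
    var parent = top.parent;
    if (!parent) { console.log('finished:', ret); return; }
    parent.group[top.groupMark]--;                          // one call of this group has finished
    parent.group['#name'] = top.groupMark;                  // the name of the current group
    parent.group['#size'] = parent.group[top.groupMark];    // and its remaining size
    parent.ret = ret;
    setTimeout(function () { functions[parent.name](parent); }, 0);
}

// the root vertex now carries the group bookkeeping as well
// (B1, B2, B3 and C would have to be registered in `functions` the same way as before)
A({ parent: null, name: 'A', group: { '#name': '#' } });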
Let's add one more capability, waiting for several running groups to finish, and that's it :)
- top.group['##size'] is introduced, containing the sum of all top.group['#size'];
- top.group['##name'] is introduced, containing the name of the group with which this function was called (the return group name).

function A ( top ) {
    switch ( top.group['#name'] ) {
        case '#':
            top.listB = [];
            top.listC = [];
            call(top, '#group1', 'B1', 'some_data1');
            call(top, '#group1', 'B1', 'some_data1');
            call(top, '#group1', 'B2', 'some_data2');
            call(top, '#group2', '1', 'some_data1');
            call(top, '#group2', '1', 'some_data1');
            call(top, '#group2', '2', 'some_data2');
            break;
        case '#group1':
            if (top.ret) { top.listB.push(top.ret); }
            if ( !top.group['##size'] ) {
                back(top, {B: top.listB, C: top.listC});
            }
            break;
        case '#group2':
            if (top.ret) { top.listC.push(top.ret); }
            if ( !top.group['##size'] ) {
                back(top, {B: top.listB, C: top.listC});
            }
            break;
    }
}
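In the sketch, the two new entries are just one more piece of bookkeeping: call remembers the return group on the child vertex and grows a total counter, back shrinks it (an illustration only, with the same assumed names as above):

function call(top, groupMark, name, arg) {
    top.group = top.group || { '#name': '#' };
    top.group[groupMark] = (top.group[groupMark] || 0) + 1;
    top.group['##size']  = (top.group['##size'] || 0) + 1;   // total over all groups
    var child = {
        parent: top, name: name, groupMark: groupMark, arg: arg,
        group: { '#name': '#', '##name': groupMark }          // the return group name
    };
    setTimeout(function () { functions[name](child); }, 0);
}

function back(top, ret) {
    var parent = top.parent;
    if (!parent) { console.log('finished:', ret); return; }
    parent.group[top.groupMark]--;                            // this group shrinks...
    parent.group['##size']--;                                 // ...and so does the total
    parent.group['#name'] = top.groupMark;
    parent.group['#size'] = parent.group[top.groupMark];
    parent.ret = ret;
    setTimeout(function () { functions[parent.name](parent); }, 0);
}

With the total kept this way, if ( !top.group['##size'] ) becomes true only when every call of every group has returned, which is exactly when the function above sends its combined result back.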
The above describes the concept of an asynchronous function, which makes it possible to introduce a fully asynchronous model of computation.
I have developed a fairly successful implementation of this concept, with an emphasis on parallel computing. Its key feature is support for threads (WebWorker) and the ability to call an asynchronous function regardless of which thread it lives in.
Source: https://habr.com/ru/post/347250/