
AsyncIO Micropython: Synchronization Methods in Asynchronous Programming

In sequential programming I constantly run into the same obvious wish: not to stall the whole program while individual tasks (processes) perform periodic actions, such as polling sensor values, sending data to a server on a schedule, or transferring a large amount of data. The simplest approach, of course, is to wait for the periodic event to complete and only then, unhurriedly, carry on with the other tasks.

    while True:
        do_ext_proc_before()
        do_internal_proc()
        sleep(5)
        do_ext_proc_after()

You can drop 'sleep()' and instead check a few conditions inside the loop, so that the main loop is not delayed, at least until the periodic event actually occurs:

    start = time()
    set_timer(start, wait=5)        # arm the periodic timer
    set_timeout(start, wait_to=7)   # arm the timeout
    set_irq(alarm)                  # arm the interrupt handler

    while True:
        curTime = time()
        do_ext_proc_before()
        if timer(curTime) or timeout(curTime) or alarm:
            # if all events crazy start simultaneously - reset all
            start = time()
            set_timer(start, wait=5)        # re-arm the periodic timer
            set_timeout(start, wait_to=7)   # re-arm the timeout
            set_irq(alarm)                  # re-arm the interrupt handler
            do_internal_proc()
        do_ext_proc_after()
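For reference, a minimal runnable MicroPython sketch of the same polling idea could look like the code below. It uses time.ticks_ms()/ticks_diff() instead of the abstract set_timer()/set_timeout() helpers; the WAIT_MS constant, the alarm flag and the empty do_*() functions are placeholders of my own, not part of the original example.

    import time

    WAIT_MS = 5000        # the periodic interval from the pseudocode above
    alarm = False         # would normally be raised from an IRQ handler

    def do_ext_proc_before():
        pass              # placeholder

    def do_internal_proc():
        pass              # placeholder

    def do_ext_proc_after():
        pass              # placeholder

    start = time.ticks_ms()
    while True:
        now = time.ticks_ms()
        do_ext_proc_before()
        # a separate timeout check would look the same, with its own interval
        if time.ticks_diff(now, start) >= WAIT_MS or alarm:
            start = time.ticks_ms()    # reset the reference point
            alarm = False
            do_internal_proc()
        do_ext_proc_after()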

In asynchronous programming each task becomes an independent process and runs, depending on the particular implementation, in parallel or pseudo-parallel, relying on the scheduler's knowledge of natural or artificially defined wait conditions, or of contention for a limited resource such as a disk or a communication channel.

    setTask(do_ext_proc_before())
    setTask(do_internal_proc(), timer=5, timeout=7, alarm_handler=alarm)
    setTask(do_ext_proc_after())
    runTasks()
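With MicroPython's uasyncio (v3 API) the same structure could be sketched roughly as follows; the coroutine names mirror the pseudocode above and their bodies are placeholders.

    import uasyncio as asyncio

    async def do_ext_proc_before():
        while True:
            # ... non-blocking work ...
            await asyncio.sleep_ms(100)

    async def do_internal_proc():
        while True:
            # ... periodic work, e.g. polling sensors ...
            await asyncio.sleep(5)          # yields to other tasks instead of blocking

    async def do_ext_proc_after():
        while True:
            # ... non-blocking work ...
            await asyncio.sleep_ms(100)

    async def main():
        asyncio.create_task(do_ext_proc_before())
        asyncio.create_task(do_internal_proc())
        asyncio.create_task(do_ext_proc_after())
        while True:
            await asyncio.sleep(1)          # keep the scheduler alive

    asyncio.run(main())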

Now a task appears that does not exist in sequential programming: what to do when some processes have to be synchronized while they run asynchronously? For example, after data has been received from the sensors, the process of sending it to the server must be initiated, or an emergency must be reacted to. Asynchronous I/O is handled organically by the language standard itself; the other situations are resolved in libraries.
I studied this question using the extended asyncio library for MicroPython published by Peter Hinch ( https://github.com/peterhinch/micropython-async/blob/master/TUTORIAL.md ).
The simplest solution is to signal an event to the interested processes. This is what the Event() class is for; it provides several methods:

    Event.Set( timeout = None, data = None )  - raises the event (Event = True), optionally with a timeout and attached data
    Event.IsSet()                             - checks whether the event has occurred: returns True if it has, otherwise False
    Event.Wait()                              - waits for the event and reports how the wait ended: Done, Timeout or Cancel
    Event.Data()                              - returns the data attached to the event
    Event.Clear()                             - clears the event (Event = False)
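The methods above are the author's own wrapper; uasyncio's built-in Event (v3 API) offers roughly the same set of calls, although without the timeout and data arguments. A minimal sketch, assuming one producer task raises the event and one consumer waits for it:

    import uasyncio as asyncio

    ev = asyncio.Event()

    async def producer():
        await asyncio.sleep(5)      # e.g. wait for fresh sensor data
        ev.set()                    # Event.Set(): the event has occurred

    async def consumer():
        await ev.wait()             # Event.Wait(): suspend until the event is set
        if ev.is_set():             # Event.IsSet(): True while the flag is raised
            pass                    # ... display / save the data ...
        ev.clear()                  # Event.Clear(): re-arm the event for next time

    async def main():
        asyncio.create_task(producer())
        await consumer()

    asyncio.run(main())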

Completion is usually registered by a process that waits for the event, for example the process that displays data on the screen or saves it to disk. A timeout means there is nothing to update or save, because the data was not refreshed for whatever reason. Cancellation may be caused by another important event, such as entering sleep mode or rebooting, which may require releasing all pending processes by clearing the corresponding events.

Keep in mind that Event.Clear() should preferably be called by only one process, provided the chosen algorithm allows it. Otherwise, if several processes are waiting for Event.Set(), one of the interested processes has to call Event.Clear(), but only after making sure that all of the interested processes have already reacted to the event. This complicates the logic when the Event class is used by several waiting processes. The situation is resolved by requiring a certain number of Clear() calls on the event that has occurred.

    Barrier.Set( quantity = 1, timeout = None, data = None )  - with quantity = 1 it behaves like Event.Set()
    Barrier.IsSet()                                           - checks whether the event has occurred: returns True if it has, otherwise False
    Barrier.Wait()                                            - waits for the event and reports how the wait ended: Done, Timeout or Cancel
    Barrier.Data()                                            - returns the data attached to the event
    Barrier.qty                                               - the number of Clear() calls still expected
    Barrier.Clear()                                           - clears the event (Event = False), but only after it has been called Barrier.quantity times; until then it merely decrements the counter
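As a minimal illustration of this "clear only after a given number of acknowledgements" behaviour, the sketch below builds a small counting wrapper on top of uasyncio's Event. It is not Peter Hinch's actual Barrier class, just the idea described above.

    import uasyncio as asyncio

    class CountedEvent:
        def __init__(self, quantity=1):
            self._ev = asyncio.Event()
            self.qty = quantity          # how many Clear() calls are still expected
            self.data = None

        def set(self, quantity=1, data=None):
            self.qty = quantity
            self.data = data
            self._ev.set()

        def is_set(self):
            return self._ev.is_set()

        async def wait(self):
            await self._ev.wait()

        def clear(self):
            # the flag drops only when every interested process has acknowledged it
            self.qty -= 1
            if self.qty <= 0:
                self._ev.clear()

Each waiting process calls wait(), handles the data, then calls clear(); the flag drops only after the last acknowledgement.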

The Barrier class does not keep track of which process has already responded and which has not, which can lead to a process reacting to the same event twice if that matters for the given algorithm. If, instead of a count, Barrier.quantity receives a list of names of the interested processes, such a conflict can be avoided. In addition, when a timeout occurs or the event is interrupted, it becomes possible to determine exactly which pending processes have not run yet.

Everything above applies to the situation where one or more processes wait for a single event, the 'one to many' case. It arises when the process or processes do_ext_proc_after(), which in sequential programming would run only after do_internal_proc() has finished.

For convenience of further discussion, let us extend the existing Event and Barrier classes into a new EEvent class and make it, or the objects it creates, global. Here 'creators' is the name or list of names of the processes that raise the event or release the resource, and 'folowers' is the name or list of names of the processes waiting for the event or for the resource to be released.

    EEvent.Set( creators, folowers, timeout = None, data = None )  - raises the event; returns True, 'creators' and 'folowers' identify the processes involved
    EEvent.IsSet( procName )                                       - procName is the name or ID of the asking process
    EEvent.Wait( procName )
    EEvent.Clear( procName )
    EEvent.Folowers()                                              - the list of waiting processes that have not yet cleared the event; Barrier.qty = len(EEvent.List())
    EEvent.Creators()                                              - the list of processes that raised the event
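Since the author does not publish the EEvent class itself, the sketch below is only one possible reading of this interface: an event keyed by process names, so that it can report which followers have not yet acknowledged it. Every implementation detail here is an assumption.

    import uasyncio as asyncio

    class EEvent:
        def __init__(self):
            self._ev = asyncio.Event()
            self._creators = []
            self._folowers = []          # followers still expected to Clear()
            self.data = None

        def set(self, creators, folowers, data=None):
            self._creators = list(creators) if isinstance(creators, (list, tuple)) else [creators]
            self._folowers = list(folowers) if isinstance(folowers, (list, tuple)) else [folowers]
            self.data = data
            self._ev.set()

        def is_set(self, proc_name=None):
            return self._ev.is_set()

        async def wait(self, proc_name):
            await self._ev.wait()

        def clear(self, proc_name):
            # each follower removes only its own entry; the event drops when the list is empty
            if proc_name in self._folowers:
                self._folowers.remove(proc_name)
            if not self._folowers:
                self._ev.clear()

        def folowers(self):
            return self._folowers        # followers that have not yet responded

        def creators(self):
            return self._creators        # processes that raised the event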

Using the methods of the EEvent class, the problem discussed earlier can be described as follows.

    def do_internal_proc():
        ...
        EEvent.Set('p_Creator', ('p_Folwer1', 'p_Folwer2'))  # exec 'p_Folwer1','p_Folwer2' after the event comes from 'p_Creator'
        ...

    def do_ext_proc_after1():
        ...
        EEvent.Wait('p_Creator')
        ...
        EEvent.Clear('p_Folwer1')

    def do_ext_proc_after2():
        ...
        EEvent.Wait('p_Creator')
        ...
        EEvent.Clear('p_Folwer2')

Now consider the opposite situation, where one process waits for several events to complete, the 'many to one' case. In other words, what if do_internal_proc() may run only after do_ext_proc_before() has finished? In the limiting case, when one process waits for a single event, the problem can be solved with the Event class. When several events have to complete, for example when processing may continue only after the received data has been displayed, sent to the server and saved to disk, each participating process must register its part in the expected event, and the waiting process must not continue until all of them have finished.

    def do_ext_proc_before1():
        ...
        EEvent.Set('p_Creator1', 'p_Folwer')
        ...

    def do_ext_proc_before2():
        ...
        EEvent.Set('p_Creator2', 'p_Folwer')
        ...

    def do_internal_proc():
        ...
        EEvent.Wait(('p_Creator1', 'p_Creator2'))
        ...
        EEvent.Clear('p_Folwer')
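The same 'many to one' case can also be sketched with plain uasyncio primitives, one Event per producer, with the consumer continuing only after both have been set; the names mirror the pseudocode above and are placeholders.

    import uasyncio as asyncio

    ev_before1 = asyncio.Event()
    ev_before2 = asyncio.Event()

    async def do_ext_proc_before1():
        # ... gather the first portion of data ...
        ev_before1.set()

    async def do_ext_proc_before2():
        # ... gather the second portion of data ...
        ev_before2.set()

    async def do_internal_proc():
        # proceed only after both preparatory tasks have signalled completion
        await ev_before1.wait()
        await ev_before2.wait()
        # ... process the combined data ...
        ev_before1.clear()
        ev_before2.clear()

    async def main():
        asyncio.create_task(do_ext_proc_before1())
        asyncio.create_task(do_ext_proc_before2())
        await do_internal_proc()

    asyncio.run(main())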

Another important aspect of asynchronous programming is sharing a limited resource. For example, the data should be updated by only one process at a time; other processes that want to do the same must queue up or wait until the update is finished. At the same time, merely reading the data for display or transmission may well be non-critical. That is why it is necessary to know the list of competing processes when organizing the corresponding events.

In standard asynchronous programming this problem is solved by the methods of the Lock class. In the general case it can also be solved in the same way as the 'one to many' situation.

    def do_internal_proc():
        # lock activity of all 'folowers' in the list
        ...
        EEvent.Set('p_Creator', ('p_Folwer1', 'p_Folwer2'))  # exec 'p_Folwer1','p_Folwer2' after the event comes from 'p_Creator'
        ...

    def do_ext_proc_after1():
        ...
        EEvent.Wait('p_Creator')                      # waiting for resource release
        if EEvent.Set('p_Folwer1', 'p_Folwer2'):      # lock the resource: 'p_Folwer1' now acts as 'p_Creator'
            ...
        else:
            EEvent.Wait('p_Folwer2')                  # continue waiting for resource release
        ...
        EEvent.Clear('p_Folwer1')                     # release the resource

    def do_ext_proc_after2():
        ...
        EEvent.Wait('p_Creator')
        if EEvent.Set('p_Folwer2', 'p_Folwer1'):      # lock the resource: 'p_Folwer2' now acts as 'p_Creator'
            ...
        else:
            EEvent.Wait('p_Folwer1')                  # continue waiting for resource release
        ...
        EEvent.Clear('p_Folwer2')                     # release the resource
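For comparison, the mutual exclusion itself can be expressed with uasyncio's Lock primitive, which the library provides out of the box; in the sketch below only one task updates the shared data at a time, while the others wait their turn (the shared_data dictionary and the updater() coroutine are illustrative names of my own).

    import uasyncio as asyncio

    data_lock = asyncio.Lock()
    shared_data = {}

    async def updater(name, value):
        async with data_lock:            # queue up until the resource is free
            shared_data[name] = value    # critical section: exclusive update
            await asyncio.sleep_ms(10)   # simulate a slow update

    async def main():
        t1 = asyncio.create_task(updater('temperature', 25))
        t2 = asyncio.create_task(updater('humidity', 60))
        await t1
        await t2
        print(shared_data)

    asyncio.run(main())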

Besides the options considered here, there are also solutions for limiting throughput, for queues, and for controlled dispatching of processes, but my work has not yet required them, and therefore I have not needed to understand them in depth, although I do not rule out that more elegant or economical solutions exist among them.

In conclusion I want to say that the sequential and the asynchronous approaches have an equal right to exist and both successfully implement the required algorithms. Which approach to use is therefore determined by the priorities of the developer: what matters more when implementing the algorithm, the transparency and readability of the code, its speed, or its size.

Source: https://habr.com/ru/post/442268/

