
A modern approach to concurrency on Android: coroutines in Kotlin

Hi, Habr!

We remind you that pre-orders are already open for the long-awaited Kotlin book from the famous Big Nerd Ranch Guides series. Today we would like to share a translation of an article about Kotlin coroutines and proper threading on Android. The topic is being discussed very actively, so for completeness we also recommend this article on Habr and this detailed post from the Axmor Software blog.

The existing Java/Android concurrency frameworks lead to callback hell and blocked states, because Android does not have a sufficiently simple way to guarantee thread safety.

Kotlin coroutines are a very efficient and complete framework that makes concurrency much easier and more productive to manage.
Suspending and blocking: what's the difference

Coroutines do not replace threads; rather, they provide a framework for managing them. The philosophy of coroutines is to define a context in which you can wait for background operations to complete without blocking the main thread.

The goal of coroutines here is to do away with callbacks and simplify concurrency.

Simplest example

To begin with, let's take the simplest example: we launch a coroutine in the Main context (the main thread). In it, we fetch an image on the IO dispatcher and hand it back to Main for display.

launch(Dispatchers.Main) {
    val image = withContext(Dispatchers.IO) { getImage() } // executed in the IO dispatcher
    imageView.setImageBitmap(image)                        // back on the main thread
}

The code is as simple as if it were single-threaded. And while getImage runs in a dedicated IO thread pool, the main thread is free and can take on any other work! The withContext function suspends the current coroutine while its action (getImage()) is running. As soon as getImage() returns and the main thread's looper becomes available, the coroutine resumes on the main thread and calls imageView.setImageBitmap(image).

A second example: now we need two background tasks so that we can use both of their results. We will use the async/await duo to run these two tasks in parallel and consume their result on the main thread as soon as both are ready:

val job = launch(Dispatchers.Main) {
    val deferred1 = async(Dispatchers.Default) { getFirstValue() }
    val deferred2 = async(Dispatchers.IO) { getSecondValue() }
    useValues(deferred1.await(), deferred2.await())
}
job.join() // suspends until the coroutine completes, without blocking the thread

async is similar to launch but returns a Deferred (the Kotlin equivalent of a Future), so its result can be obtained with await(). When called without parameters, it runs in the default context of the current scope.

Again, the main thread remains free while we wait for our two values.
As you can see, the launch function returns a Job that can be used to wait until the operation completes, which is done with the join() function. It works as in any other language, with the caveat that it simply suspends the coroutine rather than blocking the thread.

Dispatching

Dispatching is a key concept when working with coroutines. It is the action that lets you "jump" from one thread to another.

Consider what the Java equivalent of dispatching to Main, namely runOnUiThread, looks like:

public final void runOnUiThread(Runnable action) {
    if (Thread.currentThread() != mUiThread) {
        mHandler.post(action); // dispatch to the main thread
    } else {
        action.run();          // execute immediately
    }
}

The Android implementation of the Main context is a Handler-based dispatcher, so this really is the appropriate implementation:

launch(Dispatchers.Main) { ... }

vs

launch(Dispatchers.Main, CoroutineStart.UNDISPATCHED) { ... }
// and, since kotlinx.coroutines 0.26:
launch(Dispatchers.Main.immediate) { ... }

launch(Dispatchers.Main) posts a Runnable to a Handler, so its code is not executed immediately.

launch(Dispatchers.Main, CoroutineStart.UNDISPATCHED) will execute its lambda immediately in the current thread (up to the first suspension point).

Dispatchers.Main guarantees that when the coroutine resumes, it is dispatched to the main thread; in addition, Handler is used here as the native Android way of posting into the application event loop.
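For intuition, here is a small hypothetical sketch (not from the original article; it assumes it is invoked on the main thread, inside a CoroutineScope) illustrating the ordering difference between the dispatched and immediate variants:

import kotlinx.coroutines.*

// hypothetical helper, assumed to be called on the main thread
fun CoroutineScope.illustrateDispatching() {
    launch(Dispatchers.Main) { println("A") }           // posted to the Handler, runs after the current message
    launch(Dispatchers.Main.immediate) { println("B") } // no dispatch needed, the body runs right away
    println("C")
}
// expected output when called from the main thread: B, C, A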

The exact implementation looks like this:

 val Main: HandlerDispatcher = HandlerContext(mainHandler, "Main") 

Here is a good article to help you understand the subtleties of dispatching in Android:
Understanding Android Core: Looper, Handler, and HandlerThread .

Coroutine context

The coroutine context (also known as the coroutine dispatcher) determines which thread the code runs on, what to do when an exception is thrown, and refers to a parent context in order to propagate cancellation.

val job = Job()
val exceptionHandler = CoroutineExceptionHandler { coroutineContext, throwable ->
    whatever(throwable)
}

launch(Dispatchers.Default + exceptionHandler + job) { ... }

job.cancel() will cancel all coroutines that have job as their parent, and exceptionHandler will receive all exceptions thrown in these coroutines.
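As a minimal standalone sketch (hypothetical names, plain JVM rather than Android) of how this cancellation and exception handling behave:

import kotlinx.coroutines.*

fun main() = runBlocking {
    val job = Job()
    val exceptionHandler = CoroutineExceptionHandler { _, throwable -> println("handled: ${throwable.message}") }
    val scope = CoroutineScope(Dispatchers.Default + job + exceptionHandler)

    scope.launch { delay(10_000); println("never printed") } // child of `job`
    scope.launch { error("boom") }                           // uncaught exception, delivered to exceptionHandler
    delay(100)

    job.cancel() // cancels every coroutine whose parent is `job` (here they were already cancelled by the failure)
    job.join()
    println("done")
}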

Scope

The coroutineScope builder simplifies error handling:
if any of its child coroutines fails, the whole scope fails and all of its child coroutines are cancelled.

In the async example above, if one of the values failed to be fetched, the other task kept working, leaving us with a broken state that we would have to handle ourselves.

With coroutineScope, the useValues function is called only if fetching both values succeeded. Also, if deferred2 fails, deferred1 is cancelled.

coroutineScope {
    val deferred1 = async(Dispatchers.Default) { getFirstValue() }
    val deferred2 = async(Dispatchers.IO) { getSecondValue() }
    useValues(deferred1.await(), deferred2.await())
}
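For intuition, here is a minimal standalone sketch (hypothetical values, plain JVM) of the failure propagation described above: when the second task fails, the first is cancelled and the whole block rethrows.

import kotlinx.coroutines.*

fun main() = runBlocking {
    try {
        coroutineScope {
            val deferred1 = async { delay(1_000); "first" }             // cancelled when its sibling fails
            val deferred2 = async<String> { delay(100); error("boom") } // fails first
            println(deferred1.await() + deferred2.await())              // never reached
        }
    } catch (e: IllegalStateException) {
        println("whole scope failed: ${e.message}") // deferred1 was cancelled along with the scope
    }
}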

You can also "scope" an entire class to define its default CoroutineContext and make use of it.

An example of a class that implements the CoroutineScope interface:

open class ScopedViewModel : ViewModel(), CoroutineScope {
    protected val job = Job()
    override val coroutineContext = Dispatchers.Main + job

    override fun onCleared() {
        super.onCleared()
        job.cancel()
    }
}

Launching coroutines in a CoroutineScope:

The default dispatcher for launch and async now becomes the dispatcher of the current scope.

launch {
    // the lambda runs in the scope's CoroutineContext
    val foo = withContext(Dispatchers.IO) { … }
    …
}

launch(Dispatchers.Default) {
    // the lambda runs in the Default dispatcher
    …
}

Launching a standalone coroutine (outside of any CoroutineScope):

GlobalScope.launch(Dispatchers.Main) {
    // standalone coroutine running on the main thread
    …
}

You can even define an application-wide scope by setting Main as the default dispatcher:

object AppScope : CoroutineScope by GlobalScope {
    override val coroutineContext = Dispatchers.Main.immediate
}

Getting rid of locks and callbacks with channels

Channel definition from the JetBrains documentation:

A Channel is conceptually very similar to a BlockingQueue. The key difference is that instead of a blocking put operation it provides a suspending send (or a non-blocking offer), and instead of a blocking take operation it provides a suspending receive.
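To make the definition concrete, here is a minimal standalone sketch (not from the original article) of a producer and a consumer talking through a Channel: send suspends the producer when the buffer is full instead of blocking its thread, and iterating the channel suspends the consumer until an element arrives or the channel is closed.

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

fun main() = runBlocking {
    val channel = Channel<Int>(capacity = 2)

    launch { // producer
        for (i in 1..5) channel.send(i) // suspends (does not block) when the buffer is full
        channel.close()
    }

    launch { // consumer
        for (value in channel) println(value) // suspends while the channel is empty, stops when it is closed
    }
}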


Actors

Consider a simple tool for working with channels: Actor .

An actor, again, is very similar to a Handler: we define a coroutine context (that is, the thread on which to execute actions) and process the actions strictly in order.

The difference, of course, is that it uses coroutines: you can specify a capacity, and the code it executes can suspend.

Essentially, an actor forwards every command to a coroutine channel. It guarantees that commands are executed one at a time and confines the operations to its context. This approach is a great help in getting rid of synchronized calls and keeping all threads free!

protected val updateActor by lazy {
    actor<Update>(capacity = Channel.UNLIMITED) {
        for (update in channel) when (update) {
            Refresh -> updateList()
            is Filter -> filter.filter(update.query)
            is MediaUpdate -> updateItems(update.mediaList as List<T>)
            is MediaAddition -> addMedia(update.media as T)
            is MediaListAddition -> addMedia(update.mediaList as List<T>)
            is MediaRemoval -> removeMedia(update.media as T)
        }
    }
}

// usage
fun filter(query: String?) = updateActor.offer(Filter(query))
// or
suspend fun filter(query: String?) = updateActor.send(Filter(query))

In this example, we use Kotlin sealed classes to choose which action to perform.

sealed class Update
object Refresh : Update()
class Filter(val query: String?) : Update()
class MediaAddition(val media: Media) : Update()

Moreover, all of these actions are queued and are never executed in parallel. This is a convenient way to constrain mutability.

Coroutines and the Android lifecycle

Actors can be very useful for driving an Android UI: they simplify task cancellation and prevent the main thread from being overloaded.
Let's implement this and call job.cancel() when the activity is destroyed.

class MyActivity : AppCompatActivity(), CoroutineScope {
    protected val job = SupervisorJob() // the Job managing this activity's coroutines
    override val coroutineContext = Dispatchers.Main.immediate + job

    override fun onDestroy() {
        super.onDestroy()
        job.cancel() // cancel the scope's coroutines when the activity is destroyed
    }
}

SupervisorJob is similar to a regular Job, with the single exception that cancellation propagates only downwards.

Therefore, we do not cancel all of the coroutines in the Activity when one of them fails.
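A minimal standalone sketch (plain JVM, hypothetical names) showing the difference: with a SupervisorJob, a failing child does not bring down its sibling.

import kotlinx.coroutines.*

fun main() = runBlocking {
    val handler = CoroutineExceptionHandler { _, e -> println("caught: ${e.message}") }
    val scope = CoroutineScope(Dispatchers.Default + SupervisorJob() + handler)

    val failing = scope.launch { error("boom") }                              // this child fails...
    val sibling = scope.launch { delay(100); println("sibling still alive") } // ...but its sibling keeps running

    failing.join()
    sibling.join()
}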

Going a bit further, an extension property gives access to this CoroutineContext from any View belonging to a CoroutineScope.

val View.coroutineContext: CoroutineContext?
    get() = (context as? CoroutineScope)?.coroutineContext

Now we can combine all of this: the setOnClick function creates a conflated actor to handle its onClick actions. In the case of multiple clicks, intermediate actions are ignored, which prevents ANR (Application Not Responding) errors, and the actions are executed within the scope of the Activity. So when the activity is destroyed, all of this is cancelled.

fun View.setOnClick(action: suspend () -> Unit) {
    // launch one actor per view, in the scope of its context (the activity) or in AppScope
    val scope = (context as? CoroutineScope) ?: AppScope
    val eventActor = scope.actor<Unit>(capacity = Channel.CONFLATED) {
        for (event in channel) action()
    }
    // install a listener that feeds the actor
    setOnClickListener { eventActor.offer(Unit) }
}

In this example, we give the channel a Conflated capacity so that it ignores some events when there are too many of them. You can replace it with Channel.UNLIMITED if you prefer to queue events without losing any, while still protecting the application from ANR errors.
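A tiny standalone sketch (not from the article) of what Conflated means in practice: only the most recent element is kept.

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

fun main() = runBlocking {
    val channel = Channel<Int>(Channel.CONFLATED)
    channel.offer(1)
    channel.offer(2)
    channel.offer(3)
    println(channel.receive()) // prints 3: the intermediate values were dropped
}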

You can also combine coroutines with the Lifecycle framework to automate the cancellation of UI-related tasks:

val LifecycleOwner.untilDestroy: Job
    get() {
        val job = Job()
        lifecycle.addObserver(object : LifecycleObserver {
            @OnLifecycleEvent(Lifecycle.Event.ON_DESTROY)
            fun onDestroy() { job.cancel() }
        })
        return job
    }

// usage
GlobalScope.launch(Dispatchers.Main, parent = untilDestroy) { /* amazing things happen here! */ }

Simplify callbacks (part 1)

Here's how to transform a callback-based API using a Channel.

The API works like this:

  1. requestBrowsing(url, listener) starts parsing the folder located at url.
  2. The listener receives onMediaAdded(media: Media) for every media file discovered in this folder.
  3. listener.onBrowseEnd() is called once the folder parsing is finished.

Here is the old refresh function in the content provider for the VLC browser:

private val refreshList = mutableListOf<Media>()

fun refresh() = requestBrowsing(url, refreshListener)

private val refreshListener = object : EventListener {
    override fun onMediaAdded(media: Media) {
        refreshList.add(media)
    }

    override fun onBrowseEnd() {
        val list = refreshList.toMutableList()
        refreshList.clear()
        launch {
            dataset.value = list
            parseSubDirectories()
        }
    }
}

How to improve it?

Let's create a channel that is instantiated in refresh. The browser callbacks now only forward media into this channel and then close it.

The refresh function has now become clearer. It creates a channel, calls the VLC browser, then builds the list of media files and processes it.

Instead of select or consumeEach, we can use a plain for loop to wait for media; the loop breaks as soon as the browserChannel channel is closed.

private lateinit var browserChannel: Channel<Media>

override fun onMediaAdded(media: Media) {
    browserChannel.offer(media)
}

override fun onBrowseEnd() {
    browserChannel.close()
}

suspend fun refresh() {
    browserChannel = Channel(Channel.UNLIMITED)
    val refreshList = mutableListOf<Media>()
    requestBrowsing(url)
    // suspends until the channel is closed
    for (media in browserChannel) refreshList.add(media)
    // the channel is now closed
    dataset.value = refreshList
    parseSubDirectories()
}

Simplify callbacks (part 2): Retrofit

A second approach: this time we do not use kotlinx.coroutines at all, only the coroutine core of the language.

Let's see how coroutines actually work!

The retrofitSuspendCall function wraps a Retrofit Call request to turn it into a suspend function.

Using suspendCoroutine, we call the Call.enqueue method and suspend the coroutine. The callback we provide calls continuation.resume(response) to resume the coroutine with the server response as soon as it arrives.

Then we just have to wrap our Retrofit functions in retrofitSuspendCall to get suspending functions that return the request results directly.

suspend inline fun <reified T> retrofitSuspendCall(request: () -> Call<T>): Response<T> =
    suspendCoroutine { continuation ->
        request.invoke().enqueue(object : Callback<T> {
            override fun onResponse(call: Call<T>, response: Response<T>) {
                continuation.resume(response)
            }

            override fun onFailure(call: Call<T>, t: Throwable) {
                continuation.resumeWithException(t)
            }
        })
    }

suspend fun browse(path: String?) = retrofitSuspendCall {
    ApiClient.browse(path)
}

// usage (from a coroutine on the Main dispatcher)
livedata.value = Repo.browse(path)

This way, the blocking network call is executed on Retrofit's dedicated thread, while the coroutine sits here waiting for the server response, and using it in the application couldn't be simpler!

This implementation was inspired by the gildor/kotlin-coroutines-retrofit library.

There is also JakeWharton/retrofit2-kotlin-coroutines-adapter, with a different implementation that gives a similar result.

Epilogue

Channel can be used in many other ways; look at BroadcastChannel for more powerful implementations that may suit your needs.

You can also create channels with the produce function.
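For example, a minimal sketch (not from the article) using produce to build a channel backed by its own coroutine:

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.produce

fun main() = runBlocking {
    // produce launches a coroutine and returns the ReceiveChannel it sends to
    val squares = produce {
        for (i in 1..5) send(i * i)
    }
    for (value in squares) println(value) // the loop ends when the producer completes and the channel closes
}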

Finally, channels are a convenient way to organize communication between UI components: an adapter can pass click events to its fragment/activity via a Channel or, for example, via an Actor, as shown in the sketch below.
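A hypothetical sketch (ItemAdapter and the position payload are illustrative, not from the article) of an adapter pushing click events into a channel that the fragment or activity consumes in its own scope:

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

class ItemAdapter {
    // the UI callback pushes clicks into the channel without blocking
    val clicks = Channel<Int>(Channel.UNLIMITED)

    fun onItemClicked(position: Int) {
        clicks.offer(position)
    }
}

// in the fragment/activity (a CoroutineScope), consume the events one at a time
fun CoroutineScope.observeClicks(adapter: ItemAdapter) = launch {
    for (position in adapter.clicks) {
        println("item $position clicked") // e.g. navigate to a detail screen
    }
}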

Source: https://habr.com/ru/post/457224/

