
Taming the Android Camera2 API with RxJava2 (part 2)


This is the second part of the article, in which I show how RxJava2 helps build logic on top of an asynchronous API. As that API I chose the Android Camera2 API (and did not regret it!). It is not only asynchronous, but also full of unobvious implementation quirks that are not documented anywhere, so the article should be doubly useful to the reader.

Who is this post for? I hope the reader is an experienced but still inquisitive Android developer. Basic knowledge of reactive programming (a good introduction is here) and an understanding of marble diagrams are highly desirable. The post will be useful to those who want to dig into the reactive approach, as well as those who plan to use the Camera2 API in their projects.
Project sources can be found on GitHub .

Reading the first part is required!

Formulation of the problem


At the end of the first part, I promised to cover waiting for autofocus / auto exposure.

As a reminder, the chain of operators looked like this:

Observable.combineLatest(previewObservable, mOnShutterClick, (captureSessionData, o) -> captureSessionData)
    .firstElement().toObservable()
    .flatMap(this::waitForAf)
    .flatMap(this::waitForAe)
    .flatMap(captureSessionData -> captureStillPicture(captureSessionData.session))
    .subscribe(__ -> {}, this::onError)

So what do we want from waitForAf and waitForAe ? We want the autofocus / auto exposure processes to start, and, upon their completion, we want to be notified that the camera is ready to take the picture.

To do this, both methods must return an Observable that emits an event when the camera reports that the convergence process has completed (to avoid repeating the words "autofocus" and "auto exposure", from here on I will use the word "convergence"). But how do we start and control this process?

Those unobvious features of the Camera2 API pipeline


At first I thought it would be enough to call capture with the necessary flags and wait for the CaptureCallback.onCaptureCompleted call.

It seems logical: we fired a request and waited for it to complete, so the request must have been executed. This code even went into production.

But then we noticed that on some devices, in very dark conditions, the photos came out unfocused and dark even when the flash fired. At the same time the system camera worked perfectly, although it took noticeably longer to prepare for the shot. I began to suspect that in my case autofocus simply had not converged by the time onCaptureCompleted arrived.

To check this hypothesis, I added a one-second delay, and the pictures started coming out right! Clearly I could not settle for that solution, so I went looking for a way to really know that autofocus had finished and we could continue. I could not find any documentation on the topic, so I had to turn to the system camera sources, which are available as part of the Android Open Source Project. The code turned out to be extremely unreadable and confusing; I had to add logging and analyze the camera logs while shooting in the dark. And I found that after calling capture with the necessary flags, the system camera calls setRepeatingRequest to continue the preview and waits until onCaptureCompleted arrives with a certain set of flags in TotalCaptureResult. The right answer may arrive only after several onCaptureCompleted calls!

Once I understood this feature, the behavior of the Camera2 API began to look logical. But how much effort it took to find this out! Well, now we can move on to describing the solution.

So, our action plan:

- define the flag keys and values that trigger convergence, and the expected result states (a ConvergeWaiter class);
- call capture with a trigger request and setRepeatingRequest to keep the preview running;
- chain operators so that we wait for an onCaptureCompleted result carrying the expected state flags, guarded by a timeout;
- implement waitForAf / waitForAe on top of that.

Go!

Flags


Create a ConvergeWaiter class with the following fields:

 private final CaptureRequest.Key<Integer> mRequestTriggerKey;
 private final int mRequestTriggerStartValue;

This is the key and value of the flag that will trigger the desired convergence process when capture is called.

For autofocus, these will be CaptureRequest.CONTROL_AF_TRIGGER and CameraMetadata.CONTROL_AF_TRIGGER_START; for auto exposure, CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER and CameraMetadata.CONTROL_AE_PRECAPTURE_TRIGGER_START.

 private final CaptureResult.Key<Integer> mResultStateKey;
 private final List<Integer> mResultReadyStates;

This is the key and the set of expected flag values from the onCaptureCompleted result. When we see one of the expected values, we can assume the convergence process has completed.

For autofocus, the key is CaptureResult.CONTROL_AF_STATE and the expected values are:

 CaptureResult.CONTROL_AF_STATE_INACTIVE,
 CaptureResult.CONTROL_AF_STATE_PASSIVE_FOCUSED,
 CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED,
 CaptureResult.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED

for auto exposure, the key is CaptureResult.CONTROL_AE_STATE and the expected values are:

 CaptureResult.CONTROL_AE_STATE_INACTIVE,
 CaptureResult.CONTROL_AE_STATE_FLASH_REQUIRED,
 CaptureResult.CONTROL_AE_STATE_CONVERGED,
 CaptureResult.CONTROL_AE_STATE_LOCKED

Do not ask me how I found this out! Now we can create ConvergeWaiter instances for autofocus and auto exposure; for that, let's make a factory:

 static class Factory {
     private static final List<Integer> afReadyStates = Collections.unmodifiableList(
         Arrays.asList(
             CaptureResult.CONTROL_AF_STATE_INACTIVE,
             CaptureResult.CONTROL_AF_STATE_PASSIVE_FOCUSED,
             CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED,
             CaptureResult.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED
         )
     );

     private static final List<Integer> aeReadyStates = Collections.unmodifiableList(
         Arrays.asList(
             CaptureResult.CONTROL_AE_STATE_INACTIVE,
             CaptureResult.CONTROL_AE_STATE_FLASH_REQUIRED,
             CaptureResult.CONTROL_AE_STATE_CONVERGED,
             CaptureResult.CONTROL_AE_STATE_LOCKED
         )
     );

     static ConvergeWaiter createAutoFocusConvergeWaiter() {
         return new ConvergeWaiter(
             CaptureRequest.CONTROL_AF_TRIGGER,
             CameraMetadata.CONTROL_AF_TRIGGER_START,
             CaptureResult.CONTROL_AF_STATE,
             afReadyStates
         );
     }

     static ConvergeWaiter createAutoExposureConvergeWaiter() {
         return new ConvergeWaiter(
             CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER,
             CameraMetadata.CONTROL_AE_PRECAPTURE_TRIGGER_START,
             CaptureResult.CONTROL_AE_STATE,
             aeReadyStates
         );
     }
 }

capture / setRepeatingRequest


To call capture / setRepeatingRequest we need two CaptureRequests: one with the trigger flag to start the convergence process, and one to keep the preview running.
Create a method

 Single<CaptureSessionData> waitForConverge(
     @NonNull CaptureSessionData captureResultParams,
     @NonNull CaptureRequest.Builder builder
 )

In the second parameter we pass a builder already configured for the preview, so the CaptureRequest for the preview can be created immediately: CaptureRequest previewRequest = builder.build();

To create a CaptureRequest to run the convergence procedure, add a flag to the builder that will start the necessary convergence process:

 builder.set(mRequestTriggerKey, mRequestTriggerStartValue);
 CaptureRequest triggerRequest = builder.build();

And we will use our wrapper methods from the first part to get an Observable from capture / setRepeatingRequest :

 Observable<CaptureSessionData> triggerObservable =
     CameraRxWrapper.fromCapture(captureResultParams.session, triggerRequest);
 Observable<CaptureSessionData> previewObservable =
     CameraRxWrapper.fromSetRepeatingRequest(captureResultParams.session, previewRequest);

Chaining operators


Now we can form a reactive stream that contains events from both Observables, using the merge operator.



 Observable<CaptureSessionData> convergeObservable = Observable
     .merge(previewObservable, triggerObservable)

The resulting convergeObservable will emit events with the results of the onCaptureCompleted calls.

We need to wait until the CaptureResult passed to this callback contains the expected flag value. To do this, let's create a function that takes a CaptureResult and returns true if it contains an expected value:

 private boolean isStateReady(@NonNull CaptureResult result) {
     Integer state = result.get(mResultStateKey);
     return state == null || mResultReadyStates.contains(state);
 }

The null check is needed so that we don't hang forever on buggy Camera2 API implementations.

Now we can use the filter operator to wait for an event for which isStateReady returns true:


     .filter(resultParams -> isStateReady(resultParams.result)) 

We are interested only in the first such event, so we add:

     .first(captureResultParams); 

The complete reactive chain looks like this:

 Single<CaptureSessionData> convergeSingle = Observable
     .merge(previewObservable, triggerObservable)
     .filter(resultParams -> isStateReady(resultParams.result))
     .first(captureResultParams);
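If the filter + first(default) pair feels unfamiliar, it has a close plain-Java analogue in the Stream API. Below is a minimal sketch of the same idea; the integer states and the ready-state set are made up for illustration and have nothing to do with the real Camera2 constants:

```java
import java.util.List;
import java.util.stream.Stream;

public class FilterFirst {
    // Illustrative stand-in for mResultReadyStates
    static final List<Integer> READY_STATES = List.of(2, 4, 5);

    static boolean isStateReady(Integer state) {
        // The null branch mirrors the defensive check for broken implementations
        return state == null || READY_STATES.contains(state);
    }

    // filter(...).first(default) in RxJava ~ filter(...).findFirst().orElse(default)
    static Integer firstReady(Stream<Integer> states, Integer fallback) {
        return states.filter(FilterFirst::isStateReady).findFirst().orElse(fallback);
    }

    public static void main(String[] args) {
        System.out.println(firstReady(Stream.of(1, 3, 4, 2), -1)); // 4: first ready state
        System.out.println(firstReady(Stream.of(1, 3), -1));       // -1: no ready state seen
    }
}
```

The analogy is not exact (a Stream is pulled synchronously while an Observable pushes events over time), but the filtering semantics are the same.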

In case the convergence process takes too long or something goes wrong, we introduce a timeout:

 private static final int TIMEOUT_SECONDS = 3;

 Single<CaptureSessionData> timeOutSingle = Single
     .just(captureResultParams)
     .delay(TIMEOUT_SECONDS, TimeUnit.SECONDS, AndroidSchedulers.mainThread());

The delay operator re-emits events with the specified delay. By default it does this on a thread from the computation scheduler, so we move it to the main thread with the last parameter.

Now let's combine convergeSingle and timeOutSingle : whichever emits an event first wins:

 return Single
     .merge(convergeSingle, timeOutSingle)
     .firstElement()
     .toSingle();
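The "first to emit wins" race can be illustrated outside RxJava with plain java.util.concurrent primitives. This is only an analogy with illustrative names and timings, not the article's actual code: CompletableFuture.anyOf plays the role of merge + firstElement.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FirstWinsRace {
    // Completes as soon as either input completes, like merge(...).firstElement()
    static <T> CompletableFuture<Object> race(CompletableFuture<T> a, CompletableFuture<T> b) {
        return CompletableFuture.anyOf(a, b);
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

        // The "convergence" result arrives quickly (50 ms here)
        CompletableFuture<String> converge = new CompletableFuture<>();
        scheduler.schedule(() -> converge.complete("converged"), 50, TimeUnit.MILLISECONDS);

        // The timeout fallback arrives after 3 s, like timeOutSingle
        CompletableFuture<String> timeout = new CompletableFuture<>();
        scheduler.schedule(() -> timeout.complete("timeout"), 3, TimeUnit.SECONDS);

        System.out.println(race(converge, timeout).get()); // prints "converged"
        scheduler.shutdownNow();
    }
}
```

In the dark-room scenario where convergence never reports readiness, the fallback completes first and the chain proceeds with the last known capture result instead of hanging.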

Full function code:

 @NonNull
 Single<CaptureSessionData> waitForConverge(
     @NonNull CaptureSessionData captureResultParams,
     @NonNull CaptureRequest.Builder builder
 ) {
     CaptureRequest previewRequest = builder.build();

     builder.set(mRequestTriggerKey, mRequestTriggerStartValue);
     CaptureRequest triggerRequest = builder.build();

     Observable<CaptureSessionData> triggerObservable =
         CameraRxWrapper.fromCapture(captureResultParams.session, triggerRequest);
     Observable<CaptureSessionData> previewObservable =
         CameraRxWrapper.fromSetRepeatingRequest(captureResultParams.session, previewRequest);

     Single<CaptureSessionData> convergeSingle = Observable
         .merge(previewObservable, triggerObservable)
         .filter(resultParams -> isStateReady(resultParams.result))
         .first(captureResultParams);

     Single<CaptureSessionData> timeOutSingle = Single
         .just(captureResultParams)
         .delay(TIMEOUT_SECONDS, TimeUnit.SECONDS, AndroidSchedulers.mainThread());

     return Single
         .merge(convergeSingle, timeOutSingle)
         .firstElement()
         .toSingle();
 }

waitForAf / waitForAe


The main part of the work is done; all that remains is to create the instances:

 private final ConvergeWaiter mAutoFocusConvergeWaiter =
     ConvergeWaiter.Factory.createAutoFocusConvergeWaiter();
 private final ConvergeWaiter mAutoExposureConvergeWaiter =
     ConvergeWaiter.Factory.createAutoExposureConvergeWaiter();

and use them:

 private Observable<CaptureSessionData> waitForAf(@NonNull CaptureSessionData captureResultParams) {   return Observable       .fromCallable(() -> createPreviewBuilder(captureResultParams.session, mSurface))       .flatMap(           previewBuilder -> mAutoFocusConvergeWaiter               .waitForConverge(captureResultParams, previewBuilder)               .toObservable()       ); } @NonNull private Observable<CaptureSessionData> waitForAe(@NonNull CaptureSessionData captureResultParams) {   return Observable       .fromCallable(() -> createPreviewBuilder(captureResultParams.session, mSurface))       .flatMap(           previewBuilder -> mAutoExposureConvergeWaiter               .waitForConverge(captureResultParams, previewBuilder)               .toObservable()       ); } 

The key point here is the use of the fromCallable operator. It may be tempting to use the just operator instead, for example:

 just(createPreviewBuilder(captureResultParams.session, mSurface))

But in that case the createPreviewBuilder function would be called right at the time of the waitForAf call, whereas we want it called only when a subscription to our Observable appears.
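This eager-versus-deferred distinction is not specific to RxJava. Here is a minimal plain-Java sketch; the createBuilder helper and the call counter are hypothetical stand-ins for createPreviewBuilder, used only to make the evaluation order visible:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicInteger;

public class LazyVsEager {
    static final AtomicInteger calls = new AtomicInteger();

    // Hypothetical stand-in for createPreviewBuilder
    static String createBuilder() {
        calls.incrementAndGet();
        return "builder";
    }

    public static void main(String[] args) throws Exception {
        // "just"-style: the argument is evaluated immediately, at assembly time
        String eager = createBuilder();
        System.out.println(calls.get()); // 1

        // "fromCallable"-style: nothing runs until someone actually invokes it
        Callable<String> lazy = LazyVsEager::createBuilder;
        System.out.println(calls.get()); // still 1: the Callable has not run

        lazy.call();                     // the "subscription" moment
        System.out.println(calls.get()); // 2
    }
}
```

With fromCallable, each new subscription (and each retry) gets a freshly built preview builder, which is exactly the behavior waitForAf needs.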

Conclusion


As you know, the most valuable part of any article on Habr is the comments! So I urge you to share your thoughts, remarks, valuable knowledge, and links to more successful implementations in the comments.

Project sources can be found on GitHub . Pull requests welcome!

Source: https://habr.com/ru/post/352318/

