
Hi %username%! Today I want to share my experience developing a small Android application and the difficulties I had to face when using the camera in a not entirely honest way.
The idea of the “Guardian” application had lived inside the development department for a long time, but the first implementation appeared on the Symbian platform two years ago. The idea itself is simple: take a photo of the person who picks up the phone. In the first implementation the application was split into signal modules and callback modules. Signal modules were responsible for detecting a change in some state of the phone: for example, removal or insertion of a SIM or memory card, an incoming or outgoing call, or, the trickiest one, the accelerometer, which detected the moment the phone was lifted off the table. Callback modules are the actions performed in response to those signals; photographing and sound recording were implemented.
When the application was ported to Android, the approach changed noticeably. Only the idea remained from the old application: it stopped being modular, and of all the functionality only photographing survived. This is the functionality I want to talk about.
Taking a photo
First, a loose translation of the official documentation on using the camera.
- Photography on Android is handled by the Camera class. The following permissions and features must be declared in the manifest:
```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
```
To take a picture you need to do the following (a condensed sketch of this sequence follows the list):
- Find the id of the desired camera using the getNumberOfCameras and getCameraInfo methods;
- Get a reference to the camera object using the open method;
- Get the current (default) settings with the getParameters method;
- If necessary, change the parameters and set them back with the setParameters method;
- If necessary, set the camera orientation using the setDisplayOrientation method (NO vertical video!);
- IMPORTANT: pass a properly initialized SurfaceHolder object to the setPreviewDisplay method. If this is not done, the camera will not be able to start the preview;
- IMPORTANT: call the startPreview method, which will begin updating the SurfaceHolder. You MUST start a preview before taking a photo;
- Finally, call the takePicture method and wait for the data to arrive in onPictureTaken;
- After takePicture is called the preview is stopped. If you need to take another photo, you will have to call startPreview again;
- If the camera is no longer needed, first stop the preview with the stopPreview method;
- IMPORTANT: call the release() method to free up camera resources for other applications. The application should release camera resources immediately in the onPause method (and reacquire them in onResume).
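To make the order of calls easier to see, here is a condensed sketch of that sequence. The cameraId and holder arguments are assumed to be prepared as described later in the article; imports and error handling are omitted for brevity.

```java
// A condensed sketch of the sequence above, not production code.
void takeSinglePhoto(int cameraId, SurfaceHolder holder) throws IOException {
    final Camera camera = Camera.open(cameraId);        // get a reference to the camera
    Camera.Parameters params = camera.getParameters();  // current (default) settings
    // ...change params here if needed...
    camera.setParameters(params);
    camera.setPreviewDisplay(holder);                    // must be a properly initialized SurfaceHolder
    camera.startPreview();                               // the preview MUST run before takePicture
    camera.takePicture(null, null, new Camera.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] data, Camera cam) {
            // JPEG bytes arrive here; the preview is already stopped at this point
            cam.stopPreview();
            cam.release();                               // free the camera for other applications
        }
    });
}
```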
This class is not thread safe. Most operations (preview, focus, taking a photo) are asynchronous and return their result via callbacks that are invoked on the thread that called the open method. Methods of this class must never be called from multiple threads at once.
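Since the callbacks are delivered to the thread that called open, one way to keep them off the UI thread is to open the camera from a dedicated HandlerThread. This is only an illustration of the threading rule, not something the application described here does:

```java
// Hypothetical helper: open the camera on its own looper thread so that its
// asynchronous callbacks do not land on the UI thread.
HandlerThread cameraThread = new HandlerThread("camera");
cameraThread.start();
new Handler(cameraThread.getLooper()).post(new Runnable() {
    @Override
    public void run() {
        // open() is called here, so onPictureTaken and friends will also run on this thread
        Camera camera = Camera.open();
        // ...configure and use the camera on this same thread...
    }
});
```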
Warning: different Android devices can have different camera capabilities (resolution, autofocus support, etc.).
Here the translation ends and the fun begins.
From all of the above, two problems stand out:
- A preview has to be shown;
- The camera can behave differently on different devices.
These are the problems we will be fighting.
When a problem of the “the docs say you can't do this” kind arises, the first thing to look at is the source code. From it, it became clear that preview rendering is pushed down to the native code of setPreviewDisplay(Surface). I made a quick attempt to understand how the system determines whether we have started a preview at all, but getting through the thorns of the C++ code quickly did not work out, so I took the path of least resistance: create a preview, but display it in a way the user will not notice. If you search Stack Overflow, you can find another way: pass setPreviewDisplay a SurfaceHolder created dynamically. Since the object is not added to the Activity layout, it is not displayed. Unfortunately, this trick only works on older versions of Android (up to 3.0, if I am not mistaken); in newer versions the developers corrected this misunderstanding.
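For completeness, that Stack Overflow trick looks roughly like this. It is a sketch only and, as noted above, it reportedly stops working somewhere around Android 3.0:

```java
// Older-versions-only trick: a SurfaceView that is never attached to the layout.
// Newer Android versions refuse to start a preview on such a surface.
SurfaceView dummy = new SurfaceView(context);
camera.setPreviewDisplay(dummy.getHolder());
camera.startPreview();
```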
Thus, we come to the only possible conclusion: we have to display the preview on screen somehow; the question is whether it can be done so that the user does not notice. Fortunately, the answer is “yes, it can.” Here is what it takes:
- A transparent Activity;
- A FrameLayout of 1 by 1 pixel in the upper left corner of that Activity.
A transparent Activity is done with a single line in the manifest; we declare it as follows:
```xml
<activity
    android:name=".activities.CameraActivity"
    android:exported="false"
    android:launchMode="singleTask"
    android:excludeFromRecents="true"
    android:theme="@android:style/Theme.Translucent.NoTitleBar" />
```
and create the following simple layout for it:
```xml
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/surfaceHolder"
    android:layout_width="1.0px"
    android:layout_height="1.0px" />
```
The SurfaceView, and with it the SurfaceHolder, is created and added to this layout dynamically. In principle it could have been added to the layout right away; this was moved into code so that the behavior of the object could be redefined, if necessary, without touching the layout.
So, we have a transparent Activity and we create the SurfaceHolder dynamically; what next? Next comes the main thing: initializing the camera and taking the photo. The idea is to take the photo right at the start of the Activity and close it as soon as possible. We define our Activity as follows:
```java
public class CameraActivity extends Activity
        implements Camera.PictureCallback, SurfaceHolder.Callback {

    private static final int NO_FRONT_CAMERA = -1;

    private Camera mCamera;
    private boolean mPreviewIsRunning = false;
    private boolean mIsTakingPicture = false;

    public class CameraPreview extends SurfaceView {
        public CameraPreview(Context context) {
            super(context);
        }
    }
    ...
```
This way, events from the SurfaceHolder (surfaceCreated, surfaceChanged, surfaceDestroyed) and from the Camera (onPictureTaken) will arrive in the Activity itself. The inner CameraPreview class exists solely so that, as I noted above, the behavior of our SurfaceView can be changed quickly and painlessly if necessary. Next comes a bunch of Activity methods.
Some code:

```java
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.surface_holder);

    SurfaceView surfaceView = new CameraPreview(this);
    ((FrameLayout) findViewById(R.id.surfaceHolder)).addView(surfaceView);

    SurfaceHolder holder = surfaceView.getHolder();
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.HONEYCOMB)
        holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
    holder.addCallback(this);
}

@Override
protected void onResume() {
    startPreview();
    super.onResume();
}

@Override
protected void onPause() {
    stopPreview();
    super.onPause();
}

@Override
public void surfaceCreated(SurfaceHolder surfaceHolder) {
    final int cameraId = getFrontCameraId();
    if (cameraId != NO_FRONT_CAMERA) {
        try {
            mCamera = Camera.open(cameraId);

            Camera.Parameters parameters = mCamera.getParameters();
            if (getResources().getConfiguration().orientation == Configuration.ORIENTATION_PORTRAIT)
                parameters.setRotation(270);

            List<String> flashModes = parameters.getSupportedFlashModes();
            if (flashModes != null && flashModes.contains(Camera.Parameters.FLASH_MODE_OFF))
                parameters.setFlashMode(Camera.Parameters.FLASH_MODE_OFF);

            List<String> whiteBalance = parameters.getSupportedWhiteBalance();
            if (whiteBalance != null && whiteBalance.contains(Camera.Parameters.WHITE_BALANCE_AUTO))
                parameters.setWhiteBalance(Camera.Parameters.WHITE_BALANCE_AUTO);

            List<String> focusModes = parameters.getSupportedFocusModes();
            if (focusModes != null && focusModes.contains(Camera.Parameters.FOCUS_MODE_AUTO))
                parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);

            List<Camera.Size> sizes = parameters.getSupportedPictureSizes();
            if (sizes != null && sizes.size() > 0) {
                Camera.Size size = sizes.get(0);
                parameters.setPictureSize(size.width, size.height);
            }

            List<Camera.Size> previewSizes = parameters.getSupportedPreviewSizes();
            if (previewSizes != null) {
                Camera.Size previewSize = previewSizes.get(previewSizes.size() - 1);
                parameters.setPreviewSize(previewSize.width, previewSize.height);
            }

            mCamera.setParameters(parameters);

            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR1)
                mCamera.enableShutterSound(false);
        } catch (RuntimeException e) {
            A.handleException(e, true);
            finish();
            return;
        }
    } else {
        Log.e(Value.LOG_TAG, "Could not find front-facing camera");
        finish();
        return;
    }

    try {
        mCamera.setPreviewDisplay(surfaceHolder);
    } catch (IOException ioe) {
        A.handleException(ioe, true);
        finish();
    }
}

@Override
public void surfaceChanged(SurfaceHolder surfaceHolder, int format, int width, int height) {
    startPreview();
}

@Override
public void surfaceDestroyed(SurfaceHolder surfaceHolder) {
    releaseCamera();
}

@Override
public void onPictureTaken(byte[] bytes, Camera camera) {
    mIsTakingPicture = false;
    releaseCamera();
```
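The listing above references several helpers that are not shown. Here is a rough reconstruction of how they could look, based on the checks discussed below; getFrontCameraId, startPreview, stopPreview and releaseCamera as written here are my assumption, not the author's original code:

```java
// Hypothetical reconstruction of the helper methods used in the listing above.
private int getFrontCameraId() {
    Camera.CameraInfo info = new Camera.CameraInfo();
    for (int i = 0; i < Camera.getNumberOfCameras(); i++) {
        Camera.getCameraInfo(i, info);
        if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT)
            return i;
    }
    return NO_FRONT_CAMERA;
}

private void startPreview() {
    // The camera may not be initialized yet: onResume can run before surfaceCreated.
    if (mCamera != null && !mPreviewIsRunning) {
        mCamera.startPreview();
        mPreviewIsRunning = true;
        if (!mIsTakingPicture) {
            mIsTakingPicture = true;
            mCamera.takePicture(null, null, this); // the result arrives in onPictureTaken
        }
    }
}

private void stopPreview() {
    if (mCamera != null && mPreviewIsRunning) {
        mCamera.stopPreview();
        mPreviewIsRunning = false;
    }
}

private void releaseCamera() {
    if (mCamera != null) {
        stopPreview();
        mCamera.release(); // free the camera for other applications
        mCamera = null;
    }
}
```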
What is interesting about this code? Let's go through it point by point.
- The most important thing is the order in which the methods are called. The documentation states what needs to be called and in what order, but does not specify when. Take the setPreviewDisplay method, for example: if you initialize the camera and call it right away in onCreate or onResume, the photo will not come out. Then how do you know when to call it? The correct answer is found in the comments to setPreviewDisplay in the source code. Here is a short excerpt:
The android.view.SurfaceHolder must already contain a surface when this method is called. If you are using android.view.SurfaceView, you will need to register a SurfaceHolder.Callback with SurfaceHolder.addCallback(SurfaceHolder.Callback) and wait for surfaceCreated(SurfaceHolder) before calling setPreviewDisplay() or starting preview.
This method must be called before startPreview().
- The second point concerns the life cycle of the SurfaceHolder object relative to the Activity. The Activity life cycle is described in the documentation, but with the SurfaceHolder nothing is clear, so I had to establish the order empirically (a small tracing sketch follows the list):
onCreate (Bundle savedInstanceState)
onResume ()
onPause ()
surfaceCreated (SurfaceHolder surfaceHolder)
surfaceChanged (SurfaceHolder surfaceHolder, int format, int width, int height)
onStop ()
surfaceDestroyed (SurfaceHolder surfaceHolder)
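If you want to verify this order on a particular device, a throwaway trick (my own addition, not part of the application's code) is to log every life-cycle and surface callback, for example:

```java
// Throwaway tracing: add a line like this at the top of every life-cycle and
// surface callback to see the real order on a specific device.
@Override
protected void onStop() {
    Log.d("LifecycleTrace", "onStop");
    super.onStop();
}

@Override
public void surfaceDestroyed(SurfaceHolder holder) {
    Log.d("LifecycleTrace", "surfaceDestroyed");
}
```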
- The next interesting point concerns the order of calls of the Activity life-cycle methods. You may ask: why all these checks in the spirit of if (mCamera != null) and the mPreviewIsRunning and mIsTakingPicture variables? Unfortunately, the only answer I can give is this: in some situations the order of life-cycle calls of an Activity can differ from what is stated in the official docs (from the well-known life-cycle diagram, for example). Most incidents occur when screen lock is enabled on the phone. I had cases when onStop was called twice in a row, and after that, bypassing onStart, onResume was called as if nothing had happened. The order of calls can also differ between devices, even with the same version of Android on board. I spent a long time trying to understand why this happens; in the end I simply lost a lot of time on it and wrote the current implementation.
So, it is time to sum up. Here is what happens in the application:
- We start the Activity on the desired event (in my case, the screen turning on);
- In onCreate we create a SurfaceHolder and register the Activity to receive callbacks;
- We wait for the surfaceCreated call and initialize the camera in it;
- After the camera is initialized, we try to call takePicture. Since the order of method calls strongly depends on the device, the OS version and the type of screen lock, we try to start the preview in onResume and surfaceChanged, and stop it in onPause. At the same time onResume and onPause can happen both before and after surfaceCreated, so everywhere we check whether the camera has been initialized;
- The surfaceChanged method, according to the documentation, is guaranteed to be called at least once after surfaceCreated, but in theory it can be called any number of times while the photo is being taken. We add the mPreviewIsRunning variable so as not to accidentally start the preview several times. We start the preview, call takePicture, and wait;
- We catch the photo in onPictureTaken, release the camera, create an AsyncTask to save the image (a sketch follows this list), and close the Activity.
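The saving step is not shown in the listing above; a minimal sketch of such an AsyncTask could look like this (the class name, target file and error handling are my assumptions):

```java
// Hypothetical background task that writes the JPEG bytes to a file.
private static class SavePhotoTask extends AsyncTask<byte[], Void, Void> {
    private final File mTarget;

    SavePhotoTask(File target) {
        mTarget = target;
    }

    @Override
    protected Void doInBackground(byte[]... jpeg) {
        try {
            FileOutputStream out = new FileOutputStream(mTarget);
            try {
                out.write(jpeg[0]); // the bytes exactly as delivered by onPictureTaken
            } finally {
                out.close();
            }
        } catch (IOException e) {
            Log.e("CameraActivity", "Could not save photo", e);
        }
        return null;
    }
}

// In onPictureTaken: new SavePhotoTask(photoFile).execute(bytes); finish();
```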
Thus, the general order of calls is as follows:
onCreate (Bundle savedInstanceState)
onResume ()
onPause ()
surfaceCreated (SurfaceHolder surfaceHolder)
surfaceChanged (SurfaceHolder surfaceHolder, int format, int width, int height)
onPictureTaken (byte [] bytes, Camera camera)
onStop ()
surfaceDestroyed (SurfaceHolder surfaceHolder)
Conclusion
The application works and takes pictures stably on my phone (Nexus 4). In addition, I tested it on other models, including the Motorola Droid RAZR and HTC Sensation. As I mentioned above, cameras behave differently on different phones. On some phones you hear a shutter sound when the photo is taken. On others the photo comes out rotated the wrong way, and this can only be corrected by editing the EXIF data. On some phones (I suppose due to the peculiarities of the manufacturer's shell) the order of the Activity life-cycle calls can differ noticeably. All this is connected not only with the huge number of Android device manufacturers, but also with the incredible fragmentation of the OS itself (an interesting note on this subject can be found on page 57 of issue 1 of the Hacker magazine for 2014). Therefore, I would very much like to:
- Add profiles for different phone models and take the photo according to the profile. For example, for phones that produce a shutter sound when photographing, mute it right before the photo is taken (a sketch of such a mute follows this list);
- Thoroughly exercise the application on a large set of test models and try to understand the reason for the differences in Activity life-cycle calls;
- Dig deeper into the Android source code, finally climb into the native part and figure out why takePicture can only be called after the preview has been initialized, and think about what else can be done with it.
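As an illustration of that first item, muting the shutter on devices where enableShutterSound(false) has no effect could look roughly like this. It is only a sketch; muting system streams is intrusive and behaves differently across devices:

```java
// Hypothetical profile action: silence the system stream around takePicture
// on devices where enableShutterSound(false) does not help.
AudioManager audio = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
audio.setStreamMute(AudioManager.STREAM_SYSTEM, true);   // mute right before the shot
camera.takePicture(null, null, pictureCallback);
// ...and unmute in onPictureTaken:
audio.setStreamMute(AudioManager.STREAM_SYSTEM, false);
```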

All of this is a matter of development for the near future.
The application is now available on Google Play in its current version. It is free, because the main goal in creating it was to explore the depths of Android. For those interested, here is the
link to Google Play.
Thanks for your attention!