
AR features in iOS and Android



This article most likely won't tell anything new to those who have long been developing applications with an Augmented Reality feature, but it may be useful to those who are interested in the topic and are about to write their first AR application.

Background


A short lyrical digression: what AR is and why it has become so popular on mobile devices.
Once upon a time, probably as early as the beginning of the 1980s, head-up projectors began to appear in military fighters (for example, the Yak-41, a vertical takeoff fighter), projecting information directly onto the windshield glass in front of the pilot. Pilots were very pleased with the innovation: it was much more convenient to monitor the most important indicators this way than to glance down at analog, or even digital, gauges.

This topic is not particularly relevant here, but we know for certain (the father of one of our employees took part in the development) that as early as 1992, 3D aircraft models with highlighted components were spinning on displays in the Soviet Su-27. The graphics engine was written in assembler for a 4 MHz processor (an 8086). Tellingly, the Americans did the same thing on an 80486 at 66 MHz, so our people always knew how to write code.

Later the same HUDs (head-up displays) came to civil aviation, and in 1990 one of Boeing's engineers coined the very term "Augmented Reality".

Much later, when the accelerometer and gyroscope arrived in smartphones, some bright mind had the idea of combining them with the camera and OpenGL ES. That is how many games and navigation assistants were born, but most of the budgets in this area go to marketing and promotional applications. For example, by cutting a paper watch frame out of a magazine, putting it on the wrist and looking through the phone's camera, the user can "try on" any brand of watch advertised in that magazine.

Now for the purely technical part: the small problems a programmer will face on the most popular mobile platforms.

iOS


The iOS versions for iPhone and iPad are very similar, although they differ in several ways. Unfortunately, one such difference is that on the iPad the window that displays the camera image (UIImagePickerController) behaves like a normal UIView, while on the iPhone it is a UIViewController. The iPad case is simple: we control it and place it like any other view. On the iPhone it is a bit more complicated: the image picker window must be presented modally, and adding views over the camera is possible only through the cameraOverlayView property. That is, to put some 3D on top of the camera, you do the following:
imagePicker.cameraOverlayView = [[ARView new] autorelease];

Most likely, this is an anachronism left over from iOS 3 and earlier. What inconveniences does this cause? Quite a few.

In general, you need far fewer crutches when all the standard classes are plain views and only the developer controls the view controllers.

Android


With Android, things are a little different.

The camera preview, or rather its SurfaceView, can be placed in a view of any size, and there is no need to create some kind of modal activity on top of everything. But it doesn't work without specific tricks either. It turns out that we have to find a suitable preview size ourselves (the list of supported sizes is sometimes long and may differ between devices from different manufacturers, and even from the same one). In search of the optimal resolution and proportions, you have to iterate over all the available sizes and compare them with the size and proportions of the view where you want to place the preview at runtime. The preview size will not always match the size of the SurfaceView, so to preserve the proportions of the picture and get a suitable preview size, you have to write your own ViewGroup, place the SurfaceView inside it, and calculate what to place where in the onLayout method, as in the sketch below.
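A minimal sketch of both steps, assuming the android.hardware.Camera API of that era; PreviewUtils, choosePreviewSize and PreviewFrame are our own names, not part of the platform:

import android.content.Context;
import android.hardware.Camera;
import android.view.View;
import android.view.ViewGroup;
import java.util.List;

public class PreviewUtils {
    // Pick the supported preview size whose aspect ratio is closest to the
    // target view's; among equally good ratios, prefer the larger size.
    public static Camera.Size choosePreviewSize(Camera camera, int viewWidth, int viewHeight) {
        List<Camera.Size> sizes = camera.getParameters().getSupportedPreviewSizes();
        double targetRatio = (double) viewWidth / viewHeight;
        Camera.Size best = null;
        double bestDiff = Double.MAX_VALUE;
        for (Camera.Size size : sizes) {
            double diff = Math.abs((double) size.width / size.height - targetRatio);
            if (best == null || diff < bestDiff
                    || (diff == bestDiff && size.width > best.width)) {
                bestDiff = diff;
                best = size;
            }
        }
        return best;
    }

    // A container that letterboxes its single child (the SurfaceView) so the
    // preview keeps its own proportions in whatever space the layout gives it.
    public static class PreviewFrame extends ViewGroup {
        private int previewWidth = 4;
        private int previewHeight = 3;

        public PreviewFrame(Context context) {
            super(context);
        }

        public void setPreviewSize(int width, int height) {
            previewWidth = width;
            previewHeight = height;
            requestLayout();
        }

        @Override
        protected void onLayout(boolean changed, int l, int t, int r, int b) {
            if (getChildCount() == 0) return;
            int width = r - l;
            int height = b - t;
            // Scale to fit while preserving the preview's aspect ratio.
            int w = width;
            int h = width * previewHeight / previewWidth;
            if (h > height) {
                h = height;
                w = height * previewWidth / previewHeight;
            }
            int x = (width - w) / 2;
            int y = (height - h) / 2;
            View child = getChildAt(0);
            child.measure(MeasureSpec.makeMeasureSpec(w, MeasureSpec.EXACTLY),
                    MeasureSpec.makeMeasureSpec(h, MeasureSpec.EXACTLY));
            child.layout(x, y, x + w, y + h);
        }
    }
}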

Another interesting thing: if on iOS you want to draw a 3D model on top of the camera preview, you put the preview (UIImagePickerController) at the bottom and then draw any views on top of it, including ones with 3D models. In Android they decided to do things their own way: while standard UI elements can safely be drawn over the preview (SurfaceView), the 3D models in a GLSurfaceView have to be placed under (!) the preview. To do this, you need to perform a series of steps, roughly like the sketch below.
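A minimal sketch of such a setup, with our own placeholder names ARActivity and ARRenderer (a GLSurfaceView.Renderer that is assumed to clear each frame with a fully transparent color, glClearColor(0, 0, 0, 0)):

import android.app.Activity;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.view.SurfaceView;
import android.widget.FrameLayout;

public class ARActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        FrameLayout root = new FrameLayout(this);

        GLSurfaceView glView = new GLSurfaceView(this);
        // Request an EGL config with an alpha channel and make the surface
        // translucent, so the camera picture can show through the 3D layer.
        glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
        glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
        glView.setRenderer(new ARRenderer()); // must clear with alpha = 0

        SurfaceView preview = new SurfaceView(this); // camera preview target

        // The GL view is added first, i.e. *under* the preview in the view
        // hierarchy; the surfaces themselves are composited separately from views.
        root.addView(glView);
        root.addView(preview);
        // Ordinary widgets added after this point draw on top of everything.

        setContentView(root);
    }
}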

This is enough for AR to work, but a problem with the static background for the non-AR mode immediately arose. You see, by default a GLSurfaceView is not transparent: you cannot display anything under it using the standard UI (neither an ImageView widget nor even the background of the GLSurfaceView itself works). It can be made transparent with the setZOrderOnTop(true) method, but then the GLSurfaceView starts to appear on top of all the elements in the activity, no matter whether they sit below it, above it, or even in another view. So there is only one way out: if you need to draw something under the 3D model, and it is not the camera preview, OpenGL ES will help. Load the picture into memory as a texture, first resizing it so that its sides are powers of two (some GPUs handle non-power-of-two textures fine or with a performance drop; on others they don't work at all, so you have to do it). This texture is then drawn on a plane the size of the viewport; all that remains is to calculate the correct proportions of the texture, because there are many screen sizes with different aspect ratios.
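A sketch of the texture-loading step, assuming the OpenGL ES 1.x (GL10) API of that time; nextPowerOfTwo and loadBackgroundTexture are our own helper names:

import android.graphics.Bitmap;
import android.opengl.GLUtils;
import javax.microedition.khronos.opengles.GL10;

public class BackgroundTexture {
    // Round up to the next power of two (e.g. 480 -> 512).
    static int nextPowerOfTwo(int n) {
        int p = 1;
        while (p < n) p <<= 1;
        return p;
    }

    // Rescale the bitmap to power-of-two sides and upload it as a texture;
    // some GPUs of that era rejected (or slowed down on) NPOT textures.
    static int loadBackgroundTexture(GL10 gl, Bitmap source) {
        Bitmap pot = Bitmap.createScaledBitmap(source,
                nextPowerOfTwo(source.getWidth()),
                nextPowerOfTwo(source.getHeight()), true);

        int[] textures = new int[1];
        gl.glGenTextures(1, textures, 0);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, pot, 0);

        if (pot != source) pot.recycle();
        // Draw the texture on a viewport-sized quad, adjusting the texture
        // coordinates so the picture keeps its proportions on any screen.
        return textures[0];
    }
}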

Example


You can see what the result looks like in the App Store and Google Play, using our application as an example.

Source: https://habr.com/ru/post/145398/

