
Hello! Although we have been working in the software development market for ten years, we only recently decided we had grown enough to start a corporate blog on Habr. Over those years we made our share of mistakes and accumulated plenty of hands-on experience. Our team now has enthusiasts who are ready to share their expertise and discuss their favorite topics with colleagues here.
In this blog we plan to cover:
- augmented reality;
- blockchain and everything connected with it;
- high-load sites and services (our favorite topic);
- mobile development;
- experience in creating IT products to order.
For our first post, we prepared an overview of some popular cross-platform AR frameworks available at the beginning of 2018.
Lately we have been receiving more and more orders for applications that use augmented reality, so we decided to review the AR frameworks currently on the market.
Most of our clients' requests involved recognizing an image (a poster, a cover, a page, etc.) and overlaying a description or an accompanying video on top of it. So our typical task boiled down to two steps: recognize the target image and render an object over it.
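That two-step flow can be sketched in a framework-agnostic way. All type names below (`TargetRecognizer`, `OverlayRenderer`, `ArPipeline`) are hypothetical stand-ins for illustration, not any vendor's API:

```java
import java.util.Optional;

// Hypothetical stand-in: whatever the SDK uses to match a camera frame
// against the target database, returning the id of a recognized target.
interface TargetRecognizer {
    Optional<String> recognize(byte[] cameraFrame);
}

// Hypothetical stand-in: draws the description / video over the target.
interface OverlayRenderer {
    void render(String targetId);
}

// Minimal per-frame pipeline: recognize the target, then render the overlay.
final class ArPipeline {
    private final TargetRecognizer recognizer;
    private final OverlayRenderer renderer;

    ArPipeline(TargetRecognizer recognizer, OverlayRenderer renderer) {
        this.recognizer = recognizer;
        this.renderer = renderer;
    }

    // Returns true if a target was recognized and an overlay drawn this frame.
    boolean onFrame(byte[] cameraFrame) {
        Optional<String> target = recognizer.recognize(cameraFrame);
        target.ifPresent(renderer::render);
        return target.isPresent();
    }
}
```

The frameworks below differ mainly in how much of this pipeline they take over for you and how much they leave to your code.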
We tested the following AR frameworks:
| Framework | Free | One-time price | Subscription |
|-----------|------|----------------|--------------|
| Wikitude  | trial only | – | €2490 / €2990 / €4490 per year |
| EasyAR    | + | $499 | – |
| ARToolkit | open source | – | – |
| Vuforia   | + | $499 | $99 per month |
Testing was performed on the Android platform.
Wikitude
This framework left the best impression, which is probably why it costs more than the others. To begin with, they provide an online studio for overlaying simple static AR objects. You load the target image into the studio, add AR objects, generate JavaScript code, and insert it into your project. All the rendering is then handled by the ArchitectView from the Wikitude JS SDK. The studio looks like this:

To place simple static objects, it is enough to initialize an ArchitectView in your UI component with the developer license key and pass it the path to the generated JS AR experience. In the simplest case, that is all you need to recognize image targets and overlay augmented reality: the Wikitude JS Android SDK takes over all the work related to recognition and rendering.
What about native code?
If necessary, you can pass JSON objects from the JS code to native code. To do this, implement ArchitectJavaScriptInterfaceListener and add the listener to the ArchitectView.
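As a rough illustration of the bridge idea (with a hypothetical `JsBridge` stand-in, since the real ArchitectView needs an Android runtime and Wikitude's actual listener interface differs in its method names):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical stand-in for the JS -> native bridge. In Wikitude the native
// side implements ArchitectJavaScriptInterfaceListener and registers it on
// the ArchitectView; the JS experience then posts JSON objects to it.
final class JsBridge {
    private final List<Consumer<String>> listeners = new ArrayList<>();

    // Analogous to registering a listener on the ArchitectView.
    void addListener(Consumer<String> listener) {
        listeners.add(listener);
    }

    // Invoked when the JS experience sends a payload to the native side
    // (modeled here as a raw JSON string for simplicity).
    void onJsonReceived(String json) {
        for (Consumer<String> l : listeners) {
            l.accept(json);
        }
    }
}
```

The native side then parses the payload and reacts, e.g. opens a video player when the JS code reports a tap on the overlay.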
But what if we want to take rendering into our own hands, add dynamics to our AR, or otherwise customize the behavior? For that, you can write your own JS code using the Wikitude JS SDK or, if you need performance and full control over rendering, use the Native SDK that Wikitude also provides.
To work with this extension, we need to pass the Wikitude SDK an implementation of the ImageTrackerListener interface, which contains callbacks for target recognition, target loss, and so on, plus a reference to our rendering setup (either InternalRendering or ExternalRendering). In essence, with internal rendering the GLSurfaceView implementation is provided by the Wikitude SDK, while with external rendering all the OpenGL work falls on the developer's shoulders. In this mode the studio is used only to generate the wtc database of target images, which we then track.
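A minimal sketch of how such tracking callbacks are typically consumed. `TrackingCallbacks` and `OverlayController` are our illustrative stand-ins, not Wikitude's actual interface, whose exact method names differ:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in for the Native SDK's tracker callbacks
// (the real interface is Wikitude's ImageTrackerListener).
interface TrackingCallbacks {
    void onRecognized(String target); // target entered the camera view
    void onTracked(String target);    // pose update for a recognized target
    void onLost(String target);       // target left the view
}

// Tracks which overlays should currently be drawn, so the renderer
// (internal or external) only draws AR content for visible targets.
final class OverlayController implements TrackingCallbacks {
    private final Set<String> visible = new HashSet<>();

    @Override public void onRecognized(String target) { visible.add(target); }
    @Override public void onTracked(String target) { /* update the target's pose matrix here */ }
    @Override public void onLost(String target) { visible.remove(target); }

    boolean isVisible(String target) { return visible.contains(target); }
}
```

With external rendering, `onTracked` is where you would copy the pose matrix the SDK hands you into your own OpenGL draw call.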
This framework also supports Unity and the integration of C++ plugins.

EasyAR
EasyAR, unfortunately, does not provide any tools that make the developer's life easier. All we get is the SDK, instructions for running their examples, brief documentation describing the basic principles of object recognition, and a C++ reference. The latter is enough to get familiar with the SDK classes, because the whole Java layer is a wrapper over the C++ core.
The target objects are images plus their descriptions in JSON format. To get things working, we need a CameraDevice to provide access to the camera, a CameraFrameStreamer to feed camera data to the tracker, and the ImageTracker itself. To track targets, we continuously poll the tracker status.
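A simplified model of that polling loop, with hypothetical stand-in types in place of the real EasyAR classes (whose names and signatures may differ by SDK version):

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical stand-ins for EasyAR's pipeline
// (CameraDevice -> CameraFrameStreamer -> ImageTracker).
enum TargetStatus { DETECTED, TRACKED, LOST }

final class TargetInstance {
    final String name;
    final TargetStatus status;

    TargetInstance(String name, TargetStatus status) {
        this.name = name;
        this.status = status;
    }
}

interface FrameStreamer {
    // Latest tracking results for the current camera frame.
    List<TargetInstance> peek();
}

final class TrackerPoller {
    // Called once per render frame: collect the names of currently tracked
    // targets so the renderer knows which overlays to draw.
    static List<String> trackedTargets(FrameStreamer streamer) {
        return streamer.peek().stream()
                .filter(t -> t.status == TargetStatus.TRACKED)
                .map(t -> t.name)
                .collect(Collectors.toList());
    }
}
```

In the real SDK this poll happens on every draw call, which is why the render loop and the tracker end up so tightly coupled in their examples.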
In the provided example there is an entity, let's call it AREnvironment, which contains the SDK and tracker initialization, camera handling, and OpenGL setup. The same class also describes what to render and where (the result is a God object that mixes together most of the work). This AREnvironment aggregates a GLView, and the GLView in turn is driven from the activity's onResume and onPause:
```java
public class GLView extends GLSurfaceView {
    private com.example.developer.easyartest.AREnvironment AREnvironment;

    public GLView(Activity activity) {
        super(activity);
        …
        this.setRenderer(new GLSurfaceView.Renderer() {
            @Override
            public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                synchronized (AREnvironment) {
                    AREnvironment.initGL();
                }
            }

            @Override
            public void onSurfaceChanged(GL10 gl, int w, int h) {
                synchronized (AREnvironment) {
                    AREnvironment.resizeGL(w, h);
                }
            }

            @Override
            public void onDrawFrame(GL10 gl) {
                synchronized (AREnvironment) {
                    AREnvironment.render();
                }
            }
        });
    }
    ...
}
```
The documentation contains only the basic principles and a class reference. There are currently no comprehensive guides, so to figure out how to integrate the framework into your project you have to work through their examples. And in the examples the code is tangled: the rendering depends both on plain OpenGL and on their internal data structures, and it is also tightly coupled to the target images and the camera.
ARToolkit
We also wanted to look at existing open-source projects, and after some googling the choice fell on ARToolkit. We downloaded their examples, and the first thing that surprised us was the build-tool version:
```groovy
classpath 'com.android.tools.build:gradle-experimental:0.2.1'
```
An experimental version of the Gradle plugin, while the last stable release of the library was in March 2016? That looks a bit odd; maybe they just had not updated the samples. OK, we try to build the project, and the build fails. We probably needed to dig into the Gradle setup, but we did not allocate time for that; after all, we did not expect much from an open-source project. Skimming the project code makes it clear that all the AR work happens in the C++ code, while the rendering is done on the Java side.
This solution is probably usable, but it did not suit us because of its higher barrier to entry. However, if you want to look at the C++ internals or examine an AR framework from the inside, it is worth studying in more detail.
Vuforia
Vuforia, like Wikitude, is a pretty powerful tool. They also have an online studio, but it can only generate a database of target images and show the feature points used for recognition:

But unlike Wikitude, here you can load several kinds of targets: an image, a cuboid, a cylinder, or a 3D object.
In general, their examples are similar to EasyAR's, but rendering, the AR environment, and the application components are kept separate. That makes their code easier to understand, and it is not difficult to cut out individual components and reuse them in your project without diving into implementation details. In addition, this framework has more extensive documentation than EasyAR, although it is generalized across all platforms, with examples in a kind of shared pseudocode, unlike Wikitude (those guys went all-out with full-fledged guides for each platform). So the samples are there to help.
ARCore
ARCore deserves a mention. Google immediately warns that at the moment it is a preview: "ARCore is currently in preview. There might be breaking changes before the 1.0 release."
The framework currently supports motion tracking, environmental understanding, and light estimation. Rendering is done through OpenGL or Unity. Examples can be downloaded from the official site, developers.google.com/ar, but so far the range of devices on which ARCore runs is quite limited. It is too early to consider this solution for commercial use.
Let's sum up:

Thus, if you want to do something simple with a minimal barrier to entry, choose Wikitude; if you plan to render objects yourself, Vuforia or EasyAR will be cheaper; and if you want to dig at a lower level and understand how a framework works inside, ARToolkit is a good option.
Note that we evaluated these solutions primarily for target recognition and AR overlaying. Many other framework features were left out of scope: SLAM, cloud recognition, and so on. For more information, see the official sites.
Resources:
- www.wikitude.com
- www.artoolkit.org
- www.easyar.com
- www.vuforia.com
- developers.google.com/ar