
The battlefield is augmented reality. Part III: engine capabilities, animation and POI

In the previous two articles we covered the basics of the process and started building our AR case study. Read them to get up to speed:

Part I: The Basics of Object Recognition
Part II: How to Recognize an Object and Show the 3D Model

We are pleased to present the final, third part, in which we take a closer look at the engine's capabilities and find out how to work with animation, captions, and geolocated POIs.

A few words about 3D models and animation


We have already seen the tool for uploading images for recognition. Wikitude also has an editor for preparing models and animations, called the Wikitude 3D Encoder.
It lets you import content from *.fbx or *.dae files and then export it to Wikitude's own *.wt3 format.

Image: a 3D model in the Wikitude 3D Encoder editor

We have already walked through the 3D model example, so let's talk about animation. An animation can either be exported from dedicated software or programmed by hand. In the first case, Autodesk Maya was used.

Image: an example of animation right in the editor; note that all available animations are listed on the right-hand side.

A bit of code never hurts:

// ModelAnimation: the animation names are the clips embedded in the *.wt3 file
this.animationDoorL = new AR.ModelAnimation(this.model, "DoorOpenL_animation");
this.animationDoorR = new AR.ModelAnimation(this.model, "DoorOpenR_animation");
this.animationEngine = new AR.ModelAnimation(this.model, "EngineWindow_animation");
this.animationHood = new AR.ModelAnimation(this.model, "Trunkopen_animation");

// start the matching animation depending on which part of the model was tapped
this.model.onClick = function( drawable, model_part ) {
    switch (model_part) {
        case 'WindFL':
        case 'DoorL[0]':
        case 'DoorL[1]':
        case 'DoorL[2]':
        case 'DoorL[3]':
            World.animationDoorL.start();
            break;
        case 'WindFR':
        case 'DoorR[0]':
        case 'DoorR[1]':
        case 'DoorR[2]':
        case 'DoorR[3]':
            World.animationDoorR.start();
            break;
        case 'Rear[0]':
        case 'Rear[1]':
        case 'WindR1[0]':
        case 'WindR1[1]':
            World.animationEngine.start();
            break;
        case 'Hood':
            World.animationHood.start();
            break;
    }
}

This starts the animations; the animation clips themselves are embedded in the *.wt3 file.
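
The animation names are the clip names the 3D Encoder lists for the model. If you need to react to an animation ending, for example to chain a second clip after it, the ModelAnimation should accept the usual AR.Animation triggers. A minimal sketch, assuming the onFinish callback and a hypothetical "DoorCloseL_animation" clip:

// Sketch: chain a second (hypothetical) clip once the first one has finished.
this.animationDoorL = new AR.ModelAnimation(this.model, "DoorOpenL_animation", {
    onFinish: function() {
        // "DoorCloseL_animation" is an assumed clip name; use whatever the 3D Encoder shows
        new AR.ModelAnimation(World.model, "DoorCloseL_animation").start();
    }
});
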
Let's vary the task and program an animation ourselves: we will make the model appear with an animation when the target image is recognized.

createAppearingAnimation: function createAppearingAnimationFn(model, scale) {
    /**
     * AR.PropertyAnimation animates a single property of the model from 0 to the
     * target scale over 1500 ms, using an 'easing curve' for a nicer effect.
     * The three axes are grouped so they run in parallel.
     */
    var sx = new AR.PropertyAnimation(model, "scale.x", 0, scale, 1500, {
        type: AR.CONST.EASING_CURVE_TYPE.EASE_OUT_ELASTIC
    });
    var sy = new AR.PropertyAnimation(model, "scale.y", 0, scale, 1500, {
        type: AR.CONST.EASING_CURVE_TYPE.EASE_OUT_ELASTIC
    });
    var sz = new AR.PropertyAnimation(model, "scale.z", 0, scale, 1500, {
        type: AR.CONST.EASING_CURVE_TYPE.EASE_OUT_ELASTIC
    });
    return new AR.AnimationGroup(AR.CONST.ANIMATION_GROUP_TYPE.PARALLEL, [sx, sy, sz]);
},

// create the appearing animation for our model
this.appearingAnimation = this.createAppearingAnimation(this.model, 0.045);
...

// start it as soon as everything is loaded and the target is visible
loadingStep: function loadingStepFn() {
    if (World.resourcesLoaded && World.model.isLoaded()) {
        if ( World.trackableVisible && !World.appearingAnimation.isRunning() ) {
            World.appearingAnimation.start();
        }
    }
},
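
AR.AnimationGroup can also play its members one after another. A small sketch, assuming the SEQUENTIAL group type, that first plays the appearing animation and only then opens the door:

// Sketch: run the appear animation and the door animation back to back.
var appearThenOpen = new AR.AnimationGroup(AR.CONST.ANIMATION_GROUP_TYPE.SEQUENTIAL,
    [World.appearingAnimation, World.animationDoorL]);
appearThenOpen.start();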

It remains to slightly adjust the tracker initialization functions, and we end up with the following example:

var World = {
    loaded: false,
    trackableVisible: false,
    resourcesLoaded: false,

    init: function initFn() {
        this.createOverlays();
    },

    createOverlays: function createOverlaysFn() {
        this.targetCollectionResource = new AR.TargetCollectionResource("assets/tracker.wtc", {
            onLoaded: function () {
                World.resourcesLoaded = true;
                World.loadingStep();
            },
            onError: function(errorMessage) {
                alert(errorMessage);
            }
        });
        this.tracker = new AR.ImageTracker(this.targetCollectionResource, {
            onTargetsLoaded: this.loadingStep,
            onError: function(errorMessage) {
                alert(errorMessage);
            }
        });
        this.model = new AR.Model("assets/car_animated.wt3", {
            onLoaded: this.loadingStep,
            scale: {
                x: 0,
                y: 0,
                z: 0
            },
            translate: {
                x: 0.0,
                y: 0.05,
                z: 0.0
            },
            rotate: {
                z: -25
            }
        } );
        this.animationDoorL = new AR.ModelAnimation(this.model, "DoorOpenL_animation");
        this.animationDoorR = new AR.ModelAnimation(this.model, "DoorOpenR_animation");
        this.animationEngine = new AR.ModelAnimation(this.model, "EngineWindow_animation");
        this.animationHood = new AR.ModelAnimation(this.model, "Trunkopen_animation");
        this.model.onClick = function( drawable, model_part ) {
            switch (model_part) {
                case 'WindFL':
                case 'DoorL[0]':
                case 'DoorL[1]':
                case 'DoorL[2]':
                case 'DoorL[3]':
                    World.animationDoorL.start();
                    break;
                case 'WindFR':
                case 'DoorR[0]':
                case 'DoorR[1]':
                case 'DoorR[2]':
                case 'DoorR[3]':
                    World.animationDoorR.start();
                    break;
                case 'Rear[0]':
                case 'Rear[1]':
                case 'WindR1[0]':
                case 'WindR1[1]':
                    World.animationEngine.start();
                    break;
                case 'Hood':
                    World.animationHood.start();
                    break;
            }
        }
        this.appearingAnimation = this.createAppearingAnimation(this.model, 0.045);
        var trackable = new AR.ImageTrackable(this.tracker, "*", {
            drawables: {
                cam: [this.model]
            },
            onImageRecognized: this.appear,
            onImageLost: this.disappear,
            onError: function(errorMessage) {
                alert(errorMessage);
            }
        });
    },

    removeLoadingBar: function() {
        if (!World.loaded) {
            var e = document.getElementById('loadingMessage');
            e.parentElement.removeChild(e);
            World.loaded = true;
        }
    },

    loadingStep: function loadingStepFn() {
        if (World.resourcesLoaded && World.model.isLoaded()) {
            if ( World.trackableVisible && !World.appearingAnimation.isRunning() ) {
                World.appearingAnimation.start();
            }
            var cssDivLeft = " style='display: table-cell;vertical-align: middle; text-align: right; width: 50%; padding-right: 15px;'";
            var cssDivRight = " style='display: table-cell;vertical-align: middle; text-align: left;'";
            document.getElementById('loadingMessage').innerHTML =
                "<div" + cssDivLeft + ">Scan CarAd Tracker Image:</div>" +
                "<div" + cssDivRight + "><img src='assets/car.png'/></div>";
        }
    },

    createAppearingAnimation: function createAppearingAnimationFn(model, scale) {
        var sx = new AR.PropertyAnimation(model, "scale.x", 0, scale, 1500, {
            type: AR.CONST.EASING_CURVE_TYPE.EASE_OUT_ELASTIC
        });
        var sy = new AR.PropertyAnimation(model, "scale.y", 0, scale, 1500, {
            type: AR.CONST.EASING_CURVE_TYPE.EASE_OUT_ELASTIC
        });
        var sz = new AR.PropertyAnimation(model, "scale.z", 0, scale, 1500, {
            type: AR.CONST.EASING_CURVE_TYPE.EASE_OUT_ELASTIC
        });
        return new AR.AnimationGroup(AR.CONST.ANIMATION_GROUP_TYPE.PARALLEL, [sx, sy, sz]);
    },

    appear: function appearFn() {
        World.removeLoadingBar();
        World.trackableVisible = true;
        if ( World.loaded ) {
            // Resets the properties to the initial values.
            World.resetModel();
            World.appearingAnimation.start();
        }
    },

    disappear: function disappearFn() {
        World.trackableVisible = false;
    },

    resetModel: function resetModelFn() {
        World.model.rotate.z = -25;
    },
};
World.init();

We have looked at two approaches to animation:

- created in advance and packed into the *.wt3 file;
- programmed by hand.

How to display a caption


The most common case is displaying text. It is quite simple:

modelAsset = new AR.Model("assets/car.wt3", {
    onLoaded: this.onModelLoaded,
    onClick: this.onModelClick,
    scale: {
        x: 0.07,
        y: 0.07,
        z: 0.07
    }
});

// spin the model around its vertical axis forever (start(-1) loops the animation)
onModelLoaded: function onModelLoadedFn() {
    var rotateAnimation = new AR.PropertyAnimation(modelAsset, 'rotate.heading', 0, 360, 1500);
    rotateAnimation.start(-1);
},

// on tap, write a caption into the HTML overlay
onModelClick: function onModelClickFn() {
    var e = document.getElementById('loadingMessage');
    e.innerHTML = " 10 !";
},

// index.html
<div>
    <div class="loadingMessage" id="loadingMessage">Loading ...</div>
</div>

Let's look at what we did:


As you can see, the model is loaded, and once loading completes it gets an animation that spins it around its axis.

When you tap the model, a caption appears in the window. The window also has an animation button; it shows up only once the object is recognized and animates the car in its own way.
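
By the way, the caption does not have to live in the HTML overlay. A minimal sketch, assuming Wikitude's AR.Label drawable, that renders the text right in the camera view next to the model (the label text here is just an example):

var caption = new AR.Label("Tap the car to open the doors", 0.1, {
    zOrder: 1,
    style: {
        textColor: '#FFFFFF',
        backgroundColor: '#00000088'
    }
});
var trackable = new AR.ImageTrackable(this.tracker, "*", {
    drawables: {
        // the label is drawn together with the model on the recognized image
        cam: [this.model, caption]
    }
});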

Geolocation and POI


POI: point of interest, an object marked with a pin.

In our case a POI stands for an object that is still too far away from us. Once we come within a 100-meter radius of it, it stops being a mere point in the world and turns into a model inside augmented reality, and we get the chance to interact with it.

The question arises: why not use the model immediately?

As you saw in Pokémon GO, Pokémon appear in front of you at a certain point. It turns out they are not tied to their own geolocation; they are tied to your geoposition, and for them you are the center of the universe, the point (0, 0, 0) from which everything is measured.

There are several reasons for this. The most obvious one is the GPS sensor that places objects by their geo coordinates: it can jump 50 meters back and forth. This used to be especially noticeable on a map, when you were first on one side of a building and then suddenly on the other.

The second nuance is that we display the POI as a 2D image whose size does not depend on the distance to the point. With 3D, on the other hand, you would have to compute the dimensions and change them dynamically as you approach, which in my opinion makes the job considerably harder.
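
In code the difference between the two placement approaches is just the location class you pick. A short sketch with the standard AR.GeoLocation and AR.RelativeLocation (the values are the same ones used later in the example):

// Fixed world coordinates: the object stays at this point and drifts with GPS error.
var fixedLocation = new AR.GeoLocation(59.000573, 30.334724, AR.CONST.UNKNOWN_ALTITUDE);

// Relative to the user: 4 m to the north, 4 m below eye level; the user is the
// origin (0, 0, 0), exactly like the Pokémon described above.
var relativeLocation = new AR.RelativeLocation(null, 4, 0, -4);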

First of all, let's start up location tracking:

// location updates, fired every time you call architectView.setLocation() in native environment
locationChanged: function locationChangedFn(lat, lon, alt, acc) {
    if (!World.initiallyLoadedData) {
        var indicatorImage = new AR.ImageResource("assets/indi.png");
        World.indicatorDrawable = new AR.ImageDrawable(indicatorImage, 0.1, {
            verticalAnchor: AR.CONST.VERTICAL_ANCHOR.TOP
        });
        World.targetLocation = new AR.GeoLocation(59.000573, 30.334724, AR.CONST.UNKNOWN_ALTITUDE);
        World.loadPoisFromJsonData();
        World.createModelAtLocation();
        World.initiallyLoadedData = true;
    }
    // store user's current location in World.userLocation, so you always know where user is
    World.userLocation = {
        'latitude': lat,
        'longitude': lon,
        'altitude': alt,
        'accuracy': acc
    };
    if (World.targetLocation)
    {
        World.stateOnDistance();
        var latDirection = World.targetLocation.latitude - World.userLocation.latitude;
        var lonDirection = World.targetLocation.longitude - World.userLocation.longitude;
    }
},

World.init();
AR.context.onLocationChanged = World.locationChanged;

Next, we need to do a few things:

- Add a point of interest. We do this in the loadPoisFromJsonData method.
- Add the 3D model. We have already done this above; now we just wrap it in the createModelAtLocation() function and attach it to a geo position.
- Track the distance to the given point and take the appropriate action.

// add the POI: a pin that marks the target location
loadPoisFromJsonData: function loadPoisFromJsonDataFn() {
    var markerAsset = new AR.ImageResource("assets/marker_idle.png");
    var markerImageDrawable_idle = new AR.ImageDrawable(markerAsset, 2.5, {
        zOrder: 0,
        opacity: 1.0
    });
    // geo object bound to the target location; drawn as the pin in the camera view
    // and as the indicator arrow at the edge of the screen
    World.targetPOIObject = new AR.GeoObject(World.targetLocation, {
        drawables: {
            cam: [markerImageDrawable_idle],
            indicator: [World.indicatorDrawable]
        }
    });
},

createModelAtLocation: function createModelAtLocationFn() {
    modelAsset = new AR.Model("assets/car.wt3", {
        onLoaded: this.onModelLoaded,
        onClick: this.onModelClick,
        scale: {
            x: 0.6,
            y: 0.6,
            z: 0.6
        },
    });
    // the model is placed relative to the user, not at fixed coordinates
    var relativeLoc = new AR.RelativeLocation(null, 4, 0, -4);
    World.targetGeoObject = new AR.GeoObject(relativeLoc, {
        drawables: {
            cam: [modelAsset],
            indicator: [World.indicatorDrawable]
        }
    });
},

// depending on the distance to the target, show either the 3D model or the POI pin
stateOnDistance: function stateOnDistanceFn() {
    var distance = World.targetLocation.distanceToUser();
    var e = document.getElementById('loadingMessage');
    if (distance < 100)
    {
        if (World.targetGeoObject != true)
        {
            World.targetGeoObject.enabled = true;
            World.targetPOIObject.enabled = false;
            e.innerHTML = "The car is nearby!";
        }
    }
    else
    {
        if (World.targetPOIObject != true)
        {
            World.targetGeoObject.enabled = false;
            World.targetPOIObject.enabled = true;
        }
        // distanceToUser() returns the distance in meters
        e.innerHTML = distance + " m";
    }
},
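
The same "closer than 100 meters" switch can also be written more declaratively. A sketch, assuming Wikitude's AR.ActionRange, which fires onEnter/onExit when the user crosses a radius around a location (the sample above does not use it; this is just an alternative):

// Sketch: toggle between the pin and the model when the 100 m radius is crossed.
var proximityRange = new AR.ActionRange(World.targetLocation, 100, {
    onEnter: function() {
        World.targetGeoObject.enabled = true;   // show the 3D model
        World.targetPOIObject.enabled = false;  // hide the POI pin
    },
    onExit: function() {
        World.targetGeoObject.enabled = false;
        World.targetPOIObject.enabled = true;
    }
});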

Image: at a distance, we see the POI pin.

Image: as we get closer to the object, we see the 3D model with a constant rotation animation.

Let's sum up our work on this case. Here is what we got:

- Image recognition.
- An animation overlaid on the 3D model at the recognized location.
- A 3D model that is geographically far away from us and is displayed as a POI.
- Transformation of the POI into a 3D model as we approach it.
- Handling taps and displaying captions.
- A 3D model that, instead of being geo-referenced, is attached to our own location and positioned relative to us.

At the same time, we had no problems at all with uploading images for recognition or loading the 3D models and animations. The whole process is intuitive and simple.

I hope you will find a useful application for everything I have described. I realize this is not the only possible approach, so I will be glad to hear your suggestions in the comments.

Augmented reality, peace, and chewing gum to all!

PS
Initially I did not even imagine that the topic would stretch into three articles. The paradox is that this is still not everything: the field is extremely interesting and challenging. Just yesterday Google announced that it is shutting down Google Tango, since it is no longer needed now that ARCore handles the job so well. And what is going on in the neural network market is a story of its own.


Author: Vitaly Zarubin, Senior Development Engineer at Reksoft

Source: https://habr.com/ru/post/345062/

