
Experimenting with AR: when C# meets CSS


Often, when working on a project, the greatest technical difficulties arise where you least expect them. In my case, this happened while I was working with Google Creative Lab on a prototype experiment bringing Grace VanderWaal's song Moonlight into augmented reality. We liked the idea of surrounding the viewer with the song's beautiful handwritten lyrics, unfolding and floating in space as you move through them.

Our AR lyrics in the real world

I was the coder on the project, and when I started the prototype, it seemed to me that the hardest part would be the AR itself. Placing objects in AR and keeping their positions stable must require an enormous amount of complex math, right? In fact, thanks to ARCore, which took on the main difficulties, that part turned out to be fairly trivial. Creating an animated handwriting effect in 3D space, however, was not so simple.

In this post, I'll talk about some tricky hacks I used to solve the problem of animating our two-dimensional handwritten text in 3D space using Unity's LineRenderer component, with enough speed to handle even thousands of points.

Concept


When we thought about possible musical experiments to conduct in AR, we were intrigued by the popularity of lyric videos. Far from simple karaoke-style captions, these videos look amazing, and producing them can take as much effort and aesthetic attention as any other music video. You may have seen Katy Perry's emoji-filled video for Roar, or Taylor Swift's Saul Bass-style video for Look What You Made Me Do.
We stumbled upon Grace VanderWaal's lyric videos and liked their style. They often feature her handwriting, so we wondered: could we create a video with handwritten text, but embed it in AR? What if the song wrote itself around the viewer as it was sung, not as flat images, but as graceful, soaring lines with a three-dimensional feel?

We instantly fell in love with this concept and decided to try to implement it.

Getting points


To begin with, I analyzed the data we would need to simulate the effect of text being written by hand.

We didn't want to simply place images of the text in the scene; we needed the text to feel physically present, something the viewer could walk around and examine. If we just placed 2D images in 3D space, they would disappear when viewed from the side.


Top left and right: a PNG image in the Unity scene. It looks fine until you start moving around it; viewed from the side, it almost disappears, and up close it can look pixelated.
Bottom left and right: 3D point data used to draw a line in space. Viewed from the side, it retains a sense of volume. We can walk around and even through the lines, and they still read as 3D objects.

So, no images: I would have to break the handwritten text down into data points that could be used to draw lines.

To make it all look like handwriting, I also needed to know the order in which the points were drawn. When writing, strokes double back and cross each other, so the points on the left don't necessarily appear before the points on the right. But how do we get this ordered data? We can't extract it from a pixel image: sure, you can read the pixel color values into an (x, y) array, but that won't tell us which point should be drawn first.


Behold, programmer graphics! Suppose we're trying to draw the word “hello” from our data, and the dotted line marks the point we've reached so far. The example on the left shows what happens if we simply read the PNG data and draw the colored pixels on the left before those on the right. It looks very strange! Instead, we want to simulate the example on the right, and for that we need the order in which the points were drawn.

Vector data, however, consists of ordered points by definition. So PNG is out, but SVG data will do just fine.

But is Unity capable?

Point drawing


As far as I know, Unity has no native support for extracting point data from SVG. Those who want to use SVG in Unity seem to have to rely on (often expensive) third-party assets with varying levels of support. Moreover, these assets seem focused on displaying SVGs rather than extracting point and path data from the files. As mentioned above, we didn't want to render SVG in space: we just needed the point data in an ordered array.

After mulling this over for a while, I realized that while Unity doesn't support SVG, it fully supports loading XML through several standard C# classes (for example, XmlDocument). And SVG is really just an XML-based vector image format. Could I load SVG data simply by changing the extension from .svg to .xml?

Surprisingly, the answer was positive!

So we built the following workflow: our artists drew the text as paths in Illustrator, I simplified those paths into straight line segments, exported the path data as SVG, converted it to XML (literally just changing the file extension from .svg to .xml), and loaded it into Unity without the slightest problem.
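In case it's useful, here is a minimal sketch of what that loading step can look like. This is not our production code: the `polyline` element name, the `"x,y x,y …"` format of the `points` attribute, and the `LyricLoader` class are assumptions about how Illustrator exported our simplified paths, so adjust them to match your actual markup.

```csharp
// Sketch: loading the renamed SVG (.xml) with the standard XmlDocument class
// and pulling the ordered point data out of each <polyline points="…"> node.
using System.Collections.Generic;
using System.Globalization;
using System.Xml;
using UnityEngine;

public static class LyricLoader
{
    public static List<Vector3[]> LoadStrokes(TextAsset xmlAsset)
    {
        var doc = new XmlDocument();
        doc.LoadXml(xmlAsset.text);

        var strokes = new List<Vector3[]>();
        foreach (XmlNode node in doc.GetElementsByTagName("polyline"))
        {
            string[] pairs = node.Attributes["points"].Value
                .Split(new[] { ' ' }, System.StringSplitOptions.RemoveEmptyEntries);

            var points = new Vector3[pairs.Length];
            for (int i = 0; i < pairs.Length; i++)
            {
                string[] xy = pairs[i].Split(',');
                // SVG's y axis points down while Unity's points up, so flip y.
                points[i] = new Vector3(
                    float.Parse(xy[0], CultureInfo.InvariantCulture),
                    -float.Parse(xy[1], CultureInfo.InvariantCulture),
                    0f);
            }
            strokes.Add(points);
        }
        return strokes;
    }
}
```

The key point is simply that standard .NET XML parsing is enough: no SVG asset package is needed just to get an ordered array of points per stroke.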


Left: one of the SVGs created by our artists.
Right: the XML data. Each curve is simplified into a polyline. For practical and aesthetic reasons, we decided that all the words of the text would start drawing simultaneously.

I did this, and was pleased to see that we could easily load the data into Unity and hand it to a LineRenderer without any problems. Better still, since LineRenderer has its billboard effect enabled by default, the result looked like a line with 3D volume. Hooray! Handwriting in AR! Problem solved, right? Well, not quite…
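For reference, handing an ordered point array to a LineRenderer is a one-call affair. The sketch below is an illustrative reconstruction, not our exact code; `StrokeBuilder` and `strokeMaterial` are hypothetical names.

```csharp
// Sketch: building one stroke as a LineRenderer from an ordered point array.
// SetPositions assigns all points at once; the count is set a single time.
using UnityEngine;

public class StrokeBuilder : MonoBehaviour
{
    public Material strokeMaterial;  // hypothetical material asset

    public LineRenderer BuildStroke(Vector3[] points)
    {
        var line = new GameObject("Stroke").AddComponent<LineRenderer>();
        line.material = strokeMaterial;
        line.widthMultiplier = 0.01f;
        line.alignment = LineAlignment.View;  // the default billboard effect
        line.positionCount = points.Length;   // set once, up front
        line.SetPositions(points);
        return line;
    }
}
```

Note that `LineAlignment.View` is what makes the flat line face the camera from every angle, giving it the apparent 3D volume described above.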

Implementing handwriting animation


So, I had managed to position handwriting floating in the air; now I “only” had to animate its writing.

In my first attempt, I took the LineRenderer and wrote a script that gradually added points to it to create the animation effect. I was amazed at how badly it slowed the application down.

It turns out that adding points to a LineRenderer in real time is a very computationally expensive operation. I had wanted to avoid parsing complex SVG curves, so I had simplified the path data into polylines, but that took many more points to preserve the curvature of the strokes. I had hundreds, sometimes thousands, of points per block of text, and Unity was not at all happy about me changing the LineRenderer's data dynamically. Our target platform was mobile, which made the slowdown even worse.
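For context, the first (slow) attempt looked roughly like the sketch below. This is a reconstruction for illustration, assuming one point added per frame; the field names are made up.

```csharp
// Roughly what the naive approach looked like: growing the line point by
// point. Changing positionCount every frame forces the line's geometry to
// be rebuilt, which crawled on mobile with thousands of points.
using UnityEngine;

public class NaiveWriter : MonoBehaviour
{
    public LineRenderer line;
    public Vector3[] allPoints;  // the full ordered stroke
    int drawn;

    void Update()
    {
        if (drawn >= allPoints.Length) return;
        drawn++;
        line.positionCount = drawn;                    // expensive every frame
        line.SetPosition(drawn - 1, allPoints[drawn - 1]);
    }
}
```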

So dynamically adding points to the LineRenderer was out. But how could we achieve the animation effect without it?

Unity's LineRenderer is known to be an unyielding component, and I could, of course, have spared myself all this work by buying a third-party asset. But, as with the SVG asset packages, many of them were some combination of expensive, over-complicated, or ill-suited to our task. Besides, as a coder I was intrigued by the problem and wanted to solve it for the sheer pleasure of it. It seemed to me there had to be a simple solution using the components I got for free.

I mulled the problem over, scouring the Unity forums. Every search came up empty. I banged my head against the wall, producing several half-finished solutions, until I realized I had run into this problem before, in a completely different field.

Namely, in CSS.

Animating points


I remembered reading about this issue a few years ago on Chris Wong's blog, where he described in detail how he built NYC Taxis: A Day In The Life. He animated a taxi moving around a map of Manhattan, but didn't know how to make the taxi leave a trail on the map (in SVG format).

He found, however, that he could manipulate the line's stroke-dasharray property to achieve the effect. This property turns a solid line into a dashed one, controlling the length of the dashes and of the gaps between them. A gap of 0 means there is no space between the dashes, so they look like a solid line; increase the value and the line breaks up into dashes and gaps. With some clever transitions, he managed to animate the line without dynamically adding points.

Manipulating the dash array lets you break a solid line into fragments of colored dashes and gaps. The animation is taken from Jake Archibald's great interactive demo.

In addition to stroke-dasharray, CSS coders can also manipulate stroke-dashoffset. According to Jake Archibald, stroke-dashoffset controls “where along the path the first ‘dash’ of the dash pattern created by stroke-dasharray begins.”

What does that mean? Suppose we set stroke-dasharray so that both the colored dash and the empty gap are stretched to the full length of the line. With a stroke-dashoffset of 0, our line is fully colored. But as we increase the offset, we push the start of the dash farther and farther along the path, leaving empty space behind it. And we get an animated line!
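Since this is the web-side half of the trick, here is what it looks like in a few lines of CSS. This is a standalone sketch, not code from our project: the `.lyric-path` class, the `draw` keyframe name, and the path length of 300 are all assumptions (in practice you would query the real length with `getTotalLength()`).

```css
/* One dash and one gap, each as long as the whole path.
   Animating the offset from "fully hidden" to 0 makes the
   stroke appear to draw itself along the path. */
.lyric-path {
  stroke-dasharray: 300;    /* dash = gap = full path length (assumed) */
  stroke-dashoffset: 300;   /* start with the dash pushed entirely off */
  animation: draw 2s linear forwards;
}

@keyframes draw {
  to { stroke-dashoffset: 0; }  /* slide the dash in along the path */
}
```

The direction can of course be reversed to make the line erase itself instead; either way, no points are ever added or removed.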


If we increase the dash-array value to the maximum, we can use the offset to make the line look animated. The animation is taken from Jake Archibald's great interactive demo.

Obviously, though, we have no stroke-dasharray or stroke-dashoffset in C#. But we can manipulate the tiling and offset of the material used by our shader. The same principle applies here: if we have a texture like a dashed curve, one part colored and the other transparent, we can manipulate the texture's tiling and offset to slide it smoothly along the line, transitioning from colored to transparent without any manipulation of points at all!


My material is half colored (white) and half transparent. Manually changing the offset makes it look as if the text is being written. (In the app, we drive the shader with a simple call to SetTextureOffset.)

And that's exactly what we did! Knowing when each word starts being written, and how long the writing should take, I could simply interpolate the offset linearly based on how close we were to the completion time. No manipulation of point values required!
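A minimal sketch of that interpolation, under some assumptions: the class and field names are hypothetical, the texture's colored half is assumed to sit so that an offset of 0.5 hides the line and 0 reveals it (the exact values depend on how the texture is laid out), and the main texture is assumed to use the conventional `_MainTex` property name.

```csharp
// Sketch: animating the handwriting by lerping the material's texture
// offset between a word's start time and its writing duration.
using UnityEngine;

public class HandwritingAnimator : MonoBehaviour
{
    public LineRenderer line;
    public float startTime;      // when this word begins writing (seconds)
    public float writeDuration;  // how long the writing should take

    void Update()
    {
        // 0 at startTime, 1 once the word is fully written.
        float t = Mathf.Clamp01((Time.time - startTime) / writeDuration);

        // Slide the half-transparent texture along the line: the colored
        // half progressively covers the stroke, "writing" it.
        float offset = Mathf.Lerp(0.5f, 0f, t);
        line.material.SetTextureOffset("_MainTex", new Vector2(offset, 0f));
    }
}
```

Because this touches only a material property each frame, the LineRenderer's geometry never changes, which is what makes it cheap on mobile.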

Our speed and frame rate soared back up, and we got to watch the AR text smoothly and elegantly write itself across the real world.

In the real world! We started experimenting with z-indexes and putting text on different layers to give it a greater sense of presence in space.

I hope you enjoyed this little excursion into taming a Unity component famous for its stubbornness in order to achieve a beautiful, low-cost effect. Want to see more examples exploring AR in art, dance, and music? Watch our trio's video. You can find more of our AR experiments on our Experiments platform, where you can also try out ARCore. Happy linear interpolating!

Source: https://habr.com/ru/post/352738/

