
How a 64k intro is made today: a dive into Immersion



Last December we finally completed our project. The video above shows our latest work, the four-minute animation "Immersion". More precisely, it is a capture of what is usually called a 64k intro. But more on that later.


The project consumed the best of our spare time over the last two years. It all started during Revision 2015, a large event held every year in Germany over the Easter weekend. The two of us were chatting on the way from the hotel to the venue. The previous evening, the level of competition in the 64kB intro category had turned out to be high. Very high. The experienced and well-known Hungarian group Conspiracy had finally returned with a serious, impressive piece. Our best enemy, Approximate, was right on time with its release cycle and showed significant progress in storytelling. The prolific group Mercury had found a mature design style for an intro that left no doubt about its victory.

That year we came empty-handed and did not compete, but of course we wanted to return as soon as possible. After watching these high-quality intros, however, we wondered: beautiful graphics, excellent storytelling, wonderful design, how could we reach that level? I could not come up with a concept that, even perfectly executed, would beat all three of these competitors. Not to mention that our technical skills were below those of each of these groups. And so we walked along Hohenzollernstraße, trading ideas, until one of them clicked: a city rising out of the sea. That concept, done right, could perhaps compete at the level the intro scene had reached. Revision 2016, get ready, here we come!
Revision 2016 rushed past us; would we make it to Revision 2017? Alas, we missed that new deadline as well. When people asked at the event how things were going, we answered evasively: "The first part took us a year. Surely we can do the second part in 24 hours." We could not. We did release something, but the second part was done in a hurry, and it showed. So much so that we did not even come close to the winners. We kept working, poured in all the love the project needed, and then, finally, released the final version shown above.

What is a 64k intro?


A demo is a piece of digital art at the intersection of short films, music videos and video games. Although demos are non-interactive and often music-driven, like music videos, they are rendered in real time, like video games.

A 64-kilobyte intro, or 64k for short, is like a demo with an added size constraint: the intro must fit entirely into a single binary file of no more than 65536 bytes. No additional assets, no network, no extra libraries: the usual requirement is that it must run on a freshly installed Windows PC with up-to-date drivers.

How much data is that? Here is a point of comparison:


A 64kB screenshot: this JPEG image weighs 65595 bytes, 59 bytes over the 64kB limit.

Yes, that's right: the video shown at the beginning of this post fits entirely into a single file that takes less space than a screenshot from the video itself.

With numbers like these, it seems impossible to fit into a single binary all the images and sounds that are surely needed. We have already talked about some of the compromises we had to make and some of the tricks we used to make everything fit in such a small size. But that is not enough.

In fact, such extreme constraints rule out conventional techniques and tools altogether. We wrote our own toolchain, a challenge that is interesting in itself: textures, 3D models, animations, camera paths, music and so on are produced through algorithms, procedural generation and compression. We will talk about that soon.

Some numbers


Here is how we spent the 64kB available to us:



This chart shows how the 64kB are distributed among the different types of content, after compression.


This chart shows how the binary size evolved (excluding roughly 2kB for the depacker) up to the final release.

Design and inspiration


Having agreed that a submerged city would be the central theme, the first question was: what should this city look like? Where is it located, why is it submerged, what is its architecture? One simple answer addressed all of these questions: it could be the legendary lost city of Atlantis. That would also explain its emergence: by the will of the gods (a literal deus ex machina). And so it was decided.


An early concept of the submerged city. The artwork shown in this article was created by Benoît Molenda.

Two books guided our design decisions: Timaeus and Critias, in which Plato describes Atlantis and its fate. In Critias in particular, he describes the structure of the city in detail: its colors, the abundance of the precious orichalcum (which became an essential element of the temple scene), the overall shape of the city, and the main temple dedicated to Poseidon and Cleito. Since Plato apparently based his descriptions on the countries he knew, a mix of Greek, Egyptian and Babylonian styles, we decided to stick with that.



However, without sufficient knowledge of the topic, creating convincing ancient architecture seemed difficult. So we decided to recreate existing buildings:


The search for reference material on the temple of Artemis (the Artemision) turned out to be an unexpected, enriching experience. At first we looked only for photos, diagrams or maps. But once we learned the name of John Turtle Wood, everything gained more depth. Wood was the very person who searched for the location of the temple and eventually found it. Hoping that a search for his name would yield more results than just "Artemision", we immediately came across his book, written in 1877, in which he not only gives descriptions and sketches of the temple, but also recounts his eleven-year quest for the lost monument, his negotiations with the British Museum over funding, his relations with the local workers, and the diplomacy without which excavating at arbitrary locations would have been impossible.

These books were important for our design decisions, but above all, reading them made us, as individuals, appreciate the work on the project far more.





And by the way, what should the roof look like? Some sketches, including Wood's, show an opening in it; others do not; there is clearly some disagreement here. We chose a model with an open roof, which would let us illuminate the interior of the temple with a shaft of light. The illustrations above show the architectural plan and cross-sections of the building from the book Discoveries at Ephesus, which can be compared with our work-in-progress model of the temple.

Achieving the desired look


From the very beginning we knew that the look of the water would be critical to this intro. So we spent a lot of time on it, starting with watching reference material to understand the essential elements of underwater imagery. As you might guess, we drew inspiration from James Cameron's The Abyss and Titanic, from 3DMark 11, and, for the lighting, from Ridley Scott's Blade Runner.

Getting the feeling of being underwater right was not a matter of implementing and switching on some epic MakeBeautifulWater() function. It was the combination of many effects which, when properly tuned, would sell the illusion to us, the viewers, and make us feel underwater. But a single mistake is enough to destroy the illusion; we learned this lesson too late, when comments after the initial release pointed out exactly where the illusion breaks down.



As the illustrations show, we also explored various non-realistic and sometimes extreme palettes, but we did not know how to achieve such a look, so we fell back on a classic color scheme.

Water surface



Rendering the water surface relies on reflection from a flat plane. Reflection and refraction are first rendered into separate textures, using one camera mirrored on the other side of the water plane and another one above it. In the main pass, the water surface is rendered as a mesh with a material that combines reflection and refraction based on the normal vector and the view vector. The trick is to offset the texture coordinates according to the water surface normal in screen space. This technique is classic and well documented.
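To illustrate the idea, here is a minimal sketch of such a combining fragment shader. It is not the actual implementation: the uniform names, the distortion strength and the Fresnel approximation are all placeholders, and the reflection texture is assumed to be already rendered with the mirrored camera so it can be sampled with the same screen coordinates.

uniform sampler2D reflectionTex;   // rendered with the camera mirrored about the water plane
uniform sampler2D refractionTex;   // rendered with the camera above the water plane
uniform vec2 resolution;           // render target resolution
uniform vec3 cameraPos;

in vec3 worldPos;
in vec3 waterNormal;               // perturbed water surface normal
out vec4 fragColor;

void main()
{
    // Screen-space coordinates of the current fragment
    vec2 screenUV = gl_FragCoord.xy / resolution;

    // Offset the lookups with the surface normal to fake the wavy distortion
    vec2 distortion = waterNormal.xz * 0.02;
    vec3 reflection = texture(reflectionTex, screenUV + distortion).rgb;
    vec3 refraction = texture(refractionTex, screenUV + distortion).rgb;

    // Blend based on the angle between the view vector and the normal
    vec3 viewDir = normalize(cameraPos - worldPos);
    float fresnel = pow(1.0 - max(dot(viewDir, normalize(waterNormal)), 0.0), 5.0);

    fragColor = vec4(mix(refraction, reflection, fresnel), 1.0);
}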

At medium scale this works well, for example in the scene with the boat, but at larger scale, for example in the final scene where the temple emerges from the water, the result looks artificial. To make it convincing we used an artistic trick: applying a Gaussian blur to the intermediate textures. Blurring the refraction texture gives the water a murky look and a greater sense of depth. Blurring the reflection texture helps the sea look more choppy. In addition, using a stronger blur in the vertical direction imitates the vertical streaks you would expect from a water surface.


The blurred reflection of the temple on the water surface.

The animation uses simple Gerstner waves in the vertex shader, summing eight of them with random directions and amplitudes (within a given range). Smaller details are added in the fragment shader with another 16 wave functions. A fake backscattering effect based on the normal and the wave height brightens the wave crests, visible in the image above as small turquoise patches. For the launch scene a few additional effects were added, such as a rain-drop shader.
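As an illustration, here is a minimal sketch of summing Gerstner waves in a vertex shader. The function names, parameters and hard-coded wave values are placeholders for the sake of the example, not the values used in the intro.

uniform float time;

struct GerstnerWave {
    vec2  direction;   // normalized horizontal direction
    float amplitude;
    float steepness;
    float wavelength;
};

// Displacement of a point on the water plane caused by a single Gerstner wave
vec3 gerstner(vec2 xz, GerstnerWave w, float t)
{
    float k = 2.0 * 3.14159265 / w.wavelength;
    float c = sqrt(9.8 / k);                       // deep-water phase speed
    float phase = k * dot(w.direction, xz) - c * k * t;
    float q = w.steepness / (k * w.amplitude);     // how much the crests pinch

    return vec3(q * w.amplitude * w.direction.x * cos(phase),
                w.amplitude * sin(phase),
                q * w.amplitude * w.direction.y * cos(phase));
}

vec3 displaceWaterVertex(vec3 position)
{
    // The real shader sums eight waves with random directions and amplitudes;
    // two hard-coded ones keep the sketch short.
    GerstnerWave w0 = GerstnerWave(normalize(vec2(1.0, 0.3)), 0.4, 0.5, 8.0);
    GerstnerWave w1 = GerstnerWave(normalize(vec2(-0.4, 1.0)), 0.2, 0.4, 3.0);

    position += gerstner(position.xz, w0, time);
    position += gerstner(position.xz, w1, time);
    return position;
}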


An illustration of the water shader, available on Shadertoy.

Volumetric lighting


One of the first technical questions was: how do we make shafts of light in the water? Maybe a translucent billboard with a nice shader? One day we started experimenting with naive ray marching through the medium. We were delighted to see that even in an early rough rendering test, despite the poorly chosen colors and the lack of a decent phase function, the volumetric lighting was immediately convincing. At that point we abandoned the original billboard idea and never looked back.

This simple technique gave us effects we had not even dared to consider. When we added a phase function and experimented with it, the scenes started to feel real. Cinematically, it opened up many possibilities. The problem was performance.


The shafts of light give this scene a look inspired by Blade Runner.

It was time to turn the prototype into a proper effect, so we read Sébastien Hillaire's tutorial, his DICE presentation, and explored other approaches such as epipolar coordinates. In the end we settled on a simpler technique, similar to the one used in Killzone Shadow Fall (video), with a few differences. The effect is done in a single full-screen shader at half resolution:

  1. For each pixel, a ray is cast and its intersections with each cone of light are solved analytically.

    The math is described here. Performance-wise, it would probably be more efficient to use a bounding mesh for the light volume, but for a 64k we found the analytic approach simpler. Obviously, rays are not traced beyond the depth read from the depth buffer.
  2. When the ray intersects a cone of light, ray marching is performed over the volume inside it.

    The number of steps is limited for performance reasons, and a random offset is added to them to avoid banding. This is a typical case of trading banding for noise, which is less visually objectionable.
  3. At each step, the shadow map corresponding to the light is fetched, and the light contribution is accumulated according to a simple Henyey-Greenstein phase function.

    Unlike approaches based on epipolar coordinates, this technique allows a heterogeneous density of the medium, which would add variety, although we did not implement that effect.
  4. The resulting image is upsampled with a two-pass bilateral Gaussian filter and added on top of the main render buffer. Unlike the technique in Sébastien's tutorial, we do not use temporal reprojection; we simply use a large enough number of steps to keep visible artifacts at bay (8 steps at the low quality setting, 32 at the high quality setting). A rough sketch of the marching loop follows this list.
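As a rough illustration of steps 2 and 3, here is a minimal sketch of such a marching loop. It is not the actual implementation: the uniform names (shadowMap, lightViewProj), the bias and the anisotropy value are placeholders, and the cone entry and exit distances are assumed to come from the analytic ray-cone intersection of step 1.

uniform sampler2D shadowMap;   // hypothetical: the shadow map of the current light
uniform mat4 lightViewProj;    // hypothetical: world space to light clip space

// Henyey-Greenstein phase function
float henyeyGreenstein(float cosTheta, float g)
{
    float g2 = g * g;
    return (1.0 - g2) / (4.0 * 3.14159265 * pow(1.0 + g2 - 2.0 * g * cosTheta, 1.5));
}

// March the segment [tEnter, tExit] where the view ray crosses the cone of light
vec3 marchLightCone(vec3 rayOrigin, vec3 rayDir, float tEnter, float tExit,
                    vec3 lightPos, vec3 lightColor, float jitter)
{
    const int STEPS = 32;                        // 8 at the low quality setting
    float stepSize = (tExit - tEnter) / float(STEPS);
    float t = tEnter + jitter * stepSize;        // random offset: trades banding for noise
    vec3 scattered = vec3(0.0);

    for (int i = 0; i < STEPS; ++i)
    {
        vec3 p = rayOrigin + t * rayDir;

        // Fetch the shadow map to know whether this point is lit
        vec4 clip = lightViewProj * vec4(p, 1.0);
        vec3 shadowCoord = clip.xyz / clip.w * 0.5 + 0.5;
        float lit = (shadowCoord.z - 0.001) <= texture(shadowMap, shadowCoord.xy).r ? 1.0 : 0.0;

        // Accumulate in-scattered light weighted by the phase function
        vec3 toLight = normalize(lightPos - p);
        float phase = henyeyGreenstein(dot(rayDir, toLight), 0.3);
        scattered += lit * phase * lightColor * stepSize;

        t += stepSize;
    }
    return scattered;
}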


Volumetric lighting sets the mood and gives a distinctly cinematic look that would be hard to achieve any other way.

Light absorption


An instantly recognizable aspect of underwater images is absorption. As objects get farther away, they become less and less visible, their colors fading into the background until they disappear completely. Similarly, the volume affected by light sources shrinks, because light is quickly absorbed by the water as it travels.

This effect has great potential for a cinematic feel, and it is very easy to model. It is done in two steps in the shader. The first step applies a simple absorption function to the light intensity when accumulating the lights affecting an object, modifying the color and intensity of the light by the time it reaches surfaces. The second step applies the same absorption function to the final color of the object itself, this time based on its distance from the camera.

The code roughly follows this logic:

// Absorption of the light on its way from the source to the surface
vec3 lightAbsorption = pow(mediumColor, vec3(mediumDensity * lightDistance));
vec3 lightIntensity = distanceAttenuation * lightColor * lightAbsorption;

// Absorption of the lit color on its way from the surface to the camera
vec3 surfaceAbsorption = pow(mediumColor, vec3(mediumDensity * surfaceDistance));
vec3 surfaceColor = LightEquation(E, N, material) * lightIntensity * surfaceAbsorption;

A test of light absorption in the water medium. Notice how the color is affected by the distance to the camera and the distance to the light sources.

Adding vegetation


We definitely wanted seaweed. It was near the top of our list of typical underwater elements, but implementing it seemed risky. Such organic elements are hard to get right, and getting them wrong could destroy the sense of immersion. They had to have a convincing shape, be well integrated into their environment, and might even require some kind of subsurface scattering model.

One day, though, we felt ready to experiment. We started from a cube, scaled it, and placed a random number of copies in a spiral around an imaginary stem: from far enough away, this could pass for a long plant with many small branches. After adding plenty of noise to deform the model, the seaweed started to look almost decent.


A test frame with a few sparse plants.

However, when we tried to put these plants into a scene, we realized that performance dropped quickly as the number of objects grew, so we could not add enough of them to look convincing. Our young, unoptimized engine had hit its first bottleneck. So at the last minute we implemented rough frustum culling (the final version uses proper culling), which made the dense bushes seen in the demo possible.

With the right density and size (both following a normal distribution), and with details hidden by the dim lighting, the picture starts to look interesting. Experimenting further, we tried animating the seaweed: a noise function modulating the strength of an imaginary underwater current, an inverse exponential making the plants bend, and a sinusoid making their tips swirl in the current. While experimenting we stumbled upon a real gem: the shafts of light shining through the bushes, drawing shadow patterns on the seabed that fade away with distance from the camera. A sketch of this kind of vertex animation follows the image below.


Vegetation casting shadow patterns onto the seabed.
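Here is a minimal sketch of the seaweed animation idea described above, as it might look in a vertex shader. All names and constants (the noise() helper, the bending and swirling amounts) are illustrative placeholders, not the actual implementation.

uniform float time;

// Cheap value-noise placeholder; the real thing would be a proper noise function.
float noise(float x)
{
    return fract(sin(x * 12.9898) * 43758.5453);
}

vec3 animateSeaweed(vec3 position, float heightAlongStem)
{
    // Imaginary underwater current, its strength modulated by noise over time
    float current = 0.5 + 0.5 * noise(time * 0.2 + position.x);

    // Plants bend more towards their top: inverse-exponential falloff
    float bend = 1.0 - exp(-2.0 * heightAlongStem);

    // Tips swirl in the current with a sinusoid
    float swirl = sin(time * 1.5 + heightAlongStem * 4.0) * heightAlongStem;

    position.x += current * bend * 0.3 + swirl * 0.05;
    position.z += current * bend * 0.1 + swirl * 0.05;
    return position;
}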

Giving volume with particles


The final touch: particles. Look closely at any underwater footage and you will notice all sorts of suspended matter. Stop paying attention and it seems to disappear. We tuned the particles to be barely noticeable and to stay out of the way. Still, they give a sense of a volume filled with a tangible medium and help reinforce the illusion.

Technically it is fairly simple: in Immersion the particles are just instanced quads with a translucent material. We side-stepped the draw-order problem caused by translucency by assigning the position along one axis according to the instance ID; all instances are therefore always drawn in the correct order along that axis. The particle volume then has to be oriented correctly for each shot. In fact, for many shots we did not bother at all, because the small particle size and the darkness of the scene made noticeable sorting artifacts rare.
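To illustrate the instancing trick, here is a minimal vertex shader sketch with hypothetical uniforms and sizes, not the actual implementation. Positions are spread along one axis according to gl_InstanceID, so as long as the volume is oriented with that axis pointing away from the camera, instances are drawn back to front in the right order.

uniform mat4 viewProj;
uniform int instanceCount;
uniform float volumeDepth;      // extent of the particle volume along the sort axis

in vec2 quadVertex;             // corner of the unit quad being instanced

float hash(float x) { return fract(sin(x * 12.9898) * 43758.5453); }

void main()
{
    float id = float(gl_InstanceID);

    // Scatter particles randomly in X and Y...
    vec3 center;
    center.x = hash(id * 1.01) * 20.0 - 10.0;
    center.y = hash(id * 2.02) * 20.0 - 10.0;
    // ...but derive Z from the instance ID, so draw order matches depth order.
    center.z = (id / float(instanceCount)) * volumeDepth;

    // Expand the small quad around its center (no camera-facing for brevity)
    vec3 worldPos = center + vec3(quadVertex * 0.05, 0.0);
    gl_Position = viewProj * vec4(worldPos, 1.0);
}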


In this shot, the particles give a sense of depth and of the medium getting denser as we go deeper.

Music


How do you fit high-quality music into roughly 16kB? The problem is not new, and most 64k intros written since .the .product in 2000 use the same concepts. The original series of articles is fairly old but still relevant: The Workings of FR-08's Sound System.

In short, the idea is to have a score and a list of instruments. Each instrument is a function that generates sound procedurally (see for example subtractive synthesis and physical modelling synthesis). The score is the list of notes and effects to apply. It is stored in a format similar to MIDI, with some changes to reduce its size. The music is generated when the program runs.

The synthesizer also has a plugin version (VSTi) that the musician can use in their favorite composing tool. Once the music is written, the composer presses a button that exports all the data to a file, and we embed that data in the demo.


When the demo runs, it starts a thread that generates the music into a huge buffer. The synthesizer is CPU-heavy and is not guaranteed to run in real time. That is why we start the thread before the demo begins, while textures and other data are being generated.

Daniel Lindholm composed the music, using the 64klang synthesizer created by Dominic Ries.

The workflow


When making a demo, one of the most critical aspects is iteration time; in fact, this is true of many creative processes. Iteration time is king. The faster you can iterate, the more you can experiment, the more variations you can explore, the more you can refine your vision and raise the overall quality. So we want to remove every obstacle, every pause, every little friction from the creative process. Ideally, we want to be able to change anything, at any time, see the result immediately, and get continuous feedback while making changes.

One possible solution, used by many demo groups, is to build an editor and create all the content inside it. We did not. Our initial approach was to write C++ code and do everything inside Visual C++. Over time we developed several techniques that improved our workflow and reduced iteration time.

Hot reload all data


If there were only one piece of advice to take away from this article, it would be this: make all your data hot-reloadable. All of it. Make it possible to detect when data changes, load the new data when that happens, and update the program state accordingly.

Little by little we made all of our data hot-reloadable: shaders, cameras, the editing, all the time-dependent curves, and so on. In practice we usually had an editor open next to the running demo; whenever a file was modified, the change became visible in the demo instantly.

In a project as small as a demo, this is fairly simple to implement. Our engine keeps track of where each piece of data comes from, and a small function regularly checks whether the timestamps of the corresponding files have changed. When they have, it triggers a reload of that data.

In large projects, such a system can be far more complex, because dependencies and legacy design get in the way of such changes. But its impact on production is hard to overstate, so it is well worth the effort.

Tweakable values


Reloading data is all well and good, but what about the code itself? That is harder, so we tackled the problem in steps.

The first step was a clever trick that lets you change constant literals. Joel Davis described it in a post: a short macro that turns a constant into a variable, together with a bit of code that detects when the source file changes and updates the variable accordingly. Obviously, this auxiliary code is absent from the final binary, where only a constant remains, so the compiler can still apply all its optimizations (for example when the constant is set to 0).

This trick cannot be used in every situation, but it is very simple and can be integrated into a codebase in minutes. Moreover, although it is only meant for changing constants, it can also be used for debugging: to switch code paths or toggle features with conditions like if (_TV(1)).

C++ recompilation


Finally, the most recent step towards more code flexibility was integrating the Runtime Compiled C++ tool into our codebase. It compiles code as a dynamic library and loads it, together with some serialization, which makes it possible to modify that code and see the result at runtime, without restarting the program (in our case, the demo).

The tool is not perfect yet: its API is intrusive and constrains the code structure (classes must derive from an interface), and compiling and reloading the code still takes a few seconds. Yet the ability to change the logic inside the running demo and see the result gives a great deal of creative freedom. At the moment only our texture and mesh generators benefit from it, but in the future we want to extend it to all of the content-related code.

To be continued


This concludes the first part of what is meant to be a series of articles on the techniques used in the making of H – Immersion. We would like to thank Alan Wolfe for proofreading the article; his blog is full of interesting technical articles. In the next parts we will go into more detail about how the textures and meshes are created.

Source: https://habr.com/ru/post/352970/

