
How we stopped being afraid of Ogre and started making a game with it

How to make mistakes. 2011


As the heading implies, back in 2011 we made our third mistake: choosing Ogre3D as the basis for our game engine. The third, because the first mistake was the decision to make a game at all, and the second was to make it on our own engine. Fortunately, these were exactly the mistakes with which a fascinating story begins. It is an adventure in which we traveled almost the entire evolutionary path of game engines, the way an embryo passes through all the stages of evolution.
Of course, like all novice developers, we had little idea of what we were going to do and why. We were driven by the desire to tell our own story, to create our own fictional world, our own universe, and on the wave of MMO popularity the natural urge was to make our own MMO, with blackjack and all the trimmings. The urge struck back in 2010, and by 2011 the first version of the design document was ready. The earth was formless and empty, and darkness was over the abyss, and the Spirit of Fallout soared above us.



We went by trial and error, stepping on every rake along the way. Like most projects, we started with the simplest things. In terms of graphics (and I will talk only about the graphics side), the first version of the engine supported only a diffuse map and stencil shadows.


2011. One of the first screenshots.


Technologically, our graphics were then heavily oriented toward games like Torchlight. But the soul demanded more, because in parallel with the development of the graphics side of the engine an artistic search was under way.

By the fall of 2012 we had grown, graphics-wise, to using normal maps and speculars. The influence of DOOM 3 on the immature minds of novice developers was strong.

2012. Still as far from DOOM 3 as from Mars.


How to choose between a pipe and a jug. 2013


In the winter of 2013 the team gained a wonderful 3D artist and a charming graphics programmer. The lead artist's fantasies found a foothold, and the engine began to sprout graphical innovations. A gloss texture appeared (also known as a specular power map, glossiness, or shininess), along with cascaded shadow maps, DoF (depth of field), RIM lighting, and a heap of bugs. During this period the communication problems between different specialists showed especially clearly: developers with different backgrounds called the same things by quite different names, and everything had to be spelled out over and over.
More and more often, heated battles broke out over the engine's development path. With a limited budget and limited time, we had to choose between programming the gameplay part and the visual part. RIM lighting, for example, appeared as a compromise between the artist's desire to see more convincing metal, the 3D artist's desire to have reflections for it, and the engine's current capabilities (a sketch of the idea is below). The question of switching to a ready-made engine grew more acute: Unity3D was becoming ever more functional and popular, and rumors of humane licensing terms for UDK began to circulate.
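To make that compromise concrete, here is a minimal GLSL sketch of RIM lighting (not our production shader; the uniform names are illustrative). The idea is to brighten a surface where it curves away from the viewer, which reads as a cheap metallic sheen:

```glsl
// RIM lighting sketch: brightens surfaces turned away from the camera.
// normal and viewDir must be unit vectors in the same space (e.g. view space).
uniform vec3  rimColor;  // artist-tuned tint (illustrative name)
uniform float rimPower;  // falloff exponent, e.g. 2.0-4.0

vec3 rimLight(vec3 normal, vec3 viewDir)
{
    // The term peaks where the surface silhouette faces away from the viewer.
    float rim = 1.0 - clamp(dot(normal, viewDir), 0.0, 1.0);
    return rimColor * pow(rim, rimPower);
}
```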



Early 2013. The picture became a little livelier, but not by much.



Late 2013. The picture became livelier still.


How to run into trouble. 2013


In the autumn of 2013 we went to Kickstarter for the first time. Evidence of that sad experience even flashed by on Habr. We shut the campaign down within the first week, once it became obvious it would not take off. By that point MMOs had begun to irritate gamers: yet another "WoW clone" (the game was never planned as one, but there was no convincing gamers of that). As part of the post-mortem, we decided that we were now making a single-player RPG with a co-op mode.

Late 2013. Screenshot from the presentation scene.


How to gain freedom. 2014


The lead artist's fantasies demanded large and complex spaces. Realizing those fantasies required the lead 3D artist to operate not with five light sources but with far more of them.
The limit of 5 (actually 8, but the FPS sagged from the fifth onward) was due to our use of forward rendering.
Forward rendering is the standard rendering method that most engines use. Every object submitted to the video card passes through the full rendering path, and the vertex and pixel shaders are evaluated for each light source separately, even for pixels that will later be overdrawn by others. Each additional light source adds another iteration of calculations over the entire geometry: with eight light sources in a scene, about 9 million triangles get drawn for 1 million visible ones. This meant very low FPS in any complex location. (A sketch of the per-light cost is below.)
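A minimal GLSL sketch of why the cost scales this way (a single-pass many-lights variant; an engine doing one pass per light pays even more, but the per-light term is the same):

```glsl
// Forward lighting sketch: the full lighting math runs once per light for
// every shaded fragment, even fragments that are later overdrawn.
#define MAX_LIGHTS 8
uniform int   numLights;
uniform vec3  lightPosVS[MAX_LIGHTS];   // light positions in view space
uniform vec3  lightColor[MAX_LIGHTS];

vec3 shade(vec3 posVS, vec3 normalVS, vec3 albedo)
{
    vec3 result = vec3(0.0);
    for (int i = 0; i < numLights; ++i)  // cost grows linearly with lights
    {
        vec3 L = normalize(lightPosVS[i] - posVS);
        result += albedo * lightColor[i] * max(dot(normalVS, L), 0.0);
    }
    return result;
}
```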

The ghost of Crysis with its hundreds of light bulbs kept us awake at night. We decided to switch to deferred rendering (deferred shading). With a deferred renderer, a set of "intermediate" images is produced before any lighting is calculated: a color image, a depth image, and a normals image. Knowing the positions of the light sources, the depth of each pixel, and its normal, you can then compute the shading.
Compared to forward rendering, we got several goodies:
1) Higher FPS, since the geometry is rendered only once.
2) The ability to work with many light sources: adding a new light source has little effect on performance.
3) Faster execution of some kinds of post-processing, a more efficient implementation of soft particles, and the option of adding screen-space reflections. Soft particles, like the post-processing effects (DoF, SSR, SSAO), need depth and normal maps. Forward rendering does not produce these maps, and they have to be rendered separately; with deferred rendering they are handed to us on a silver platter (see the soft-particle sketch after this list).
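To illustrate that last point, a minimal soft-particles sketch in GLSL, assuming the depth map stores linear view-space depth (texture and uniform names are ours, not the engine's):

```glsl
// Soft particles: fade the particle out as it approaches opaque geometry,
// hiding the hard intersection line. Uses the scene depth map that the
// deferred pipeline already produces.
uniform sampler2D sceneDepthTex;  // linear view-space depth of the opaque scene
uniform float     fadeDistance;   // distance over which the particle fades

float softParticleFade(vec2 screenUV, float particleDepth)
{
    float sceneDepth = texture2D(sceneDepthTex, screenUV).r;
    // 0 where the particle touches geometry, 1 at fadeDistance or farther away
    return clamp((sceneDepth - particleDepth) / fadeDistance, 0.0, 1.0);
}
```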

Disadvantages:
1) Translucency. Translucent objects cannot be drawn within deferred shading, because a single pixel of each texture (normals, diffuse, etc.) would have to hold information about several overlapping objects. There are many ways to attack the problem, but most often just one is used: all translucent objects are rendered separately, with forward rendering.
2) Aliasing. With deferred shading, FSAA is unavailable, and all triangles are drawn with pronounced aliasing. Various methods are used to combat this, for example FXAA.
3) Higher demands on the video card's memory bandwidth.

We considered three options for the implementation of deferred lighting:
Option A:


The calculation of lighting is divided into two stages:
1. In the first stage, all opaque geometry is drawn into 4 textures (diffuse, specular, glossiness, normals, depth map and glow map):
- diffuse, specular and glossiness are taken directly from the geometry's textures;
- normals are extracted from the normal map applied to the geometry and converted into camera-space coordinates;
- the glow map is obtained by summing the self-illumination maps, RIM lighting, ambient diffuse lighting, and a weak fill light from the camera's direction.
2. Using the normal, depth, diffuse, specular and gloss maps, the diffuse and specular contribution of each light source is computed for every point and additively accumulated into the luminance map. In the same pass, every screen point is tested against the shadow maps.

This is the standard variant of deferred shading, implemented in most games (a simplified sketch is below).
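A simplified GLSL sketch of both stages, assuming GLSL 1.2-era MRT (gl_FragData) and view-space depth stored as a fraction of the far plane. The packing and all names are illustrative, not our exact layout:

```glsl
// ---- Stage 1: G-buffer fill (one MRT pass over all opaque geometry) ----
varying vec2  vUV;
varying vec3  vNormalVS;    // view-space normal (from the normal map in practice)
varying float vDepthVS;     // view-space depth
uniform sampler2D diffuseTex;
uniform sampler2D specGlossTex; // specular level in rgb, glossiness in a
uniform float farClip;

void main()
{
    gl_FragData[0] = texture2D(diffuseTex, vUV);                   // diffuse
    gl_FragData[1] = texture2D(specGlossTex, vUV);                 // spec + gloss
    gl_FragData[2] = vec4(normalize(vNormalVS) * 0.5 + 0.5, 0.0);  // normals
    gl_FragData[3] = vec4(vDepthVS / farClip, 0.0, 0.0, 0.0);      // depth
}
```

```glsl
// ---- Stage 2: one additive pass per light (light volume or full screen) ----
varying vec2 vUV;
varying vec3 vFarRay;      // interpolated view-space ray to the far plane
uniform sampler2D gDiffuse, gSpecGloss, gNormal, gDepth;
uniform vec3 lightPosVS, lightColor;

void main()
{
    vec3 N = texture2D(gNormal, vUV).xyz * 2.0 - 1.0;
    vec3 P = vFarRay * texture2D(gDepth, vUV).r;   // position from depth
    vec3 L = normalize(lightPosVS - P);
    vec3 V = normalize(-P);
    vec4 sg = texture2D(gSpecGloss, vUV);

    float diff = max(dot(N, L), 0.0);
    float spec = pow(max(dot(reflect(-L, N), V), 0.0), sg.a * 255.0) * sg.r;

    vec3 albedo = texture2D(gDiffuse, vUV).rgb;
    // Additive blending accumulates this into the luminance map.
    gl_FragColor = vec4((albedo * diff + vec3(spec)) * lightColor, 1.0);
}
```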

Option B:


The calculation of lighting is divided into 3 stages:
1. In the first stage, all opaque geometry is rendered into 1 texture (holding the normal, depth and gloss maps).
2. From the normal, depth and gloss maps, two maps are rendered: the diffuse and specular lighting accumulated from each light source.
3. In the third stage the final image is rendered: all opaque geometry is drawn once more, but each point's illumination is now computed as the diffuse lighting multiplied by the diffuse map + the specular lighting multiplied by the specular map + RIM lighting + the self-illumination map + the fill light from the camera.

The advantages of this method:
1) Lower video card bandwidth requirements.
2) Fewer computations per light source, since some operations move from the second stage to the third.
3) The third stage can be rendered with FSAA enabled, which improves picture quality.

The disadvantage of this method is that all the geometry has to be rendered twice. However, the second pass can run against the z-buffer already prepared in the first stage. (A sketch of the recombination pass is below.)
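A sketch of what the third stage might look like in GLSL (names are ours; the RIM and camera-light terms are noted in a comment rather than spelled out):

```glsl
// ---- Option B, stage 3: re-render geometry, combining the light maps ----
// lightDiffuseTex / lightSpecularTex were accumulated per light in stage 2.
varying vec2 vUV;
uniform sampler2D lightDiffuseTex, lightSpecularTex; // screen-space light maps
uniform sampler2D diffuseTex, specularTex, emissiveTex;
uniform vec2 screenSize;

void main()
{
    vec2 screenUV = gl_FragCoord.xy / screenSize;
    vec3 Ld = texture2D(lightDiffuseTex,  screenUV).rgb;
    vec3 Ls = texture2D(lightSpecularTex, screenUV).rgb;
    vec3 color = texture2D(diffuseTex,  vUV).rgb * Ld   // diffuse term
               + texture2D(specularTex, vUV).rgb * Ls   // specular term
               + texture2D(emissiveTex, vUV).rgb;       // self-illumination
    // RIM lighting and the camera fill light would be added here as well.
    gl_FragColor = vec4(color, 1.0);
}
```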

Option C:




The calculation of lighting is divided into 3 stages:
1. In the first stage, all opaque geometry is drawn into 4 textures (diffuse, specular, glossiness, normals, depth map and glow map, the glow map being the sum of the self-illumination maps, RIM lighting, ambient diffuse lighting and a weak fill light from the camera).
2. From the normal, depth and gloss maps, two maps are rendered: diffuse and specular lighting for each light source.
3. In the third stage the final image is rendered: each point's illumination is computed as the diffuse lighting multiplied by the diffuse map + the specular lighting multiplied by the specular map + the glow map.

This option is a mixture of the first two. Compared with option A we gain speed thanks to a feature taken from option B: the lighting pass makes only one texture fetch instead of four (see the packing sketch below).
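A sketch of how normal, depth and gloss can be packed into a single RGBA target so the lighting pass pays one fetch instead of four (illustrative only; a real engine would want more depth precision than one 8-bit channel, e.g. a float target):

```glsl
// Pack view-space normal + depth + gloss into one RGBA texel.
// normal.z is dropped and reconstructed from xy: in view space visible
// normals face the camera, so its sign is known. Precision traded for fetches.
vec4 packGBuffer(vec3 normalVS, float depth01, float gloss)
{
    return vec4(normalVS.xy * 0.5 + 0.5, depth01, gloss);
}

void unpackGBuffer(vec4 g, out vec3 normalVS, out float depth01, out float gloss)
{
    normalVS.xy = g.xy * 2.0 - 1.0;
    normalVS.z  = -sqrt(max(1.0 - dot(normalVS.xy, normalVS.xy), 0.0));
    depth01     = g.z;
    gloss       = g.w;
}
```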

We have now implemented the first version of deferred lighting. All the channels used to compose the final image can be displayed in debug mode.



After the final image is rendered, post-processing is applied: this is where the depth-of-field effect and color correction are done. Reflections are drawn at the same stage, using SSR. (A post-processing sketch is below.)
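A minimal sketch of that post-processing step in GLSL, with DoF done as a blend between the sharp frame and a pre-blurred copy, weighted by a circle of confusion computed from the depth map (names and the exact blend are illustrative, not our shader):

```glsl
// Post-processing sketch: depth-of-field blend plus a color-correction hook.
varying vec2 vUV;
uniform sampler2D sceneTex;         // sharp frame
uniform sampler2D sceneBlurredTex;  // pre-blurred copy of the frame
uniform sampler2D depthTex;         // linear depth
uniform float focusDepth, focusRange;

void main()
{
    float depth = texture2D(depthTex, vUV).r;
    // Circle of confusion: 0 in focus, 1 fully out of focus.
    float coc   = clamp(abs(depth - focusDepth) / focusRange, 0.0, 1.0);
    vec3  color = mix(texture2D(sceneTex, vUV).rgb,
                      texture2D(sceneBlurredTex, vUV).rgb, coc);
    // Color correction would follow here (curves, a LUT, etc.).
    gl_FragColor = vec4(color, 1.0);
}
```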

SSR (Screen Space Reflection) is an algorithm for creating realistic reflections in a scene using only data that has already been rendered to the screen. Briefly: a ray is cast from the camera to its intersection with the scene. Using the normal at the intersection point, the reflection direction is computed. Along that reflected ray, the depth map is traced until the ray hits some geometry; the luminosity of the found point is taken as the result, multiplied by the specular value of the reflecting point, and written into the reflecting point's luminosity.
Two Screen Space Reflection algorithms are implemented at the moment:
1) Tracing in camera coordinates: slow, but gives a correct picture.
2) Tracing in texture coordinates: fast, but produces errors at grazing angles (a sketch of this variant is below).
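A minimal sketch of the texture-space variant, assuming a linear depth map and a precomputed per-pixel step (all names are ours):

```glsl
// SSR sketch, texture-space march: step the reflected ray through the depth
// buffer until it falls behind stored geometry, then take that pixel's color.
uniform sampler2D depthTex;  // linear scene depth
uniform sampler2D sceneTex;  // already-lit scene color
const int MAX_STEPS = 64;

vec3 traceReflection(vec2 startUV, vec2 stepUV, float startDepth, float stepDepth)
{
    vec2  uv       = startUV;
    float rayDepth = startDepth;
    for (int i = 0; i < MAX_STEPS; ++i)
    {
        uv       += stepUV;        // fixed step in texture space
        rayDepth += stepDepth;     // matching step in depth
        if (uv != clamp(uv, 0.0, 1.0))
            return vec3(0.0);      // ray left the screen: no reflection data
        if (rayDepth >= texture2D(depthTex, uv).r)
            return texture2D(sceneTex, uv).rgb;  // hit: found point's color
    }
    return vec3(0.0);              // miss
}
// The result is then multiplied by the reflecting point's specular value and
// added to its luminosity, as described above.
```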


2014. Presentation diorama with reflections enabled.





We use our own game and network engines. The graphics engine is Ogre3D, the physics engine is Bullet. Scripting: Lua, C# (Mono). During development we had to modify Ogre3D heavily and debug its integration with Blender. We plan to contact the Ogre developers and offer to include our improvements in future Ogre builds.


Programming languages used: C++, PHP, Lua, C#, Python, Java, Groovy, Cg, GLSL, HLSL.

Source: https://habr.com/ru/post/228139/

