Total Annihilation holds a special place in my heart because it was my first RTS; together with Command & Conquer and StarCraft, it is one of the very best RTS games of the second half of the 90s.
Ten years later, in 2007, its successor was released: Supreme Commander. Since several of Total Annihilation's key creators (designer Chris Taylor, engine programmer Jonathan Mavor and composer Jeremy Soule) worked on the game, fan expectations were very high.
Supreme Commander was warmly received by critics and players thanks to its interesting features, such as “strategic zoom” and physically realistic ballistics.
Let's see how the SupCom engine, called Moho, renders a frame of the game.
RenderDoc does not support DirectX 9 games, so the reverse engineering was done with good old PIX.
Terrain structure
Before diving into frame rendering, it is important to first explain how the terrain is built in SupCom and which techniques are used.
Here is the 1v1 battle map Finn's Revenge. This is a top-down view of the whole map; on the in-game minimap it looks like this:
Below is the same map from a different angle:

First, the terrain geometry is computed from a height map. The height map describes the elevation of the terrain: white means high ground, dark means lower ground. For our map a single-channel 513x513 image is used, representing 10x10 km in the game. SupCom supports much larger maps, up to 81x81 km.
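As an illustration, here is a minimal sketch of turning such a height map into terrain vertex positions. The scale values (map size, maximum height) are hypothetical; the real Moho engine's units and tessellation scheme are not documented here.

```python
# Sketch: building terrain vertices from a height map.
# All scale factors below are illustrative, not values from the engine.
def heightmap_to_vertices(heights, map_size_km=10.0, max_height=0.5):
    """heights: N x N list of values in [0, 1]; returns (x, y, z) tuples."""
    n = len(heights)
    spacing = map_size_km / (n - 1)  # distance between adjacent samples
    vertices = []
    for row in range(n):
        for col in range(n):
            x = col * spacing
            z = row * spacing
            y = heights[row][col] * max_height  # white = high, dark = low
            vertices.append((x, y, z))
    return vertices

# A tiny 3x3 height map: flat except for a bump in the centre.
hm = [[0.0, 0.0, 0.0],
      [0.0, 1.0, 0.0],
      [0.0, 0.0, 0.0]]
verts = heightmap_to_vertices(hm)
```

A real 513x513 map would produce 263169 vertices the same way, with the CPU deciding how densely to tessellate between samples.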

So we have a mesh representing the terrain. The game then applies an albedo texture combined with a normal texture to cover all these polygons. Each map also specifies a sea level, so the game modulates the albedo colour of pixels below the sea surface, giving them a blue tint.
(Translator's note: the changes here and below are easier to see in the animations in the original article.) Height-based texturing is all well and good, but it quickly reaches its limits.
How can we add more detail and variation to the map?
Here a technique called texture splatting comes into play: the game draws sets of additional albedo + normal textures, each pass adding a new "layer" on top of the terrain.
We already have layer 0: the terrain with its original albedo + normal textures.
To apply a new layer we need extra information: a weight map telling us where to draw the new albedo + normals and, more importantly, where not to draw them! Without such a weight map, also called an alpha map, a new layer would completely overwrite the previous one. When applied to the mesh, the albedo and normal textures each have their own scaling factor.
So we applied layers 1, 2, 3 and 4, each based on 3 separate textures. The albedo and normal textures each use 3 channels (RGB), while a weight map uses only one. As an optimization, the 4 weight maps are therefore packed into a single RGBA texture.
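The layering described above can be modelled as a per-pixel lerp chain: each layer is blended over the running result using its weight map value. This is an illustrative sketch, not the engine's actual shader code.

```python
# Sketch of texture splatting: blend successive albedo layers over a base
# colour, each weighted by its own weight ("alpha") map value in [0, 1].
def splat(base, layers, weights):
    """base: (r, g, b); layers: list of (r, g, b); weights: list of floats."""
    result = base
    for colour, w in zip(layers, weights):
        # weight 0 keeps the previous layers, weight 1 fully overwrites them
        result = tuple(r * (1.0 - w) + c * w for r, c in zip(result, colour))
    return result
```

With four layers, the four weight values for a pixel are simply the R, G, B and A channels of the packed weight texture.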

Great, we now have more texture variation for the terrain. It looks good from afar, but zoom in and you quickly notice the lack of high-resolution detail.
This is where decals come into play: small sprites that locally modify the albedo colour and the pixel normals. There are 861 instances of 21 unique decals on this terrain.
Much better already, but what about vegetation? The next step is to add what the engine calls props to the terrain: models of trees or rocks. There are 6026 instances of 23 unique models on this map.
And now the final touch: the sea surface. It is a combination of several normal maps with UVs scrolling in different directions, an environment map for reflections, and sprites for the waves along the coastline.
With that, the terrain is done. Creating good height maps and weight maps can be a challenge for map designers but, fortunately, there are tools to help with this work: the official Supcom Map Editor, and World Machine with even more features.
Now that you know the theory behind SupCom terrain, let's move on to the game frame itself.
Frame breakdown
Here is the frame that we will analyze:

View frustum culling
The game stores in RAM the terrain mesh created from the height map; it is tessellated on the CPU and the position of every vertex is known. When the zoom level changes, the CPU recomputes the terrain tessellation.
Our camera looks at a scene near the shore. Rendering the entire terrain would waste computational resources, so instead the engine selects a sub-mesh of the terrain, only the part visible to the player, and sends this smaller subset of data to the GPU for rendering.
Normal map
First, only the normals are computed. The first pass calculates the normals obtained by combining 5 layers (5 normal maps and 4 weight maps). The different normal maps are blended together; all operations are performed in tangent space.

The computation is done in a single draw call with 6 texture fetches. You may notice the result looks yellowish, unlike typical normal maps which usually have a blue tint. Indeed: the blue channel is not used at all here, only red and green are.
But wait, a normal is a three-component vector, so how can it be stored in just two components? In fact a compression trick is used (it is discussed at the end of the post), so let's assume for now that the red and green channels contain all the necessary information about the normals.
Now that the layers are done, it is time for the decals: terrain and building decals are applied, modifying the layer normals.
We still haven't used the blue and alpha channels of our render target.
So the game reads from a 512x512 texture containing all the terrain normals (baked from the original height map) and computes, with bicubic interpolation, a normal for each pixel. The result is stored in the blue and alpha channels.

The game then combines these two sets of normals (the layer/decal normals and the terrain normals) into the final normals used to compute the lighting.

No compression is performed here: the normals use the 3 RGB channels, one per component.
The map may look very green, but that is because the scene is rather flat, so the result is correct: you can take any pixel and recover its normal vector with the formula colorRGB * 2.0 - 1.0; you can also verify that the vector's length is 1.
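The decoding formula can be demonstrated in a few lines (a toy sketch, not engine code):

```python
import math

# Map an RGB colour in [0, 1] back to a normal vector in [-1, 1].
def decode_normal(rgb):
    return tuple(c * 2.0 - 1.0 for c in rgb)

# The "flat, facing up" colour (0.5, 0.5, 1.0) decodes to (0, 0, 1),
# and a valid decoded normal should have unit length.
n = decode_normal((0.5, 0.5, 1.0))
length = math.sqrt(sum(c * c for c in n))
```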
Shadow map
The technique used to render the shadows is called Light Space Perspective Shadow Maps (LiSPSM). Here the sun is the only light source, a directional light. Every mesh of the scene is rendered, and its distance to the sun is stored in the red channel of a 1024x1024 texture. The LiSPSM technique computes the best projection space to maximize the precision of the shadow map.
If we stopped here, we could only draw hard shadows. In fact, when rendering units, the game tries to soften the shadow edges using PCF sampling.
But even with PCF we still could not achieve the beautiful soft shadows seen in the screenshot, especially the smooth silhouettes of the buildings on the ground... How are they obtained?
It seems that even in the final stages of development the shadow implementation was still unsettled. Here is what Jonathan Mavor said 11 months before the game's public release:
The shadows on these screenshots will not match the final version, and we are still working on them.
[...]
At the moment we have not finished work on the graphics of the game.
Jonathan Mavor, February 24, 2006
Just a month after this statement, an amazing new shadow mapping technique was published: Variance Shadow Maps (VSM). It could render beautiful soft shadows very efficiently.
It seems the SupCom developers experimented with this new technique: decompiling the D3D bytecode reveals a reference to a DepthToVariancePS() function, which computes a blurred version of the shadow map. Blurring shadow maps was simply not possible before VSM.
Here SupCom performs a 5x5 Gaussian blur (a horizontal and a vertical pass) on the shadow map.
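A separable 5x5 Gaussian blur of this kind can be sketched as follows; the kernel weights here are the usual binomial approximation and are an assumption, not values extracted from the game:

```python
# Binomial 1-4-6-4-1 kernel, a common approximation of a 5-tap Gaussian.
KERNEL = [1/16, 4/16, 6/16, 4/16, 1/16]

def blur_1d(row, kernel=KERNEL):
    """One blur pass along a row; edges clamp to the nearest texel."""
    half = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - half, 0), len(row) - 1)
            acc += row[j] * w
        out.append(acc)
    return out

def blur_2d(image):
    """5x5 Gaussian as a horizontal pass followed by a vertical pass."""
    horiz = [blur_1d(row) for row in image]
    transposed = [list(col) for col in zip(*horiz)]
    vert = [blur_1d(row) for row in transposed]
    return [list(col) for col in zip(*vert)]
```

Splitting the blur into two 1D passes costs 10 taps per pixel instead of 25 for the equivalent single 2D pass, which is why the horizontal + vertical scheme is standard.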

However, the D3D bytecode contains no instructions for storing both the depth and the squared depth (the information the VSM technique needs). The technique seems only partially implemented: possibly there was no time left in the final stages of development to finish it; still, the existing code gives quite good results.
Note that this pseudo-VSM map is used only to cast soft shadows on the ground.
When a shadow must be drawn on top of a unit, it is done with the LiSPSM map and PCF sampling. You can see the difference in the screenshot below (PCF shows strong artifacts at the shadow border):

Terrain with shadows
Thanks to the generated normal and shadow maps, the terrain can finally be rendered: a textured mesh with lighting and shadows.

Decals
The albedo components of the decals are drawn after the lighting equation has been evaluated using the normal information.
Water reflections
On the right side of the scene we have the sea, so if a robot stands in the water we should see its reflection on the sea surface.
There is a classic trick for rendering planar reflections: an extra pass is performed with the vertical axis scaled by -1 just before applying the camera transformation, so the whole scene becomes symmetrical relative to the water surface (as in a mirror); this transformation is exactly what is needed to render the reflection. SupCom uses this technique and renders all the reflected unit meshes into a reflection map.
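The mirroring itself reduces to flipping positions across the water plane before the camera transform. A minimal sketch, assuming a horizontal water plane at a given sea level:

```python
# Reflect a world-space point across the horizontal water plane y = sea_level.
# Rendering the scene through this transform produces the mirror image
# that becomes the reflection map.
def mirror_about_water(vertex, sea_level=0.0):
    x, y, z = vertex
    return (x, 2.0 * sea_level - y, z)
```

A point exactly on the plane maps to itself, which is why the reflection lines up with the original geometry at the waterline.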

Mesh rendering
Then all the meshes are rendered one after another. For the vegetation, geometry instancing is used to render several trees in a single draw call. The sea is rendered with a single quad and a pixel shader that samples several normal maps, a refraction map (the scene rendered so far), the reflection map (generated just above) and a skybox for additional reflections.
Notice in the last image the small black artifacts on the sea near the screen edge; they arise because the sampling of the water surface is distorted to create an illusion of movement. The distortion sometimes fetches texels from outside the viewport, where no information exists, so black areas appear.
During the game, the UI hides these artifacts behind a thin frame overlapping the viewport edges.
Mesh structure
Each unit in SupCom is rendered in one draw call. The model is determined by a set of textures:
- an albedo map
- a normal map
- a "reflection map", which actually contains more than just reflection data. It is an RGBA texture with the following information:
- Red: the amount of environment map reflection.
- Green: the specular reflection.
- Blue: the brightness, used later to control the bloom.
- Alpha: the team colour, modulating the unit's albedo depending on the colour of its team.

Particles
Then all particles are rendered, as well as health bars.
Bloom
Time to add the glow! But how do we get the "brightness information" if we are working with LDR buffers? In fact, the brightness map lives in the alpha channel; it is filled while the previous meshes are drawn. A lower-resolution copy of the frame is made, the alpha channel is applied to keep only the bright areas, and then Gaussian blurs are applied in succession.
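The bright-pass and the final composite can be modelled per pixel like this (an illustrative sketch; the blur step in between is omitted, and the exact engine math is unknown):

```python
# Bright pass: keep only the glow contribution by scaling the scene colour
# with the brightness value stored in the alpha channel.
def bright_pass(colour, alpha_brightness):
    return tuple(c * alpha_brightness for c in colour)

# Composite: add the (blurred) glow buffer over the scene,
# clamped to the LDR range [0, 1].
def additive_blend(base, glow):
    return tuple(min(b + g, 1.0) for b, g in zip(base, glow))
```

A pixel with alpha 0 contributes nothing to the bloom buffer, so dark units stay matte while bright engine exhausts and explosions glow.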

The blurred buffer is then drawn on top of the original scene with additive blending.
User interface
We are done with the main scene. Finally the UI is rendered, and it is remarkably well optimized: a single draw call renders the entire interface, pushing 1158 triangles to the GPU at once.
The pixel shader reads from a single 1024x1024 texture used as a texture atlas. When another unit is selected, the UI changes and the texture atlas is regenerated on the fly to pack the new set of sprites.
And with that, our frame analysis is complete!
Additional Information
Level of detail
Since SupCom supports a wide range of zoom levels, it makes heavy use of levels of detail (LOD).
When the player zooms the camera away from the map, the number of visible units grows quickly; to cope with the increased GPU load, simplified geometry and smaller textures must be used. Since the units are very far away, the engine can get away with it: models are replaced by low-poly versions with reduced detail, yet they are drawn so small on screen that the player hardly notices any difference from the high-poly models.
LOD is not used only for units: beyond a certain distance, shadows, decals and props are no longer rendered at all.
Fog of war
Because of the fog of war, each unit has its own line of sight and only the area around the units is fully visible. Areas containing no units are shaded grey (explored earlier) or black (not yet explored).
The game stores the fog information in a 128x128 single-channel texture holding the fog density: 1 means no visibility, 0 means full visibility.
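A per-pixel sketch of how such a density texture could modulate the scene colour. The grey "floor" value for explored-but-unseen areas is purely illustrative, not a value taken from the engine:

```python
# fog_density: 1 = hidden, 0 = fully visible (as stored in the texture).
# Explored-but-unseen areas dim toward grey, unexplored areas toward black.
# The 0.4 grey floor is an illustrative assumption, not engine data.
def apply_fog(colour, fog_density, explored):
    visibility = 1.0 - fog_density
    floor = 0.4 if explored else 0.0
    factor = floor + (1.0 - floor) * visibility
    return tuple(c * factor for c in colour)
```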



Normal compression
As promised, here is a brief explanation of the trick SupCom uses to compress normals. A normal is usually a three-component vector, but in tangent space the vector is expressed relative to the surface tangent: X and Y lie in the tangent plane and the Z component always points away from the surface. The default normal is (0, 0, 1); that is why most normal maps look blue wherever the normal direction is unchanged.
If we assume the normal is a unit vector, its length is one: X² + Y² + Z² = 1.
If the values of X and Y are known, Z can take only two possible values: Z = ±√(1 - X² - Y²).
But since Z always points away from the surface, it must be positive, i.e. Z = √(1 - X² - Y²).
That is why it is enough to store X and Y in the red and green channels; Z can be derived from them. A more detailed (and better) explanation can be found in this article.
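The reconstruction can be written directly from the formula above (a straightforward sketch, not engine code):

```python
import math

# Rebuild a tangent-space normal from two channels stored in [0, 1].
# Z is recovered from the unit-length constraint and is always positive;
# max() guards against tiny negative values from rounding.
def decompress_normal(r, g):
    x = r * 2.0 - 1.0
    y = g * 2.0 - 1.0
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)
```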
Mixing normals
Speaking of normals: SupCom performs a kind of lerp between normal maps, using the weight maps as coefficients. There are actually several ways to blend two normal maps, each giving different results; as explained in this article, it is not such a simple problem.
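To illustrate, here are two common ways to blend normals: a naive per-component lerp followed by renormalization, and the "whiteout" style blend that combines the XY tilts and multiplies Z. Neither is claimed to be what SupCom actually does.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

# Naive blend: lerp each component with weight w, then renormalize.
# Tends to flatten out detail when the two normals disagree.
def blend_lerp(n1, n2, w):
    return normalize(tuple(a * (1.0 - w) + b * w for a, b in zip(n1, n2)))

# "Whiteout" blend: sum the tangent-plane tilts, multiply the Z components.
# Preserves the detail of both maps better than a plain lerp.
def blend_whiteout(n1, n2):
    return normalize((n1[0] + n2[0], n1[1] + n2[1], n1[2] * n2[2]))
```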
Additional links
Detailed discussions of this article: Slashdot, Hacker News, Reddit.