Hello, friend! This time I will once again take up the topic of graphics in AAA games. I have already covered the HDRR technique (not to be confused with HDRI) here and talked a little about color correction. Today I will explain what SSLR (also known as SSPR or SSR) is: Screen Space Local Reflections. If you are interested, read on.
Introduction to Deferred Rendering
To begin, let me introduce the concept of Deferred Rendering (not to be confused with Deferred Shading, as the latter refers specifically to lighting). What is the essence of Deferred Rendering? The point is that all effects (such as lighting, ambient occlusion, reflections, DOF) can be separated from the geometry and implemented as a special kind of post-processing. For example, what do we need to apply DOF (Depth Of Field, blurring at a distance) to our scene? We need the scene itself (the Color Map) and information about the position of each texel (in other words, how far it is from the camera). Then everything is simple: apply a Blur to the Color Map, where the blur radius depends on the pixel depth (taken from the Depth Map). Looking at the result, the farther away an object is, the more strongly it is blurred. So what does the Deferred Rendering technique do? It builds the so-called GBuffer, which usually consists of three textures (RenderTargets):
- Color Map (information about the diffuse component, or simply the pixel color)

- Normal Map (information about the pixel normal)

- Depth Map (information about the position of the "pixel"; only the depth is stored)
With the Color Map and the Normal Map everything seems clear: these are ordinary Surface.Color textures, except perhaps that the normal vector lies within [-1, 1] and is packed into the [0, 1] range with a simple remap.
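A simple linear remap is enough for that packing; a minimal sketch (the variable names are mine):

// Pack a normal from [-1, 1] into the [0, 1] range of an 8-bit channel
float3 packedNormal = normal * 0.5f + 0.5f;
// Unpack when sampling the Normal Map
float3 unpackedNormal = packedNormal * 2.0f - 1.0f;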
But the situation with the Depth Map is less obvious. How does the Depth Map store information about the position of a pixel, and with a single number at that? In heavily simplified terms, the transformation of a primitive:
float4 vertexWVP = mul(vertex, World*View*Projection); 
Gives us screen coordinates:
 float2 UV = vertexWVP.xy; 
And some information about how far the pixel is from the camera:
 float depth = vertexWVP.z / vertexWVP.w; 
Based on this, we do not need the UV, because when drawing a full-screen quad it is already known. Therefore, it is worth storing in the Depth Map not the pixel position but only the depth.
In the future, we will be able to reconstruct the pixel position in a very simple way:
float3 GetPosition(float2 UV, float depth)
{
    float4 position = 1.0f;
    position.x = UV.x * 2.0f - 1.0f;
    position.y = -(UV.y * 2.0f - 1.0f);
    position.z = depth;
    // Transform position from homogeneous space to world space
    position = mul(position, InverseViewProjection);
    position /= position.w;
    return position.xyz;
}
Let me remind you that building the GBuffer requires a technique called MRT (Multiple Render Targets), which draws the model into several Render Targets at once (each RT holding different information). One of the rules of MRT is that all the Render Targets must have the same dimensions. In the case of the Color Map and the Normal Map this is Surface.Color: a 32-bit RT where each ARGB channel has 8 bits, i.e. 256 gradations from 0 to 1.
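To make the GBuffer layout concrete, here is a minimal sketch of an MRT pixel shader in D3D9-style HLSL; the DiffuseMap sampler and the input fields are my assumptions for illustration, not code from the project:

struct PSInput
{
    float2 UV     : TEXCOORD0;
    float3 Normal : TEXCOORD1;
    float  Depth  : TEXCOORD2; // z / w, computed in the vertex shader
};

struct PSOutput
{
    float4 Color  : COLOR0; // Color Map: the diffuse color
    float4 Normal : COLOR1; // Normal Map: the normal packed into [0, 1]
    float4 Depth  : COLOR2; // Depth Map: only the depth is stored
};

sampler DiffuseMap : register(s0);

PSOutput PSGBuffer(PSInput input)
{
    PSOutput output;
    output.Color  = tex2D(DiffuseMap, input.UV);
    output.Normal = float4(normalize(input.Normal) * 0.5f + 0.5f, 1.0f);
    output.Depth  = float4(input.Depth.xxx, 1.0f);
    return output;
}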
Thanks to this approach, we can apply complex effects to any geometry, for example the most popular Screen Space effect: SSAO (Screen Space Ambient Occlusion). This algorithm analyzes the depth and normal buffers and computes the level of occlusion. I will not describe the whole algorithm, it has already been described on Habr; I will just say that the algorithm's task boils down to tracing the depth map: we have a set of random vectors directed away from the pixel being shaded, and we need to find the number of intersections with the geometry.
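For a feel of what that tracing looks like, here is a heavily simplified sketch (not production SSAO); Kernel and Radius are hypothetical parameters, and GetUV() is the helper shown later in this article:

sampler NormalMap : register(s1);
sampler DepthMap  : register(s2);

float3 Kernel[8]; // hypothetical random direction vectors
float  Radius;    // hypothetical sampling radius

float4 PSSSAO(float2 UV : TEXCOORD0) : COLOR0
{
    float3 position = GetPosition(UV, tex2D(DepthMap, UV).r);
    float3 normal   = tex2D(NormalMap, UV).xyz * 2.0f - 1.0f;

    float occlusion = 0.0f;
    for (int i = 0; i < 8; i++)
    {
        // Flip samples that point into the surface back into the normal's hemisphere
        float3 dir = Kernel[i] * sign(dot(Kernel[i], normal));
        float3 suv = GetUV(position + dir * Radius); // project the sample to screen space
        // If the stored depth is closer than the sample, the ray hit geometry
        if (tex2D(DepthMap, suv.xy).r < suv.z)
            occlusion += 1.0f / 8.0f;
    }

    float ao = 1.0f - occlusion;
    return float4(ao, ao, ao, 1.0f);
}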
Example effect (left without SSAO, right with SSAO):
Deferred Shading is also a Screen Space effect. That is, for each light source on the screen (without any optimizations) we draw a quad in Additive mode into a dedicated RenderTarget: the Light Map. Knowing the world position of the "pixel", its normal, and the position of the light source, we can calculate the illumination of that pixel.
Deferred Shading example (the lighting is deferred until after the geometry has been drawn):
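As a sketch of one such light pass (my names; it assumes the GBuffer samplers and the GetPosition() helper shown above, and is drawn additively into the Light Map):

sampler NormalMap : register(s1);
sampler DepthMap  : register(s2);

float3 LightPosition; // hypothetical point-light parameters
float3 LightColor;
float  LightRadius;

float4 PSPointLight(float2 UV : TEXCOORD0) : COLOR0
{
    float3 normal   = tex2D(NormalMap, UV).xyz * 2.0f - 1.0f; // unpack from [0, 1]
    float3 position = GetPosition(UV, tex2D(DepthMap, UV).r);

    float3 toLight = LightPosition - position;
    float  atten   = saturate(1.0f - length(toLight) / LightRadius);
    float  NdotL   = saturate(dot(normal, normalize(toLight)));

    // Drawn in Additive mode, so the contributions of all lights accumulate
    return float4(LightColor * NdotL * atten, 1.0f);
}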
Advantages and problems of Screen Space effects
The most important advantage of Screen Space effects is that the complexity of the effect does not depend on the geometry.
The main disadvantage is that all these effects are local. The fact is that we will constantly run into Information Lost: in many cases it depends heavily on the viewpoint, since Screen Space effects rely on the depths of neighboring texels, which can be generated by any geometry.
It is also worth noting that Screen Space effects are performed entirely on the GPU and are a form of post-processing.
Finally, SSLR
After all this theory we arrive at the effect itself, Screen Space Local Reflections: local reflections in screen space.
First, let's look at the perspective projection:
The horizontal and vertical viewing angles are set by the FOV (usually 45 degrees; I prefer 60 degrees); in a virtual camera they differ, because the Aspect Ratio is also taken into account.
The projection window (where we operate with UV-space data) is what we see: the scene is projected onto it.
The front and rear clipping planes are, respectively, the Near Plane and Far Plane, also set as projection parameters. With Deferred Rendering it is not worth placing the Far Plane too far away, because the precision of the Depth Buffer will drop considerably; it all depends on the scene.
Now, knowing the projection matrix and position on the projection window (as well as the depth) for each pixel, we calculate its position as follows:
float3 GetPosition(float2 UV, float depth)
{
    float4 position = 1.0f;
    position.x = UV.x * 2.0f - 1.0f;
    position.y = -(UV.y * 2.0f - 1.0f);
    position.z = depth;
    position = mul(position, InverseViewProjection);
    position /= position.w;
    return position.xyz;
}
Next, we need to find the view vector toward this pixel:
float3 viewDir = normalize(texelPosition - CameraPosition);
Here CameraPosition is the world-space position of the camera.
And find the reflection of this vector about the normal at the current pixel:
float3 reflectDir = normalize(reflect(viewDir, texelNormal));
Next, the task reduces to tracing the depth map. That is, we need to find the intersection of the reflected vector with some geometry. Clearly, any tracing is done through iteration, and we are very limited in the number of iterations, since each Depth Map sample costs time. In my version, we take some initial approximation L and dynamically adjust it based on the distance between our texel and the position we have "restored":
float3 currentRay = 0;
float3 nuv = 0;
float L = LFactor;

for(int i = 0; i < 10; i++)
{
    currentRay = texelPosition + reflectDir * L;
    nuv = GetUV(currentRay); // project the ray point into screen space
    float n = GetDepth(nuv.xy); // sample the DepthMap at these UV
    float3 newPosition = GetPosition2(nuv.xy, n);
    L = length(texelPosition - newPosition);
}
An auxiliary function that translates a world-space point into screen space:
float3 GetUV(float3 position)
{
    float4 pVP = mul(float4(position, 1.0f), ViewProjection);
    pVP.xy = float2(0.5f, 0.5f) + float2(0.5f, -0.5f) * pVP.xy / pVP.w;
    return float3(pVP.xy, pVP.z / pVP.w);
}
After the iterations complete, we have the position of the intersection with the reflected geometry, and our nuv value is the projection of this intersection onto the screen: nuv.xy are the UV coordinates in screen space, and nuv.z is the restored depth (i.e. abs(GetDepth(nuv.xy) - nuv.z) must be very small).
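This property can also serve as a sanity check; a hedged sketch (Epsilon is an assumed threshold, not a value from the original shader):

// Reject the reflection if the traced point does not actually lie on geometry
if (abs(GetDepth(nuv.xy) - nuv.z) > Epsilon)
    return float4(0, 0, 0, 0);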
At the end of the iterations, L gives the distance to the reflected pixel. The last step is actually adding the reflection to the Color Map:
float3 cnuv = GetColor(nuv.xy).rgb;
return float4(cnuv, 1);
Let's dilute the theory with some illustrations. The original image (the contents of the Color Map from the GBuffer):
After compiling the shader (the reflections), we get the following picture (Color Map from the GBuffer + the SSLR shader result):
Not great. And here it is worth recalling once again that Screen Space effects suffer heavily from Information Lost (examples are highlighted in red frames). The fact is that if the reflected vector goes beyond the bounds of screen space, the information in the Color Map becomes inaccessible and we see the Clamping of our UV.
To partially fix this problem, we can introduce an additional coefficient that captures the "range" of the reflection, and then attenuate the reflection by this coefficient; the problem is partially solved:
L = saturate(L * LDelmiter);
float error = 1 - L;
The result: the reflection multiplied by error (an attempt to remove the SSLR information-lost artifact):
Better already, but we notice another problem: what happens if the vector is reflected toward the camera? No Clamping of the UV occurs, yet even though the UV is valid (x > 0, y > 0, x < 1, y < 1), it is incorrect:
This problem can also be partially solved by limiting the range of permissible reflection angles in some way. The angle trick from the Fresnel effect is ideal for this:
float fresnel = dot(viewDir, texelNormal);
Slightly modify the formula:
float fresnel = 0.0 + 2.8 * pow(1 + dot(viewDir, texelNormal), 2);
Fresnel values, taking normal mapping into account (values of the fresnel variable for the SSLR algorithm):
The areas that reflect toward the "camera" will be black, and we do not take them into account (instead, you can fade to a cube texture there).
The reflection multiplied by error and fresnel (an attempt to remove most of the SSLR artifacts):
By the way, the fresnel value should be clamped by some parameter, since due to the "roughness" of the normals it can end up an order of magnitude greater than one (or whatever the limiting number is).
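A minimal sketch of such a clamp (FresnelLimit is a hypothetical tweakable):

// Keep the Fresnel term from blowing up on rough normals
fresnel = min(fresnel, FresnelLimit); // or simply: fresnel = saturate(fresnel);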
And the final stage of today's article is blurring the reflections, because a perfectly sharp reflection exists only on a mirror. The degree of blurring can be taken as 1 - error (the farther away the reflected pixel is, the blurrier it becomes). This acts as a per-pixel blur weight, and it can be stored in the alpha channel of the reflection RT.
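Putting the pieces together, here is a hedged sketch of the final combine; it assumes nuv, L and fresnel from the snippets above, and the weighting is my reconstruction, not the exact code from the project:

float  error  = 1.0f - saturate(L * LDelmiter); // fades the reflection with distance
float  weight = saturate(error * fresnel);      // also masks view-facing reflections
float3 reflection = GetColor(nuv.xy).rgb;

// The alpha channel stores the blur weight (1 - error): the blur pass widens
// its kernel where this value is larger, i.e. for more distant reflections
return float4(reflection * weight, 1.0f - error);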
The result (the final image with the artifacts removed and the reflections blurred):
Conclusion
It is also worth adding information about reflectivity: how sharp the reflection is and how much the surface is able to reflect at all; and in those places where SSLR does not work, add a static reflection from a cube texture.
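A hedged sketch of that fallback (EnvironmentMap and the weight from the combine step above are my assumptions):

samplerCUBE EnvironmentMap : register(s3);

// Where SSLR has no information (weight tends to 0), fall back to a static
// environment reflection sampled from a cube map
float3 ssr = GetColor(nuv.xy).rgb;
float3 env = texCUBE(EnvironmentMap, reflectDir).rgb;
float3 reflection = lerp(env, ssr, weight);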
Of course, Screen Space effects are not honest, and developers try to hide the artifacts, but for now this cannot be done in real time (with complex geometry). And without such effects a game starts to look somehow wrong. I have described the general method of SSLR and presented the main points from the shader. Unfortunately, I cannot attach the code, because there are too many dependencies in the project.
Good luck with your development! ;)