Most (if not all) optical phenomena exhibited by materials can be recreated by simulating the propagation and interaction of individual light rays. In the scientific literature this approach is called "ray tracing", and it is usually too computationally expensive for real time. Most modern engines rely on heavy simplifications that, while incapable of true photorealism, can produce fairly convincing approximations. In this tutorial, I'll describe a fast, cheap and convincing solution that can be used to simulate translucent materials with subsurface scattering.
Before…
… and after.
Introduction
The standard Unity material has a Transparency mode that allows you to render transparent materials. In this context, transparency is implemented with alpha blending: a transparent object is rendered on top of the already-drawn geometry, partially showing what is behind it. This works for many materials, but transparency is a special case of a more general property called translucency (sometimes also translucidity). Transparent materials only affect the amount of light transmitted through them (in the figure below, left), whereas translucent materials also change the path the light takes through them (right).
The result of this behaviour is obvious: translucent materials scatter the light rays passing through them, blurring what lies behind the object. This behaviour is rarely reproduced in games because it is much harder to implement. Transparent materials can be handled straightforwardly with alpha blending, without any ray tracing, while translucent materials require simulating the deflection of light rays. Such calculations are very expensive and rarely worth the cost in real-time rendering.
The same cost also stands in the way of simulating a related optical phenomenon: subsurface scattering. When light hits the surface of a translucent material, part of it spreads inside, bouncing between molecules until it finds a way out. This often means that light absorbed at one point is re-emitted by the material at another. Subsurface scattering produces the diffuse glow that can often be seen on materials such as human skin, marble and milk.
Real-time translucency
The cost of computing translucency comes from two major obstacles. The first is that the scattering of light rays inside the material has to be simulated: each ray can split into several, bouncing hundreds or even thousands of times inside the material. The second is that a ray absorbed at one point is emitted at some other point. This may seem like a minor issue, but it is actually a serious obstacle.
To understand why, we first need to look at how most shaders work. In real-time rendering, the GPU expects a shader to be able to compute the final colour of a material using only local properties. Shaders are designed so that they can efficiently access only the properties local to each vertex: reading the normal direction and albedo of the current vertex is easy, but obtaining those values from neighbouring vertices is not. Most real-time systems have to work around these limitations and find a way to fake the propagation of light through the material without using non-local information.
The approach described in this tutorial is based on the solution presented at GDC 2011 by Colin Barré-Brisebois and Marc Bouchard in the talk Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look. Their solution was integrated into the Frostbite 2 engine, used in DICE's Battlefield 3. Although the approach presented by Colin and Marc is not physically accurate, it produces plausible results at a very low cost.
The idea behind the solution is very simple. On opaque materials, light contributes directly from the light source: vertices facing more than 90 degrees away from the light direction L receive no illumination at all (in the figure below, left). According to the model presented in the talk, translucent materials receive an additional lighting contribution associated with −L. Geometrically, −L can be thought of as if part of the light actually passed through the material and reached its back side (in the figure below, right).
In other words, each light source contributes two separate terms to the shading: front lighting and back lighting. We want our materials to be as realistic as possible, so we use Unity's standard PBR lighting model for the front lighting. What remains is to find a way to describe the contribution of −L and render it so that it somehow mimics the scattering that would take place inside the material.
Back translucency
As stated above, the final color of the pixels depends on the sum of the two components. The first of these is “traditional” lighting. The second is the effect of light from a virtual source illuminating the back side of the model. This will give us the feeling that the light from the original source actually passed through the material.
To understand how to model this mathematically, let's draw the following two limiting cases (diagrams below). The point we are currently rendering is drawn in red. Since it lies on the "dark" side of the material, it should be illuminated by −L. Analysing the two cases from the observer's point of view, we see that $V_B$ is aligned and parallel with −L, which means observer B should see the back translucency at full strength. Observer A, on the other hand, should see the least amount of back lighting, because $V_A$ is perpendicular to −L.
If you are familiar with writing shaders, this kind of reasoning should sound familiar. We encountered something similar in the tutorial "Physically Based Rendering and Lighting Models in Unity 5", where we showed that this behaviour can be implemented with a mathematical operator called the dot product.
As a first approximation, we can say that the amount of back lighting due to translucency, $I_{back}$, is proportional to $V \cdot -L$. In a traditional diffuse shader this would be written as $N \cdot L$. Note that the surface normal does not appear in the calculation, because the light simply exits the material rather than being reflected off it.
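As a rough Cg sketch of this first approximation (the variable names here are illustrative only, not taken from the final shader):

// Back lighting grows as the view direction lines up with the light passing through the material (-L)
float backLight = saturate(dot(viewDir, -lightDir));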
Subsurface distortion
However, the surface normal should have at least some influence on the angle at which light leaves the material. The authors of the technique introduced a parameter called subsurface distortion, $\delta$, which forces the vector $-L$ to bend towards the normal $N$. Physically speaking, the subsurface distortion controls how strongly the surface normal deflects the outgoing back light. With this addition, the intensity of the back translucency component becomes:
$$I_{back} = V \cdot -\left\langle L + N\delta \right\rangle$$
where $\left\langle X \right\rangle = \frac{X}{\left|X\right|}$ is the unit vector pointing in the same direction as $X$. If you are familiar with Cg/HLSL, this is the normalize function.
With $\delta = 0$ we get back the $V \cdot -L$ from the previous section. With $\delta = 1$, however, we compute the dot product between the view direction and $-\left\langle L + N \right\rangle$. If you know how the Blinn-Phong reflectance is calculated, you will recognise $\left\langle L + N \right\rangle$ as the vector "halfway" between $L$ and $N$. For this reason we will call it the half direction $H$.
The diagram above shows all the directions used so far. $H$ is drawn in purple, and you can see that it lies between $L$ and $N$. Geometrically, changing $\delta$ from 0 to 1 shifts the perceived direction of the light $L$; the shaded area shows the range of directions the back lighting can come from. The figure below shows how, as $\delta$ goes from 0 to 1, the perceived direction of the light source shifts towards the purple one.
The purpose of $\delta$ is to simulate the tendency of some translucent materials to diffuse the back lighting with different intensities: the higher the value of $\delta$, the more the back light is spread out.
Does H here have the same meaning as H in the Blinn-Phong reflectance?
No. In the Blinn-Phong reflectance, $H$ is defined as $\left\langle L + V \right\rangle$. Here we use the same letter to denote $\left\langle L + N \right\rangle$.
Does $\delta$ interpolate between L and L + N?
Yes. Values of $\delta$ between 0 and 1 linearly interpolate between $L$ and $L + N$. This can be seen by expanding the traditional definition of linear interpolation between $L$ and $L + N$ with weight $\delta$:
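$$\mathrm{lerp}\left(L,\ L+N,\ \delta\right) = L\left(1-\delta\right) + \left(L+N\right)\delta = L - L\delta + L\delta + N\delta = L + N\delta$$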
Why didn't the authors normalize L + N?
Geometrically speaking, the quantity $L + N$ does not have unit length, so strictly speaking it should be normalized. In their final implementation, however, the authors skip the normalization.
Ultimately, this effect is meant to be neither photorealistic nor physically based. In their talk, the authors make it very clear that it is intended as a fast approximation of translucency and subsurface scattering. Normalization barely changes the result, but it adds a noticeable cost.
Back light diffusion
At this point we already have an equation that can be used to simulate translucent materials. The quantity $I_{back}$, however, is not yet ready to be used as the final lighting contribution: we still need to control how the back light diffuses.
There are two basic approaches. The first is to use a texture. If you need full artistic control over how light diffuses through the material, you can clamp $I_{back}$ to the range 0 to 1 and use it to sample the final intensity of the back lighting from a ramp texture. Textures with different ramps will simulate light propagation in different materials. Below we will see how this can be used to change the results of this shader dramatically.
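A hedged sketch of the texture-driven variant; _RampTex is a name assumed here for an artist-authored ramp texture:

// Clamp the back-lighting term to [0, 1] and remap it through a ramp texture
float backLight = saturate(dot(V, -H));
float I = tex2D(_RampTex, float2(backLight, 0.5)).r;
// Note: inside a surface shader lighting function, tex2Dlod would be used instead of tex2D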
The approach used by the authors of this technique, however, does not rely on a texture; the curve is defined purely in Cg code:
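A minimal sketch of that curve, using the same property names as the full lighting function shown in part two:

// _Power sharpens the falloff of the back lighting, _Scale controls its strength
float I = pow(saturate(dot(V, -H)), _Power) * _Scale;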
The two new parameters, p (power) and s (scale), are used to change the shape of the curve.
Conclusion
In this part of the article we covered the technical difficulties of rendering translucent materials and introduced the approximate solution presented in the talk Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look. In the next part of the tutorial we will focus on the actual implementation of this effect in a Unity shader.
If you are interested in more sophisticated approaches to simulating subsurface scattering in real-time applications, you can explore one of the excellent GPU Gems tutorials on the subject.
Part two
Introduction
The previous part of the tutorial explained a mechanism for approximating the look of translucent materials. Traditional surface shading is computed from the illumination received along L. The shader we are going to write adds another component, −L, which is treated as if the material were lit by a second light source shining from behind. The result looks as though light from L passes through the material.
Finally, we derived an equation that models the back-lighting reflectance as a function of the view direction:
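$$I_{back} = \big(\mathrm{saturate}\left(V \cdot -\left\langle L + N\delta \right\rangle\right)\big)^{p} \cdot s$$

where: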
L is the direction the light comes from (the light direction),
V is the direction from which the camera looks at the material (the view direction),
N is the orientation of the surface at the point being rendered (the surface normal).
There are additional parameters that control the final appearance of the material. For example, $\delta$ shifts the perceived direction of the back lighting so that it leans more towards the surface normal.
Finally, p and s (power and scale) determine how the back lighting spreads, and work much like the parameters of the same name in the Blinn-Phong reflectance.
Now we just have to implement the shader.
Extending the standard shader
As explained above, we want this effect to be as realistic as possible. The best solution is to extend Unity's Standard shader, which already gives very good results for opaque materials.
How do we extend the capabilities of the Standard shader?
In short, the idea is to create a new surface shader and replace its lighting function with a custom one. Inside it, we call the original Standard lighting function so that the material is still rendered with Unity's PBR model.
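The sketch below shows how this wiring might look, assuming the rest of the file is a copy of Unity's standard surface shader template; the names follow the lighting function used later in this article:

#pragma surface surf StandardTranslucent fullforwardshadows
#include "UnityPBSLighting.cginc"

inline fixed4 LightingStandardTranslucent(SurfaceOutputStandard s, fixed3 viewDir, UnityGI gi)
{
    // Start from the colour computed by Unity's Standard (PBR) lighting function
    fixed4 pbr = LightingStandard(s, viewDir, gi);

    // ...the back-lighting term will be added here...

    return pbr;
}

// Forward global illumination to the original Standard GI function
inline void LightingStandardTranslucent_GI(SurfaceOutputStandard s, UnityGIInput data, inout UnityGI gi)
{
    LightingStandard_GI(s, data, gi);
}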
Once that is in place, we can calculate the back-lighting contribution and blend it with the original colour returned by the Standard lighting function.
Let's call the new lighting function used for this effect StandardTranslucent. As a good approximation, the back lighting will share the colour of the light itself; the only thing we control is its intensity I.
When adding two colours, you need to be careful that the result does not exceed 1. This is usually handled with the saturate function, which clamps each colour component to the range 0 to 1.
If your camera is set to HDR (high dynamic range), values above 1 are used by post-processing effects such as bloom. In this shader we do not saturate the final colour, so that the bloom filter can pick it up during the final rendering.
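A minimal sketch of the two options, using the same names as the code below:

// LDR: clamp so that the summed colour never exceeds 1
// pbr.rgb = saturate(pbr.rgb + gi.light.color * I);

// HDR: leave values above 1 for post-processing effects such as bloom
pbr.rgb = pbr.rgb + gi.light.color * I;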
Back lighting
In accordance with the equations described in the first part of the tutorial, we can write the following code:
inline fixed4 LightingStandardTranslucent(SurfaceOutputStandard s, fixed3 viewDir, UnityGI gi)
{
    // Original colour from the Standard (PBR) lighting function
    fixed4 pbr = LightingStandard(s, viewDir, gi);

    // --- Translucency ---
    float3 L = gi.light.dir;
    float3 V = viewDir;
    float3 N = s.Normal;

    float3 H = normalize(L + N * _Distortion);
    float I = pow(saturate(dot(V, -H)), _Power) * _Scale;

    // Final add
    pbr.rgb = pbr.rgb + gi.light.color * I;
    return pbr;
}
The code above is a direct implementation of the equations from the first part of the article. The resulting translucency effect looks plausible, but it is not tied to the thickness of the material, which makes it very hard to control.
Local thickness
Obviously, the amount of backlighting strongly depends on the density and thickness of the material. Ideally, we need to know the distance traveled by the light inside the material and weaken it accordingly. As can be seen from the figure below, three different beams with the same angle of incidence travel very different distances within the material.
From the shader's point of view, we have no access to the local geometry or to the past history of the light rays, and unfortunately there is no way to solve this problem locally. The best we can do is rely on an external local thickness map: a texture mapped onto the surface that defines the "thickness" of the corresponding part of the material. The word "thickness" is used loosely here, because the true thickness depends on the angle at which the light hits the surface.
The diagram above shows that there is no single notion of "thickness" associated with the red point on the circle: the amount of material the light travels through really depends on the angle of incidence of L. Once again, this whole approach to translucency aims to deceive the eye rather than be physically accurate. Below (source) is a good local thickness map, visualised on the model of a statue. The lighter shades mark the parts where the translucency effect will be stronger, approximating the notion of thickness.
How to generate a local thickness map?
The authors of this technique proposed an interesting way to automatically generate a local thickness map from any model. To do so, follow these steps:
Invert the faces of the model (flip their normals)
Render ambient occlusion to texture
Invert texture colors
The logic behind this process is that rendering ambient occlusion on the inverted faces approximately "averages all the light transport happening inside the object".
Instead of a texture, the thickness can also be stored directly in the vertices, as sketched below.
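A hedged sketch of that alternative, assuming the thickness has been painted into the red channel of the vertex colours and that the usual _MainTex/_Color declarations of the standard surface shader template are present; the thickness variable matches the one used in the texture-based version further below:

struct Input
{
    float2 uv_MainTex;
    float4 color : COLOR;   // per-vertex colour provided by the mesh
};

float thickness;

void surf (Input IN, inout SurfaceOutputStandard o)
{
    fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;
    thickness = IN.color.r; // read the thickness painted into the vertex colours
}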
Final version
We now know that we need to take the local thickness of the material into account. The easiest way is to provide a thickness map that the shader can sample. Although physically inaccurate, this produces convincing results, and encoding the local thickness in a texture gives artists full control over the effect.
In this implementation, the local thickness is stored in the red channel of an additional texture, which is sampled in the surf function:
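A sketch of how that sampling might look; the property name _LocalThickness is an assumption made here for illustration:

sampler2D _LocalThickness;
float thickness;

void surf (Input IN, inout SurfaceOutputStandard o)
{
    fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;
    // Keep the red channel of the thickness map for the lighting function
    thickness = tex2D(_LocalThickness, IN.uv_MainTex).r;
}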
Why isn't the texture sampled in the lighting function?
I decided to store the value in a variable, thickness, which the lighting function reads later. Personally, I tend to do this whenever I need to sample a texture whose value will later be used by the lighting function.
If you prefer, you can sample the texture directly in the lighting function. In that case you need to pass the UV coordinates along (for instance by extending SurfaceOutputStandard) and use tex2Dlod instead of tex2D. That function takes two extra coordinates, which in our case can simply be set to zero:
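A hedged example of that variant; here uv stands for whatever UV coordinates you managed to forward to the lighting function:

// tex2Dlod takes a float4: xy are the UVs, zw select the mip level (zero here)
float thickness = tex2Dlod(_LocalThickness, float4(uv, 0, 0)).r;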
Colin and Marc suggested a slightly different equation for the final intensity of the back lighting. It takes into account both the local thickness and an optional attenuation parameter, and it also allows for an always-present ambient term:
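The snippet below is a sketch of how that extended equation can be folded into the lighting function; the property names _Attenuation and _Ambient are chosen here to mirror the parameters named in the text, not taken verbatim from the original talk:

inline fixed4 LightingStandardTranslucent(SurfaceOutputStandard s, fixed3 viewDir, UnityGI gi)
{
    // Original colour from the Standard (PBR) lighting function
    fixed4 pbr = LightingStandard(s, viewDir, gi);

    // --- Translucency ---
    float3 L = gi.light.dir;
    float3 V = viewDir;
    float3 N = s.Normal;

    float3 H = normalize(L + N * _Distortion);
    float VdotH = pow(saturate(dot(V, -H)), _Power) * _Scale;
    // Weight the back lighting by attenuation, ambient term and local thickness
    float3 I = _Attenuation * (VdotH + _Ambient) * thickness;

    // Final add
    pbr.rgb = pbr.rgb + gi.light.color * I;
    return pbr;
}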