Learn OpenGL. Lesson 6.3 - Image-Based Lighting. Diffuse irradiance
Image-based lighting, or IBL, is a category of lighting techniques that are not based on analytical light sources (covered in the previous lesson) but instead treat the entire environment surrounding the lit objects as one continuous light source. Technically, these methods rest on processing a cube environment map (captured in the real world or generated from a 3D scene) so that the stored data can be used directly in lighting calculations: effectively, every texel of the cube map is treated as a light emitter. On the whole, this captures the effect of global illumination in the scene, an important component that conveys the overall "tone" of the environment and helps the lit objects sit convincingly within it.
Since IBL algorithms account for lighting coming from a "global" environment, their result can be considered a more accurate imitation of ambient lighting, or even a very rough approximation of global illumination. This makes IBL methods a valuable addition to the PBR model, since using environment lighting in the shading model makes objects look far more physically plausible.
To incorporate the contribution of IBL into the PBR system described so far, let's return to the familiar reflectance equation:
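In the notation of the earlier PBR lessons it reads roughly as follows:

$$L_o(p, \omega_o) = \int_{\Omega} \left( k_d\frac{c}{\pi} + k_s\frac{DFG}{4(\omega_o \cdot n)(\omega_i \cdot n)} \right) L_i(p, \omega_i)\, (n \cdot \omega_i)\, d\omega_i$$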
As described earlier, the main goal is to compute the integral over all incoming light directions ωi within the hemisphere Ω. In the previous lesson this was not a burden, because we knew the number of light sources in advance, and therefore the handful of incoming directions corresponding to them. This time the integral cannot be solved offhand: any incoming vector ωi from the environment may carry non-zero radiance. As a result, for the method to be practical, two requirements must be met:
We need a way to obtain the scene radiance for an arbitrary direction vector ωi;
Solving the integral has to be possible in real time.
Well, the first point resolves itself. A hint of the solution has already slipped by: one way to represent the irradiance of a scene or environment is a cube map that has undergone special processing. Each texel of such a map can be treated as an individual emitter, and by sampling the map along an arbitrary vector ωi we easily obtain the scene radiance in that direction.
So, we obtain the scene radiance for an arbitrary vector ωi:
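In shader terms this comes down to a single texture lookup along ωi (a minimal sketch; the sampler and variable names here are illustrative):

vec3 radiance = texture(environmentCubemap, w_i).rgb;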
Remarkably, though, solving the integral requires us to sample the environment map not from one direction but from all possible directions within the hemisphere Ω, and to do so for every shaded fragment. For real-time use this is clearly infeasible. A more effective approach is to compute part of the integrand in advance, outside of our application. But for that we will have to roll up our sleeves and dig deeper into the reflectance expression:
Notice that the parts of the expression related to the diffuse and specular BRDF components are independent of each other, so the integral can be split into two parts:
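Writing the split out in the same notation, roughly:

$$L_o(p, \omega_o) = \int_{\Omega} k_d\frac{c}{\pi} L_i(p, \omega_i)\, (n \cdot \omega_i)\, d\omega_i + \int_{\Omega} k_s\frac{DFG}{4(\omega_o \cdot n)(\omega_i \cdot n)} L_i(p, \omega_i)\, (n \cdot \omega_i)\, d\omega_i$$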
This split lets us deal with each part separately, and in this lesson we will handle the part responsible for diffuse ambient lighting.
Looking at the integral over the diffuse component, we can see that the Lambertian diffuse term is essentially constant (the color c, the refraction coefficient kd and π do not depend on the integration variable), so the constants can be moved outside the integral sign:
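Which gives, roughly:

$$L_o(p, \omega_o) = k_d\frac{c}{\pi} \int_{\Omega} L_i(p, \omega_i)\, (n \cdot \omega_i)\, d\omega_i$$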
This leaves an integral that depends only on ωi (assuming the point p lies at the center of the cube environment map). With this formula we can compute, or better yet precompute, a new cube map that stores, for each sample direction (texel) ωo, the result of the diffuse integral obtained by convolution.
Convolution is the operation of applying some computation to every element of a data set while taking all other elements of the set into account; here the data set is the scene radiance, i.e. the environment map. Thus, to compute a single value for each sample direction in the cube map, we must take into account the values sampled from all other possible directions on the hemisphere around that sample point.
To convolve the environment map, we solve the integral for each output sample direction ωo by taking many discrete samples along directions ωi within the hemisphere Ω and averaging their radiance. The hemisphere from which the sample directions are taken is oriented along the vector ωo, the output direction for which the convolution is currently being computed. Look at the picture for a better understanding:
Such a precomputed cube map, storing the integration result for each sample direction ωo, can also be seen as storing the sum of all indirect diffuse lighting in the scene hitting a surface oriented along ωo. Such cube maps are called irradiance maps, since a pre-convolved cube environment map lets us directly sample the scene irradiance arriving from an arbitrary direction ωo without any extra computation.
The radiance expression also depends on the sample position p, which we assumed to lie at the center of the irradiance map. This assumption imposes a limitation: all indirect diffuse lighting must then come from a single environment map, and in scenes with varied lighting this can break the illusion of reality (especially indoors). Modern rendering engines solve the problem by placing special auxiliary objects, reflection probes, in the scene. Each probe handles one task: it builds its own irradiance map for its immediate surroundings. With this technique, the irradiance (and radiance) at an arbitrary point p is found by simple interpolation between the nearest reflection probes. For our current purposes, however, we agree that the environment map is sampled from its very center, and we will look at reflection probes in later lessons.
Below is an example of a cube environment map and the irradiance map derived from it (courtesy of Wave Engine), averaging the environment radiance for each output direction ωo.
So, this map stores the convolution result in each texel (corresponding to a direction ωo), and it looks like an averaged version of the environment map. Sampling it along any direction returns the irradiance arriving from that direction.
PBR and HDR
In the previous lesson it was briefly noted that accounting for the HDR range of light source intensities is crucial for the PBR lighting model to work correctly. Since PBR takes input parameters that are based, one way or another, on specific physical quantities and characteristics, it is natural to require that the radiance of the light sources matches their real prototypes. Whether we make a rough engineering estimate of each source's radiant flux or consult physical reference values, the difference between a room lamp and the sun will be enormous either way. Without the HDR range it is simply impossible to represent the relative brightness of different light sources correctly.
So PBR and HDR are friends forever; that much is clear, but how does this relate to image-based lighting? The previous lesson showed that moving PBR to the HDR rendering range is simple. One "but" remains: since indirect lighting from the environment is based on a cube environment map, we need a way to preserve the HDR characteristics of that background lighting in the environment map itself.
Up to this point we have used environment maps created in LDR format (such as skyboxes), sampling their colors in rendering as is. That is quite acceptable for shading objects directly, but completely unsuitable when environment maps serve as sources of physically plausible measurements.
RGBE - an image format for the HDR range
Meet the RGBE image file format. Files with the ".hdr" extension store images with a wide dynamic range, allocating one byte per element of the color triad and one more byte for a shared exponent. Among other things, the format allows storing cube environment maps with color intensities beyond the LDR range [0., 1.]. This means light sources can retain their real intensity while being represented by such an environment map.
There are plenty of free environment maps in RGBE format available online, captured in various real-world conditions. Here, for example, is one from the sIBL archive site:
You may be surprised by what you see: this distorted image looks nothing like an ordinary cube map with its distinct layout of 6 faces. The explanation is simple: the environment map was projected from a sphere onto a plane using an equirectangular projection. This is done to allow storage in formats that do not support cube maps as such. Of course, the projection has drawbacks: the effective horizontal resolution is higher than the vertical one. For rendering this is usually an acceptable trade-off, since the interesting details of the environment and lighting tend to lie near the horizontal plane rather than the vertical. On top of that, we will need code to convert it back into a cube map.
RGBE format support in stb_image.h
Loading this image format by hand requires knowledge of the format specification, which is not difficult but still time consuming. Fortunately, the single-header image loading library stb_image.h supports loading RGBE files, returning an array of floating-point numbers - exactly what we need for our purposes. With the library added to your project, loading such an image is very simple:
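A sketch of the loading code, assuming stb_image.h and the usual OpenGL headers are already included (the file name is just an example):

stbi_set_flip_vertically_on_load(true);
int width, height, nrComponents;
float *data = stbi_loadf("newport_loft.hdr", &width, &height, &nrComponents, 0);
unsigned int hdrTexture;
if (data)
{
    // upload the HDR data into a regular 2D floating-point texture
    glGenTextures(1, &hdrTexture);
    glBindTexture(GL_TEXTURE_2D, hdrTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0, GL_RGB, GL_FLOAT, data);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    stbi_image_free(data);
}
else
{
    std::cout << "Failed to load HDR image." << std::endl;
}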
The library automatically converts the values from the internal HDR format into ordinary 32-bit floats, by default with three color channels. That is quite enough to store the original HDR image data in a regular 2D floating-point texture.
Converting an equirectangular map into a cube map
An equirectangular map could be used for direct environment lookups, but that would require fairly costly math, whereas sampling an ordinary cube map is practically free. For this reason, in this lesson we convert the equirectangular image into a cube map, which is what we use from then on. However, the method of sampling an equirectangular map directly with a 3D vector will also be shown, so you can choose whichever approach suits you.
To perform the conversion we draw a unit-sized cube viewed from the inside, project the equirectangular map onto its faces, and then extract six images from those faces as the faces of a cube map. The vertex shader of this stage is quite simple: it processes the cube vertices as is and passes their untransformed positions on to the fragment shader for use as a 3D sample vector:
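A minimal sketch of such a vertex shader (the uniform names are illustrative):

#version 330 core
layout (location = 0) in vec3 aPos;

out vec3 localPos;

uniform mat4 projection;
uniform mat4 view;

void main()
{
    // pass the untransformed cube-local position to the fragment shader
    localPos = aPos;
    gl_Position = projection * view * vec4(localPos, 1.0);
}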
In the fragment shader we shade each face of the cube as if we were trying to carefully wrap the cube with the equirectangular sheet. To do this, the sample direction passed from the vertex shader is taken, processed with a bit of trigonometric magic, and used to sample the equirectangular map as though it were a cube map. The sampled result is stored directly as the fragment color of the cube face:
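A sketch of that fragment shader, assuming the equirectangular image is bound to a regular 2D sampler:

#version 330 core
out vec4 FragColor;
in vec3 localPos;

uniform sampler2D equirectangularMap;

// 1/(2*pi) and 1/pi: map the spherical angles into the [0, 1] UV range
const vec2 invAtan = vec2(0.1591, 0.3183);
vec2 SampleSphericalMap(vec3 v)
{
    vec2 uv = vec2(atan(v.z, v.x), asin(v.y));
    uv *= invAtan;
    uv += 0.5;
    return uv;
}

void main()
{
    vec2 uv = SampleSphericalMap(normalize(localPos));
    vec3 color = texture(equirectangularMap, uv).rgb;
    FragColor = vec4(color, 1.0);
}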
If you actually draw a cube with this shader and the HDR environment map bound, you get something like this:
In other words, we have effectively projected the rectangular texture onto a cube. Great, but how does this help us build an actual cube map? To finish the job, the same cube has to be rendered 6 times with the camera looking at each of its faces, while the output is recorded into a separate framebuffer object:
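A sketch of the capture framebuffer and the target cube map (the 512x512 resolution is an assumption; pick whatever fits your source map):

unsigned int captureFBO, captureRBO;
glGenFramebuffers(1, &captureFBO);
glGenRenderbuffers(1, &captureRBO);

glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
glBindRenderbuffer(GL_RENDERBUFFER, captureRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, captureRBO);

// allocate the destination cube map: six 512x512 floating-point faces
unsigned int envCubemap;
glGenTextures(1, &envCubemap);
glBindTexture(GL_TEXTURE_CUBE_MAP, envCubemap);
for (unsigned int i = 0; i < 6; ++i)
{
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F, 512, 512, 0, GL_RGB, GL_FLOAT, nullptr);
}
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);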
After this preparation, all that remains is to transfer the pieces of the equirectangular map directly onto the cube map faces.
We will not go into too much detail, especially since the code largely repeats what was seen in the lessons on framebuffers and omnidirectional shadows. In essence, it all comes down to preparing six view matrices that orient the camera squarely at each face of the cube, plus a projection matrix with a 90° field of view so that an entire face is captured. Then the render is simply performed six times, and the result is saved to the floating-point framebuffer:
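A sketch of this step; the shader object name and the renderCube() helper are assumptions, mirroring the structure of the irradiance capture code further below:

glm::mat4 captureProjection = glm::perspective(glm::radians(90.0f), 1.0f, 0.1f, 10.0f);
glm::mat4 captureViews[] =
{
    glm::lookAt(glm::vec3(0.0f), glm::vec3( 1.0f,  0.0f,  0.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    glm::lookAt(glm::vec3(0.0f), glm::vec3(-1.0f,  0.0f,  0.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    glm::lookAt(glm::vec3(0.0f), glm::vec3( 0.0f,  1.0f,  0.0f), glm::vec3(0.0f,  0.0f,  1.0f)),
    glm::lookAt(glm::vec3(0.0f), glm::vec3( 0.0f, -1.0f,  0.0f), glm::vec3(0.0f,  0.0f, -1.0f)),
    glm::lookAt(glm::vec3(0.0f), glm::vec3( 0.0f,  0.0f,  1.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    glm::lookAt(glm::vec3(0.0f), glm::vec3( 0.0f,  0.0f, -1.0f), glm::vec3(0.0f, -1.0f,  0.0f))
};

// render one cube face per iteration, writing into the corresponding face of envCubemap
equirectangularToCubemapShader.use();
equirectangularToCubemapShader.setInt("equirectangularMap", 0);
equirectangularToCubemapShader.setMat4("projection", captureProjection);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, hdrTexture);

glViewport(0, 0, 512, 512);
glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
for (unsigned int i = 0; i < 6; ++i)
{
    equirectangularToCubemapShader.setMat4("view", captureViews[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, envCubemap, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    renderCube();
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);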
Here we use the framebuffer's color attachment while successively switching the attached cube map face, which renders directly into one face of the environment map at a time. This code only needs to run once, after which we have a complete envCubemap environment map containing the converted version of the original equirectangular HDR environment map.
Let's test the resulting cube map by sketching a simple skybox shader:
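A minimal sketch of the skybox vertex shader:

#version 330 core
layout (location = 0) in vec3 aPos;

uniform mat4 projection;
uniform mat4 view;

out vec3 localPos;

void main()
{
    localPos = aPos;

    mat4 rotView = mat4(mat3(view)); // strip the translation part of the view matrix
    vec4 clipPos = projection * rotView * vec4(localPos, 1.0);

    gl_Position = clipPos.xyww;
}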
Note the trick with the components of the clipPos vector: we use the xyww swizzle when writing the transformed vertex coordinates to ensure that all skybox fragments end up with a maximum depth of 1.0 (the approach was already used in the corresponding lesson). Don't forget to change the depth comparison function to GL_LEQUAL:
glDepthFunc(GL_LEQUAL);
The fragment shader simply samples the cube map:
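A sketch of that fragment shader (tone mapping and gamma correction are included here, as discussed right below):

#version 330 core
out vec4 FragColor;

in vec3 localPos;

uniform samplerCube environmentMap;

void main()
{
    vec3 envColor = texture(environmentMap, localPos).rgb;

    // simple Reinhard tone mapping plus gamma correction, since the HDR data is linear
    envColor = envColor / (envColor + vec3(1.0));
    envColor = pow(envColor, vec3(1.0/2.2));

    FragColor = vec4(envColor, 1.0);
}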
The sample is taken using the interpolated local coordinates of the cube vertices, which in this case is the correct sample direction (again, see the skybox lesson). Since the translation components of the view matrix are ignored, rendering the skybox does not depend on the observer's position, creating the illusion of an infinitely distant background. Because we output the HDR map's data directly to the default framebuffer, which is an LDR target, we must remember tone mapping. And one last thing: almost all HDR maps are stored in linear space, which means gamma correction has to be applied as the final processing step.
So, when you render the resulting skybox together with the already familiar array of spheres, you get something like this:
A lot of effort has been spent, but in the end we have successfully loaded the HDR environment map, converted it from equirectangular to cube map form, and rendered the HDR cube map as a skybox in the scene. Moreover, the code that builds a cube map by rendering to all six faces will come in handy later, for the task of convolving the environment map. The code of the entire conversion process is here.
Cube map convolution
As stated at the beginning of the lesson, our main goal is to solve the integral over all possible directions of indirect diffuse lighting, given the scene irradiance in the form of a cube environment map. We already know that we can obtain the scene radiance for any direction ωi by sampling the HDR cube environment map along that direction. To solve the integral we would have to sample the scene radiance from all possible directions within the hemisphere Ω for every shaded fragment. Sampling the environment lighting from all possible directions is obviously computationally infeasible - there are infinitely many of them. We can, however, approximate by taking a finite number of directions, chosen randomly or spaced evenly inside the hemisphere. This yields a reasonably good approximation of the true irradiance, effectively solving the integral of interest as a finite sum.
But even this approach is still terribly expensive for real-time tasks, because the samples are taken for every fragment and the sample count has to be high for an acceptable result. So it would be good to precompute this data outside of the rendering process. Since the orientation of the hemisphere determines which region of space we capture irradiance from, we can precompute the irradiance for every possible hemisphere orientation based on all possible outgoing directions ωo:
As a result, given an arbitrary vector ωi, we can sample the precomputed irradiance map to obtain the diffuse irradiance arriving from that direction. To determine the amount of indirect diffuse lighting at the current fragment, we take the total irradiance from the hemisphere oriented along the surface normal. In other words, obtaining the scene irradiance comes down to a single sample:
vec3 irradiance = texture(irradianceMap, N);
Next, to create the irradiance map we need to convolve the environment map that was converted into a cube map. We know that for each fragment its hemisphere is considered oriented along the surface normal N. In this case convolving the cube map reduces to computing the averaged sum of radiance over all directions ωi within the hemisphere Ω oriented along the normal N:
Fortunately, the laborious preparation we did at the start of the lesson now makes it fairly simple to convolve the environment map in a special fragment shader whose output is used to build a new cube map. The same piece of code that converted the equirectangular environment map into a cube map can be reused here.
All that remains is to take a different processing shader:
#version 330 core
out vec4 FragColor;
in vec3 localPos;

uniform samplerCube environmentMap;

const float PI = 3.14159265359;

void main()
{
    // the sample direction is also the orientation of the hemisphere
    vec3 normal = normalize(localPos);

    vec3 irradiance = vec3(0.0);

    [...] // the convolution code goes here

    // output the averaged irradiance for this direction
    FragColor = vec4(irradiance, 1.0);
}
Here the environmentMap sampler is the HDR cube environment map obtained earlier from the equirectangular one.
There are many ways to convolve the environment map; in our case, for each texel of the cube map, we will generate several sample vectors inside a hemisphere Ω oriented along the sample direction and average the results. The number of sample vectors will be fixed, and the vectors themselves will be distributed evenly inside the hemisphere. Note that the integrand is a continuous function, and a discrete evaluation of it is only an approximation; the more sample vectors we take, the closer we get to the analytical solution of the integral.
The integral in the reflectance expression is taken over the solid angle dω, a quantity that is not very convenient to work with. Instead of integrating over the solid angle, we rewrite the expression as an integral over the spherical coordinates φ and θ:
The angle φ represents the azimuth in the plane of the hemisphere's base and varies from 0 to 2π, while θ represents the elevation and varies from 0 to ½π. The reflectance expression rewritten in these terms looks as follows:
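Roughly:

$$L_o(p, \phi_o, \theta_o) = k_d\frac{c}{\pi} \int_{\phi=0}^{2\pi} \int_{\theta=0}^{\frac{1}{2}\pi} L_i(p, \phi_i, \theta_i) \cos\theta \, \sin\theta \, d\phi \, d\theta$$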
Solving such an integral requires taking a finite number of samples inside the hemisphere and averaging the results. Given n1 and n2 samples for the two spherical coordinates respectively, we can translate the integral into a Riemann sum:
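In discrete form this becomes, roughly:

$$L_o(p, \phi_o, \theta_o) = k_d\frac{c\pi}{n_1 n_2} \sum_{\phi=0}^{n_1} \sum_{\theta=0}^{n_2} L_i(p, \phi_i, \theta_i) \cos\theta \, \sin\theta$$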
Since both spherical coordinates are stepped discretely, each sample effectively covers an averaged patch of the hemisphere, as can be seen in the figure above. Due to the nature of a spherical surface, the area of this discrete sample patch inevitably shrinks as the elevation grows and we approach the zenith. To compensate for this shrinking area, we added the weighting factor sin θ to the expression.
Putting it all together, the discrete sampling of the hemisphere in spherical coordinates for each fragment looks like this in code:
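A sketch of the convolution body that would go in place of the [...] in the shader above (the helper vectors build a tangent basis around the normal):

vec3 up    = vec3(0.0, 1.0, 0.0);
vec3 right = normalize(cross(up, normal));
up         = normalize(cross(normal, right));

float sampleDelta = 0.025;
float nrSamples = 0.0;
for (float phi = 0.0; phi < 2.0 * PI; phi += sampleDelta)
{
    for (float theta = 0.0; theta < 0.5 * PI; theta += sampleDelta)
    {
        // spherical to cartesian, in tangent space
        vec3 tangentSample = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));
        // tangent space to world space, oriented along the normal
        vec3 sampleVec = tangentSample.x * right + tangentSample.y * up + tangentSample.z * normal;

        irradiance += texture(environmentMap, sampleVec).rgb * cos(theta) * sin(theta);
        nrSamples++;
    }
}
irradiance = PI * irradiance * (1.0 / float(nrSamples));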
The sampleDelta variable determines the size of the discrete step over the surface of the hemisphere; changing this value increases or decreases the accuracy of the result.
Inside the two loops, an ordinary 3D sample vector is built from the spherical coordinates, transformed from tangent space to world space, and then used to sample the HDR cube environment map. The sample results are accumulated in the irradiance variable, which at the end is divided by the number of samples taken to obtain the average irradiance. Note that each texture sample is modulated by two factors: cos(theta), to account for the attenuation of light at grazing angles, and sin(theta), to compensate for the smaller sample area near the zenith.
It remains only to deal with the code that renders and captures the results of convolving the envCubemap environment map. First, create the cube map that will store the irradiance (this needs to be done once, before the main render loop):
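A sketch, mirroring the envCubemap setup above but at a much lower resolution:

unsigned int irradianceMap;
glGenTextures(1, &irradianceMap);
glBindTexture(GL_TEXTURE_CUBE_MAP, irradianceMap);
for (unsigned int i = 0; i < 6; ++i)
{
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F, 32, 32, 0, GL_RGB, GL_FLOAT, nullptr);
}
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);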
Since the irradiance map is obtained by averaging evenly distributed radiance samples of the environment map, it contains almost no high-frequency detail - a fairly low-resolution texture (32x32 here) with linear filtering is quite enough to store it.
Next, re-configure the capture framebuffer for this resolution:
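For example, by resizing the capture renderbuffer created earlier:

glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
glBindRenderbuffer(GL_RENDERBUFFER, captureRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 32, 32);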
The code for capturing the convolution results is similar to the code that converted the environment map from equirectangular to cube map form, only now the convolution shader is used:
irradianceShader.use();
irradianceShader.setInt("environmentMap", 0);
irradianceShader.setMat4("projection", captureProjection);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, envCubemap);

// don't forget to set the viewport to the size of the irradiance map
glViewport(0, 0, 32, 32);
glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
for (unsigned int i = 0; i < 6; ++i)
{
    irradianceShader.setMat4("view", captureViews[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, irradianceMap, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    renderCube();
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
After this stage completes, we have a precomputed irradiance map on our hands that can be used directly in the calculation of indirect diffuse lighting. To check how the convolution went, try substituting the irradiance map for the environment map as the skybox texture:
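For instance, by binding it instead of envCubemap before drawing the skybox:

// temporarily show the irradiance map instead of the environment map
glBindTexture(GL_TEXTURE_CUBE_MAP, irradianceMap);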
If the result looks like a heavily blurred copy of the environment map, the convolution most likely succeeded.
PBR and indirect illumination
The resulting irradiance map is used in the diffuse part of the split reflectance expression and represents the accumulated contribution from all possible directions of indirect lighting. Since the light here comes not from specific sources but from the environment as a whole, we treat the diffuse and specular indirect lighting as ambient lighting, replacing the constant value used previously.
To begin with, don't forget to add the new sampler with the irradiance map:
uniform samplerCube irradianceMap;
Having an irradiance map that stores all the indirect diffuse lighting of the scene, and the surface normal at hand, obtaining the irradiance for a particular fragment is as easy as making a single texture sample:
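A minimal sketch (N is the surface normal, as in the previous lesson's shader):

// replace the old constant ambient term with a sample from the irradiance map
vec3 ambient = texture(irradianceMap, N).rgb;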
However, since the indirect lighting contains data for both the diffuse and the specular component (as we saw in the split version of the reflectance expression), we need to modulate the diffuse part accordingly. Just as in the previous lesson, we use the Fresnel expression to determine how much light the surface reflects, and from that derive how much is refracted, i.e. the diffuse factor:
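A sketch, assuming the fresnelSchlick() helper and the albedo, ao, V and F0 values from the previous lesson's shader:

vec3 kS = fresnelSchlick(max(dot(N, V), 0.0), F0);
vec3 kD = 1.0 - kS;
vec3 irradiance = texture(irradianceMap, N).rgb;
vec3 diffuse    = irradiance * albedo;
vec3 ambient    = (kD * diffuse) * ao;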
Since ambient lighting falls from all directions within the hemisphere oriented along the surface normal N, it is impossible to determine a single halfway vector with which to compute the Fresnel coefficient. To imitate the Fresnel effect under these conditions, we have to compute the coefficient from the angle between the normal and the view vector. Previously, however, we used the halfway vector, derived from the microfacet model and dependent on surface roughness, as a parameter of the Fresnel calculation. Since roughness does not enter the calculation here, the surface's degree of reflection will always end up overestimated. Indirect lighting should, on the whole, behave the same way as direct lighting, i.e. we expect rougher surfaces to reflect less strongly at their edges. But since roughness is not taken into account, the indirect specular Fresnel term looks unrealistic on rough non-metallic surfaces (in the image below the described effect is exaggerated for clarity):
To get around this problem, we can inject roughness into the Fresnel-Schlick expression, a trick described by Sébastien Lagarde:
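A sketch of the roughness-aware variant:

vec3 fresnelSchlickRoughness(float cosTheta, vec3 F0, float roughness)
{
    // clamp the maximum reflectance by (1 - roughness), so rough surfaces reflect less at grazing angles
    return F0 + (max(vec3(1.0 - roughness), F0) - F0) * pow(clamp(1.0 - cosTheta, 0.0, 1.0), 5.0);
}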
As it turns out, applying image-based lighting at render time essentially comes down to a single sample from a cube map; all the real difficulty lies in the preliminary preparation and conversion of the environment map into an irradiance map.
Taking the familiar scene from the lesson on analytic light sources, with its array of spheres of varying metallicity and roughness, and adding diffuse ambient lighting from the surroundings, you get something like this:
It still looks a bit strange, because materials with a high degree of metallicity still need reflections to actually look like, well, metal (metals do not reflect diffuse lighting), and right now the only reflections come from the point analytic light sources. Even so, the spheres already look more embedded in the environment (especially noticeable when switching environment maps), since their surfaces now respond correctly to the ambient lighting of the scene's surroundings.
The complete lesson source code is here. In the next lesson we will finally deal with the second half of the reflectance expression, responsible for indirect specular lighting. After that step you will truly feel the power of the PBR approach to lighting.
Additional materials
Coding Labs: Physically based rendering: an introduction to the PBR model and an explanation of how the irradiance map is built and what it is for.
The Mathematics of Shading: a brief overview from ScratchAPixel of some of the mathematics used in this lesson, in particular polar coordinates and integrals.