Simulating human night vision in a 3D game
Today we will be doing image post-processing in DirectX.
As you know, vision in the dark is provided by the retina's rod cells, whose high light sensitivity comes at the cost of color perception and visual acuity (although there are more rods in the retina, they are spread over a much larger area, so the overall "resolution" is lower).
All these effects are easiest to see by looking up from the computer and going outside at night.
As a result, we get something like the following (view it full-screen!):
Before: dull Polish shooter
After: IGF Finalist and E3 Laureate
Preparation
The first thing to do is decide what effect we want to achieve. I broke the processing down into the following parts:
Loss of color in poorly lit areas
Loss of sharpness in the same areas
Slight noise, again in the same areas
Loss of vision at long distances in medium and low light
Implementation
We will write this for Unity3D Pro, in the form of a post-processing shader.
Before proceeding directly to the shader, we will write a small script that runs the screen buffer through it:
using UnityEngine;

[ExecuteInEditMode]
public class HumanEye : MonoBehaviour
{
    public Shader Shader;
    public float LuminanceThreshold;
    public Texture Noise;
    public float NoiseAmount = 0.5f, NoiseScale = 2;

    private Camera mainCam;
    private Material material;

    private const int PASS_MAIN = 0;

    void Start ()
    {
        mainCam = camera;
        // The shader needs the depth (+ normals) texture, so ask the camera to render it.
        mainCam.depthTextureMode |= DepthTextureMode.DepthNormals;
        material = new Material (Shader);
    }

    void OnRenderImage (RenderTexture source, RenderTexture destination)
    {
        material.SetFloat ("_LuminanceThreshold", LuminanceThreshold);
        material.SetFloat ("_BlurDistance", 0.01f);
        material.SetFloat ("_CamDepth", mainCam.farClipPlane); // far plane, to convert depth to meters
        material.SetTexture ("_NoiseTex", Noise);
        material.SetFloat ("_Noise", NoiseAmount);
        material.SetFloat ("_NoiseScale", NoiseScale);
        material.SetVector ("_Randomness", Random.insideUnitSphere); // new offset every frame
        Graphics.Blit (source, destination, material, PASS_MAIN);
    }
}
Here we simply feed the user-defined parameters to the shader and re-render the screen buffer through it.
The "blurred" value is produced by the blur() function, which samples several pixels around the current one and averages their values:
inline half4 blur (sampler2D tex, float2 uv, float dist)
{
    #define BLUR_SAMPLE_COUNT 16
    // 16 precomputed pseudo-random offsets (most of the table is omitted here).
    const float3 RAND_SAMPLES[BLUR_SAMPLE_COUNT] = {
        float3(0.2196607, 0.9032637, 0.2254677),
        // ... 14 more samples ...
        float3(0.2448421, -0.1610962, 0.1289366),
    };
    half4 result = 0;
    for (int s = 0; s < BLUR_SAMPLE_COUNT; ++s)
        result += tex2D(tex, uv + RAND_SAMPLES[s].xy * dist);
    result /= BLUR_SAMPLE_COUNT;
    return result;
}
The pixel's "darkness" coefficient is determined by its average brightness across the three color channels. The coefficient is cut off at a given brightness limit (LuminanceThreshold): all pixels brighter than this are considered "bright enough" and are left unprocessed.
The dependence of kLum on brightness will look like this:
The kLum values for our scene look like this (white = 1, black = 0):
It is clear here that bright areas (the halos of the lanterns and the lit grass) have kLum equal to zero, so our effect will not apply to them.
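To make this concrete, here is a minimal sketch of how kLum could be computed and applied; the variable names (col, i.uv) are illustrative assumptions, since the original shader listing isn't shown in full:

// Average brightness of the three channels, remapped so that pixels at or
// above _LuminanceThreshold get kLum = 0 (untouched) and black pixels get 1.
half lum  = dot(col.rgb, half3(1, 1, 1) / 3.0);
half kLum = 1 - saturate(lum / _LuminanceThreshold);

// Desaturate and blur proportionally to kLum:
half4 blurred = blur(_MainTex, i.uv, _BlurDistance);
col.rgb = lerp(col.rgb, half3(lum, lum, lum), kLum); // loss of color in the dark
col     = lerp(col, blurred, kLum);                  // loss of sharpness in the dark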
The distance from the screen surface to a pixel, in meters, can be obtained from the depth texture (Z-buffer), which is readily available with deferred rendering.
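Since the script above enabled DepthTextureMode.DepthNormals, Unity provides the packed _CameraDepthNormalsTexture; a minimal sketch of recovering that distance from it, using the standard UnityCG.cginc helper, could look like this:

// Decode linear [0..1] depth (and the view-space normal, unused here)
// from Unity's packed depth+normals texture.
float depth01;
float3 viewNormal;
DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.uv), depth01, viewNormal);

// _CamDepth is the camera's far plane, passed in from the script,
// so scaling by it gives the distance in meters.
float distMeters = depth01 * _CamDepth;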
Here we sample a random value from the _NoiseTex texture (filled with Gaussian noise in Photoshop), using the _Randomness vector supplied by the script, which changes every frame.
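The exact lookup isn't shown in the original listing, but a plausible sketch, with _NoiseScale tiling the texture and _Randomness shifting it each frame, might be:

// Tile the noise texture across the screen and shift it by a per-frame
// random offset so the grain doesn't look frozen in place.
half noise = tex2D(_NoiseTex, i.uv * _NoiseScale + _Randomness.xy).r;

// Apply the noise only where the image is dark (weighted by kLum).
col.rgb += (noise - 0.5) * _Noise * kLum;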