First, create a C# script RayTracingMaster.cs and a compute shader RayTracingShader.compute. Paste the following base code into the C# script:
using UnityEngine;

public class RayTracingMaster : MonoBehaviour
{
    public ComputeShader RayTracingShader;

    private RenderTexture _target;

    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Render(destination);
    }

    private void Render(RenderTexture destination)
    {
        // Make sure we have a current render target
        InitRenderTexture();

        // Set the target and dispatch the compute shader
        RayTracingShader.SetTexture(0, "Result", _target);
        int threadGroupsX = Mathf.CeilToInt(Screen.width / 8.0f);
        int threadGroupsY = Mathf.CeilToInt(Screen.height / 8.0f);
        RayTracingShader.Dispatch(0, threadGroupsX, threadGroupsY, 1);

        // Blit the result texture to the screen
        Graphics.Blit(_target, destination);
    }

    private void InitRenderTexture()
    {
        if (_target == null || _target.width != Screen.width || _target.height != Screen.height)
        {
            // Release render texture if we already have one
            if (_target != null)
                _target.Release();

            // Get a render target for Ray Tracing
            _target = new RenderTexture(Screen.width, Screen.height, 0,
                RenderTextureFormat.ARGBFloat, RenderTextureReadWrite.Linear);
            _target.enableRandomWrite = true;
            _target.Create();
        }
    }
}
The OnRenderImage function is called automatically by Unity after the camera has finished rendering. To render, we first create a render target of the appropriate dimensions and tell the compute shader about it. The 0 is the index of the compute shader's kernel function; we have only one.
The default compute shader template uses a thread group size of [numthreads(8,8,1)], so we will stick to it and spawn one thread group per 8 × 8 pixels. For example, a 1920 × 1080 screen dispatches 240 × 135 thread groups. Finally, we write the result to the screen using Graphics.Blit.
To test it, add the RayTracingMaster component to the scene's camera (this is important for OnRenderImage to be called), assign your compute shader, and enter play mode. You should see the output of Unity's compute shader template in the form of a beautiful triangle fractal.
Next, the shader needs to know the camera's matrices. Add the following code to the RayTracingMaster.cs script:
private Camera _camera;

private void Awake()
{
    _camera = GetComponent<Camera>();
}

private void SetShaderParameters()
{
    RayTracingShader.SetMatrix("_CameraToWorld", _camera.cameraToWorldMatrix);
    RayTracingShader.SetMatrix("_CameraInverseProjection", _camera.projectionMatrix.inverse);
}
Also call SetShaderParameters from OnRenderImage so the matrices are uploaded before every render.
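Putting it together, the modified OnRenderImage might look like this (a minimal sketch; the only requirement is that the parameters are set before the dispatch):

private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // Upload the current camera matrices before rendering
    SetShaderParameters();
    Render(destination);
}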
In the shader, we define the Ray struct and a function to construct rays. Note that in HLSL, unlike in C#, a function or variable must be declared before it is used. For the center of each screen pixel, we calculate the origin and direction of the ray, and output the latter as a color. Here is what the whole shader looks like:
#pragma kernel CSMain

RWTexture2D<float4> Result;
float4x4 _CameraToWorld;
float4x4 _CameraInverseProjection;

struct Ray
{
    float3 origin;
    float3 direction;
};

Ray CreateRay(float3 origin, float3 direction)
{
    Ray ray;
    ray.origin = origin;
    ray.direction = direction;
    return ray;
}

Ray CreateCameraRay(float2 uv)
{
    // Transform the camera origin to world space
    float3 origin = mul(_CameraToWorld, float4(0.0f, 0.0f, 0.0f, 1.0f)).xyz;

    // Invert the perspective projection of the view-space position
    float3 direction = mul(_CameraInverseProjection, float4(uv, 0.0f, 1.0f)).xyz;

    // Transform the direction from camera to world space and normalize
    direction = mul(_CameraToWorld, float4(direction, 0.0f)).xyz;
    direction = normalize(direction);

    return CreateRay(origin, direction);
}

[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    // Get the dimensions of the RenderTexture
    uint width, height;
    Result.GetDimensions(width, height);

    // Transform pixel to [-1,1] range
    float2 uv = float2((id.xy + float2(0.5f, 0.5f)) / float2(width, height) * 2.0f - 1.0f);

    // Get a ray for the UVs
    Ray ray = CreateCameraRay(uv);

    // Write some colors
    Result[id.xy] = float4(ray.direction * 0.5f + 0.5f, 1.0f);
}
Add a public Texture SkyboxTexture to the script, assign a texture in the inspector, and set it on the shader by adding this line to the SetShaderParameters function:
RayTracingShader.SetTexture(0, "_SkyboxTexture", SkyboxTexture);
In the shader, define the texture, a corresponding sampler, and the constant π:

Texture2D<float4> _SkyboxTexture;
SamplerState sampler_SkyboxTexture;
static const float PI = 3.14159265f;
To sample the skybox, we transform our cartesian direction vector into spherical coordinates and map them to texture coordinates. Replace the last part of CSMain with the following:
// Sample the skybox and write it
float theta = acos(ray.direction.y) / -PI;
float phi = atan2(ray.direction.x, -ray.direction.z) / -PI * 0.5f;
Result[id.xy] = _SkyboxTexture.SampleLevel(sampler_SkyboxTexture, float2(phi, theta), 0);
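Written out as formulas, the mapping above is (for a unit direction vector d):

\[ \theta = \frac{\arccos(d_y)}{-\pi}, \qquad \phi = \frac{\operatorname{atan2}(d_x,\, -d_z)}{-2\pi} \]

The negative coordinate values rely on the texture's repeat wrap mode.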
Next, we define a struct RayHit, which stores information about an intersection, and a function that creates an "empty" hit at infinite distance:
struct RayHit
{
    float3 position;
    float distance;
    float3 normal;
};

RayHit CreateRayHit()
{
    RayHit hit;
    hit.position = float3(0.0f, 0.0f, 0.0f);
    hit.distance = 1.#INF;
    hit.normal = float3(0.0f, 0.0f, 0.0f);
    return hit;
}
Let's start with an infinite ground plane at y = 0. Setting the y component of the ray equation o + t d to zero yields t = -o.y / d.y. We pass RayHit bestHit with the inout qualifier to be able to modify the original struct, and we only record the hit if it is closer than any previous one. This is what the shader code looks like:
void IntersectGroundPlane(Ray ray, inout RayHit bestHit)
{
    // Calculate distance along the ray where the ground plane is intersected
    float t = -ray.origin.y / ray.direction.y;
    if (t > 0 && t < bestHit.distance)
    {
        bestHit.distance = t;
        bestHit.position = ray.origin + t * ray.direction;
        bestHit.normal = float3(0.0f, 1.0f, 0.0f);
    }
}
Now add a basic Trace function (we will expand it later):
RayHit Trace(Ray ray)
{
    RayHit bestHit = CreateRayHit();
    IntersectGroundPlane(ray, bestHit);
    return bestHit;
}
We will also create a Shade function. We again pass the Ray with the inout qualifier; we will modify it later when we talk about reflections. For debugging purposes, we return the normal when geometry is hit, and fall back to the skybox sampling code otherwise:
float3 Shade(inout Ray ray, RayHit hit)
{
    if (hit.distance < 1.#INF)
    {
        // Return the normal
        return hit.normal * 0.5f + 0.5f;
    }
    else
    {
        // Sample the skybox and write it
        float theta = acos(ray.direction.y) / -PI;
        float phi = atan2(ray.direction.x, -ray.direction.z) / -PI * 0.5f;
        return _SkyboxTexture.SampleLevel(sampler_SkyboxTexture, float2(phi, theta), 0).xyz;
    }
}
We will use both functions in CSMain. Delete the skybox sampling code there, if you have not already done so, and add the following lines to trace the ray and shade the hit:
// Trace and shade
RayHit hit = Trace(ray);
float3 result = Shade(ray, hit);
Result[id.xy] = float4(result, 1);
Intersecting a sphere is slightly more involved. Solving the ray–sphere equation yields two candidate distances: the entry point p1 - p2 and the exit point p1 + p2. We check the entry point first, and use the exit point only if the entry point is not valid. In our case, a sphere is defined as a float4 consisting of the position (xyz) and the radius (w). Here is what the code looks like:
void IntersectSphere(Ray ray, inout RayHit bestHit, float4 sphere)
{
    // Calculate distance along the ray where the sphere is intersected
    float3 d = ray.origin - sphere.xyz;
    float p1 = -dot(ray.direction, d);
    float p2sqr = p1 * p1 - dot(d, d) + sphere.w * sphere.w;
    if (p2sqr < 0)
        return;
    float p2 = sqrt(p2sqr);
    float t = p1 - p2 > 0 ? p1 - p2 : p1 + p2;
    if (t > 0 && t < bestHit.distance)
    {
        bestHit.distance = t;
        bestHit.position = ray.origin + t * ray.direction;
        bestHit.normal = normalize(bestHit.position - sphere.xyz);
    }
}
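For reference, p1 and p2 come from solving the ray–sphere quadratic. A quick derivation, assuming a normalized ray direction d, ray origin o, sphere center c and radius r:

\[ |o + t\,d - c|^2 = r^2 \quad\Rightarrow\quad t^2 + 2t\,d\cdot(o - c) + |o - c|^2 - r^2 = 0 \]
\[ t = p_1 \pm p_2, \qquad p_1 = -d\cdot(o - c), \qquad p_2 = \sqrt{p_1^2 - |o - c|^2 + r^2} \]

If the expression under the root is negative, the ray misses the sphere entirely, which is exactly the early return in the code.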
To add a sphere to the scene, simply call this function from Trace, like this:
// Add a floating unit sphere
IntersectSphere(ray, bestHit, float4(0, 3.0f, 0, 1.0f));
To average multiple samples over time, we will use a small image effect shader. Create a new image effect shader, call it AddShader, and check that its first line reads Shader "Hidden/AddShader". After Cull Off ZWrite Off ZTest Always, add Blend SrcAlpha OneMinusSrcAlpha to enable alpha blending. Then replace the frag function with the following lines:
float _Sample;

float4 frag (v2f i) : SV_Target
{
    return float4(tex2D(_MainTex, i.uv).rgb, 1.0f / (_Sample + 1.0f));
}
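A brief sanity check of why this opacity yields a correct average: with _Sample = n (the number of frames already accumulated), alpha blending the new frame x over the running average \(\bar{x}_n\) computes

\[ \bar{x}_{n+1} = \left(1 - \frac{1}{n+1}\right)\bar{x}_n + \frac{1}{n+1}\,x, \]

which is exactly the incremental mean of all n + 1 samples.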
In the script, define a counter for the current sample and a material that uses our new shader:

private uint _currentSample = 0;
private Material _addMaterial;
In InitRenderTexture we need to reset _currentSample = 0, and we add an Update function that detects changes to the camera's transform and restarts sampling:
private void Update()
{
    if (transform.hasChanged)
    {
        _currentSample = 0;
        transform.hasChanged = false;
    }
}
Next, use our accumulation material when blitting in the Render function:
// Blit the result texture to the screen
if (_addMaterial == null)
    _addMaterial = new Material(Shader.Find("Hidden/AddShader"));
_addMaterial.SetFloat("_Sample", _currentSample);
Graphics.Blit(_target, destination, _addMaterial);
_currentSample++;
In the shader, define a float2 _PixelOffset and use it in CSMain instead of the hard-coded offset float2(0.5f, 0.5f). Back in the script, create a random offset by adding the following line to SetShaderParameters:
RayTracingShader.SetVector("_PixelOffset", new Vector2(Random.value, Random.value));
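On the shader side, the change in CSMain then amounts to one line plus the declaration; a sketch:

// Declared alongside the other shader parameters
float2 _PixelOffset;

// In CSMain, replace the hard-coded pixel center:
float2 uv = float2((id.xy + _PixelOffset) / float2(width, height) * 2.0f - 1.0f);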
To trace reflections, add a float3 energy variable to the Ray struct and initialize it in the CreateRay function as ray.energy = float3(1.0f, 1.0f, 1.0f). Initially the ray carries full energy in all color channels, and this energy diminishes with each reflection.
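A minimal sketch of the extended Ray struct and CreateRay after this change:

struct Ray
{
    float3 origin;
    float3 direction;
    float3 energy;
};

Ray CreateRay(float3 origin, float3 direction)
{
    Ray ray;
    ray.origin = origin;
    ray.direction = direction;
    // Start with full energy in every color channel
    ray.energy = float3(1.0f, 1.0f, 1.0f);
    return ray;
}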
We will trace each ray a maximum of 8 times (the original ray plus 7 bounces) and accumulate the results of the Shade function calls, each weighted by the ray's current energy. For example, imagine a ray that is reflected once and loses part of its energy; when it then travels on and hits the sky, only the remaining fraction of the sky's radiance is transferred to the pixel. Modify CSMain as follows, replacing the previous Trace and Shade calls:
// Trace and shade
float3 result = float3(0, 0, 0);
for (int i = 0; i < 8; i++)
{
    RayHit hit = Trace(ray);
    result += ray.energy * Shade(ray, hit);

    if (!any(ray.energy))
        break;
}
The Shade function is now also responsible for updating the energy and generating the reflected ray, which is why inout becomes important here. To update the energy, we perform an element-wise multiplication with the specular color of the surface. For example, gold has a specular reflectivity of roughly float3(1.0f, 0.78f, 0.34f): it reflects 100% of red light, 78% of green, but only 34% of blue, giving reflections their characteristic golden tint. Be careful that none of these values exceeds 1, since that would create energy out of nowhere. Also, reflectivity is often lower than you might think; see, for example, some values on slide 64 of Physics and Math of Shading by Naty Hoffman.
Here is what the new Shade function looks like:
float3 Shade(inout Ray ray, RayHit hit)
{
    if (hit.distance < 1.#INF)
    {
        float3 specular = float3(0.6f, 0.6f, 0.6f);

        // Reflect the ray and multiply energy with specular reflection
        ray.origin = hit.position + hit.normal * 0.001f;
        ray.direction = reflect(ray.direction, hit.normal);
        ray.energy *= specular;

        // Return nothing
        return float3(0.0f, 0.0f, 0.0f);
    }
    else
    {
        // Erase the ray's energy - the sky doesn't reflect anything
        ray.energy = 0.0f;

        // Sample the skybox and write it
        float theta = acos(ray.direction.y) / -PI;
        float phi = atan2(ray.direction.x, -ray.direction.z) / -PI * 0.5f;
        return _SkyboxTexture.SampleLevel(sampler_SkyboxTexture, float2(phi, theta), 0).xyz;
    }
}
Add several spheres to the Trace function and you will see them reflecting each other and the skybox; one possible arrangement is sketched below.
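As a purely illustrative example (positions, radii, and count are arbitrary), a ring of spheres around the origin could be added to Trace like this:

// Hypothetical example: a ring of six unit spheres around the origin
for (int i = 0; i < 6; i++)
{
    float angle = 2.0f * PI * i / 6.0f;
    IntersectSphere(ray, bestHit, float4(5.0f * cos(angle), 1.0f, 5.0f * sin(angle), 1.0f));
}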
For diffuse shading we need a light. Add a public Light DirectionalLight to RayTracingMaster and assign the scene's directional light. You may also want to detect changes to the light's transform in the Update function, as we did with the camera transform (see the sketch after the next code block). Now add the following lines to the SetShaderParameters function:
Vector3 l = DirectionalLight.transform.forward;
RayTracingShader.SetVector("_DirectionalLight", new Vector4(l.x, l.y, l.z, DirectionalLight.intensity));
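If you want light movement to restart the accumulation as well, the Update function from earlier might be extended like this (a sketch):

private void Update()
{
    if (transform.hasChanged)
    {
        _currentSample = 0;
        transform.hasChanged = false;
    }

    // Restart sampling when the directional light moves or rotates
    if (DirectionalLight.transform.hasChanged)
    {
        _currentSample = 0;
        DirectionalLight.transform.hasChanged = false;
    }
}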
In the shader, define a float4 _DirectionalLight. In the Shade function, define the albedo color immediately after the specular color:
float3 albedo = float3(0.8f, 0.8f, 0.8f);
Then replace the black return value with a simple diffuse shading:

// Return a diffuse-shaded color
return saturate(dot(hit.normal, _DirectionalLight.xyz) * -1) * _DirectionalLight.w * albedo;
To cast shadows, we trace a shadow ray from the hit point toward the light source before returning the diffuse color; if it hits anything, the point receives no light. Add the following lines right above the diffuse return:

// Shadow test ray
bool shadow = false;
Ray shadowRay = CreateRay(hit.position + hit.normal * 0.001f, -1 * _DirectionalLight.xyz);
RayHit shadowHit = Trace(shadowRay);
if (shadowHit.distance != 1.#INF)
{
    return float3(0.0f, 0.0f, 0.0f);
}
To give each object its own appearance, we need material properties in the shader's RayHit structure. Instead of globally defining material properties in the Shade function, we will define them per object and store them in RayHit. Add float3 albedo and float3 specular to the struct and initialize them with float3(0.0f, 0.0f, 0.0f) in CreateRayHit. Also, change the Shade function so that it uses these values from hit instead of the hard-coded ones.
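After these additions, the struct would look as follows (a sketch; CreateRayHit initializes the two new fields just like the existing ones):

struct RayHit
{
    float3 position;
    float distance;
    float3 normal;
    float3 albedo;
    float3 specular;
};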
To define the spheres' materials on the CPU, we declare a common struct Sphere both in the shader and in the C# script. On the shader side, it looks like this:
struct Sphere
{
    float3 position;
    float radius;
    float3 albedo;
    float3 specular;
};
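The C# counterpart is not listed in this section, but it must mirror the HLSL memory layout field for field so the compute buffer bytes line up; a sketch of what it would look like (this is the struct that SetUpScene below fills in):

public struct Sphere
{
    public Vector3 position;
    public float radius;
    public Vector3 albedo;
    public Vector3 specular;
}

With 3 + 1 + 3 + 3 = 10 floats, this gives the 40-byte stride used later when creating the ComputeBuffer.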
Next, make the IntersectSphere function work with our struct instead of a float4. This is easy to do:
void IntersectSphere(Ray ray, inout RayHit bestHit, Sphere sphere)
{
    // Calculate distance along the ray where the sphere is intersected
    float3 d = ray.origin - sphere.position;
    float p1 = -dot(ray.direction, d);
    float p2sqr = p1 * p1 - dot(d, d) + sphere.radius * sphere.radius;
    if (p2sqr < 0)
        return;
    float p2 = sqrt(p2sqr);
    float t = p1 - p2 > 0 ? p1 - p2 : p1 + p2;
    if (t > 0 && t < bestHit.distance)
    {
        bestHit.distance = t;
        bestHit.position = ray.origin + t * ray.direction;
        bestHit.normal = normalize(bestHit.position - sphere.position);
        bestHit.albedo = sphere.albedo;
        bestHit.specular = sphere.specular;
    }
}
You can also set bestHit.albedo and bestHit.specular in the IntersectGroundPlane function to customize the ground's material.
Next, define a StructuredBuffer<Sphere> _Spheres in the shader. This is where the CPU will store all the spheres that make up the scene. Remove all hard-coded spheres from the Trace function and add the following lines:
// Trace spheres
uint numSpheres, stride;
_Spheres.GetDimensions(numSpheres, stride);
for (uint i = 0; i < numSpheres; i++)
    IntersectSphere(ray, bestHit, _Spheres[i]);
On the C# side, add some public parameters to control scene generation, and a compute buffer:

public Vector2 SphereRadius = new Vector2(3.0f, 8.0f);
public uint SpheresMax = 100;
public float SpherePlacementRadius = 100.0f;
private ComputeBuffer _sphereBuffer;
We set up the scene in OnEnable and release the buffer in OnDisable, so a new random scene is generated each time the component is enabled. The SetUpScene function tries to place spheres within a certain radius and rejects any that would intersect already-placed ones. Half of the spheres are metallic (black albedo, colored specular), the other half non-metallic (colored albedo, 4% specular). Note that List<Sphere> requires using System.Collections.Generic; at the top of the script:
private void OnEnable()
{
    _currentSample = 0;
    SetUpScene();
}

private void OnDisable()
{
    if (_sphereBuffer != null)
        _sphereBuffer.Release();
}

private void SetUpScene()
{
    List<Sphere> spheres = new List<Sphere>();

    // Add a number of random spheres
    for (int i = 0; i < SpheresMax; i++)
    {
        Sphere sphere = new Sphere();

        // Radius and position
        sphere.radius = SphereRadius.x + Random.value * (SphereRadius.y - SphereRadius.x);
        Vector2 randomPos = Random.insideUnitCircle * SpherePlacementRadius;
        sphere.position = new Vector3(randomPos.x, sphere.radius, randomPos.y);

        // Reject spheres that are intersecting others
        foreach (Sphere other in spheres)
        {
            float minDist = sphere.radius + other.radius;
            if (Vector3.SqrMagnitude(sphere.position - other.position) < minDist * minDist)
                goto SkipSphere;
        }

        // Albedo and specular color
        Color color = Random.ColorHSV();
        bool metal = Random.value < 0.5f;
        sphere.albedo = metal ? Vector3.zero : new Vector3(color.r, color.g, color.b);
        sphere.specular = metal ? new Vector3(color.r, color.g, color.b) : Vector3.one * 0.04f;

        // Add the sphere to the list
        spheres.Add(sphere);

    SkipSphere:
        continue;
    }

    // Assign to compute buffer
    _sphereBuffer = new ComputeBuffer(spheres.Count, 40);
    _sphereBuffer.SetData(spheres);
}
The 40 in new ComputeBuffer(spheres.Count, 40) is the stride of the buffer, i.e. the size of one sphere in memory in bytes. To calculate it, count the floats in the Sphere struct (3 + 1 + 3 + 3 = 10) and multiply by the byte size of a float (4 bytes). Finally, set the buffer on the shader in the SetShaderParameters function:
RayTracingShader.SetBuffer(0, "_Spheres", _sphereBuffer);
Source: https://habr.com/ru/post/355018/