
How virtual reality came to a Unity project

"Fashion trends bring new problems to life make change" Probably it was from this thought that the decision was made to connect to the project on Unity, the virtual reality helmet, the well-known Oculus Rift DK2. Contrary to the harsh probing of the financial bottom by the ruble, we managed to order the Oculus Rift with delivery to St. Petersburg at an adequate price. Quickly, in less than two weeks, the order arrived at the walls of our office.







Overall impression



The box contained, as expected, the helmet itself, a set of the necessary cables, two sets of lenses, and a camera for tracking the helmet's position in space. After unpacking and connecting it to the test computer, the first oddity immediately revealed itself: the helmet refused to work in Direct Display mode, while feeling great in second-monitor mode. Moreover, this quirk was observed only on the test computer. As a remedy, a number of measures, some sensible and some less so, were tried: reinstalling drivers, installing missing Microsoft Visual C++ Redistributables, and other "necessary" applications and libraries. Even after reinstalling Windows, the helmet still worked only in extended display mode. Then a wise colleague installed every Windows update available at the time on the test computer, for which many thanks to him. One update, the most necessary of the more than a thousand installed, solved the problem, and the helmet started working in Direct Mode.



At last it was possible to get to the tasty part: testing the games. The first impression was "WOW". The brain actively insisted that everything was real and could even be touched. Words cannot describe it; it is better to try it yourself.


Project Integration



Let's skip the lyricism; it's time to get down to serious business: integrating the helmet into a project on the Unity engine.

The first step is to download the latest version of the official Oculus Unity 4 Integration package. I really want to thank the developers of the package: the player prefab is done perfectly, and with a few clicks you can immerse yourself in the virtual reality of your own project. But the stereo image and the tracking of head position and rotation alone are not enough for a full-fledged project; a few more things need to be done.



Getting down to the implementation, I took the OVRCameraRig prefab from the official Unity package as the starting point.



User Interface Display


After experiments and reworking of the entire interface in use (and there is plenty of it), the most suitable approach was chosen: render the image from the interface camera into a texture, then display that texture in front of the player. A new class was added, responsible for integrating the helmet into the project. Its first lines of code find the desired camera by the interface layer mask and grab the image from it.



[SerializeField] private string guiLayerName;
[SerializeField] private string guiLayerPlaneName;
[SerializeField] private Color backgroundColor = new Color(0, 0, 0, 0.5f);
[SerializeField] private GameObject guiPlanePrefab = null;

private RenderTexture _guiRenderTexture = null;
private RenderTexture guiRenderTexture
{
    get
    {
        // Lazily create a render texture matching the screen size.
        if (null == _guiRenderTexture)
            _guiRenderTexture = new RenderTexture(Screen.width, Screen.height, 0);
        return _guiRenderTexture;
    }
}

private Transform centerEyeAnchor = null;
private Camera guiCamera = null;
private int guiLayer = 0;
private int guiLayerPlane = 0;
private GameObject guiPlane = null;

private void Start()
{
    guiLayer = LayerMask.NameToLayer(guiLayerName);
    guiLayerPlane = LayerMask.NameToLayer(guiLayerPlaneName);
    // Find the NGUI camera responsible for the interface layer.
    guiCamera = NGUITools.FindCameraForLayer(guiLayer);
    centerEyeAnchor = GetComponent<OVRCameraRig>().centerEyeAnchor;

    if (null != guiCamera)
    {
        guiRootPanel = guiCamera.GetComponentInParent<UIPanel>();
        // Redirect the interface camera's output into the render texture.
        guiCamera.targetTexture = guiRenderTexture;

        if (null != guiPlanePrefab)
        {
            guiPlane = Instantiate(guiPlanePrefab) as GameObject;
            guiPlane.layer = guiLayerPlane;
            guiPlane.renderer.material.mainTexture = guiRenderTexture;

            // Remember the prefab's transform values and restore them
            // as local values after reparenting the plane to the rig.
            Vector3 ls = guiPlane.transform.lossyScale;
            Vector3 lp = guiPlane.transform.position;
            Quaternion lr = guiPlane.transform.rotation;
            guiPlane.transform.parent = transform;
            guiPlane.transform.localScale = ls;
            guiPlane.transform.localPosition = lp;
            guiPlane.transform.localRotation = lr;
        }
    }
    else
    {
        throw new UnityException(string.Format("Camera for layer {0} not found", guiLayer));
    }
}

private void OnGUI()
{
    // Clear the render texture to the background color before the GUI camera draws.
    RenderTexture previousActive = RenderTexture.active;
    RenderTexture.active = guiRenderTexture;
    GL.Clear(false, true, backgroundColor);
    RenderTexture.active = previousActive;
    guiCamera.Render();
}


Initially the interface picture was displayed on an ordinary flat rectangular plane, but in the helmet this caused discomfort. Various surface shapes were tried, and the most pleasing to the eye turned out to be a curved plane, similar to modern curved TVs. The plane with the necessary values was moved into a separate prefab, although you can skip this step and omit the plane-positioning part of the code. I recommend choosing the plane's position in front of the player so that the user cannot lean in close enough to see what the plane looks like from behind, yet not so far away that leaning in to read fine text becomes impossible. As a result, on startup the picture looked roughly like the screenshot below.
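
Such a curved plane can also be generated procedurally instead of being modeled in a prefab. Below is a minimal sketch of the idea, not the code from the project: it bends a strip of quads along a cylinder arc facing the player; the segment count, radius, arc angle, and height are illustrative values.

using UnityEngine;

// Hypothetical helper: builds a cylindrically curved plane for the GUI texture.
// Attach to a GameObject that already has a MeshFilter and MeshRenderer.
public class CurvedGuiPlane : MonoBehaviour
{
    [SerializeField] private int segments = 32;     // horizontal subdivisions
    [SerializeField] private float radius = 2f;     // distance from the player
    [SerializeField] private float arcAngle = 60f;  // total horizontal arc, degrees
    [SerializeField] private float height = 1.5f;   // plane height in world units

    private void Awake()
    {
        Mesh mesh = new Mesh();
        Vector3[] vertices = new Vector3[(segments + 1) * 2];
        Vector2[] uv = new Vector2[vertices.Length];
        int[] triangles = new int[segments * 6];

        for (int i = 0; i <= segments; i++)
        {
            float t = (float)i / segments;
            float angle = Mathf.Deg2Rad * Mathf.Lerp(-arcAngle * 0.5f, arcAngle * 0.5f, t);
            // Points on a vertical cylinder in front of the player (local +Z forward).
            Vector3 p = new Vector3(Mathf.Sin(angle) * radius, 0, Mathf.Cos(angle) * radius);
            vertices[i * 2] = p + Vector3.down * height * 0.5f;
            vertices[i * 2 + 1] = p + Vector3.up * height * 0.5f;
            uv[i * 2] = new Vector2(t, 0);
            uv[i * 2 + 1] = new Vector2(t, 1);
        }

        for (int i = 0; i < segments; i++)
        {
            int v = i * 2, tri = i * 6;
            // Two triangles per segment, wound so the surface faces the player at the origin.
            triangles[tri] = v;     triangles[tri + 1] = v + 1; triangles[tri + 2] = v + 2;
            triangles[tri + 3] = v + 2; triangles[tri + 4] = v + 1; triangles[tri + 5] = v + 3;
        }

        mesh.vertices = vertices;
        mesh.uv = uv;
        mesh.triangles = triangles;
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}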







To draw the menu plane on top of all environment objects, you need to google, that is, to write, a shader that always renders itself above everything else. The Unity shader manual, in the article "ShaderLab syntax: Culling & Depth Testing", explains that adding the ZTest Always parameter to a shader pass brings happiness: the shader will draw as intended. Taking the first unlit shader that came with NGUI, I copied it, gave it a new name, and added the ZTest parameter.



Pass
{
    Cull Off
    Lighting On
    ZWrite Off
    ZTest Always
    Fog { Mode Off }
    Offset 0, -1
    Blend SrcAlpha OneMinusSrcAlpha

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"
    ...
}


The editor view shows that even though the plane intersects the wall of the building, its image is still drawn on top.







And this is how it looks in the helmet:







I liked the result, both in terms of resource cost and in appearance. Setting virtual reality aside, one could pour another cup of coffee and chat with colleagues about worldly matters.



Cursor


"Why any of this is needed is unclear. I would rather sit drinking coffee and looking at a flat monitor; I don't really need this virtual reality. Although, who am I kidding, of course I need it."

I never thought it would be difficult to display a cursor, but for the helmet it turned out to be a very interesting task. The first and most obvious approach: draw the cursor in the normal interface and display it on the plane in front of the player.

A UIPanel with a UISprite or UITexture, and the cursor is ready. Beautiful, elegant, simple. But in the helmet everything is completely different. Move the cursor with the mouse: it moves. Point at an interface element: it responds. Great, I can hardly believe it. But hover the cursor over an empty menu area and look into the distance: the brain tries to focus on an object in space, and the cursor, like a fly on the glass, prevents it; either the cursor doubles, or the space beyond it does. Of course, you can make the menu disappear and appear at the user's request. This is done by adding a couple of lines of code and several additional properties.



[SerializeField] private KeyCode showMenuKey = KeyCode.None;
[SerializeField] private float displayTime = 0.61f;

private float alphaCalculate = 1;
private bool menuIsShow = true; // menu visibility flag (declaration added for completeness)
private UIPanel guiRootPanel = null;

private void Start()
{
    …
    guiRootPanel = guiCamera.GetComponentInParent<UIPanel>();
    …
}

private void OnGUI()
{
    RenderTexture previousActive = RenderTexture.active;
    RenderTexture.active = guiRenderTexture;
    // Fade the NGUI root panel and the background together.
    if (null != guiRootPanel)
        guiRootPanel.alpha = alphaCalculate;
    Color color = backgroundColor * new Color(1, 1, 1, alphaCalculate);
    GL.Clear(false, true, color);
    RenderTexture.active = previousActive;
    guiCamera.Render();
}

private void LateUpdate()
{
    if (Input.GetKeyDown(showMenuKey))
    {
        menuIsShow = !menuIsShow;
        StopCoroutine("LerpAlpha");
        StartCoroutine("LerpAlpha", (menuIsShow ? 1f : 0f));
    }
}

private IEnumerator LerpAlpha(float endAlpha)
{
    float t = 0;
    // Duration is proportional to the remaining alpha distance.
    float time = Mathf.Abs(endAlpha - alphaCalculate) / displayTime;
    while (t < time)
    {
        t += Time.deltaTime;
        alphaCalculate = Mathf.Lerp(alphaCalculate, endAlpha, t / time);
        yield return null;
    }
}


Perhaps this would be enough for projects where the menu is not used in the game world. But the current project implied a different use of the menu, and the search for solutions continued.

The idea of a "laser pointer" was tested: attach a new object with a light source to the player and set the parameters shown below.







And there it is, the cat's dream: a glowing point moving through the virtual world. In the helmet the eyes cannot stop rejoicing: the point lands exactly on the object it is aimed at, with no discomfort whatsoever. Having played enough with the "laser pointer" in the virtual world, I returned to the real one and realized the solution was not entirely suitable. Replacing the light source with a projector turns it into a beautiful cursor: it can be not only a glowing point but any available picture.
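
The actual parameters were configured in the inspector (see the screenshot below), but roughly the same setup can be sketched in code. This is a hypothetical illustration rather than the project's values; the cookie texture and material are assumed assets.

using UnityEngine;

// Hypothetical sketch: building a projector-based 3D cursor at runtime.
// cursorCookie and projectorMaterial are assumed assets, not from the article.
public class ProjectorCursor : MonoBehaviour
{
    [SerializeField] private Texture2D cursorCookie = null;      // the cursor picture
    [SerializeField] private Material projectorMaterial = null;  // uses the Projector/Additive shader

    private void Start()
    {
        GameObject go = new GameObject("Cursor Projector");
        go.transform.parent = transform; // e.g. the centerEyeAnchor
        go.transform.localPosition = Vector3.zero;
        go.transform.localRotation = Quaternion.identity;

        Projector projector = go.AddComponent<Projector>();
        projector.material = projectorMaterial;
        projector.material.SetTexture("_ShadowTex", cursorCookie);
        projector.nearClipPlane = 0.1f; // illustrative values
        projector.farClipPlane = 1000f;
        projector.fieldOfView = 1f;     // a narrow cone gives a small, crisp cursor
        projector.orthographic = false;
    }
}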







In addition, the projector's material needs the following shader:



 Shader"Projector/Additive"{ Properties{ _ShadowTex("Cookie",2D)=""{TexGenObjectLinear} } Subshader{ Pass{ CullBack ZWriteOff Color[_Color] ColorMaskRGB BlendSrcAlphaOneMinusSrcAlpha Offset0,0 SetTexture[_ShadowTex]{ constantColor(1,1,1,1) combinetexture*constant,texture Matrix[_Projector] } } } } 


The projector has one small limitation: it is invisible against the skybox. To work around this peculiarity, I did the following (a sketch of the setup follows the list):

  1. added a new child object to the projector;
  2. added MeshRenderer and MeshFilter components to this object;
  3. chose an unlit material with the cursor picture;
  4. positioned and sized it according to the projector's length and cone size.
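
A minimal sketch of that setup, assuming the script hangs on the projector object; the class name and sizing math are illustrative, since the original was configured in the editor.

using UnityEngine;

// Hypothetical sketch: a quad at the far end of the projector cone keeps the
// cursor visible against the skybox, where the projector itself draws nothing.
public class CursorSkyboxQuad : MonoBehaviour
{
    [SerializeField] private Material cursorMaterial = null; // unlit material with the cursor picture

    private void Start()
    {
        Projector projector = GetComponent<Projector>();

        GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        Destroy(quad.collider); // the quad must not block raycasts
        quad.renderer.material = cursorMaterial;
        quad.transform.parent = transform;

        // Place the quad just inside the far plane and scale it to match
        // the size of the projected cone at that distance.
        float distance = projector.farClipPlane * 0.99f;
        float size = 2f * distance * Mathf.Tan(projector.fieldOfView * 0.5f * Mathf.Deg2Rad);
        quad.transform.localPosition = Vector3.forward * distance;
        quad.transform.localRotation = Quaternion.identity;
        quad.transform.localScale = new Vector3(size, size, 1f);
    }
}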


The result looked like this:











So now there is a cursor that is visible in space but not in the menu, and, vice versa, one that lives in the menu but causes discomfort when displayed in space.



There was an idea to write a shader for the cameras that would draw the cursor picture for each eye with a shift toward the nose depending on ZDepth. But the idea ended with the admission that my knowledge of shader writing is limited, and I cannot imagine how to do this. Maybe someone in the comments will suggest how to implement it.



Leaving everything as it was, I took up determining the cursor's position. There is the cursor position from the mouse pointer, and there are the rotation and position values of the head. How best to control the cursor was completely unclear. "Truth is born in argument." After a brief discussion with colleagues, opinions were divided: one half said the cursor is better moved with the mouse, the other half said with the head. "Better to try once and then discuss."



The first option turned out to be easier for the cursor in the menu:

the Input.mousePosition value is converted into interface coordinates, and the cursor moves according to them.

The second option suits the cursor in space well:

make the projector a child of the gaze anchor, and the cursor is then controlled by the head.
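
As a rough sketch of the first option, assuming the orthographic guiCamera found earlier and a 2D cursor object like the one created in the code below:

// Convert the mouse position into the interface camera's world space
// and move the 2D cursor widget there, keeping its depth unchanged.
Vector3 world = guiCamera.ScreenToWorldPoint(Input.mousePosition);
cursor2d.transform.position = new Vector3(world.x, world.y, cursor2d.transform.position.z);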

The result: two cursors can be used, one controlled by the mouse and used in the menu, the other controlled by the head and used in space. Not bad, it seems. I saved each cursor into a separate prefab for dynamic creation and later use, and added the following code to the helmet-integration class.



[SerializeField] private GameObject cursor3dPrefab = null;
[SerializeField] private GameObject cursor2dPrefab = null;

private GameObject cursor3d = null;
private GameObject cursor2d = null;

private void Start()
{
    ...
    if (null != cursor3dPrefab)
    {
        // The 3D cursor (the projector) follows the center eye anchor.
        cursor3d = Instantiate(cursor3dPrefab) as GameObject;
        cursor3d.transform.parent = centerEyeAnchor;
        cursor3d.transform.localPosition = Vector3.zero;
        cursor3d.transform.localRotation = Quaternion.identity;
        cursor3d.transform.localScale = Vector3.one;
    }
    if (null != cursor2dPrefab)
    {
        // The 2D cursor lives under the interface camera.
        cursor2d = Instantiate(cursor2dPrefab) as GameObject;
        cursor2d.transform.parent = guiCamera.transform;
        cursor2d.transform.localPosition = Vector3.zero;
        cursor2d.transform.localRotation = Quaternion.identity;
        cursor2d.transform.localScale = Vector3.one;
        UITexture texture = cursor2d.GetComponentInChildren<UITexture>();
        if (null != texture)
            cursor2d = texture.gameObject;
    }
    ...
}


But two different cursors, controlled differently at that, seemed to me a careless solution.



ScreenPointToRay for Raycast


"What does it cost us to build a house? We'll draw it, and we'll live in it."

Half the way has been traveled; now the ray needs to be calculated and returned. One very good idea instantly visited my poor head (a pity that does not happen regularly): write an extension for the camera so that the call looks like Camera.main.ExternalScreenPointToRay and returns the new ray. This requires the following code:



public static class ExternalCamera
{
    public static Ray ExternalScreenPointToRay(this Camera camera, Vector3 position)
    {
        return camera.ScreenPointToRay(position);
    }
}


I added a static flag indicating whether the helmet position calculations can be used:



 public static bool useOVR { get; private set; } 


As well as a static reference to the class instance:



 public static ExtensionOVR instance { get; private set; } 


Do not forget to assign them values in the Start() function:



 instance = this; useOVR = true; 


The result is a kind of singleton.



To switch between the ray-calculation modes, I created the following enumeration. Oh, how I love doing that, spawning enumerations.



 public enum CameraRay { Head, Cursor } 


As a starting point, I took the following rule: if the camera is three-dimensional, the ray goes where the projector is aimed; if it is two-dimensional, the ray starts from the point where the projector's ray intersects the plane onto which the menu is rendered. That is, the calculation takes roughly the following form:



public Ray ScreenPointToRay(Camera camera, Vector2 position)
{
    return camera.orthographic ? guiPointToRay : headPointToRay;
}


And the camera extension code will look like this:



public static Ray ExternalScreenPointToRay(this Camera camera, Vector3 position)
{
    return ExtensionOVR.useOVR
        ? ExtensionOVR.instance.ScreenPointToRay(camera, position)
        : camera.ScreenPointToRay(position);
}
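
Calling code then stays almost unchanged; a brief usage example (hypothetical caller code, not from the article):

// Any code that previously used Camera.main.ScreenPointToRay can switch to
// the extension; it transparently falls back when the helmet is not in use.
Ray ray = Camera.main.ExternalScreenPointToRay(Input.mousePosition);
RaycastHit hit;
if (Physics.Raycast(ray, out hit))
{
    Debug.Log("Pointing at " + hit.collider.name);
}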


Calculating the ray for the three-dimensional cursor is completely straightforward:



private Ray headPointToRay
{
    get
    {
        return (cameraRay == CameraRay.Cursor && null != cursor3d)
            ? new Ray(cursor3d.transform.position, cursor3d.transform.forward)
            : new Ray(centerEyeAnchor.position, centerEyeAnchor.forward);
    }
}


To calculate the two-dimensional ray, we need to find the intersection of the projector's ray with the plane. This is easy to compute using RaycastHit.textureCoord. A MeshCollider is added to the plane beforehand, and the plane is assigned its own separate layer.



public Vector2 cursorPosition { get; private set; }

private Ray guiPointToRay
{
    get
    {
        RaycastHit hit;
        // Cast the head ray against the GUI plane layer only.
        if (Physics.Raycast(headPointToRay, out hit, 1000, 1 << guiLayerPlane))
        {
            // textureCoord maps the hit point back into interface screen space.
            cursorPosition = new Vector2(hit.textureCoord.x * Screen.width,
                                         hit.textureCoord.y * Screen.height);
            return guiCamera.ScreenPointToRay(cursorPosition);
        }
        else
        {
            return new Ray();
        }
    }
}


I also added a small piece to the Update() function that changes the cursor position according to the selected mode:



if (cameraRay == CameraRay.Cursor && null != cursor3d)
{
    // Aim the 3D cursor (the projector) along the mouse ray.
    Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
    cursor3d.transform.LookAt(cursor3d.transform.position + ray.direction);
}
if (null != cursor2d)
{
    cursor2d.transform.localPosition = cursorPositionOffset;
}


Well, and another property for convenience:



public Vector2 cursorPositionOffset
{
    get { return cursorPosition + offsetCursor; }
}


Now, after replacing all ScreenPointToRay calls with ExternalScreenPointToRay, the cursor moves synchronously in the menu and in space; it's a sight to behold. True, there is one small minus: the cursor is now visible on the plane and in space at the same time. A slight change to the shader of the plane that displays the interface removes the translucency.



v2f vert (appdata_t v)
{
    ...
    o.color.a = v.color.a > 0 ? 3 : 0;
    ...
}


And the final touch: a collider is attached to the interface area where the cursor should be displayed, and we check whether the two-dimensional cursor is over an interface collider, showing it if so and hiding it otherwise.



public bool enable2DCursor
{
    get
    {
        // The 2D cursor is shown only while it hovers over an interface collider.
        return Physics.Raycast(guiCamera.ScreenPointToRay(cursorPosition),
                               float.MaxValue, 1 << guiLayer);
    }
}

private void Update()
{
    ...
    if (enable2DCursor)
    {
        cursor2d.transform.localPosition = cursorPositionOffset;
    }
    else
    {
        // Park the cursor far off-screen instead of toggling the object.
        cursor2d.transform.localPosition = Vector3.up * 10000;
    }
    ...
}


That's all: it is now possible to navigate the menu, to display the cursor where the gaze focuses, and to calculate the ray for Raycast.



Summary



To conclude, I wanted to share some recommendations on optimizing a project for the virtual world, based on the experience gained.



P.S. Some will probably think this is nonsense and wonder why, for the spatial cursor, I did not use the cheaper method of a raycast plus a billboard facing the player, with its size calculated from the distance. In the scene used in the project, colliders are not everywhere, and some colliders do not match the size of their objects; as a result, such a cursor would sometimes hang in incomprehensible places.

Source: https://habr.com/ru/post/249053/


