
Virtual cinematography for VR trailers



After creating mixed reality trailers for Fantastic Contraption and Job Simulator, I wanted to dive a little deeper into virtual cinematography and shoot an entire trailer inside VR, rather than mixing live-action footage with virtual reality.

We first tried the idea of shooting game avatars, instead of the players themselves in front of a green screen, on a small scale in the Fantastic Contraption mixed reality trailer. I wanted to develop that idea and see whether we could record a whole trailer this way, using different focal lengths and camera movements, just as in traditional filmmaking.

Mixed reality trailers look great, but producing a high-quality mixed reality video takes an enormous amount of time and money, and the logistics are complex both technically and creatively. Shooting a game avatar from a third-person view, much like conventional filming of actors, has many advantages and in many cases can be a far better way to showcase a VR game or project.




Why first-person VR shooting is (usually) unsuccessful


There are reasons why movies are not shot in the first person. When we watch actors performing their roles in front of us, our brain responds emotionally; we do not respond as strongly when we look at the world through someone else's eyes. When we cannot see the actors' bodies and how they fit into the environment, many nuances are lost. Virtual reality is no different, yet, unfortunately, first-person footage is what is usually used to show off or advertise VR games. Such recordings end up emotionally "flat" for many technical and creative reasons.

First-person recording is the default output of the current generation of VR hardware and software, so many people use it simply because it is easy to produce. But most raw first-person VR footage is hard to watch. A player's brain and eyes naturally neutralize the hundreds of micro-movements of the head, so everything feels smooth inside the headset; watch the same footage on YouTube, though, and most of it is chaotic jitter. Our heads move far more than we realize, so the 2D recording looks unfocused and is hard to follow.

If first-person footage is absolutely necessary, create a separate camera that takes the headset's output and smooths it for comfortable viewing. We used such a camera for the Job Simulator trailer, and it worked out very well.
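To give a concrete idea of the approach, here is a minimal Unity-style C# sketch of such a smoothing camera. It is an illustration, not the actual script we used; the hmd reference and the smoothing constants are assumptions.

```csharp
using UnityEngine;

// Minimal sketch of a smoothed "spectator" camera that follows the HMD.
// Attach to a second, non-VR camera; "hmd" is the tracked headset transform.
public class SmoothedSpectatorCamera : MonoBehaviour
{
    public Transform hmd;                    // tracked headset (assign in Inspector)
    public float positionSmoothTime = 0.25f; // larger = smoother, more lag
    public float rotationLerpSpeed = 4f;

    private Vector3 velocity;                // state used by SmoothDamp

    void LateUpdate()
    {
        // Ease position toward the headset instead of copying it 1:1,
        // filtering out the micro-movements the player never notices.
        transform.position = Vector3.SmoothDamp(
            transform.position, hmd.position, ref velocity, positionSmoothTime);

        // Slerp rotation so quick head turns become gentle pans on screen.
        transform.rotation = Quaternion.Slerp(
            transform.rotation, hmd.rotation, rotationLerpSpeed * Time.deltaTime);
    }
}
```

Mounted on a second, non-VR camera, something like this yields footage that pans gently instead of mirroring every twitch of the player's head.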

Ideally, you want the player inside the virtual environment to make the strongest possible visual impact, so that the viewer becomes emotionally attached to them. You can achieve this either with a mixed reality trailer or by filming the game avatar, depending on the needs and budget of the project.

Recording VR from a third-person perspective lets you create dynamic, interesting shots that capture what the gameplay actually feels like. Space Pirate Trainer is an excellent example: the way the game looks in first person is not the way it feels to play. Look at the examples below. The third-person recording is vibrant, cinematic, dynamic, and emotionally engaging; the first-person recording is cluttered, confusing, and visually uninteresting. Shooting the player in third person, whether in mixed reality or as a game avatar, solves many of these communication problems.

Third-person shot, GIF
image

Third person camera


This shot is lively, cinematic, and dynamic. The action is filmed in a cinematic style: the camera follows the player's reaction to the flying droid, and it can anticipate the move toward the large droid, creating an interesting, convincing frame.

First-person shot, GIF


First person camera


It is hard to tell what is happening in this video. The weapon and shield cover most of the frame. It is uncinematic and lifeless, and not interesting to watch.

Here is an example from the Fantastic Contraption trailer for Oculus Touch. Which clip looks more dynamic and compelling? Which better conveys what is happening in the game?

Third-person shot, GIF
image

Third person camera


This clip feels dynamic and is interesting to watch. The camera starts on a wide shot, then pushes in, timed precisely to the player grabbing a wheel from Neko, which shows that the cat serves as the game's toolbox. The camera then moves on and focuses on the player's contraption, following the movements of their hands.

First-person shot, GIF
image

First person camera


This clip looks flat and boring, and worst of all, the angle of the player's head induces nausea. Turning your head inside a headset feels completely natural to the player, but the resulting 2D recording is uncomfortable to watch. Part of the scene is lost in the green grass, and it is unclear where the player is getting the parts from.

Here is another example, from Space Pirate Trainer. The gameplay consists of moving around the environment, shooting, and dodging bullets that fly at the player from every angle. From a first-person view it is almost impossible to convey how this plays, because there is no context for how the player moves through the space.

Third-person shot, GIF
image

Third person camera


This clip clearly conveys the in-game events: a swarm of droids fires at the player, who dodges the bullets in slow motion.

First-person shot, GIF
image


First person camera


It is very hard to tell what the player is physically doing while being shot at. They are dodging bullets by stepping to the right across the play space, but visually this is far from obvious; it looks as if the droids are simply shooting off to the left and missing.

Why shoot a game avatar instead of shooting in mixed reality


Mixed reality trailers convey beautifully what a VR player feels. But creating a professional-looking mixed reality trailer takes a lot of money, time, and resources. That said, Owlchemy Labs' mixed reality technology should reduce the complexity once more people adopt it.

Shooting a game avatar has many advantages:

It is more cost-effective: all you need is a wired or wireless third Vive controller, a gyro-stabilizer (or even a cheap camera rig like the one I used; see the image), plus developer time to create a suitably styled game avatar and some extra time to record the trailer in VR. You eliminate all the costs of buying, renting, building, and lighting a green screen stage, along with the crew needed for such a large shoot and its post-production.

image

For the Space Pirate Trainer trailer, all I had to do was ask someone to play the game while I filmed them.

You have more time to experiment: when 5-10 people are standing around waiting for your decisions, time is money. If a smaller team can shoot the trailer over more days, there is more time for experimentation and exploration. Shooting all the footage for the Fantastic Contraption and Space Pirate Trainer trailers took us three days each.

Changing the script on the fly: on a studio shoot there is very little room for experimentation; you are under the pressure of schedules and shot lists. We had none of that stress, so we could take breaks, review the footage, see what was working during the shoot, and change course where needed.

It is amazing how something that feels right inside the headset during play can look terrible from a third-person camera. Framing matters, and where the player stands in the world can completely change the look and clarity of a shot. Even small details, such as the angle of a wrist, can significantly affect how natural the avatar appears. Some movements can break the inverse-kinematics animation, so it is important to understand how and where to position the body. Having both the player and the camera operator review the footage as it is shot greatly improves the recordings.

Actor stamina / fatigue: many people don't think about it, but playing VR is tiring, especially games like Space Pirate Trainer. It is very hard to keep a player performing at a high level while recording a game with so many random elements; it becomes exhausting after 30-45 minutes of play. Without a rigid shooting schedule, everyone could relax, take breaks, and approach the whole process in a calmer way.

It is remarkable how much emotion an avatar with only three tracked points (one on the head and one in each hand) can convey. If the player is tired or not emotionally engaged, it shows clearly in the recording. When filming an actor on a green screen stage for a mixed reality trailer, fatigue is magnified tenfold, and it takes professional actors to sustain a high level of performance for hours on end.

The character fits perfectly into the game world: building a physical costume to match the Space Pirate Trainer avatar, so that the viewer feels the player belongs in the game world, would take an incredible amount of time and money. If we had shot our trailer in mixed reality, a player in ordinary jeans and a T-shirt would have looked out of place among all those futuristic robots. We could have put them in a costume, but even then we could not come close to what is easily achieved in the game itself: compositing a live actor into a game environment never looks as natural as an avatar that is part of that world.

Variety matters: watching a game from a single (player's) point of view quickly becomes boring. To make a trailer or video that is interesting to watch, you need multiple takes from different angles, giving the viewer an understanding of what they are looking at and how it fits into the game world. That is almost impossible if you are limited to first-person recordings from the player's eyes. For games with fast, dense gameplay, showing the action from multiple viewpoints is the only way to visualize it properly.

Scale in Fantastic Contraption


One aspect we wanted to emphasize in the Oculus Touch version of Fantastic Contraption is the game's sense of scale. Watching first-person footage, it is easy to lose track of how big the contraptions really are. Shooting the avatar in third person let us show the viewer how tall the player is in the context of the rest of the game world.

Third-person shot, GIF
image

This shot clearly shows the size of the contraption in the context of the game; you can see how small it is compared to the player. The camera draws the viewer's attention to the top of the frame, where the contraption is headed, and the player's emotions when it reaches the goal read clearly.

First-person shot, GIF
image

First person camera


Until a controller enters the frame at the end of the clip, it is almost impossible to judge the true size of the contraption; it could be very large and far away from the player. The head tilts and movements make this recording hard to watch, and the player's emotions are impossible to read, even though this is the same take, just seen from the first person.

Shooting the PSVR and Oculus versions simultaneously


Another wrinkle with Fantastic Contraption was that we wanted to use this trailer for both the new PSVR and Oculus versions. But how do you shoot two trailers at once when the controllers on these platforms look so different? Lindsay Jorgensen came up with a terrific solution: split the screen into four rectangles, rendering the Oculus controllers in one first-person and one third-person view, and the PSVR controllers in the other two.

Rendering, GIF
image
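A minimal Unity-style sketch of this idea might look like the following. The layer names and camera fields are hypothetical, but the mechanism, one viewport rectangle and one culling mask per camera, is the core of the trick.

```csharp
using UnityEngine;

// Sketch: render the same scene into four viewport rectangles, with each
// camera culling away the other platform's controller models via layers.
// Layer names ("OculusControllers", "PSVRControllers") are assumptions.
public class SplitPlatformRender : MonoBehaviour
{
    public Camera oculusFirstPerson, oculusThirdPerson;
    public Camera psvrFirstPerson, psvrThirdPerson;

    void Start()
    {
        int oculusLayer = LayerMask.NameToLayer("OculusControllers");
        int psvrLayer   = LayerMask.NameToLayer("PSVRControllers");

        // Each mask keeps everything except the other platform's controllers.
        int oculusMask = ~(1 << psvrLayer);
        int psvrMask   = ~(1 << oculusLayer);

        Setup(oculusFirstPerson, new Rect(0.0f, 0.5f, 0.5f, 0.5f), oculusMask);
        Setup(oculusThirdPerson, new Rect(0.5f, 0.5f, 0.5f, 0.5f), oculusMask);
        Setup(psvrFirstPerson,   new Rect(0.0f, 0.0f, 0.5f, 0.5f), psvrMask);
        Setup(psvrThirdPerson,   new Rect(0.5f, 0.0f, 0.5f, 0.5f), psvrMask);
    }

    void Setup(Camera cam, Rect viewport, int mask)
    {
        cam.rect = viewport;    // which quadrant of the screen this camera fills
        cam.cullingMask = mask; // which layers this camera renders
    }
}
```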

It worked surprisingly well: 80% of the shots needed only a simple crop swap to produce the PSVR version. We had to remove all references to room-scale movement from the PSVR version, so the only big piece requiring a complete re-shoot was the moment where Pegas builds the first contraption. A few other shots needed minor changes, but on the whole the solution suited us perfectly.

Of course, we shot both trailers using the Vive; there was no other way to accomplish this, since we needed to track the positions of both the camera operator and the player in the room, and currently that is impossible with Oculus or PSVR alone.

The Space Pirate character rig


The team at i-illusions created the game avatar and used a rig with inverse kinematics to drive the character (including the legs) from the positions of the headset and controllers. It worked so well that, when editing the footage, some parts looked just like motion capture.
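For readers who want to try this, here is a minimal sketch of a three-point rig using Unity's built-in Animator IK callbacks. It is not i-illusions' actual rig, just an approximation that assumes a humanoid avatar with "IK Pass" enabled on the Animator layer; the tracked-transform fields are placeholders.

```csharp
using UnityEngine;

// Sketch: drive a humanoid avatar from three tracked points (head + hands)
// using Unity's built-in Animator IK. Field names are illustrative.
[RequireComponent(typeof(Animator))]
public class ThreePointIK : MonoBehaviour
{
    public Transform head, leftHand, rightHand; // tracked HMD and controllers
    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    // Called by the Animator on layers with "IK Pass" enabled.
    void OnAnimatorIK(int layerIndex)
    {
        // Pin the avatar's hands to the tracked controller poses.
        SetHand(AvatarIKGoal.LeftHand, leftHand);
        SetHand(AvatarIKGoal.RightHand, rightHand);

        // Aim the head at where the HMD is looking.
        animator.SetLookAtWeight(1f);
        animator.SetLookAtPosition(head.position + head.forward);
    }

    void SetHand(AvatarIKGoal goal, Transform target)
    {
        animator.SetIKPositionWeight(goal, 1f);
        animator.SetIKRotationWeight(goal, 1f);
        animator.SetIKPosition(goal, target.position);
        animator.SetIKRotation(goal, target.rotation);
    }
}
```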


Right now the biggest problem with the rig is that the legs do not always look natural. If someone releases a full-body tracking system with trackers for feet, knees, wrists, and elbows (along with Unity/Unreal plugins to interpret the data), in-game recording will become much easier to implement, and the motion capture quality could approach ILM's iMocap system, a single shot from which costs more than my house. I feel it is only a matter of time before such a system becomes standard for animating VR avatars.

Until that becomes a reality, we worked around the problem by framing the avatar mostly from the waist up, avoiding issues with how the model renders below the waist, and showing the full-body avatar only in wide shots.

Frames from the trailer, GIF
image


Camera control


Next we needed camera controls. We used a wireless third Vive controller as the in-game camera: I took that controller, mounted it on my cheap camera rig, and we started shooting. The game has a slider that changes the field of view, so we could shoot "wide-angle" or "telephoto," and the camera's position and rotation could be smoothed. We could also hide the score and health indicators, the floor boundaries, and the in-game ad stations, so nothing distracting appeared on screen. The screen was split into two stacked rectangles, giving us a smoothed first-person view as well, although in the end we never used that first-person footage.
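Conceptually, the virtual camera boils down to something like this Unity-style sketch. The field names and smoothing model are assumptions, not the game's actual implementation.

```csharp
using UnityEngine;

// Sketch: a virtual camera driven by a third tracked controller, with
// position/rotation smoothing and an adjustable field of view.
// "controller" is the tracked Transform of the extra Vive controller.
public class HandheldVirtualCamera : MonoBehaviour
{
    public Transform controller;    // third tracked controller
    public Camera cam;
    [Range(10f, 90f)]
    public float fieldOfView = 60f; // exposed in-game as a slider
    public float smoothing = 8f;    // higher = snappier, lower = smoother

    void LateUpdate()
    {
        float t = smoothing * Time.deltaTime;

        // Damped follow removes handheld shake while keeping camera intent.
        transform.position = Vector3.Lerp(transform.position, controller.position, t);
        transform.rotation = Quaternion.Slerp(transform.rotation, controller.rotation, t);

        // "Zoom lens": a narrow FOV behaves like a telephoto, a wide one like a wide-angle.
        cam.fieldOfView = fieldOfView;
    }
}
```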

Droid camera


GIF
image

Dirk also added the ability to attach the camera to one of the game's droids. With enough smoothing applied, we could get "free" sweeping shots of the entire virtual environment, almost without interrupting play.
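Here is a rough sketch of such a droid-mounted camera, again with assumed names and constants; the key design choice is a very long smoothing time, which turns the droid's erratic flight into a slow, steady pan.

```csharp
using UnityEngine;

// Sketch: ride along with a game droid, heavily smoothed, so its flight
// path becomes a slow, stable panorama of the environment.
public class DroidMountCamera : MonoBehaviour
{
    public Transform droid;          // the droid to follow (assigned in-game)
    public Vector3 offset = new Vector3(0f, 0.3f, -0.5f);
    public float smoothTime = 1.2f;  // large value = very soft follow

    private Vector3 velocity;        // state used by SmoothDamp

    void LateUpdate()
    {
        // Trail the droid from a fixed offset in its local space.
        Vector3 target = droid.TransformPoint(offset);
        transform.position = Vector3.SmoothDamp(transform.position, target,
                                                ref velocity, smoothTime);

        // Keep the horizon level while looking where the droid looks.
        Quaternion look = Quaternion.LookRotation(droid.forward, Vector3.up);
        transform.rotation = Quaternion.Slerp(transform.rotation, look,
                                              Time.deltaTime / smoothTime);
    }
}
```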

XBOX Controller Camera


GIF
image

We could also fly the third-person camera with an ordinary Xbox controller. This let us place the camera in spots that are physically impossible to reach and capture cinematic wide shots of the player surrounded by the action.
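A free-flying gamepad camera can be sketched like this; the right-stick axis names are assumptions that would need to be defined in Unity's Input Manager.

```csharp
using UnityEngine;

// Sketch: a free-flying camera driven by gamepad axes, for placing shots
// where no physical operator could stand. "Horizontal"/"Vertical" are
// Unity's default axes; the right-stick axis names are assumed bindings.
public class GamepadFlyCamera : MonoBehaviour
{
    public float moveSpeed = 3f;  // metres per second
    public float turnSpeed = 60f; // degrees per second

    void Update()
    {
        // Left stick: glide forward/back and strafe left/right.
        Vector3 move = new Vector3(Input.GetAxis("Horizontal"), 0f,
                                   Input.GetAxis("Vertical"));
        transform.Translate(move * moveSpeed * Time.deltaTime, Space.Self);

        // Right stick: pan (yaw around world up) and tilt (local pitch).
        transform.Rotate(0f, Input.GetAxis("RightStickX") * turnSpeed * Time.deltaTime,
                         0f, Space.World);
        transform.Rotate(-Input.GetAxis("RightStickY") * turnSpeed * Time.deltaTime,
                         0f, 0f, Space.Self);
    }
}
```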

Experiments and getting good shots


This video shows how we got the "over the shoulder" shots, and demonstrates how much freedom we had to experiment with different ideas. We wanted an FPS-style shot, for example one showing a weapon swap, but I had trouble filming my friend Vince: he kept waving his arms around, and keeping the weapon in frame at such a long focal length was almost impossible.

So Vince had the idea of taking the camera rig in his free hand and pressing it against his body. With the camera locked to his hand, I simply offset the virtual camera to the position and angle I wanted, and we had a perfect shot taken "over the shoulder" of the player!


Comparing mixed reality and game avatars


I am very excited about the prospects of this kind of virtual cinematography in VR. The technique is not new: James Cameron made heavy use of virtual cinematography while shooting Avatar, and it was refined further on The Jungle Book, but the equipment used there costs hundreds of thousands of dollars and is far harder to use. Now we can do nearly the same thing with a Vive, one extra controller, and a bit of developer time. It is incredible that we can, in principle, film virtual games in real time in a basement! The next step is multiple virtual cameras, so that several people can shoot at once.

Mixed reality remains an amazing technique and tool. But I believe that on any project you should weigh all the options and choose what best suits the particular game. Take Rec Room: the whole game has a cartoon style, so a live person inside that environment would look alien next to the game avatars. Google Earth, on the other hand, has no such stylization; there it makes sense to put a real person into the world (even if the mixed reality shots have to be simulated).

As with trailers for conventional games, every VR game needs its own approach, and there are no universal solutions. Think about what is best for your game or project, and create the most interesting, exciting, and convincing result you can.

Source: https://habr.com/ru/post/320702/

