
A neural network generates video game character movements in real time.

image

Creating a real-time controller for virtual characters remains a challenge even when large amounts of high-quality motion capture data are available.

This is partly because many requirements are placed on a character controller, and it is useful only if it meets all of them: it should be able to learn from large amounts of data without extensive manual preprocessing, run as fast as possible, and require little memory.

Although some progress has already been made in this area, almost all existing approaches satisfy some of these requirements but not all of them. Matters become even more complicated when the terrain contains many obstacles: following the user's commands, the character has to change pace, jump, dodge, or climb hills.

In such a scenario, a system is needed that can learn from a very large amount of motion data, since there are many possible combinations of motion paths and corresponding environment geometry.

Developments in deep learning can potentially solve this problem: neural networks can learn from large datasets, and once trained, they take up little memory and run quickly. It remains an open question how best to apply neural networks to motion data so as to obtain high-quality results in real time with minimal data processing.

Researchers at the University of Edinburgh have developed a new learning system called the Phase-Functioned Neural Network (PFNN), which uses machine learning to animate characters in video games and other applications.

image

A selection of PFNN results for traversing uneven terrain: the character moves automatically in real time, following user control and the geometry of the environment.


Daniel Holden, a researcher at Ubisoft Montreal and the project's lead researcher, described the PFNN as a learning framework suited to creating cyclic behaviors such as human locomotion. He and his team also designed the network's input and output parameters for controlling characters in real time in complex environments with detailed user interaction.

image

A visual diagram of the PFNN. The part shown in yellow is the cyclic phase function: a function that generates the weights of the regression network, which in turn computes the character's motion.

Despite its compact structure, the network can learn from a large amount of data thanks to its phase function, which varies smoothly over time to produce a wide variety of network configurations.
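The idea of weights that vary with phase can be sketched as a cyclic interpolation between a few stored sets of weights; the PFNN paper uses a cubic Catmull-Rom blend of four control points around the phase cycle. A minimal NumPy sketch (function and variable names are illustrative, and a single weight matrix stands in for all layers):

```python
import numpy as np

def phase_weights(phase, control_points):
    """Blend network weights as a cyclic function of phase.

    `phase` is in [0, 2*pi); `control_points` is a list of weight
    matrices evenly spaced around the cycle (four in the paper).
    """
    k = len(control_points)
    t = (phase / (2 * np.pi)) * k          # position along the cycle
    i1 = int(t) % k                        # surrounding control indices
    i0, i2, i3 = (i1 - 1) % k, (i1 + 1) % k, (i1 + 2) % k
    w = t - int(t)                         # local interpolation parameter
    y0, y1, y2, y3 = (control_points[i] for i in (i0, i1, i2, i3))
    # Cyclic Catmull-Rom cubic interpolation between the control points
    return (y1
            + w * (0.5 * (y2 - y0))
            + w**2 * (y0 - 2.5 * y1 + 2 * y2 - 0.5 * y3)
            + w**3 * (1.5 * (y1 - y2) + 0.5 * (y3 - y0)))
```

When the phase lands exactly on a control point, the blend returns that control point's weights; in between, the weights change smoothly, so the network behaves differently at each point of the motion cycle.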

image

Visualization of the system's input parameterization. Pink marks the positions and velocities of the character's joints from the previous frame. Black marks the subsampled trajectory positions, directions, and terrain heights. The character mesh, highlighted in yellow, is deformed using the joint positions and rotations produced by the PFNN.
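The input parameterization in the caption above amounts to flattening these quantities into one feature vector. A minimal sketch (shapes and names here are illustrative, not the paper's exact layout):

```python
import numpy as np

def build_input(joint_pos, joint_vel, traj_pos, traj_dir, traj_height):
    """Assemble one input frame for the network (simplified sketch).

    joint_pos, joint_vel : (J, 3) joint positions/velocities from the
        previous frame, in the character's local coordinate frame
    traj_pos, traj_dir   : (T, 2) sampled trajectory positions and
        facing directions on the ground plane
    traj_height          : (T,) terrain heights under the trajectory samples
    """
    return np.concatenate([
        joint_pos.ravel(),    # pink in the figure: previous-frame pose
        joint_vel.ravel(),
        traj_pos.ravel(),     # black: trajectory subsamples
        traj_dir.ravel(),
        traj_height.ravel(),  # terrain height under each sample
    ])
```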

The researchers also propose a framework for preparing additional training data for the PFNN in which human motion and the geometry of the environment are linked. They report that once trained, the system is fast and compact: it needs only a few milliseconds of time and a few megabytes of memory, even when trained on gigabytes of motion data. In addition, the PFNN produces high-quality motion without the artifacts found in existing methods.

The PFNN is trained end-to-end on a large dataset of walking, running, jumping, and climbing motions embedded in virtual environments. The system automatically generates movements in which the character adapts to varied geometry: walking and running over rough terrain, jumping over obstacles, and crouching under low ceilings.

image
The PFNN pipeline goes through three successive stages: preprocessing, training, and runtime. In the preprocessing stage, the motion data is prepared so that the control parameters the user will later supply can be extracted automatically. This includes fitting terrain height data to the captured motion data using a separate database of heightmaps.
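The terrain-fitting step might look roughly like the following: for each motion clip, choose the heightmap patch whose surface best matches the recorded foot contacts. This is a simplified sketch under assumed interfaces; `fit_terrain` and the callable-patch API are hypothetical, not the paper's implementation:

```python
import numpy as np

def fit_terrain(foot_contacts, terrain_patches):
    """Pick the terrain patch that best explains a motion clip.

    foot_contacts   : (N, 3) world positions (x, y, z) of detected
        foot contacts, with y as the vertical axis
    terrain_patches : list of callables h(x, z) -> height, one per
        candidate patch from the heightmap database (hypothetical API)
    Returns the index of the patch whose surface lies closest to the
    contact points, by mean squared height error.
    """
    errors = []
    for height in terrain_patches:
        predicted = np.array([height(x, z) for x, _, z in foot_contacts])
        errors.append(np.mean((predicted - foot_contacts[:, 1]) ** 2))
    return int(np.argmin(errors))
```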

In the training stage, the PFNN learns to use this data to produce the character's motion for each frame, given the control parameters. In the runtime stage, the network's input parameters are gathered from user input and from the environment, then fed into the system to compute the character's motion.
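The runtime stage described above amounts to a per-frame loop: collect inputs, query the phase-dependent network, apply the predicted pose, and advance the phase. A minimal sketch, where the `network` callable stands in for the trained PFNN and all names are illustrative:

```python
import numpy as np

def run_frames(network, x0, phase=0.0, frames=3):
    """Advance the character a few frames through the runtime loop.

    `network(phase, x)` stands in for the trained PFNN: it returns
    the next pose and the increment by which the cyclic phase advances.
    `x0` is the initial input (user commands, environment, prior pose).
    """
    outputs, x = [], x0
    for _ in range(frames):
        pose, phase_delta = network(phase, x)        # query the network
        phase = (phase + phase_delta) % (2 * np.pi)  # keep phase cyclic
        x = pose                                     # feed pose back next frame
        outputs.append((phase, pose))
    return outputs
```

In a real game loop, `x` would be rebuilt each frame from the gamepad input and terrain queries rather than fed back directly; the point here is only the network-query / phase-advance cycle.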

This control mechanism is well suited to interactive characters in video games and virtual reality systems. The researchers note that by training the network with a non-cyclic phase function, the PFNN can easily be adapted to other tasks, such as punching and kicking.

The team of researchers led by Holden plans to present the new neural network at the SIGGRAPH conference in August.

→ Project Page

Source: https://habr.com/ru/post/373431/
