
Custom gestures, Kinect + Unity. Part 2

We continue our tutorial on using custom gestures with the Kinect + Unity combination. In the first part, we walked through the gesture-training process, which produced a trained model in the form of a .gbd file. Today we will use this model in Unity.


Setting up Kinect + Unity


Create a project in Unity and add the downloaded Kinect packages: Assets -> Import Package -> Custom Package..., then choose Kinect.VisualGestureBuilder.2.0.1410.19000.unitypackage and Kinect.2.0.1410.19000.unitypackage (versions may vary). A problem can arise because some files in these packages are identical: Unity imports both copies under the names File.cs and File 1.cs. In that case, simply delete all files with the "1" suffix (the error messages will list them).
The structure of an empty project with added packages:


Run the ready-made example to make sure everything works. The downloaded Unity package contains two examples: GreenScreen and KinectView. From the KinectView example, add the two folders "Materials" and "Scripts" and the file "MainScene.unity". Open the scene (MainScene.unity) and run it. You should get something like this:


If you see this result, everything is working and we can move on to the main part.
First we will make sure that a message is written to the log when a gesture is triggered.
From the Kinect SDK we need the KinectSensor class (the sensor itself), VisualGestureBuilderDatabase (the trained model), the Gesture class, and the frame sources VisualGestureBuilderFrameSource and BodyFrameSource.

In addition, we need two classes for processing the frames received from Kinect: VisualGestureBuilderFrameReader and BodyFrameReader.

Loading gestures into Unity


At this point we assume you have an empty Unity project with the Kinect packages imported, as in the example above.
Create an empty object (GameObject -> Create Empty) and call it KinectManager. Add a new script to this object named KinectManagerScript (this tutorial only covers scripts written in C#). Open the script and add the using directives:

using Microsoft.Kinect.VisualGestureBuilder;
using Microsoft.Kinect;

Inside the class, declare objects of the classes mentioned above:

VisualGestureBuilderDatabase _dbGestures;
Windows.Kinect.KinectSensor _kinect;
VisualGestureBuilderFrameSource _gestureFrameSource;
Windows.Kinect.BodyFrameSource _bodyFrameSource;
VisualGestureBuilderFrameReader _gestureFrameReader;
Windows.Kinect.BodyFrameReader _bodyFrameReader;
Gesture _swipeUpDown;                          // the gesture from our trained model
Windows.Kinect.Body[] _bodies;                 // bodies tracked by Kinect
Windows.Kinect.Body _currentBody = null;       // the person whose gestures we track
public string _getsureBasePath = "upDown.gbd"; // path to the trained gesture database

Now we need to initialize the values and add event handlers. Create an InitKinect() method and call it from Start(). First, let's load the gestures from our model:
void InitKinect()
{
    _dbGestures = VisualGestureBuilderDatabase.Create(_getsureBasePath);
    _bodies = new Windows.Kinect.Body[6];
    _kinect = Windows.Kinect.KinectSensor.GetDefault();
    _kinect.Open();

    _gestureFrameSource = VisualGestureBuilderFrameSource.Create(_kinect, 0);
    foreach (Gesture gest in _dbGestures.AvailableGestures)
    {
        if (gest.Name == "UpDownSwipe_Right")
        {
            _gestureFrameSource.AddGesture(gest);
            _swipeUpDown = gest;
            Debug.Log("Added:" + gest.Name);
        }
    }
}

Note: we ran into a problem where Kinect does not turn on instantly, so the Open() call had not taken effect yet. We worked around this by periodically checking the IsAvailable flag in Update() and performing the initialization once Kinect becomes available.
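The workaround above could be sketched roughly as follows (an illustrative assumption; the original article does not show this code, and InitGestureSource() is a hypothetical helper wrapping the gesture setup from InitKinect()):

```csharp
bool _gesturesInitialized = false;

void Update()
{
    // Defer the gesture setup until the sensor actually reports it is available.
    if (!_gesturesInitialized && _kinect != null && _kinect.IsAvailable)
    {
        _gesturesInitialized = true;
        InitGestureSource(); // hypothetical helper: frame sources, readers, handlers
    }
}
```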

Start the application; if everything is fine, you will see the following in the log:


This means we successfully opened the sensor and loaded the gestures (in our case, just one) from our trained model.

Gesture detection


Obviously, gestures have to be detected from a person, so we need a list of everyone Kinect sees. Initialize the _bodyFrameSource and _bodyFrameReader objects and subscribe to the FrameArrived event (all of this goes in InitKinect):
_bodyFrameSource = _kinect.BodyFrameSource;
_bodyFrameReader = _bodyFrameSource.OpenReader();
_bodyFrameReader.FrameArrived += _bodyFrameReader_FrameArrived;

Similarly, we initialize _gestureFrameReader, pause it, and add an event handler:

_gestureFrameReader = _gestureFrameSource.OpenReader();
_gestureFrameReader.IsPaused = true;
_gestureFrameReader.FrameArrived += _gestureFrameReader_FrameArrived;

In the _bodyFrameReader_FrameArrived handler, we want to get information about the people in view and, if anyone is in the frame, select the first tracked person (for simplicity) whose gestures we will track.
Method source code
void _bodyFrameReader_FrameArrived(object sender, Windows.Kinect.BodyFrameArrivedEventArgs args)
{
    var frame = args.FrameReference;
    using (var bodyFrame = frame.AcquireFrame())
    {
        if (bodyFrame == null) // AcquireFrame can return null; skip such frames
            return;

        bodyFrame.GetAndRefreshBodyData(_bodies); // refresh body data from Kinect

        _currentBody = null;
        foreach (var body in _bodies)
        {
            if (body != null && body.IsTracked)
            {
                _currentBody = body; // take the first tracked person
                break;
            }
        }
        if (_currentBody != null)
        {
            Debug.Log("_currentBody is not null");
        }
        else
        {
            Debug.Log("_currentBody is null");
        }
    }
}


Run the application. When Kinect sees at least one person, the log shows "_currentBody is not null"; when nobody is in the frame, "_currentBody is null". Example log:


We looked for an active person in order to recognize their gestures. If we find one, we store their id in _gestureFrameSource and unpause the gesture reader. Our condition changes to the following:
if (_currentBody != null)
{
    Debug.Log("_currentBody is not null");
    _gestureFrameSource.TrackingId = _currentBody.TrackingId;
    _gestureFrameReader.IsPaused = false;
}
else
{
    Debug.Log("_currentBody is null");
    _gestureFrameSource.TrackingId = 0;
    _gestureFrameReader.IsPaused = true;
}

The last thing we need from Kinect is the handler for the gestures themselves. We check that our person's tracking id is valid and acquire the current frame just as we did in _bodyFrameReader_FrameArrived:
if (_gestureFrameSource.IsTrackingIdValid)
{
    Debug.Log("Tracking id is valid, value = " + _gestureFrameSource.TrackingId);
    using (var frame = args.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            /* … */
        }
    }
}

Inside the frame, we get the current recognition results for discrete gestures:

var results = frame.DiscreteGestureResults;

If there are any results, we check whether our gesture is among them:

if (results != null && results.Count > 0)
{
    DiscreteGestureResult swipeUpDownResult;
    results.TryGetValue(_swipeUpDown, out swipeUpDownResult);
    Debug.Log("Result not null");
    // Guard against a missing entry before reading Confidence
    if (swipeUpDownResult != null && swipeUpDownResult.Confidence > 0.1)
    {
        Debug.Log("Up Down Gesture");
    }
}

The Confidence threshold depends on the quality of your training: it is the value below which readings are noise and above which they count as a gesture (recall the confidence plot from the first part). It is chosen empirically :)

Having launched the application, we will see that a single gesture triggers multiple detections. This is because the results for each frame are independent, and since a gesture is not performed instantly (it spans several frames), we get a positive result for each of them. A boolean flag eliminates this:
bool gestureDetected = false;
…
if (swipeUpDownResult.Confidence > 0.1)
{
    if (!gestureDetected)
    {
        gestureDetected = true;
        Debug.Log("Up Down Gesture");
    }
}
else
{
    gestureDetected = false;
}
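Putting the fragments of this section together, the full gesture handler looks roughly like this (a self-contained sketch assembled from the snippets above; the exact arrangement in the original scripts may differ):

```csharp
bool gestureDetected = false;

void _gestureFrameReader_FrameArrived(object sender, VisualGestureBuilderFrameArrivedEventArgs args)
{
    if (!_gestureFrameSource.IsTrackingIdValid)
        return;

    using (var frame = args.FrameReference.AcquireFrame())
    {
        if (frame == null)
            return;

        var results = frame.DiscreteGestureResults;
        if (results == null || results.Count == 0)
            return;

        DiscreteGestureResult swipeUpDownResult;
        results.TryGetValue(_swipeUpDown, out swipeUpDownResult);

        if (swipeUpDownResult != null && swipeUpDownResult.Confidence > 0.1)
        {
            if (!gestureDetected) // report the gesture only once per performance
            {
                gestureDetected = true;
                Debug.Log("Up Down Gesture");
            }
        }
        else
        {
            gestureDetected = false; // reset once confidence drops back to noise
        }
    }
}
```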

Using gestures


The last thing we have to do in KinectManagerScript is raise an event when the gesture is detected. We declare:

public delegate void SimpleEvent();
public static event SimpleEvent OnSwipeUpDown;

and raise it at the point where the gesture is detected:

if (!gestureDetected)
{
    gestureDetected = true;
    Debug.Log("Up Down Gesture");
    if (OnSwipeUpDown != null)
        OnSwipeUpDown();
}

That concludes KinectManagerScript. Back in the scene, create a sphere, call it "MainSphere", and add a new script "MainSphereScript". In the Start() method we subscribe to the event and use the log to check that everything works:

void Start()
{
    KinectManagerScript.OnSwipeUpDown += new KinectManagerScript.SimpleEvent(KinectManagerScript_OnSwipeUpDown);
}

void KinectManagerScript_OnSwipeUpDown()
{
    Debug.Log("upDown From listener");
}

In essence, this is the goal of our tutorial: to turn our gesture into an event we can subscribe to and attach whatever actions we need. To complete the experiment, we make the sphere move on the gesture; the result looks something like this:
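A possible MainSphereScript that moves the sphere on each swipe could look like this (an illustrative assumption; the actual movement logic is in the linked scripts below, and the alternating one-unit translation here is invented for the sketch):

```csharp
using UnityEngine;

public class MainSphereScript : MonoBehaviour
{
    bool _movingUp = true; // alternate direction on each swipe (assumption)

    void Start()
    {
        // Subscribe to the gesture event raised by KinectManagerScript.
        KinectManagerScript.OnSwipeUpDown += new KinectManagerScript.SimpleEvent(KinectManagerScript_OnSwipeUpDown);
    }

    void KinectManagerScript_OnSwipeUpDown()
    {
        // Move the sphere one unit up or down per detected swipe.
        transform.Translate(0f, _movingUp ? 1f : -1f, 0f);
        _movingUp = !_movingUp;
    }
}
```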


Scripts:
MainSphereScript.cs
KinectManagerScript.cs

Source: https://habr.com/ru/post/276455/

