
Sign Language Translator: Implementing Support Vector Machines on Intel Edison

Around 30 million people in the world have speech impairments. To communicate with others, they use sign language. But what if the other person does not understand that language? How do you overcome the barrier? Our story today is dedicated to a gesture recognition project. An Intel Edison board takes readings from sensors attached to a special glove, processes them with a support vector machine to determine which letter the gesture corresponds to, and sends the result to an Android application that voices it.


Intel Edison and Sensor Glove: The Basis of a Sign Language Recognition System

It is no accident that Intel Edison became the basis of our project. First, it has enough processing power and RAM to run a support vector machine and process data in real time. Second, Edison has a built-in Bluetooth module, which is used to communicate with the Android device. If you cannot wait to get acquainted with the software part of the project, take a look here. In the meantime, let's talk about how our system works.

Hardware


A flex sensor is attached to each of the five fingers of the glove that supplies raw data to the sign language recognition system. The electrical resistance of the sensors depends on how far they are bent: the more the corresponding finger is bent, the higher the resistance.

A sensor whose electrical resistance depends on the bend

Specifically, we use Spectra Symbol's 4.5″ unidirectional flex sensors. They are analog resistors that work as variable voltage dividers.
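To make the voltage-divider arithmetic concrete, here is a small sketch of how a sensor's resistance could be recovered from a raw ADC reading. The supply voltage, the 10-bit ADC resolution, the fixed resistor value and the divider orientation are all assumptions for illustration, not values taken from the glove's schematic.

 // Illustrative voltage-divider math for one flex sensor.
 // VCC, ADC resolution, R_FIXED and the wiring are assumed, not from the schematic.
 var VCC = 5.0;        // supply voltage, volts (assumed)
 var R_FIXED = 47000;  // fixed divider resistor, ohms (assumed)

 function flexResistance(adcValue) {
     var vOut = (adcValue / 1023) * VCC;   // 10-bit ADC reading -> voltage
     return R_FIXED * (VCC - vOut) / vOut; // solve the divider for the sensor's resistance
 }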

This is what the circuit diagram of the glove board looks like in KiCad.


The PCB for the glove

Sensor readings are taken in Intel XDK IoT Edition with the UPM library for flex sensors.

 var flexSensor_lib = require('jsupm_flex'); // UPM module for flex sensors
 var Flex1 = new flexSensor_lib.Flex(4);     // sensor on analog pin 4
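The article shows only one sensor being created; a minimal sketch that instantiates all five and polls them (the pin numbers 0 through 4 are our assumption) might look like this:

 // Sketch: one Flex object per finger. The pin assignment is assumed.
 var flexSensor_lib = require('jsupm_flex');
 var sensors = [0, 1, 2, 3, 4].map(function (pin) {
     return new flexSensor_lib.Flex(pin);
 });

 setInterval(function () {
     var raw = sensors.map(function (s) { return s.value(); });
     console.log('raw flex readings: ' + raw.join(', '));
 }, 100); // poll ten times a second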

We need the information from each of the sensors in a standardized form. The spread of the raw values is quite large, and in that form they are difficult to interpret. The preprocessing consists of first finding the values that correspond to minimum and maximum bending, and then using them to map the readings onto the range from 1.0 to 2.0. Here is what this operation looks like in code for one of the sensors.

 var ScaleMin = 1.0;   // lower bound of the target range
 var ScaleMax = 2.0;   // upper bound of the target range
 var flexOneMin = 280; // raw reading at minimum bend
 var flexOneMax = 400; // raw reading at maximum bend

 var flex1 = scaleDown(Flex1.value(), flexOneMin, flexOneMax).toFixed(2);

 // Linearly map a raw reading onto [ScaleMin, ScaleMax]
 function scaleDown(flexval, flexMin, flexMax) {
     return (flexval - flexMin) / (flexMax - flexMin) * (ScaleMax - ScaleMin) + ScaleMin;
 }
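For example, with flexOneMin = 280 and flexOneMax = 400, a raw reading of 340 maps to (340 − 280) / (400 − 280) × (2.0 − 1.0) + 1.0 = 1.5, i.e. a half-bent finger ends up in the middle of the target range.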

After this preprocessing, we pass the values to the sign language recognition system. It is a classifier based on a support vector machine.

Implementing the support vector machine


The support vector machine (SVM) is a supervised learning algorithm that analyzes data for classification and regression analysis. At the initial stage, a set of training examples is fed to the system's input, each belonging to one of n categories. From these data the learning algorithm builds a model that classifies new sets of readings, assigning them to one of the existing categories. At its core this is a deterministic binary linear classifier: from the training examples the algorithm finds the optimal hyperplane, which then lets it assign new examples to the existing categories.
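For the linear case, this amounts to a decision function f(x) = sign(w · x + b), where w is the normal vector of the separating hyperplane and b is its offset; training chooses the w and b that maximize the margin between the two classes. A multi-class problem such as our alphabet of gestures is reduced to a series of binary ones (LIBSVM, used below, does this with the one-against-one scheme).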

In the project we use the node-svm library, a JavaScript implementation of one of the most popular SVM libraries, LIBSVM. To install the library, use the following command:

 npm install node-svm 

Then we copy the library folder into the project directory. In addition, before using the node-svm library, you need to install the additional npm packages that it depends on.


To install packages, use the following command:

 npm install <package name> 

After everything is installed, we can create a classifier and configure the kernel parameters:

 var svm = require('node-svm');

 var clf = new svm.CSVC({
     gamma: 0.25,
     c: 1,
     normalize: false,
     reduce: false,
     kFold: 2 // number of folds for k-fold cross-validation
 });

The parameter C controls the trade-off between errors the SVM makes on the training data and the width of the margin between classes. It is used at the training stage and determines how strongly outliers are taken into account when the support vectors are computed. The best values of C and gamma are found with a grid search. We do not reduce the dimensionality of the data here, since every one of the values (measurements) coming from the sensors matters for classifying gestures.
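node-svm can run such a grid search itself: its parameters accept arrays of candidate values, and each combination is evaluated with kFold cross-validation before the best one is kept. A minimal sketch, with illustrative candidate values:

 // Sketch: let node-svm grid-search C and gamma. The candidate arrays are illustrative.
 var svm = require('node-svm');

 var tunedClf = new svm.CSVC({
     c: [0.03125, 0.125, 0.5, 2, 8],     // candidates for C
     gamma: [0.03125, 0.125, 0.5, 2, 8], // candidates for gamma
     normalize: false,
     reduce: false,
     kFold: 4 // each (C, gamma) pair is cross-validated on 4 folds
 });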

The next step is to build the model: train the classifier and produce a report. Training takes a few seconds.

 svm.read(fileName)
     .then(function (dataset) {
         return clf.train(dataset)
             .progress(function (progress) {
                 console.log('training progress: %d%', Math.round(progress * 100));
             });
     })
     .spread(function (model, report) {
         // 'so' is a pretty-printing helper (e.g. the stringify-object package)
         console.log('SVM trained. \nReport:\n%s', so(report));
     })
     .done(function () {
         console.log('Training Complete.');
     });
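Training on every launch wastes time. node-svm lets you persist the trained model, which is a plain object, and restore it later; a sketch, with model.json as our own choice of file name:

 var fs = require('fs');
 var svm = require('node-svm');

 // After training: the 'model' object from the spread() callback is plain JSON
 fs.writeFileSync('model.json', JSON.stringify(model));

 // On a later launch: skip training and restore the classifier directly
 var restored = svm.restore(JSON.parse(fs.readFileSync('model.json', 'utf8')));
 var testPrediction = restored.predictSync([1.2, 1.8, 1.1, 1.9, 1.4]); // illustrative input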

The classifier is then used to analyze gestures in real time. A one-dimensional array is fed to the input of the system; at the output, we obtain a prediction of which class the gesture belongs to. This code snippet shows how we pass the sensor readings as parameters to the classifier:

 var prediction = clf.predictSync([flex1, flex2, flex3, flex4, flex5]);

In addition, from the same readings you can obtain the probability of each class using the following command:

 var probability = clf.predictProbabilitiesSync([flex1, flex2, flex3, flex4, flex5]);
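Since the training labels are numbers (A=0, B=1, C=2, and so on), the numeric prediction has to be mapped back to a letter before it is shown or voiced. A one-liner along these lines does it:

 // Map the numeric class back to its letter: 0 -> 'A', 1 -> 'B', ...
 var letter = String.fromCharCode(65 + prediction);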

The letter obtained from classification is transmitted to the Android device each time the program running on Edison receives a request to read data.
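The article does not show the Bluetooth code itself. Purely as an assumption, one common way to expose a readable value from a Node.js program is the bleno npm package; the UUIDs and the device name below are illustrative, not taken from the project's sources.

 // Sketch only: bleno, the UUIDs and the advertised name are assumptions.
 var bleno = require('bleno');

 var letterCharacteristic = new bleno.Characteristic({
     uuid: 'fff1',          // illustrative characteristic UUID
     properties: ['read'],
     onReadRequest: function (offset, callback) {
         // 'letter' holds the most recent classification result (see above)
         callback(this.RESULT_SUCCESS, new Buffer(letter, 'utf8'));
     }
 });

 bleno.on('stateChange', function (state) {
     if (state === 'poweredOn') {
         bleno.startAdvertising('SignGlove', ['fff0']); // illustrative name and service UUID
     } else {
         bleno.stopAdvertising();
     }
 });

 bleno.on('advertisingStart', function (err) {
     if (!err) {
         bleno.setServices([new bleno.PrimaryService({
             uuid: 'fff0',
             characteristics: [letterCharacteristic]
         })]);
     }
 });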

Creating a file with training data


The training.ds file contains 832 lines of training data. Working with that much information by hand is inconvenient, so we used the code below to distribute the examples into classes, that is, to assign letters of the alphabet to gestures.

It is in the logtrainingdata.js file:

 var fs = require('fs');

 // X is the numeric class label of the gesture being recorded.
 // Letters are numbered in order: A=0, B=1, C=2, ...
 var data = "X" + " " +
            "1:" + flex1 + " " +
            "2:" + flex2 + " " +
            "3:" + flex3 + " " +
            "4:" + flex4 + " " +
            "5:" + flex5 + "\n";

 // Append the example to the training data file
 fs.appendFile('training.ds', data, function (err) {
     if (err) {
         console.log(err);
     }
 });
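Each appended line is in LIBSVM's sparse format: the class label first, then index:value pairs. With illustrative values, one example for the letter B (class 1) would look like this:

 1 1:1.32 2:1.78 3:1.11 4:1.95 5:1.40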


A fragment of the training data file

Edison preparation and program launch


Before an Android device can communicate with an application running on Edison, Bluetooth must be enabled on the board. This is done like this:

 rfkill unblock bluetooth
 killall bluetoothd
 hciconfig hci0 up

You can check if the Bluetooth module is working by typing:

 hcitool dev 

If everything goes as it should, the MAC address of the Edison Bluetooth adapter will be displayed in response.
Run the main program:

 node main.js 

And now let's take a look at the part of the project that runs on Android.

Android application for voicing recognized gestures


The Android application used in our project relies on the system's ability to convert text to speech, and thus voices the recognized gestures. The application lets the user choose the language, the speech rate and the pitch, and test those settings.


The application that voices recognized gestures

The main button on the application screen is Scan. It searches for the Intel Edison board and connects to it. Once connected, the Android application receives the data recognized by the support vector classifier, then displays and pronounces the letter that corresponds to the gesture. This is how it all looks.



Conclusion


We have shown how, using Intel Edison, affordable software, flex sensors and an Android smartphone, you can build a system that helps those who use sign language to expand the boundaries of their communication. As you can see, a prototype of an entirely new IoT device can be assembled from off-the-shelf components very quickly. Looking ahead, this is one of those "things" that can make the world a better place.

Source: https://habr.com/ru/post/306948/

