This is a short story about one of the experiments of our Wearables competence center. We ran it more than a year ago, so the code is rather outdated, and today an Android Wear application won't surprise anyone, but the idea was very interesting, and it was great to work on it. Many thanks to the authors of the idea, Arseny Pechenkin and VP of Engineering Roman Chernyshev, thanks to whom this R&D was launched.
The human brain has a peculiarity: when a person gets tired, their attentiveness drops. And not only from fatigue: blood-pressure swings, temperature, or 300 grams knocked back (for some, 1000) will also make words and syllables come out poorly. There are even scientific methods for establishing the level of fatigue or alcohol intoxication, used, for example, by the police: walking along a straight line without swaying, touching your nose with your little finger, reciting a fixed tongue twister, and so on.
In the old days, the signs and principles of mental fatigue were even described in GOST standards, for example, here.
Seriously, though, for knowledge workers and for people making responsible decisions, adequate self-control is important, and a test for mental fatigue (and therefore for the adequacy of one's reactions and decisions) is needed. There are fairly simple but effective methods for diagnosing such states.
Knowing that the increasingly popular Android Wear platform supports not only touch input but also voice input, we decided to try implementing two types of tests at once:
a puzzle test that checks whether a text description matches the shape and color of a geometric figure;
a test of how distinctly a set of control phrases is pronounced.
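The first test's logic can be sketched in plain Java (the class and field names here are illustrative, not taken from the project): a task shows a figure and a text description, and the user's answer is correct only when it agrees with whether shape and color both match.

```java
// Minimal sketch of the Shape-test check. All names are hypothetical.
final class ShapeTask {
    enum Shape { CIRCLE, SQUARE, TRIANGLE }
    enum Color { RED, GREEN, BLUE }

    final Shape shownShape;
    final Color shownColor;
    final Shape describedShape;
    final Color describedColor;

    ShapeTask(Shape shownShape, Color shownColor,
              Shape describedShape, Color describedColor) {
        this.shownShape = shownShape;
        this.shownColor = shownColor;
        this.describedShape = describedShape;
        this.describedColor = describedColor;
    }

    /** The description matches only when both shape and color agree. */
    boolean descriptionMatches() {
        return shownShape == describedShape && shownColor == describedColor;
    }

    /** The user's yes/no answer is correct when it equals the true verdict. */
    boolean isAnswerCorrect(boolean userSaysMatch) {
        return userSaysMatch == descriptionMatches();
    }
}
```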
I'll say right away that at the time, voice input did not distinguish arbitrary words all that well; this is not the same as working with the predefined set of "OK, Google" commands. However, with some training, it was possible to very clearly tell apart a phrase distinctly spoken by a rested person from the same phrase pronounced in a hurry or half-asleep.
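The second test ultimately boils down to comparing the recognized text with the reference phrase. One simple way to score that comparison (an illustrative sketch, not the project's actual code) is normalized edit distance: a tired or slurring speaker tends to produce recognition results further from the reference.

```java
// Hypothetical scoring helper: similarity of 1.0 means an exact match,
// 0.0 means completely different strings.
final class PhraseScorer {
    /** Classic Levenshtein edit distance with two rolling rows. */
    static int levenshtein(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1),
                                   prev[j - 1] + cost);
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.length()];
    }

    /** Edit distance normalized by the longer string's length. */
    static double similarity(String expected, String recognized) {
        String a = expected.trim().toLowerCase();
        String b = recognized.trim().toLowerCase();
        int max = Math.max(a.length(), b.length());
        return max == 0 ? 1.0 : 1.0 - (double) levenshtein(a, b) / max;
    }
}
```

A threshold on this score (tuned per phrase) is one way to turn "how distinctly was this pronounced" into a pass/fail result.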
I repeat: the code is more than a year old, so you won't find Ambient Mode, a custom WatchFace, or other features that arrived with this year's SDK updates in it.
But we still learned something useful from the project.
SpeechRecognizer
First of all, working with android.speech.SpeechRecognizer.
At the time the project was written, SpeechRecognizer on Android Wear had two noticeable limitations compared to mobile Android:
The number of recognition results is limited to a single value. This is the array of values obtained from the results Bundle in the android.speech.RecognitionListener#onResults callback. On the one hand, this was done to improve performance, in the hope that voice recognition would evolve and return a sufficiently accurate result. On the other hand, because of this, the trick of letting the user choose one of n candidates for the recognized phrase, which worked on handheld Android, does not work on Android Wear. As a result: no manual accuracy tuning, and "full purity of the experiment" :).
There is no way to plug in a custom recognition animation. This would have allowed a beautiful voice-input equalizer reacting to android.speech.RecognitionListener#onRmsChanged, styled to match the application, instead of the standard white screen with the red Google voice-input button. One can live without it, though.
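Both limitations show up around the same listener. A rough sketch of the relevant callbacks (Android framework API, not runnable outside a device or emulator; `handleRecognizedPhrase` is a hypothetical helper, and the remaining `RecognitionListener` methods are omitted for brevity):

```java
SpeechRecognizer recognizer = SpeechRecognizer.createSpeechRecognizer(context);
recognizer.setRecognitionListener(new RecognitionListener() {
    @Override public void onResults(Bundle results) {
        // On Android Wear this list held exactly one candidate, so the
        // handheld trick of letting the user pick one of n matches is out.
        ArrayList<String> matches =
                results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        if (matches != null && !matches.isEmpty()) {
            handleRecognizedPhrase(matches.get(0)); // hypothetical helper
        }
    }

    @Override public void onRmsChanged(float rmsdB) {
        // On mobile this could drive a custom equalizer animation;
        // on Wear the standard system recognition UI is shown instead.
    }

    // ... the remaining RecognitionListener callbacks are omitted here
});
recognizer.startListening(new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH)
        .putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                  RecognizerIntent.LANGUAGE_MODEL_FREE_FORM));
```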
Using the familiar toolkit on Android Wear
As practice showed, ButterKnife, the GreenDao ORM, and many other handy tools that ease and speed up development work perfectly well in an Android Wear project. Moreover, during the project we managed to create shared Android components: custom Views, dialogs, and even Activities, available from a common library to both the mobile and the wear project.
Working with the database
I long ago made it a rule to use an ORM whenever possible, getting rid of a pile of hand-written classes full of public static constants, and over the years this approach has saved a lot of time and proved itself very well. I'll just say that I didn't much like the GreenDao used in the project: Sprinkles or DBFlow solve the same tasks more simply and elegantly. But that is my personal opinion, and if you want to see a live example of using GreenDao and its code generator to build the ORM classes for a project, you are welcome to look at the code.
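For a taste of what the generator-driven approach looks like, here is a hypothetical GreenDao schema-generator snippet (the entity and its fields are illustrative, not the project's actual schema); greenDAO then generates the entity and DAO classes from it:

```java
// Run as a plain Java program; greenDAO writes the generated classes to disk.
Schema schema = new Schema(1, "com.example.languor.db");
Entity result = schema.addEntity("TestResult");
result.addIdProperty();
result.addStringProperty("testType"); // e.g. "speech" or "shape"
result.addDoubleProperty("score");
result.addDateProperty("takenAt");
new DaoGenerator().generateAll(schema, "./app/src-gen");
```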
WearableConnector
Editing the phrase templates on the watch (which has no keyboard) is not very convenient, so we placed the editor on the smartphone. A GoogleApiClient with Wearable.API is used to send templates to the watch and results back to the phone.
For the data exchange (DataItems and Messages) we created the WearableConnector class. All it does is provide a single protocol (including the paths for DataMapRequests and sendMessage calls).
Synchronization of the voice-task templates and settings is handled with DataItems. This data is expected to stay unchanged for quite a long time, and the DataItems synchronization mechanism optimizes updates to it with a minimal payload.
The transfer of results from the watch to the smartphone is implemented with the MessageApi: as soon as a test is completed, the result is immediately sent to the smartphone and added to the list of results already stored in the database.
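The two transport paths can be sketched roughly like this (GoogleApiClient-era Wearable API; the paths, keys, and variable names are illustrative, not the project's actual protocol):

```java
// 1) Templates and settings go out as a DataItem and stay synced.
PutDataMapRequest request = PutDataMapRequest.create("/languor/templates");
request.getDataMap().putStringArray("phrases", phraseTemplates);
Wearable.DataApi.putDataItem(googleApiClient, request.asPutDataRequest());

// 2) A finished test's result is fired back immediately as a Message.
Wearable.MessageApi.sendMessage(
        googleApiClient, phoneNodeId, "/languor/result", resultPayload);
```

The design choice matches the two mechanisms' semantics: DataItems for state that must eventually be consistent on both devices, Messages for one-shot events that should arrive right away.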
As you can see, there are two types of results: the Speech test and the Shape test.
Note that the Shape test was originally developed for the handheld, and afterwards that part of the project was moved to Android Wear as a custom View. This once again confirmed the conjecture that implementations of many components port very well from the mobile platform to Wear.
The result of the experiment was an application that lets you check your degree of fatigue with a simple wrist-worn tool (your Android Wear smartwatch) and save the results to your smartphone. And, of course, the Android Wear development experience itself, which turned out to be not only fascinating but also fairly easy and pleasant, largely thanks to how well components and your favorite toolkit carry over from a regular Android project to an Android Wear one.
The first release, Languor v1.0.5, is available on GitHub.
Cheerfulness, mobility, and tirelessness to you! Until next time.