
Writing applications for Google Glass

A few days ago I had the opportunity to practice developing applications for Google Glass in depth. The experience I gained will fade over time, since for now I have no plans to develop anything else for the "glasses", so I decided to write this post while the impressions are still fresh.

I think everyone who is interested in Google Glass knows what the software inside this gadget looks like. Yes, it is Android 4 with an adapted launcher. Yes, you can run regular Android applications on the "glasses" by installing them via adb. You have probably also heard about the Mirror API, which until recently was considered the only official way to deliver your service to a Google Glass user. Below I will say a little about using that tool. But the main thing I would like to cover is how to write full-fledged Android applications for Google Glass using the not-yet-official Glass Development Kit.



So, for starters, let's make ourselves a Google Glass


If you are not among the select few owners of the revolutionary gadget, do not despair. You can turn your Android smartphone or tablet into an almost-real Google Glass by installing the launcher and a few related APKs on it from here. You get a full interface with timeline cards, properly working voice command recognition, Bluetooth, a working camera (I only managed to launch it normally on a Nexus 7) and Hangouts to boot. Navigation did not quite work out for me, but maybe you will have better luck. On first launch the launcher requests access to your account like a regular application. Grant it the rights and you become almost a real Glass Explorer. At the very least you can send yourself timeline cards via the Mirror API.



Why does Google give the Mirror API only to Google Glass owners?


What does a typical programmer do after gaining access to a new tool? Start writing code, of course. Then test. And when there seem to be no bugs left, publish the result one way or another. That is normal everywhere except on Google Glass. On this platform the user does not switch attention between the real and the virtual world: both are in front of him at the same time, and Google Glass is a unique device in that sense. A programmer who does not wear the "glasses" will most likely not manage to make his application sufficiently unobtrusive and at the same time functional, especially at first. Guidelines cannot fully replace the real experience of a Glass Explorer. That is probably why Google "hides" the Mirror API: to shield the still tiny community of Glass "wearers" from a flood of crude and intrusive applications.



But let's say you have access. What can you do with it?


We publish and subscribe, with no guaranteed delivery times


The central paradigm of the Google Glass interface is the timeline. To the right of the "home" screen with the clock and voice input lies an endless ribbon of cards receding into the past. All applications using the Mirror API publish their cards there in chronological order and can subscribe to events that happen to those cards.



The user generates events through the menu items attached to a card. A card can carry both predefined menu items, such as "Delete" or "Share", and items defined by the application. A card can also contain nested cards. The way these "bundles" are organized is rather primitive and does not allow multi-level constructions: we assign the same bundleId to a series of cards, and on the card that is supposed to be the "cover" we set isBundleCover = true. The cover's own menu then becomes inaccessible; the user can use it again only after deleting all the attached cards.
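
For illustration, here is a sketch of how such a bundle could be inserted, assuming the Google Mirror API Java client library and an already authorized Mirror service object (the card texts and the bundle id are made up):

import com.google.api.services.mirror.Mirror;
import com.google.api.services.mirror.model.TimelineItem;

import java.io.IOException;

// Sketch only: "mirror" must be an already authorized Mirror service object.
public class BundleExample {

    static void insertBundle(Mirror mirror) throws IOException {
        String bundleId = "report-2013-08-01"; // any string shared by the cards of one bundle

        // The cover card; while child cards exist, its own menu is unavailable.
        TimelineItem cover = new TimelineItem()
                .setText("Daily report")
                .setBundleId(bundleId)
                .setIsBundleCover(true);
        mirror.timeline().insert(cover).execute();

        // A child card attached to the same bundle.
        TimelineItem page = new TimelineItem()
                .setText("Details, page 1")
                .setBundleId(bundleId);
        mirror.timeline().insert(page).execute();
    }
}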



Cards can also live to the left of the "home" screen; these are "pinned" cards. You can try to add such a card through the Mirror API by setting isPinned = true, but you will most likely fail: the Mirror API will dump your card into the common ribbon anyway. There is a way out, though: add a menu item with the TOGGLE_PINNED action, and the user, if he finds it worthwhile, will pin your card himself. Updating the card will no longer affect this state; it stays pinned until you or the user deletes it, or until the user unpins it with the same menu item.
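
A sketch of that idea with the same Mirror API Java client library: the pinning decision is left to the user via a TOGGLE_PINNED menu item (the card text is, again, made up):

import com.google.api.services.mirror.Mirror;
import com.google.api.services.mirror.model.MenuItem;
import com.google.api.services.mirror.model.TimelineItem;

import java.io.IOException;
import java.util.Arrays;

// Sketch only: let the user pin the card via a TOGGLE_PINNED menu item.
public class PinExample {

    static void insertPinnableCard(Mirror mirror) throws IOException {
        TimelineItem card = new TimelineItem()
                .setText("Pin me if you find me useful")
                .setMenuItems(Arrays.asList(
                        new MenuItem().setAction("TOGGLE_PINNED"),
                        new MenuItem().setAction("DELETE")));
        mirror.timeline().insert(card).execute();
    }
}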



This, of course, is not everything you can do with the Mirror API. You can add your application as a "contact", letting the user share a photo or video with it. Cards can carry attachments. There are plenty of options for shaping how cards look. I will just leave links to a couple of useful resources where you can try all of this: the APIs Explorer lets you practice talking to the Mirror API, and the playground lets you "design" the cards.
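
For completeness, a sketch of the contact and attachment cases with the same client library; the id, display name, image URL and file name are purely illustrative, so treat the exact calls as an assumption to check against the Mirror API reference:

import com.google.api.client.http.InputStreamContent;
import com.google.api.services.mirror.Mirror;
import com.google.api.services.mirror.model.Contact;
import com.google.api.services.mirror.model.TimelineItem;

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Arrays;

// Sketch only: register a share "contact" and insert a card with an image attachment.
public class ContactAndAttachmentExample {

    static void registerContact(Mirror mirror) throws IOException {
        Contact appContact = new Contact()
                .setId("my-glass-service")            // illustrative id
                .setDisplayName("My Glass Service")
                .setImageUrls(Arrays.asList("https://example.com/icon.png"));
        mirror.contacts().insert(appContact).execute();
    }

    static void insertWithAttachment(Mirror mirror) throws IOException {
        TimelineItem card = new TimelineItem().setText("Card with an attachment");
        InputStreamContent media =
                new InputStreamContent("image/jpeg", new FileInputStream("photo.jpg"));
        mirror.timeline().insert(card, media).execute();
    }
}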



But something else matters more: you will NEVER be able to build a truly interactive application with the Mirror API. The user can do something in your "interface", but you cannot be sure when Google will deliver that event to you. You can show something to the user, but you cannot predict when the user will actually receive your "message". Most great application ideas are simply, fundamentally, not implementable with the Mirror API. This has to be understood, and it has to be accepted.



How to make something interactive?


And here the Glass Development Kit comes to the rescue. Officially it has already been announced, although it has not yet been published; Google suggests using the usual Android SDK for now. That is possible, but do not forget the very unusual properties of Google Glass when it comes to user input. There are no buttons. There is no touch panel in the usual sense: what the Glass Explorer performs as a "tap" or "swipe" the system understands only as gestures, and onTouch will not catch them. There is no way to intercept a long press, and the top-to-bottom swipe is reserved; it reaches the application as onBackPressed. Sensors, oddly enough, gain importance: a nod or a turn of the head is a worthy replacement for buttons on this device. Voice input, which is supposed to replace everything, is not yet as good as we would like. At least I have not managed to add my own commands and receive events when they are recognized; maybe I just did not try hard enough and you will have better luck.
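
Since head movements have to substitute for buttons, here is a minimal sketch of detecting a "nod" with the standard Android SensorManager. Nothing in it is Glass-specific, and the NOD_THRESHOLD value and the whole "nod equals button press" interpretation are my own assumptions that would need tuning on a real device:

import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

// Illustrative example: treat a sharp downward pitch of the head as a "nod".
public class NodActivity extends Activity implements SensorEventListener {

    private static final float NOD_THRESHOLD = 2.5f; // rad/s around the X axis, picked arbitrarily

    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
    }

    @Override
    protected void onResume() {
        super.onResume();
        Sensor gyro = sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
        sensorManager.registerListener(this, gyro, SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // A fast nod shows up as a spike in angular velocity around the X axis.
        if (Math.abs(event.values[0]) > NOD_THRESHOLD) {
            onNod();
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    private void onNod() {
        // React to the "nod" as if a button had been pressed.
    }
}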



In general terms, it is done like this


We take some native application for Google Glass, for example this one. From it we pull out glasslib.jar, which is, presumably, an early version of what will later be published as the GDK. We add it to our project and gain the ability to manipulate timeline cards just as with the Mirror API, only with two significant advantages: no delays and no restrictions. If you now create a card with isPinned(true), it obediently appears to the left of the "home" screen without any user involvement. We work with the timeline through TimelineHelper, and always from a service. The usual scheme is this: the application has a single Activity, which starts the Service on launch and immediately finishes; it also does not hurt to listen for the device boot event in a BroadcastReceiver so the service comes back up after a reboot (a sketch of both follows). In the Service we check whether the user already has our application's card (it helps to keep its id in SharedPreferences), delete the old one, add a new one, and save its id again.
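
A minimal sketch of that scheme might look like this (the class names and manifest wiring are illustrative; in a real project each class goes into its own file, the receiver is registered for android.intent.action.BOOT_COMPLETED and the app holds the RECEIVE_BOOT_COMPLETED permission):

import android.app.Activity;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;

// Launcher Activity: start the service and finish immediately.
public class StartActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        startService(new Intent(this, GlassService.class));
        finish();
    }
}

// Boot receiver: bring the service back up after a reboot.
public class BootReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        context.startService(new Intent(context, GlassService.class));
    }
}

And the Service itself, which manages our card: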



import android.app.Service;
import android.content.ContentResolver;
import android.content.Intent;
import android.content.SharedPreferences;
import android.os.IBinder;
import android.preference.PreferenceManager;

import com.google.glass.location.GlassLocationManager;
import com.google.glass.timeline.TimelineHelper;
import com.google.glass.timeline.TimelineProvider;
import com.google.glass.util.SettingsSecure;
import com.google.googlex.glass.common.proto.MenuItem;
import com.google.googlex.glass.common.proto.MenuValue;
import com.google.googlex.glass.common.proto.TimelineItem;

import java.util.UUID;

public class GlassService extends Service {

    private static final String HOME_CARD = "home_card";

    @Override
    public int onStartCommand(Intent intent, int flags, int startid) {
        super.onStartCommand(intent, flags, startid);
        GlassLocationManager.init(this);

        SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(this);
        String homeCardId = preferences.getString(HOME_CARD, null);
        TimelineHelper tlHelper = new TimelineHelper();
        ContentResolver cr = getContentResolver();

        if (homeCardId != null) {
            // Find and delete the previous home card.
            TimelineItem timelineItem = tlHelper.queryTimelineItem(cr, homeCardId);
            if (timelineItem != null && !timelineItem.getIsDeleted()) {
                tlHelper.deleteTimelineItem(this, timelineItem);
            }
        }

        // Create a new home card with a custom menu item and a Delete item.
        String id = UUID.randomUUID().toString();
        MenuItem delOption = MenuItem.newBuilder()
                .setAction(MenuItem.Action.DELETE)
                .build();
        MenuItem customOption = MenuItem.newBuilder()
                .addValue(MenuValue.newBuilder().setDisplayName("Custom").build())
                .setAction(MenuItem.Action.BROADCAST)
                .setBroadcastAction("net.multipi.TEST_ACTION")
                .build();

        TimelineItem.Builder builder = tlHelper.createTimelineItemBuilder(this, new SettingsSecure(cr));
        TimelineItem item = builder
                .setId(id)
                .setText("Hello, world!")
                .setIsPinned(true)
                .addMenuItem(customOption)
                .addMenuItem(delOption)
                .build();

        // Insert the card directly into the timeline content provider and remember its id.
        cr.insert(TimelineProvider.TIMELINE_URI, TimelineHelper.toContentValues(item));
        preferences.edit().putString(HOME_CARD, id).commit();

        return START_NOT_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}




As you can see above, our card carries a menu with two items: Delete and Custom. The first is handled by the system, which obediently removes the card; the second is delivered to us as a broadcast, which we can catch and process.
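
Catching the Custom item is plain Android. A sketch of a receiver, registered in the manifest with an intent filter for net.multipi.TEST_ACTION:

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.util.Log;

// Receiver for the "Custom" menu item defined in the service above.
public class MenuActionReceiver extends BroadcastReceiver {

    private static final String TAG = "MenuActionReceiver";

    @Override
    public void onReceive(Context context, Intent intent) {
        if ("net.multipi.TEST_ACTION".equals(intent.getAction())) {
            // React however the application needs; here we only log the event.
            Log.d(TAG, "Custom menu item selected");
        }
    }
}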

So as not to stop at a banal "Hello, world", I have put together a small project. You can use it as more advanced material for studying the specifics of "native" development for Google Glass. And, of course, I am always ready to answer questions.



Of course, no one forces us to use the timeline as our application's interface. We can just as easily bring up an Activity with simple controls and teach the user how to handle them... For graphically rich applications, such as games, this will be the only way. But ordinary applications, I think, should be done in the style "native" to this unusual platform. Then they can count on a much warmer welcome from users.

Source: https://habr.com/ru/post/188710/


