
By summer I had received an invite to the Google Wave sandbox. But there were a lot of people in that sandbox, all the waves were public, and my poor netbook digested all that activity only with a loud creak, so, after playing around a little, I gave up on the sandbox :)
And just recently my sandbox account turned into a live one, so, having sent invites to everyone I could and waited until at least one of my acquaintances received theirs, I sat down to figure out the robot API.
The result of those proceedings is this basic robot: bakarobo@appspot.com, which so far can do next to nothing:
on the command !br:bor! - get a random quote from bash;
on the command !br:rb! - get the photo of the day from rosbest;
on the command !br:BakaRobo! - respond :)
and it swears back at all unfamiliar commands. In the process of building it I realized a funny thing: a large, cool API was developed for Wave robots... which is virtually undocumented at the moment :) At the very least, the reference for the Python API is just a bare list of classes and functions from which you can understand practically nothing.
So, having spent some time reading various docs and samples, I have, it seems to me, singled out the basic set of information needed to make some kind of useful robot. I want to tell about all these necessary things, perhaps not in a very structured way :)
Let's start with the fact that robots have to be hosted on Google App Engine. I won't describe how to create an application there and download the toolkit for uploading the code - everything is explained there very clearly.
So, we downloaded the tools, and in some folder on the disk, we had something like this:
.
..
google_appengine
our_robot
Where our_robot is the folder our robot will live in. Into this folder we download and unpack this archive from code.google.com - this is, in fact, the Python API.
Now we are ready for the actual development.
Just in case: deploying the code to App Engine is done like this:
python ./google_appengine/appcfg.py update ./our_robot/
after which we are asked for our email and password, and the files are uploaded.
In the basic case there will be three main files in the project:
our_robot.py - actually, the robot code
app.yaml - something like a manifest
_wave/capabilities.xml - a file announcing the events the robot wants to listen to.
Addition from farcaller: the Python API, by the way, generates this xml itself, based on the arguments to robot.Robot, but in the Java version you have to write it by hand.
So, apparently, one of the gestures in the development process can be skipped.
The list of events can be found here, but the most important ones for a robot, in my opinion, are:
WAVELET_SELF_ADDED - fired when the robot is added to a wave; this is a good moment to show a little usage info (see the sketch right after this list);
BLIP_SUBMITTED - fired when a blip in the wave is created / edited - not while the text is being typed, but when the “Done” button is pressed.
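To make that concrete, here is a minimal sketch of a WAVELET_SELF_ADDED handler that posts a short usage blip when the robot joins a wave. The greeting text is made up; the calls themselves (GetRootWavelet, CreateBlip, SetText) are standard waveapi ones used throughout this article:

def OnRobotAdded(properties, context):
  # greet the wave with a fresh blip containing usage info
  wavelet = context.GetRootWavelet()
  wavelet.CreateBlip().GetDocument().SetText(
      'BakaRobo is here! Try !br:BakaRobo! to check that I am alive.')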
Let's go further.
The manifest app.yaml looks, judging by the tutorial on code.google.com, like this:
application: our_robot
version: 1
runtime: python
api_version: 1

handlers:
- url: /_wave/.*
  script: our_robot.py
- url: /assets
  static_dir: assets
- url: /icon.png
  static_files: icon.png
  upload: icon.png
Here, it seems, everything is clear: the name of the robot, the version, what we run, the API version, and handlers for the various URLs.
The only thing worth paying attention to is "- url: /icon.png" in the handlers section. This one, it seems, is not in the tutorial; the construct lets us serve the robot's icon. We draw it, save it as png in the robot's folder, and point to it from the Python file (the image_url argument below) :)
capabilities.xml, again per the tutorial, also looks straightforward:
<?xml version="1.0" encoding="utf-8"?>
<w:robot xmlns:w="http://wave.google.com/extensions/robots/1.0">
  <w:capabilities>
    <w:capability name="WAVELET_SELF_ADDED" content="true" />
    <w:capability name="BLIP_SUBMITTED" content="true" />
  </w:capabilities>
  <w:version>1</w:version>
</w:robot>
Actually, there’s really nothing to change in this file: only the version number and the events that we want to listen to.
But once all this preliminary fuss is over, the actual, rather pleasant work of writing the robot's Python code begins.
To begin with, I will describe the general structure of the code as it is given in the examples and the tutorial, and then go over the various minor utilities that the tutorial lacks and that you still have to dig for in the reference, so I had to extract them from the examples.
So, in general, a blank robot looks like this:
from waveapi import events
from waveapi import model
from waveapi import robot

def OnRobotAdded(properties, context):
  pass

def OnBlipSubmitted(properties, context):
  pass

if __name__ == '__main__':
  myRobot = robot.Robot('our_robot',
      image_url='http://our_robot.appspot.com/icon.png',  # the icon we declared in app.yaml
      version='2.3',                                      # robot version
      profile_url='http://our_robot.appspot.com/')        # the robot's profile page
  # register handlers for the events we listen to:
  myRobot.RegisterHandler(events.WAVELET_SELF_ADDED, OnRobotAdded)
  myRobot.RegisterHandler(events.BLIP_SUBMITTED, OnBlipSubmitted)
  # start the robot
  myRobot.Run()
And everything seems wonderful and clear. But when you start writing the actual event handlers, you discover it is completely unclear how, for example, to replace one piece of text with another, let alone color or underline something.
As a result of a not too long but rather stubborn investigation, I dug up a list of useful methods that was enough for me to write the robot.
First, to get hold of the blip the event happened to (if, of course, the event happened to a blip), inside the event-handler functions we use:

blip = context.GetBlipById(properties['blipId'])

Second, to get the text of the blip and operate on it, we do:

doc = blip.GetDocument()
contents = doc.GetText()

Accordingly, to replace some piece of the text with another, we use the resulting doc:

doc.SetTextInRange(model.document.Range(START, END), NEW_TEXT)

To insert a piece of text at an arbitrary position:

doc.InsertText(START, TEXT)

To append a piece of text at the end:

doc.AppendText(TEXT)

To insert a picture at the end:

doc.AppendElement(model.document.Image(IMAGE_URL, WIDTH, HEIGHT, ATTACHMENT_ID, ALT))

And at a given position:

doc.InsertElement(START, model.document.Image(IMAGE_URL, WIDTH, HEIGHT, ATTACHMENT_ID, ALT))

In general, it is useful to look through this reference to find out what can be done with a document, and to look at the reference for waveapi.document.* to find out which element types can be created - there is Image, Link and even Gadget.
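To tie these calls together, here is a minimal sketch of a BLIP_SUBMITTED handler reacting to the !br:BakaRobo! command from the beginning of the article; the command and reply strings are just examples, the calls are those listed above:

def OnBlipSubmitted(properties, context):
  blip = context.GetBlipById(properties['blipId'])
  doc = blip.GetDocument()
  contents = doc.GetText()
  command = '!br:BakaRobo!'  # example command string
  pos = contents.find(command)
  if pos != -1:
    # replace the command text with the robot's reply
    doc.SetTextInRange(model.document.Range(pos, pos + len(command)), 'Baka-baka!')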
Next. All the formatting and various other blip metadata is stored in so-called annotations. Working with them is simple:

doc.SetAnnotation(model.document.Range(START, END), TYPE, VALUE)

Here TYPE describes what exactly we are adding as an annotation. The most important one, IMHO, is 'style/STYLE_PROP', where STYLE_PROP is a CSS property written in its JS form.
In case someone doesn't know: that is the transformed notation for CSS properties used in JS scripts; its essence is easier to show with examples :) For instance, color is just color, but font-size becomes fontSize. That is, wherever CSS has a hyphen, this notation drops it and every word except the first starts with a capital letter: backgroundColor, backgroundImage, marginTop, and so on.
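Just for illustration, a tiny helper of my own (not part of waveapi) that turns a CSS property name into this JS form:

def css_to_js(prop):
  # 'background-color' -> 'backgroundColor', 'color' -> 'color'
  parts = prop.split('-')
  return parts[0] + ''.join(p.capitalize() for p in parts[1:])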
Annotations are removed just as simply. You can bluntly kill all annotations of a given type, for example everything about font color or background color, like this:

doc.DeleteAnnotationsByName(TYPE)

Or you can clear only a certain range of text of annotations of a certain type:

doc.DeleteAnnotationsInRange(model.document.Range(START, END), TYPE)

Annotations are also useful because you can store in them any info that relates to the blip.
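For example, a minimal sketch that paints the first occurrence of a word in a blip red and bold; the word and the style values are arbitrary, and the annotation names follow the 'style/STYLE_PROP' scheme just described:

def HighlightWord(doc, word):
  contents = doc.GetText()
  pos = contents.find(word)
  if pos != -1:
    r = model.document.Range(pos, pos + len(word))
    doc.SetAnnotation(r, 'style/color', 'red')
    doc.SetAnnotation(r, 'style/fontWeight', 'bold')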
To annotate the whole blip at once, use:

doc.AnnotateDocument(TYPE, VALUE)

And to find out whether the blip has an annotation of a given type, call:

doc.HasAnnotation(TYPE)

A set roughly like this, it seems to me, is already enough to create robots that can do something useful. Of course, things like the GAE datastore and other pleasant matters remain beyond the scope of this text, but I hope it will not be completely useless.
PS: By the way, I noticed a quirk that got in the way of development a lot at first: on the Logs tab on appspot (the only debugging tool available to us), the default level of displayed messages is set to Error. The joke is that if a log record begins with a message of some other level, say Info, and only then comes the part with the error, such a record will not be shown to us at all. So: switch the level to Debug and enjoy the ability to see all the errors that occurred.
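On that note, a minimal sketch of writing to those logs from a handler with the standard logging module, which App Engine routes to that Logs tab (the message text is arbitrary):

import logging

def OnBlipSubmitted(properties, context):
  # an Info-level record: invisible while the log filter
  # is left at the default Error level
  logging.info('blip %s submitted', properties['blipId'])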
PPS: Thank you, moved to the appropriate blog.