Recently, the wonderful Flash game Machinarium took first place among paid iPad games. Nevertheless, many talented Flash game developers are still eyeing mobile platforms cautiously. Very little information on the topic is available in Russian; I hope this article improves the situation a bit. Enjoy the read.
Initially, Adobe AIR was conceived as a platform for desktop applications. Now, however, it supports the development of stand-alone applications for mobile devices, desktops, and home digital devices. This wide reach makes it an attractive development platform. At the same time, each environment has its own specifics that must be taken into account.
For example, mobile applications most often run for short periods of time. The user interface of such applications should be convenient for use on small screens, scaled to display correctly on tablets and support different device orientations. It should work on touchscreens and use software and hardware features unique to a particular class of devices. In addition, you must consider the requirements for memory and graphics.
This article describes the capabilities AIR provides to developers of applications for smartphones and tablets running iOS, Android, and BlackBerry Tablet OS.
Screens
Perhaps the first and most obvious subject of consideration is the screen of a mobile device. It is relatively small, both in terms of physical dimensions and in terms of the number of pixels displayed, and is characterized by high density (in pixels per inch). Different devices have different combinations of pixel density and physical sizes. In addition, the device itself can be held in both portrait and landscape orientation.
To work with all this variety of sizes and densities, AIR provides the following key APIs:
Stage.stageWidth, Stage.stageHeight
: These two properties store the actual screen size. Their values can change at runtime, for example when the application enters or exits full-screen mode, or when the device orientation changes (more on orientation below).
Capabilities.screenDPI
: This property stores the screen density in pixels per inch.
Using these values, the application can adapt to screens with different characteristics.
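As a minimal sketch of such adaptation, the snippet below scales a UI element relative to an assumed 160 DPI reference design and centers it using the live stage size (the `button` sprite and the 160 DPI baseline are assumptions for illustration):

```actionscript
import flash.display.Sprite;
import flash.system.Capabilities;

// Scale a UI element relative to an assumed 160 DPI reference design.
var designDPI:Number = 160;
var scaleFactor:Number = Capabilities.screenDPI / designDPI;

var button:Sprite = new Sprite();
button.graphics.beginFill(0x3366CC);
button.graphics.drawRect(0, 0, 100, 40); // 100x40 at the reference density
button.graphics.endFill();
button.scaleX = button.scaleY = scaleFactor;

// Center it on the actual screen using the live stage size.
button.x = (stage.stageWidth - button.width) / 2;
button.y = (stage.stageHeight - button.height) / 2;
addChild(button);
```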
Note: If you have previously developed desktop AIR applications, be aware that when developing for mobile platforms you work with the Stage, not with NativeWindow. That is, you can still obtain a reference to a NativeWindow, but it will have no effect. This is deliberate, so that you can write code that runs both on mobile devices and on desktops. To find out whether NativeWindow is available, use
NativeWindow.isSupported
.
Mobile applications are not required to support screen rotation, but keep in mind that not all mobile devices use portrait orientation (height greater than width) by default. Applications that do not want the screen to rotate automatically can opt out by setting the autoOrients element in the application descriptor to false. Conversely, you can set it to true and listen for
StageOrientationEvent
events on the Stage.
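A sketch of such a listener is shown below; `relayout()` is a hypothetical helper standing in for whatever layout logic the application needs:

```actionscript
import flash.display.StageOrientation;
import flash.events.StageOrientationEvent;

// React to orientation changes reported by the Stage.
stage.addEventListener(StageOrientationEvent.ORIENTATION_CHANGE, onOrientationChange);

function onOrientationChange(event:StageOrientationEvent):void {
    var landscape:Boolean =
        event.afterOrientation == StageOrientation.ROTATED_LEFT ||
        event.afterOrientation == StageOrientation.ROTATED_RIGHT;
    relayout(landscape); // hypothetical: rearrange the UI for the new orientation
}
```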
Touch input
From the moment the application is drawn on the screen, it is usually ready to accept user input. For a mobile application, this means input via touchscreen.
AIR automatically maps simple one-finger gestures to mouse events, such as a single-finger tap acting as a button press. This allows you to write code that works unchanged on mobile devices as well as on desktop computers.
For more complex multi-touch interaction, AIR includes the following APIs:
Multitouch
: provides information on which types of touch interaction are available
TouchEvent
: events the application receives when processing touches
GestureEvent, PressAndTapGestureEvent, TransformGestureEvent
: events the application receives when handling gestures
For applications that work with standard system gestures (for example, pinching and spreading two fingers to zoom), set
Multitouch.inputMode
to
MultitouchInputMode.GESTURE
. The system then automatically converts single touches into the corresponding gestures. For example, a pinch/zoom gesture produces TransformGestureEvent events of type
TransformGestureEvent.GESTURE_ZOOM
.
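A minimal sketch of gesture-mode zooming, applied to a display object named `content` (an assumption for illustration):

```actionscript
import flash.ui.Multitouch;
import flash.ui.MultitouchInputMode;
import flash.events.TransformGestureEvent;

// Let the system recognize gestures and report them as events.
Multitouch.inputMode = MultitouchInputMode.GESTURE;

content.addEventListener(TransformGestureEvent.GESTURE_ZOOM, onZoom);

function onZoom(event:TransformGestureEvent):void {
    // scaleX/scaleY carry the relative scale change since the previous event.
    content.scaleX *= event.scaleX;
    content.scaleY *= event.scaleY;
}
```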
If the application wants to receive raw touch information, set
Multitouch.inputMode
to
MultitouchInputMode.TOUCH_POINT
. The system will then generate a series of touch begin, move, and end events, and the screen can be touched at several points simultaneously. Interpreting such multiple touches is up to the application.
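For example, a sketch that tracks each active touch point by its touchPointID:

```actionscript
import flash.ui.Multitouch;
import flash.ui.MultitouchInputMode;
import flash.events.TouchEvent;

// Receive raw touch points instead of synthesized gestures.
Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT;

// Map touchPointID -> last known position.
var activeTouches:Object = {};

stage.addEventListener(TouchEvent.TOUCH_BEGIN, function(e:TouchEvent):void {
    activeTouches[e.touchPointID] = {x: e.stageX, y: e.stageY};
});
stage.addEventListener(TouchEvent.TOUCH_MOVE, function(e:TouchEvent):void {
    activeTouches[e.touchPointID] = {x: e.stageX, y: e.stageY};
});
stage.addEventListener(TouchEvent.TOUCH_END, function(e:TouchEvent):void {
    delete activeTouches[e.touchPointID];
});
```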
Text input
Mobile devices with a software keyboard (i.e., one displayed on the screen rather than built from physical keys) require special attention. Although not every device has such a keyboard, devices that use one are starting to dominate, and you need to make sure everything works well on them.
When displayed, the software keyboard covers part of the screen. To accommodate this, AIR shifts the Stage so that both the input field and the keyboard are visible at the same time, moving the top of the application off screen. You can disable this behavior and implement your own: set the softKeyboardBehavior parameter in the application descriptor to none (the default value is pan).
The Stage.softKeyboardRect field stores the size of the area occupied by the software keyboard. Listen to the SoftKeyboardEvent event to know when this value changes.
The application, as a rule, does not have to worry about opening the software keyboard; this happens by itself when the text field receives focus. However, you can request automatic opening of the keyboard for any InteractiveObject when it receives focus (the InteractiveObject.needsSoftKeyboard property) or open it yourself by calling InteractiveObject.requestSoftKeyboard (). These APIs will not have any effect on devices that do not support the software keyboard.
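A sketch of custom handling (assuming softKeyboardBehavior is set to none in the descriptor; `inputField` is an assumed text field):

```actionscript
import flash.events.SoftKeyboardEvent;
import flash.geom.Rectangle;

// React to the keyboard appearing and disappearing.
stage.addEventListener(SoftKeyboardEvent.SOFT_KEYBOARD_ACTIVATE, onKeyboard);
stage.addEventListener(SoftKeyboardEvent.SOFT_KEYBOARD_DEACTIVATE, onKeyboard);

function onKeyboard(event:SoftKeyboardEvent):void {
    var kb:Rectangle = stage.softKeyboardRect; // empty when the keyboard is hidden
    if (kb.height > 0 && inputField.y + inputField.height > kb.y) {
        inputField.y = kb.y - inputField.height - 10; // keep the field visible
    }
}
```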
Sensors
User interaction with a mobile device is not limited to one multi-touch screen. Applications can use information about the location of the user, as well as respond to the orientation and movement of the device in space. AIR supports these features through the following key APIs:
- Geolocation: Handles events related to the geographical location of the device (latitude and longitude), as well as its movement (direction and speed)
- Accelerometer: Handles events indicating the forces applied to a device in x, y, and z axes
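Both APIs can be used as in the sketch below, which checks availability before subscribing (the update intervals are illustrative values):

```actionscript
import flash.sensors.Accelerometer;
import flash.sensors.Geolocation;
import flash.events.AccelerometerEvent;
import flash.events.GeolocationEvent;

// Subscribe to location updates if the device supports them.
if (Geolocation.isSupported) {
    var geo:Geolocation = new Geolocation();
    geo.setRequestedUpdateInterval(10000); // a hint in milliseconds, not a guarantee
    geo.addEventListener(GeolocationEvent.UPDATE, function(e:GeolocationEvent):void {
        trace("lat:", e.latitude, "lon:", e.longitude, "speed:", e.speed);
    });
}

// Subscribe to accelerometer updates if the device supports them.
if (Accelerometer.isSupported) {
    var acc:Accelerometer = new Accelerometer();
    acc.setRequestedUpdateInterval(100);
    acc.addEventListener(AccelerometerEvent.UPDATE, function(e:AccelerometerEvent):void {
        trace("g-forces:", e.accelerationX, e.accelerationY, e.accelerationZ);
    });
}
```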
For some applications, the possibility of geolocation is an integral part (for example, an application that determines the location of the nearest ATM). In other applications, geolocation can be used for convenience (for example, a voice note with information on where it is recorded).
The accelerometer is useful when you want to know the actual position of the device in space rather than just its logical orientation. It lets the device itself become an input tool, like a joystick; many applications use this feature.
Both of the described APIs let you set a sensor polling interval: the frequency with which updates are delivered to the registered listeners. Note, however, that the requested frequency is not guaranteed and may depend on various factors, such as the hardware characteristics of the device.
Display web content
A modern runtime environment should support HTML content. In AIR, this is the job of the StageWebView API, which gives AIR applications access to the device's built-in HTML rendering capabilities. Note that because StageWebView uses the platform's own HTML engine, identical rendering across platforms cannot be guaranteed.
Another consequence of using native tools is that the StageWebView content is not integrated with the application's display list. However, this content can be saved as a bitmap using drawViewPortToBitmapData (), and then placed on the display list. This can be used, for example, to create a screenshot of the page or create smooth transition effects.
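A sketch of this workflow (the URL is a placeholder; in practice you would call `snapshot()` after the view has finished loading, e.g. on its Event.COMPLETE):

```actionscript
import flash.media.StageWebView;
import flash.geom.Rectangle;
import flash.display.BitmapData;
import flash.display.Bitmap;

// Show a page in the native web view.
var webView:StageWebView = new StageWebView();
webView.stage = this.stage;
webView.viewPort = new Rectangle(0, 0, stage.stageWidth, stage.stageHeight);
webView.loadURL("http://www.example.com/"); // placeholder URL

// Capture the viewport into a bitmap and move it onto the display list.
function snapshot():void {
    var bmp:BitmapData = new BitmapData(webView.viewPort.width, webView.viewPort.height);
    webView.drawViewPortToBitmapData(bmp);
    addChild(new Bitmap(bmp)); // now part of the regular display list
    webView.stage = null;      // detach the native view
}
```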
For those familiar with the HTMLLoader API in AIR, it's worth noting that StageWebView is not a replacement. HTMLLoader contains built-in HTML support and allows HTML and JavaScript to run outside the browser's sandbox. StageWebView works with HTML and JavaScript content within the traditional browser sandbox.
If you want to send the user from your application to the browser, or to applications like YouTube or Google Maps, use navigateToURL().
Images
When it comes to taking photographs, the question is no longer “Can this device take pictures?” but rather “How many cameras does it have?”. AIR for mobile devices includes APIs for working both with the camera directly and with previously captured photos.
Classes CameraUI and CameraRoll
The built-in camera functionality is available through the CameraUI class. As the name suggests, it differs from the similar Camera class in that it works through the device's camera user interface rather than with the camera directly. Depending on the device, this can mean letting the user choose between photo and video, set the image resolution, toggle the flash, switch between the front and rear cameras, and so on.
However, mobile devices not only take pictures, but also store them. A library of user-made photos is available through the CameraRoll class. The browseForImage () method can be used to open the standard image selection dialog from the library. In addition, you can write your data to this area using the addBitmapData () method.
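A sketch combining both classes, with availability checks first (`onMedia` simply traces what arrives):

```actionscript
import flash.media.CameraUI;
import flash.media.CameraRoll;
import flash.media.MediaType;
import flash.events.MediaEvent;

// Launch the native camera UI if the device provides one.
if (CameraUI.isSupported) {
    var cameraUI:CameraUI = new CameraUI();
    cameraUI.addEventListener(MediaEvent.COMPLETE, onMedia);
    cameraUI.launch(MediaType.IMAGE);
}

// Or let the user pick an existing photo from the library.
if (CameraRoll.supportsBrowseForImage) {
    var roll:CameraRoll = new CameraRoll();
    roll.addEventListener(MediaEvent.SELECT, onMedia);
    roll.browseForImage();
}

function onMedia(event:MediaEvent):void {
    trace("got a MediaPromise of type:", event.data.mediaType);
}
```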
Class MediaPromise
CameraUI and CameraRoll return media content via a MediaEvent. There is nothing unusual about it except the data field, of type MediaPromise, through which the relevant information is available.
As the name suggests, MediaPromise promises to provide content, but does not necessarily keep it in itself. This is an important nuance, so you should take a few minutes to understand it.
Whether a media file should be placed in memory or worked with directly from the storage device depends on several factors. For example, a video is usually large, and it is impractical to load it entirely into memory, since memory is typically scarce. On the other hand, a freshly taken photo is already in memory, is not as large as a video, and can be handled directly from memory.
The MediaPromise class is a wrapper designed to simplify work with this uncertainty by providing access to a media object through a single interface. If an application wants to work with an object directly from a storage device, you can check its presence there through MediaPromise.file (must be non-null).
If an application wishes to process a media object in memory, it can always be read through the stream accessible via MediaPromise.open (). MediaPromise automatically returns the contents of the media object, no matter where it is located - in memory or on a storage device. When using open (), be sure to check MediaPromise.isAsync to determine the type of stream.
Finally, if you need to directly place the media file on the display list, avoiding unnecessary copying in the code, the Loader class has the Loader.loadFilePromise () method. As the name implies, it works with any FilePromise (MediaPromise implements the IFilePromise interface).
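For instance, a handler along these lines puts the returned media straight onto the display list without copying it through application code:

```actionscript
import flash.display.Loader;
import flash.events.MediaEvent;
import flash.media.MediaPromise;

// Handler for MediaEvent from CameraUI or CameraRoll.
function onMediaSelected(event:MediaEvent):void {
    var promise:MediaPromise = event.data;

    // Non-null file means the media is backed by the storage device.
    if (promise.file != null) {
        trace("backed by a file at:", promise.file.nativePath);
    }

    // Works the same whether the media lives in memory or on disk.
    var loader:Loader = new Loader();
    loader.loadFilePromise(promise);
    addChild(loader);
}
```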
Application life cycle
On mobile devices, applications have little control over their own life cycle. They cannot launch themselves; they start only through direct user action (for example, from the home screen or via a URL associated with the application). At any moment an application can be sent to the background and then terminated altogether (this usually happens when the currently active application runs short of resources).
On some mobile platforms, NativeApplication.exit() does nothing (it is a “no-op”). Instead of saving state on exit, applications should do so when moving to the background and/or at regular intervals.
The application receives the DEACTIVATE and ACTIVATE events, respectively, when it goes into suspend mode and returns from it. In addition to this, AIR additionally takes some specific actions, the details depend on the platform.
Android background behavior
On Android, applications are expected to do as little work as possible in the background, but there are no hard limits. When an AIR application goes into the background, the frame rate drops to 4 fps, and although event processing continues, the rendering phase of the event loop is skipped.
Thus, an AIR application on Android can perform background tasks such as downloading and uploading files or synchronizing information. However, when moving to the background, the application should take additional steps on its own: reduce the frame rate, disable unnecessary timers, and so on.
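A sketch of such throttling via the ACTIVATE/DEACTIVATE events mentioned above (`saveState()` and the 30 fps foreground rate are assumptions for illustration):

```actionscript
import flash.events.Event;
import flash.desktop.NativeApplication;

const ACTIVE_FRAME_RATE:Number = 30; // assumed foreground frame rate

// Throttle when going into the background; restore when coming back.
NativeApplication.nativeApplication.addEventListener(Event.DEACTIVATE,
    function(e:Event):void {
        stage.frameRate = 1; // do as little work as possible in the background
        saveState();         // hypothetical: persist scores, settings, etc.
    });
NativeApplication.nativeApplication.addEventListener(Event.ACTIVATE,
    function(e:Event):void {
        stage.frameRate = ACTIVE_FRAME_RATE;
    });
```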
iOS background behavior
On iOS, applications do not get true multitasking. An application must declare that it intends to perform a specific activity in the background (for example, keep a VoIP call alive or download a file).
AIR does not support this model, so when an application goes into the background it is simply suspended: the frame rate is set to 0 fps, events are not processed, and no rendering occurs. The application's state is nevertheless kept in memory and restored when it returns from the background.
Performance
To achieve good performance of your application, you need to carefully consider all its aspects.
Start time
Reducing startup time is not easy, because startup typically touches many parts of an application. Focus first on reducing the amount of code executed, rather than on speeding it up.
For example, suppose you are writing a game and want the first screen to show a high-score table stored locally. Executing this code can be surprisingly expensive. First, since the code runs for the first time, there is the cost of interpreting/compiling it, and it runs somewhat slower than steady-state ActionScript code usually does. Second, there is the cost of reading the information from the file system. Finally, there is the cost of laying out and displaying the information on the screen.
Instead, consider doing all this work after the first screen is displayed. Then, while the user is busy admiring your artwork, you can prepare the score table and reveal it with an animation.
In general, the approach to optimization is this: above all, think carefully about what code runs and when, rather than fighting for raw speed. The performance the user perceives is what matters: users notice delays when they are forced to wait for something to finish.
Rendering
The use of graphics processors (GPUs) has changed the approach to optimizing rendering. When rendering on the CPU, working with large volumes of pixels is wasteful; it is more efficient to work with a mathematical description of object shapes and their transformations, and only then draw. This is the traditional approach AIR uses for vector content.
The GPU, by contrast, uses pipelined processing to operate on large arrays of pixels extremely quickly, but fares much worse with vector images.
AIR lets you take the best of both approaches. You can use the full power of Flash to work with vector images, then cache the result as a set of bitmaps and draw it on the screen efficiently. Use BitmapData.draw() for this purpose.
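As a sketch, the snippet below rasterizes a vector shape once and then reuses the cached bitmap many times (`drawStar()` is a hypothetical helper that builds the vector art):

```actionscript
import flash.display.Shape;
import flash.display.Bitmap;
import flash.display.BitmapData;

var star:Shape = drawStar(); // hypothetical helper that builds vector art

// Rasterize the vector content once into a transparent bitmap.
var cache:BitmapData = new BitmapData(star.width, star.height, true, 0x00000000);
cache.draw(star);

// Displaying many copies is now cheap: only plain pixels are composited.
for (var i:int = 0; i < 20; i++) {
    var copy:Bitmap = new Bitmap(cache);
    copy.x = Math.random() * stage.stageWidth;
    copy.y = Math.random() * stage.stageHeight;
    addChild(copy);
}
```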
Note also that you can still use pre-prepared bitmap images instead of generating them from vector shapes on the fly. However, this is less effective given the variety of sizes and densities of mobile device screens.
Memory
Although modern mobile devices have a decent amount of RAM, remember that they use a somewhat different memory management model than traditional desktop operating systems. On desktops, if memory runs out, some data can be temporarily paged out to disk and brought back into memory later. This lets the operating system keep an almost unlimited number of programs running at once.
On mobile devices this approach usually does not apply. If a memory request exceeds what is currently available, background applications are terminated to free memory. If the request still cannot be satisfied, the requesting application itself is forced to quit.
This has two consequences. First, you need a clear picture of the memory your application occupies. Second, to improve its chances of staying alive, the application should try to occupy as little memory as possible while in the background.
Both goals are achieved through explicit memory management in the application. That may seem odd at first, since there is a garbage collector for exactly this purpose. But it is better to think of the collector as a mechanism that only empties the trash can: you still have to put the garbage in it yourself.
The first step toward explicit memory management is making sure you clear references to objects that are no longer needed. An example: suppose an application reads an XML settings file at startup. Once the settings have been read and applied, the reference to the root of the tree, and with it the whole associated XML tree, is no longer needed. Explicitly assign null to the root reference so the garbage collector can free the memory.
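The settings example can be sketched as follows (the file name and the `applySettings()` helper are assumptions for illustration):

```actionscript
import flash.filesystem.File;
import flash.filesystem.FileMode;
import flash.filesystem.FileStream;

var settings:XML;

function loadSettings():void {
    var file:File = File.applicationStorageDirectory.resolvePath("settings.xml");
    var fs:FileStream = new FileStream();
    fs.open(file, FileMode.READ);
    settings = new XML(fs.readUTFBytes(fs.bytesAvailable));
    fs.close();

    applySettings(settings); // hypothetical: copy values into application state
    settings = null;         // let the garbage collector reclaim the whole tree
}
```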
Explicit memory management also matters when working with many objects of the same type. For example, an application that loads a set of images will, if written carelessly and if the set is large enough, inevitably run out of memory. Conversely, if you programmatically cap the number of images resident in memory at any one time, memory problems can be avoided: delete an old image before loading a new one, or cyclically overwrite a fixed number of images in memory.
Storage
Mobile devices provide access to a local file system where an application can store its settings, documents, and so on. As a rule, an application should assume that its data is visible only to itself and that there is no way to share it with other applications.
On all devices, storage is available through the File.applicationStorageDirectory property.
Android additionally allows using the file system on the SD card, accessible at the path /sdcard. Unlike the main storage, it is readable and writable by all applications on the device. Remember, however, that the SD card may be removed from the device, or inserted but not mounted.
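A sketch of the mount check described above (the export path is an assumed example):

```actionscript
import flash.filesystem.File;

// Write to the Android SD card only if it is present and mounted.
var sdRoot:File = new File("file:///sdcard");
if (sdRoot.exists && sdRoot.isDirectory) {
    var exportFile:File = sdRoot.resolvePath("myapp/export.txt"); // assumed path
    trace("exporting to", exportFile.nativePath);
} else {
    trace("SD card missing or not mounted; falling back to application storage");
}
```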
Given how common cameras are on mobile devices, don't forget that there is also a special shared storage area for photos, accessible through the CameraRoll API mentioned earlier in the “Images” section. Although some platforms also expose photos directly through the file system, relying on that hurts portability and is bad practice.
Deployment
In the mobile world, applications are distributed mainly through app marketplaces. These stores hook into on-device features for finding, installing, and updating applications.
To be published in a particular store, an AIR application must be converted into a format specific to that platform: for the Apple App Store, an .ipa file; for the Android marketplace, an .apk file. This can be done directly from Flash Builder or, for example, with the ADT utility.
All mobile app stores require applications to be signed. For iOS, certificates are issued by Apple. For Android, developers create their own certificates, valid for at least 25 years, and must sign every application and update with them. So if you publish your application in several stores, you will be juggling several certificates.
When preparing an application for the Android market, remember that AIR is installed there separately (on iOS, a copy of AIR ships inside each application). If your application is launched on a device without AIR installed, the user is redirected to install it (see also the -airDownloadURL flag of ADT).
What's next?
Development using Adobe AIR allows you to create a single mobile application that can be deployed on many mobile devices, including smartphones and tablets running Android, iOS or BlackBerry Tablet OS.
This is achieved by using cross-platform abstractions (for example, when accessing photos), providing information about the dynamic properties of devices (for example, screen size). At the same time, AIR does not interfere when assistance is not required (for example, when using the file system API).
Developing cross-platform mobile applications requires the programmer to understand efficient memory usage and the application life cycle. Combine that knowledge with the AIR runtime and you can build cross-platform applications quickly.
Useful links
Blogs
Development of mobile games on flash + AIR
Books
High Performance Flash: Performance tuning for Flash, Flex, AIR and Mobile applications , E. Elrom, 2012 - Pre-order on Amazon
Developing Android Applications with Adobe AIR , V. Brossier, O'Reilly, 2011
Adobe AIR , J. Lott, 2009 - The first Russian-language book on Adobe AIR
Adobe AIR Bible , B. Gorton, Wiley, 2008
Forums
Adobe AIR Mobile Development Forum
Articles
Article “What's new in AIR 3?”
Flex / AIR for iOS Development Process Explained!
Creating an iOS application using Flash CS5.5 + AIR 2.7
Miscellanea
AIR Native Extensions (ANE):
Adobe AIR
Adobe AIR Developer Center
Adobe Mobile and Devices Developer Center
Adobe AIR Cookbook
TourDeFlex