
Windows 7 Sensor and Location platform

One of the new components of Windows 7 is the Sensor and Location platform. It is the part of the operating system that standardizes how applications work with various sensors and other devices that measure something.

Why do you need it? Sensors simplify routine actions and take some chores off our hands, which is especially valuable for laptop owners with a dynamic lifestyle. Imagine a light sensor built into your computer that is accessible to all applications and lets them adjust their picture to the ambient lighting. Another example is a GPS position sensor: applications can adapt to wherever you currently are, for instance by showing weather information specifically for your city. Many more examples can be given; it all depends on imagination and the specific scenario. Applications that change their behavior depending on external conditions form a separate class known as context-sensitive applications.


A fair question is: what, in fact, has changed? Why couldn't this be done before? The answer is simple: these scenarios could be implemented before as well, but not nearly as easily. Working with an external sensor usually came down to exchanging data over a COM port, and each sensor had its own specific API. This made it very difficult to build a universal programming interface that several applications could use at the same time, transparently.

This is exactly the problem the Sensor and Location platform solves. Through it, you can access various sensors and receive data from them in a single, uniform style, and, importantly, the problem is solved at the operating-system level. Such a move could give a new impetus to the development of context-sensitive applications. The diagram below shows the structure of the objects used to work with sensors; we will look at them in more detail shortly.



To connect a sensor to the Sensor and Location platform in Windows 7, you need to implement a driver for it and simple .NET wrapper classes for working with the sensor.

Of course, end users are unlikely to feel the full power of this platform in the near future: it will take some time for hardware vendors to develop sensors and integrate them into their platforms. We developers, however, can start preparing today, so I plan to talk about how to work with the Sensor and Location platform in the context of our business applications.

To experiment with something reasonably close to reality rather than with virtual sensors, we will use a device from Freescale Semiconductor built around the JMBADGE2008-B microcontroller. It is a small board that carries several sensors: an accelerometer, a light sensor, and buttons.



The device was designed specifically to demonstrate the capabilities of the Sensor and Location platform in Windows 7, and anyone can buy it, which makes it well suited for showing off this feature of Windows 7.

Before we look at specific applications, let's see how the Sensor and Location platform is organized. Before Windows 7 and the Sensor & Location platform appeared, connecting a sensor came down to implementing a driver and dedicated software for it.



With such an organization, interacting with external sensors is possible but difficult: each application has to talk to whatever API the vendor of the sensor and its accompanying software provides. The problem is especially acute when an application must use several sensors of the same type from different manufacturers. How does the Sensor & Location platform propose to solve it?

The operating system itself now provides the mechanisms for working with sensors, exposed through a standard, unified programming interface: the Sensor API. All interaction with a sensor goes through the Sensor API, and, importantly, every sensor is handled in the same style. You no longer need to integrate with each native API via P/Invoke.



To work with the Sensor and Location API from managed code, you need to download the .NET Interop Sample Library. It contains .NET wrappers for the Sensor API, with several classes through which you can work with sensors.

The SensorManager class is the entry point. Through it you can get information about the available sensors and start working with them; for example, the GetSensorBySensorId<> method gives access to the sensor we are interested in. Each sensor should have a wrapper class that inherits from the base Sensor class, and the .NET Interop Sample Library already ships three such implementations: AmbientLightSensor, Accelerometer3D, and UnknownSensor.
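To make this concrete, here is a minimal sketch that uses only the members named above (passing a Guid to GetSensorBySensorId is my assumption about its signature, and the GUID itself is only a placeholder):

// List every ambient light sensor known to the system through its typed wrapper.
var lightSensors = SensorManager.GetSensorsByTypeId<AmbientLightSensor>();
foreach (AmbientLightSensor light in lightSensors)
{
    Console.WriteLine(light.FriendlyName);
}

// Or ask for one concrete device when its unique identifier is known.
var oneSensor = SensorManager.GetSensorBySensorId<AmbientLightSensor>(
    new Guid("00000000-0000-0000-0000-000000000000"));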



The main idea of working with sensors is as follows. When the state of a sensor changes (connected/disconnected/active/etc.), a StateChanged event is raised; this event is used to start or stop working with the sensor. Once communication with the sensor has been established, a DataReportChanged event is raised whenever new data arrives, and how often it fires depends on the sensor and its driver. In the handler of this event you can read the sensor's values and adjust the behavior of the application. For that, the GetProperty method is used: it takes the identifier of the property to be read from the sensor. As a rule, the details of these calls are hidden inside the classes implemented for a specific sensor.
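Here is a minimal sketch of this flow, using only the members that appear in the samples later in the article (the state handling is simplified to the SensorsChanged event of SensorManager, which is what the samples below rely on):

// React to sensors being attached, removed or changing state.
SensorManager.SensorsChanged += delegate(SensorsChangedEventArgs change)
{
    // re-enumerate the sensors we are interested in here
};

// Subscribe to new data reports from every ambient light sensor that is present.
foreach (var light in SensorManager.GetSensorsByTypeId<AmbientLightSensor>())
{
    light.DataReportChanged += delegate(Sensor sender, EventArgs e)
    {
        // The typed wrapper hides the GetProperty/property-GUID plumbing behind a property.
        Console.WriteLine(((AmbientLightSensor)sender).CurrentLuminousIntensity.Intensity);
    };
}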

In addition, each sensor has its own identifier (a GUID) that can be used to identify the device. When implementing a wrapper class for a sensor, this identifier is specified with an attribute. Thus, a sensor can be accessed either by explicitly specifying its identifier or by referring to its wrapper class.

/// <summary>
/// Represents a generic ambient light sensor
/// </summary>
[SensorDescription("97F115C8-599A-4153-8894-D2D12899918A")]
public class AmbientLightSensor : Sensor
{
    // ...
}

// Getting all sensors of this type:
var sensors = SensorManager.GetSensorsByTypeId<AmbientLightSensor>();


Let's implement a few examples that work with the sensors available on the Freescale device. We will use two of them: the accelerometer (which lets us measure the tilt of the device) and the light sensor (which measures the illumination level in the room).

The first application we implement will display the illumination level as a glowing light bulb on a form. First, we subscribe to the sensor-change event of the Sensor API; this is needed so that the application starts working even if the sensor is plugged in on the fly. In the handler of that event we get the list of all sensors of the required type and subscribe to their DataReportChanged event. In that handler we read the value from the light sensor and write it into a TextBox on the form. Because the event is raised on a background thread, we also have to call Dispatcher.Invoke so that the processing happens on the UI thread and we can interact with the elements on the form. As a result we get the following code.

private void Window_Loaded(object sender, RoutedEventArgs e)
{
    SensorManager.SensorsChanged += SensorManagerSensorsChanged;
}

void SensorManagerSensorsChanged(SensorsChangedEventArgs change)
{
    Dispatcher.Invoke((System.Threading.ThreadStart)(UpdateSensorsList));
}

private void UpdateSensorsList()
{
    var sensors = SensorManager.GetSensorsByTypeId<AmbientLightSensor>();
    foreach (var sensor in sensors)
    {
        sensor.DataReportChanged += delegate(Sensor sender, EventArgs e)
        {
            Dispatcher.Invoke((System.Threading.ThreadStart)(delegate
            {
                if (ActiveSensorsListBox.SelectedItem == sender)
                {
                    CurrentValue.Text =
                        ((AmbientLightSensor)sender).CurrentLuminousIntensity.Intensity.ToString();
                }
            }));
        };
    }
}


Now the TextBox on the form shows the current light level, and it is not difficult to build some visualization on top of it. Using WPF bindings, we will display the degree of illumination as a light bulb. As a result, we get the following application.
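The binding itself is not shown here, so as a rough sketch of one way to wire it up (BulbImage, CurrentValue, the converter, and the 300-lux scale are assumed names and values):

using System;
using System.Globalization;
using System.Windows;
using System.Windows.Data;

// Hypothetical converter: maps the reported intensity onto the 0..1 opacity range.
public class LuxToOpacityConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        double lux;
        double.TryParse(value as string, NumberStyles.Any, culture, out lux);
        return Math.Min(lux / 300.0, 1.0); // treat 300 lux as "fully lit" in this sketch
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}

// In the window code-behind, bind the bulb's opacity to the text box that receives the sensor value:
BulbImage.SetBinding(UIElement.OpacityProperty,
    new Binding("Text") { Source = CurrentValue, Converter = new LuxToOpacityConverter() });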



Since it is hard to judge how the application behaves from a photo, I recorded a short video in which you can clearly see how the sensor responds to the amount of light.

Demonstration >>

The other sensor is more interesting: it lets you determine how far the device is tilted along each axis. To demonstrate this, we will take a three-dimensional model of an aircraft in a WPF application and rotate it in space according to the sensor readings. The principle is the same as in the previous application: we find the required sensors, subscribe to their events, and in the handlers write the coordinates into input fields on the form; the coordinates of the model are then tied to the values of those fields (a sketch of that last step follows the code below).

private void UpdateSensorsList()
{
    foreach (var sensor in SensorManager.GetSensorsByTypeId<Accelerometer3D>())
    {
        sensor.DataReportChanged += delegate(Sensor sender, EventArgs e)
        {
            Dispatcher.Invoke((System.Threading.ThreadStart)(delegate
            {
                if (UseXCoordinate.IsChecked == true)
                    CurrentXValue.Text = ((Accelerometer3D)sender).CurrentAcceleration[Accelerometer3D.AccelerationAxis.X].ToString();
                if (UseYCoordinate.IsChecked == true)
                    CurrentYValue.Text = ((Accelerometer3D)sender).CurrentAcceleration[Accelerometer3D.AccelerationAxis.Y].ToString();
                if (UseZCoordinate.IsChecked == true)
                    CurrentZValue.Text = ((Accelerometer3D)sender).CurrentAcceleration[Accelerometer3D.AccelerationAxis.Z].ToString();
            }));
        };
    }
}
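The binding of the model to these fields is not listed above; as a simplified code-behind sketch for a single axis it might look roughly like this (PlaneModel is an assumed ModelVisual3D from the XAML, System.Windows.Media.Media3D is assumed to be imported, and the mapping of acceleration to angle is arbitrary):

// Rotate the model around the X axis whenever a new X value arrives in the text box.
var xRotation = new AxisAngleRotation3D(new Vector3D(1, 0, 0), 0);
PlaneModel.Transform = new RotateTransform3D(xRotation);

CurrentXValue.TextChanged += delegate
{
    double g;
    if (double.TryParse(CurrentXValue.Text, out g))
    {
        // Clamp the acceleration to +/-1 g and map it onto a +/-90 degree tilt.
        xRotation.Angle = Math.Max(-1.0, Math.Min(1.0, g)) * 90.0;
    }
};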


As this example shows, the code for working with the sensors has hardly changed: only the part that reads the data from the sensor is different, everything else stays the same.





As the photo shows, when the device is tilted the sensor passes the information to the application and the coordinates of the model change, so the tilt is reflected on the three-dimensional model.

Demonstration >>

Interestingly, several applications can use these sensors at the same time, and a single application can use several sensors. Let's combine the application that rotates the three-dimensional model with the light sensor. In addition to rotating the model, we will show the sun: if the illumination in the room drops, the sun disappears, and the brighter the room, the more intensely the sun shines. This application simply reuses the code from the two previous examples, so I will not list it and will show the result right away.





You can also see this application in action.

Demonstration >>

These examples show that working with sensors in Windows 7 is very simple. All you need is a Windows 7 driver and a wrapper class for the Sensor & Location platform. Drivers are usually supplied by the manufacturer of the hardware, while the wrapper class can be implemented on your own.

As I said before, the entry point is the SensorManager class. Through it you can reach the sensors you need and work with them: it has methods for getting the list of all sensors, getting a sensor by identifier or by type, and requesting permission to use a sensor, as well as an event that fires when the set of sensors in the system changes.
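Two of these operations are not used elsewhere in the examples; a rough sketch of them might look like this (GetAllSensors and RequestPermission, together with the parameters shown, are my assumptions about the wrapper API):

// Enumerate every sensor known to the system (assumed method name).
var all = SensorManager.GetAllSensors();

// Ask the user for permission to use these sensors (assumed method name and signature:
// owner window handle, modal flag, list of sensors).
SensorManager.RequestPermission(IntPtr.Zero, true, all);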



Each sensor has two main identifiers: SensorId and TypeId. TypeId identifies a class of devices; for example, it lets you get all light sensors in the system, or all devices of some other type. SensorId is unique to each device: if there are three motion sensors of the same type in the system, each will have its own identifier. There is also CategoryId, which groups sensors into categories.
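For illustration, here is a small sketch that prints these identifiers for every sensor in the system (GetAllSensors and the SensorId, TypeId and CategoryId properties are assumptions about the wrapper API):

// Print the identifying GUIDs of every sensor (assumed property names).
foreach (Sensor s in SensorManager.GetAllSensors())
{
    Console.WriteLine("{0}: SensorId={1}, TypeId={2}, CategoryId={3}",
        s.FriendlyName, s.SensorId, s.TypeId, s.CategoryId);
}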

Every identifier is a GUID. They are assigned by the manufacturer when the device and its drivers are developed, so you can get a specific sensor only if you know its identifier. Each sensor is represented by a Sensor class, which carries general information about the sensor and methods for reading its data from generic collections in untyped form. Clearly, such a representation is not very convenient for applications, so it is common to implement a wrapper class for each sensor on top of the Sensor API by inheriting from the base Sensor class. The demo examples already contain two such implementations, for the accelerometer and for the light sensor. The device we looked at earlier, however, also has touch buttons that can be used as well, so let's implement such a class for this sensor.

We define a new class derived from Sensor. For it to be recognized by the Sensor API, it must be marked with the SensorDescription attribute, which specifies the TypeId of this type of sensor. The base Sensor class gives us two important things: the DataReport property, which contains the data received from the sensor, and the DataReportChanged event, which fires when that data changes. The task of our class is to take this data and deliver it to its users in a convenient form, so we also create a small class that parses the information from DataReport.

Experimentally, we find that pressing button 1 produces code 1, button 2 produces code 2, button 3 produces code 4, and button 4 produces code 8; in other words, each button corresponds to one binary bit. Code 0 is produced when all buttons are released. Knowing this, we can write the following code.

[SensorDescription("545C8BA5-B143-4545-868F-CA7FD986B4F6")]
public class SwitchArraySensor : Sensor
{
    public class SwitchArraySensorData
    {
        private static Guid KeyStatePropertyId = new Guid(@"38564a7c-f2f2-49bb-9b2b-ba60f66a58df");

        public SwitchArraySensorData(SensorReport report)
        {
            uint state = (uint)report.Values[KeyStatePropertyId][0];
            Button1Pressed = (state & 0x01) != 0;
            Button2Pressed = (state & 0x02) != 0;
            Button3Pressed = (state & 0x04) != 0;
            Button4Pressed = (state & 0x08) != 0;
        }

        public bool Button1Pressed { get; private set; }
        public bool Button2Pressed { get; private set; }
        public bool Button3Pressed { get; private set; }
        public bool Button4Pressed { get; private set; }
    }

    public SwitchArraySensorData Current
    {
        get { return new SwitchArraySensorData(DataReport); }
    }

    public event EventHandler StateChanged;

    public SwitchArraySensor()
    {
        DataReportChanged += SwitchArraySensor_DataReportChanged;
    }

    void SwitchArraySensor_DataReportChanged(Sensor sender, EventArgs e)
    {
        if (StateChanged != null)
        {
            StateChanged.Invoke(sender, e);
        }
    }
}


In effect, this class is a Sensor API wrapper for the sensor we need. To use it, we subscribe to its StateChanged event and read the data through the Current property.

To get the list of available sensors of a given type, you can use the GetSensorsByTypeId method of the SensorManager class; the TypeId of these sensors is determined from the SensorDescription attribute we specified. With these sensors we can subscribe to the event we need and receive the data in a form that is convenient for the application, for example to display the state of the buttons on a form.

private void Window_Loaded(object sender, RoutedEventArgs e)
{
    var sensors = SensorManager.GetSensorsByTypeId<SwitchArraySensor>();
    foreach (SwitchArraySensor sensor in sensors)
    {
        switch (sensor.FriendlyName)
        {
            case "Left Switch Array Sensor":
                sensor.StateChanged += delegate(object leftSensor, EventArgs arg)
                {
                    var buttons = ((SwitchArraySensor)leftSensor).Current;
                    SwitchState(LeftButton1, buttons.Button1Pressed);
                    SwitchState(LeftButton2, buttons.Button2Pressed);
                    SwitchState(LeftButton3, buttons.Button3Pressed);
                    SwitchState(LeftButton4, buttons.Button4Pressed);
                };
                break;
            case "Right Switch Array Sensor":
                sensor.StateChanged += delegate(object rightSensor, EventArgs arg)
                {
                    var buttons = ((SwitchArraySensor)rightSensor).Current;
                    SwitchState(RightButton1, buttons.Button1Pressed);
                    SwitchState(RightButton2, buttons.Button2Pressed);
                    SwitchState(RightButton3, buttons.Button3Pressed);
                    SwitchState(RightButton4, buttons.Button4Pressed);
                };
                break;
        }
    }
}


As a result, we get an application that looks like this.



Of course, implementing a sensor like this is a rather synthetic example, but it clearly demonstrates how a sensor is hooked up to the Sensor API.

If you need to implement your own driver for a device in order to connect it to the Windows 7 Sensor and Location platform, I recommend consulting the official resource.

Good luck creating your own context-sensitive applications!

Demo applications:
Ambient.zip
Accelerometer3D.zip
Combined.zip
ButtonSensor.zip

Source: https://habr.com/ru/post/63401/

