
Creating an autonomous drone on Intel Edison


We continue our series on how to build an autonomous flying machine on your own. Last time we covered the component base, the mechanics, and the control system, and we also gave our device the rudiments of intelligence based on the OpenCV library. Time to move on: we need to teach our gadget subtler and more sophisticated things, in other words, to raise its intelligence.
In the second article of the series, our colleague from Intel, Paul Guermonprez, proposes changing the platform and looking at what a drone based on the Intel Edison computer can achieve and, most importantly, how to do it. At the very end of the post there is an offer for those who decide to put all of this into practice: we assure you that, under certain conditions, it is quite possible to get a hardware platform for experiments from Intel for free!

Maps and measurements

Civilian drones achieve good results in recording high-definition video and stabilizing flight. You can even plan automatic flight routes on your tablet. But one serious problem remains: how do we keep small drones from colliding with people and buildings in flight? Large drones are equipped with radars and transceivers; they fly in controlled areas, away from obstacles, and are remotely operated by pilots using cameras. Small civilian drones, however, must fly close to the ground, where there are many obstacles, and their pilots cannot always get a video feed or may be distracted from flight control. This is why collision avoidance systems need to be developed.
  1. You can use a single-beam sonar. Simple drones are often equipped with a vertical sonar to maintain a stable altitude at a short distance from the ground. Sonar data is extremely easy to interpret: you get the distance in millimeters from the sensor to the obstacle. Similar systems exist in large aircraft, but pointed horizontally, to detect obstacles ahead. This information alone, however, is not enough for flight: knowing only whether there is an obstacle straight ahead is like extending an arm in the dark.
  2. You can use a more advanced sensor. A sonar beam gives one distance, like a single pixel. A horizontally scanning radar gives you more: one distance for every direction. The next step up from such a one-dimensional sensor is a two-dimensional one. In this family of sensors you may know Microsoft Kinect; Intel has Intel RealSense. Such a sensor returns a two-dimensional matrix of distances, where each pixel is a distance. It looks like a grayscale bitmap in which black pixels are nearby objects and white pixels are faraway ones. This makes the data very easy to work with: if you see a cluster of dark pixels in a certain direction, there is an object there (see the sketch after this list). The range of such systems is limited: sensors the size of a webcam reach 2-3 meters, larger ones up to 5 meters. So these sensors are useful on drones for detecting obstacles at short range (and at low speed), but they will not detect an obstacle 100 m away. Samples of such drones were shown at CES.
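
To make the two-dimensional case concrete, here is a minimal sketch of how such a depth matrix can be interpreted. It assumes an 8-bit grayscale depth map where dark means near; the file name, the brightness threshold, and the cluster-size cutoff are all illustrative, not taken from the project:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // black = near, white = far, as described above
    cv::Mat depth = cv::imread("depth.png", cv::IMREAD_GRAYSCALE);
    if (depth.empty()) return 1;

    const int nearThresh = 60;            // "dark enough to worry about" (a guess)
    std::vector<int> hits(depth.cols, 0);
    for (int y = 0; y < depth.rows; ++y)
        for (int x = 0; x < depth.cols; ++x)
            if (depth.at<uchar>(y, x) < nearThresh)
                ++hits[x];                // count near pixels in each column

    // the column with the most near pixels points at the closest object
    int best = (int)(std::max_element(hits.begin(), hits.end()) - hits.begin());
    if (hits[best] > 20)                  // enough dark pixels to call it an object
        std::cout << "obstacle around column " << best << std::endl;
    return 0;
}
```

A real system would also smooth the map and track clusters over time, but the principle is exactly this: a dark blob ahead means an object ahead.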


So, a single-beam sonar is ideal for measuring the distance to the ground at low altitude, and sensors such as RealSense are good at short range. But what if you need to see and analyze three-dimensional objects further ahead? You need computer vision and artificial intelligence!

To detect three-dimensional objects, you can either build a three-dimensional picture from two images taken by two adjacent webcams (stereoscopic vision), or use successive images taken while moving.
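
For the stereoscopic option, OpenCV ships a block-matching stereo correspondence algorithm. Here is a minimal sketch using the OpenCV 3 API; the file names and parameters are illustrative, and the rectification of the two views is omitted:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    // numDisparities must be a multiple of 16; blockSize must be odd
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
    cv::Mat disparity;
    bm->compute(left, right, disparity);  // larger disparity = closer object

    // disparity is fixed-point (scaled by 16); map it to 8-bit for viewing
    cv::Mat vis;
    disparity.convertTo(vis, CV_8U, 255.0 / (64 * 16));
    cv::imwrite("disparity.png", vis);
    return 0;
}
```

In this project, however, we take the second route: successive images from a single moving camera.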

Our autonomous drone project

In our project we use the Intel-Creative Senz3D camera. This device includes a two-dimensional distance sensor (with a resolution of 320 x 240), an ordinary webcam, two microphones, and accelerometers. We use the distance sensor to detect objects at short range and the webcam for long range.



In this article we mainly consider long-range computer vision rather than short-range two-dimensional depth data. The same code can therefore be run with a cheap five-dollar Linux-compatible webcam instead of the powerful multipurpose Senz3D. But if you need both computer vision and depth data, the Senz3D camera is just perfect.

We chose Intel Edison as the drone's onboard platform. In the previous project we used a full-featured Android smartphone as the onboard computing platform, but Edison is smaller and cheaper.
Edison is not so much a processor as a full-featured Linux computer: a dual-core Intel Atom processor, RAM, a WiFi module, Bluetooth, and much more. You only need to select an expansion board based on your input-output requirements and connect it. In our case we need to connect the Senz3D camera's USB interface, so we use a large expansion board. But there are tiny boards with only the components you need, and you can easily design the necessary board yourself: it is just an expansion board, not a motherboard.



Installation

OS. We unpacked Edison, updated the firmware, and kept the stock Yocto Linux OS. Other Linux flavors with many precompiled packages are available, but we need to install simple software with as few dependencies as possible, so Yocto suits us fine.

Software. We set up WiFi and got ssh access to the board. We will edit the source files and compile them directly on the board over ssh. You can also compile on a Linux PC and transfer the binaries: code compiled for the 32-bit i686 architecture will run on the Edison platform. Just install the GCC toolchain and your favorite source editor.

Camera. The first difficulty is using a PerC camera with Linux, since this camera was designed for Windows. Fortunately, the company that released the sensor provides a Linux driver itself: SoftKinetic DepthSense 325.
This is a binary driver, but a version compiled for Intel processors is available. On Intel Edison you can compile in place using GCC or the Intel Compiler, but you can also take binaries compiled for Intel and deploy them without modification; in situations like this, Intel binary compatibility is an important advantage. After solving a few dependency and linking problems, our driver was ready and running. Now we can get images from the camera.

Sensor data. Depth data is very simple: easy to get from the sensor and easy to analyze. It is a depth matrix with a resolution of 320 x 240, encoded as grayscale: black pixels are close, white ones are far away. The video component of the sensor is a regular webcam. From the developer's point of view, the sensor gives us two webcams: one grayscale, returning depth data, and one color, giving a normal video image.
We will use the video to detect obstacles in the distance: it only needs light, and the range is not limited. The depth data will be used to detect obstacles very close to the drone, no more than 2-3 m away.
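
Since the sensor presents itself to the developer as two webcams, the capture side can be sketched with OpenCV's standard capture API. This assumes the SoftKinetic driver exposes both streams as ordinary video devices; the device indices below are a guess and may differ on your system:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture depthCam(0);  // grayscale depth stream, 320 x 240 (index assumed)
    cv::VideoCapture colorCam(1);  // ordinary color webcam (index assumed)
    if (!depthCam.isOpened() || !colorCam.isOpened()) return 1;

    cv::Mat depth, frame;
    for (int i = 0; i < 100; ++i) {
        if (!depthCam.read(depth) || !colorCam.read(frame)) break;
        // depth -> short-range obstacle detection (2-3 m)
        // frame -> long-range detection via consecutive-image analysis
    }
    return 0;
}
```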

Safety notice. The photos show that we work in the laboratory and only simulate the drone's flight. The proposed algorithm is still far from ready, so you should not actually fly the drone with it, especially with people nearby. First of all, in some countries this is, quite rightly, prohibited. But more importantly, it is simply dangerous! Many of the drone videos you see on the Internet (shot both indoors and outdoors) are in fact very dangerous. We do fly our drones outdoors, but under completely different conditions, in order to comply with all the requirements of French law and avoid accidents.

Code

So, we have two sensors. One returns depth data, and you do not need our help to process it: black is close, white is far. The sensor is very accurate (to within 1-3 mm) and works with extremely low latency. As we said above, this is all fine, but the range is too small.
The second sensor is a regular webcam, and here we face a new difficulty: how do we extract rich information from a webcam? In our case the drone flies along a relatively straight path, so we can analyze two consecutive images and detect the differences between them.
We find the salient points in each image and match them between frames, turning the difference in their positions into vectors. The photo shows the result with the drone sitting on a chair and slowly moving through the laboratory.



Short vectors are green, long vectors are red. In other words, if a point's position changes quickly between two consecutive frames, its vector is red; if slowly, green. Clearly some errors remain, but the result is already quite acceptable.
It's like in Star Trek. Remember how the starry sky "stretches" when the ship moves faster than light? Nearby stars turn into long white streaks, while farther stars form short ones.
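
The matching itself can be sketched with OpenCV's corner detector and pyramidal Lucas-Kanade tracker, which is one standard way to implement what is described above. We are not claiming this is the project's exact code; the frame names and the green/red length threshold are illustrative:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

int main() {
    cv::Mat prev = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat next = cv::imread("frame2.png", cv::IMREAD_GRAYSCALE);
    cv::Mat vis  = cv::imread("frame2.png");  // color copy for drawing
    if (prev.empty() || next.empty()) return 1;

    // 1. pick salient points (corners) on the first frame
    std::vector<cv::Point2f> p0, p1;
    cv::goodFeaturesToTrack(prev, p0, 200, 0.01, 10);

    // 2. find where each point moved to on the second frame
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prev, next, p0, p1, status, err);

    // 3. draw each displacement vector: green if short, red if long
    for (size_t i = 0; i < p0.size(); ++i) {
        if (!status[i]) continue;
        double len = std::hypot(p1[i].x - p0[i].x, p1[i].y - p0[i].y);
        cv::Scalar color = (len < 10.0) ? cv::Scalar(0, 255, 0)   // slow: green
                                        : cv::Scalar(0, 0, 255);  // fast: red
        cv::line(vis, p0[i], p1[i], color, 2);
    }
    cv::imwrite("vectors.png", vis);
    return 0;
}
```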

Then we filter the vectors. If a vector (whether long or short) is off to the side, there is no collision risk. In the test photo we find two black suitcases off to the side, but we fly right between them: no risk.
If a vector is directly in front of you, a collision is possible. A short vector means you still have time; a long vector means there is none.
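
This filtering rule can be expressed as a small predicate: a vector is dangerous only if it ends near the image center and is long. A minimal sketch, with the "collision window" and the length threshold chosen arbitrarily for illustration:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

// does this tracked point threaten a collision?
bool isDangerous(const cv::Point2f& from, const cv::Point2f& to, cv::Size frame) {
    // "collision window": the middle third of the image (an arbitrary choice)
    cv::Rect_<float> center(frame.width / 3.0f, frame.height / 3.0f,
                            frame.width / 3.0f, frame.height / 3.0f);
    if (!center.contains(to))
        return false;                     // off to the side: no collision risk
    double len = std::hypot(to.x - from.x, to.y - from.y);
    return len > 15.0;                    // long central vector: no time left
}

int main() {
    cv::Size frame(640, 480);
    // a long vector right in front of us is flagged as dangerous
    std::cout << isDangerous(cv::Point2f(300, 220), cv::Point2f(330, 240), frame)
              << std::endl;               // prints 1
    return 0;
}
```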



Example. In the previous photo the suitcases were close but off to the side: safe. The objects in the background were ahead but far away: also safe. In the second shot the objects are directly in front of us, with long vectors: that is dangerous.

Download the project materials from here.

Results

With the set of equipment described above, we demonstrated four key points.

It's time to act!

Now, how to get a free experiment kit from Intel. The company continues its academic program, the Intel Do-It-Yourself Challenge, and we now have 10 kits that we will be happy to share with you, subject to certain conditions. Namely:

To take part, you need to formulate your idea, briefly describe your creative team, and send the resulting text, in English or French, to Paul Guermonprez.

Reference materials and resources


Source: https://habr.com/ru/post/249603/

