
Yujin: Fearless robot engineers, smart robot vacuum cleaners, empathetic robot teachers



For two weeks our hackspace hosted a cute Korean robot, iRobi, from the company Yujin, which manufactures iClebo Arte robot vacuum cleaners (for anyone who wants one), nurse robots (for medical use), sapper robots (for the military), and nanny and teacher robots (for kids).

It is noteworthy that the simultaneous localization and mapping method (SLAM, from Simultaneous Localization and Mapping) is almost the same across all Yujin robots (so you can buy a vacuum cleaner to understand how a military robot works).
Below is a short description of the Yujin robot line, a few funny adventures of the smiling robot in our hackspace, and some pictures of what is inside this robot educator.

SLAM



SLAM (Simultaneous Localization And Mapping) is a method used by robots and autonomous vehicles to build a map of an unknown environment, or to update a map of a previously known one, while simultaneously keeping track of their current location and the distance traveled.

SLAM links two otherwise independent processes into a continuous cycle of successive computations, in which the results of one process feed into the computations of the other.

Mapping is the problem of integrating the information gathered by the robot's sensors. In this process, the robot effectively answers the question "What does the world look like?"
The main aspects of map building are the representation of environmental data and the interpretation of sensor data.
Conversely, localization is the problem of determining the robot's position on a map. Here the robot effectively answers the question "Where am I?"
Localization comes in two flavors: local and global.
Local localization tracks the robot's position on the map when its initial location is known; global localization determines the robot's position in an unfamiliar place (for example, after the robot has been stolen).

The robot thus faces two tasks: (a) building a map and (b) localizing itself on that map. In practice, the two cannot be solved independently. Before the robot can answer what the environment looks like (based on a series of observations), it needs to know where those observations were made. At the same time, it is hard to estimate the robot's current position without a map.
SLAM is a classic chicken-and-egg problem: a map is needed for localization, and localization is needed to build the map.

SLAM is implemented with a range of sensing technologies: odometry (data from the robot's wheels), 1D and 2D laser rangefinders, 3D high-definition LiDAR, 3D flash LiDAR, 2D and 3D sonar, and one or more 2D video cameras. There are also tactile SLAM systems (which register touches), radar SLAM, and WiFi-SLAM. (There is even an exotic FootSLAM.)
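Since odometry is the first ingredient listed above, here is a minimal dead-reckoning sketch for a differential-drive robot (an illustration only, not Yujin's code; the wheel-base value is made up). It integrates wheel travel into a 2D pose, and the drift that inevitably accumulates in such an estimate is precisely what the mapping half of SLAM exists to correct:

```python
import math

def integrate_odometry(pose, d_left, d_right, wheel_base):
    """Update an (x, y, theta) pose from the distance each wheel traveled.

    Pure dead reckoning like this drifts without bound; that drift is
    exactly the error the mapping half of SLAM is used to correct.
    """
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0        # forward travel of the body
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    # Midpoint approximation of the arc the robot actually drove.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

# Ten identical straight-line steps of 10 cm each.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = integrate_odometry(pose, 0.10, 0.10, wheel_base=0.3)
print(pose)  # roughly (1.0, 0.0, 0.0)
```

In reality each wheel reading carries noise, so after enough steps the estimated pose and the true pose diverge; matching sensor observations against the map pulls the estimate back.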

Articles on Habré on this topic
MIT has developed a real-time mapping system for rescuers;
What does a robot need to build a map?;
Robot vacuum test: iRobot Roomba 780, Moneual MR7700, iClebo Arte and Neato XV-11.

Training video


Yujin Company


Yujin was founded in 1988, initially serving industrial and military customers.

Line of robots



Sapper (mine-clearing) robot

more pictures of a military robot


Robot teacher


Interactive pronunciation teaching + character

Robotic wheelchairs


Cafero, a robot waiter
(Accepts batteries instead of tips)

GoCart robot porter
For hotels, industry, and health care

iRobi


Specifications


Battery life: 3 hours
7-inch WVGA touchscreen
Voice recognition
1.3-megapixel camera
40 GB hard drive (for karaoke)
Price: $4,598
2,000 Korean kindergartens were equipped with these robots
(now kindergartens are probably equipped with League of Legends instead)

So much love in those eyes, and such a timid blush:



Features
Video recognition (Camera)
The robot can recognize images and respond to faces or actions.
Voice recognition (Microphone)
The robot understands spoken commands.
Emotion expression (Face LED)
The robot can "express" five emotions with LEDs (eyes, mouth and cheeks).
Obstacle detection (Ultrasound sensor)
Ultrasonic obstacle detection: the robot pauses if it finds a person or obstacle in its path.
Audio playback (Speaker)
The robot talks and plays music.
Display and touchscreen
(A touchscreen surprises no one now, but in 2007 it was cool.)
Close obstacle detection (IR sensor)
An IR sensor for avoiding nearby obstacles.
Fall prevention (Floor detection sensor)
The robot stops in front of stairs and "cliffs" (in our case, it saved itself from falling off the table).
Collision detection (Bumper sensor)
The bumper is equipped with touch sensors; when they trigger, the robot stops.
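The safety behaviors above (ultrasound pause, cliff stop, bumper stop) amount to a simple priority check. Here is a hypothetical sketch of that logic; the 30 cm threshold and the function names are assumptions, since the real firmware's values are unknown:

```python
STOP_DISTANCE_CM = 30.0  # hypothetical threshold, not the real firmware value

def should_pause(ultrasound_cm, bumper_pressed, floor_detected):
    """Combine the three safety sensors described above into one decision."""
    if bumper_pressed:        # collision already happened: stop immediately
        return True
    if not floor_detected:    # a "cliff" (stairs, table edge) is ahead
        return True
    return ultrasound_cm < STOP_DISTANCE_CM  # someone or something in the path

print(should_pause(100.0, False, True))   # False: path is clear
print(should_pause(12.0, False, True))    # True: obstacle ahead
print(should_pause(100.0, False, False))  # True: cliff detected
```

Note the ordering: contact and cliff sensors override the ultrasound reading, which matches the behavior we observed at the table edge.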

The robot is equipped with a camera for mapping the room: it scans the ceiling at up to 24 frames per second to capture the exact structure of the room, detect wall junctions, and locate partitions.
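Yujin does not publish its ceiling-vision algorithm, but one basic ingredient is easy to sketch: if the camera only translates under a flat ceiling, every tracked feature shifts by the same offset between frames, so the least-squares motion estimate is simply the mean displacement of the matched points. A toy example with hypothetical pixel coordinates:

```python
def estimate_translation(prev_pts, curr_pts):
    """Least-squares planar shift between two sets of matched points.

    Under pure translation every feature moves by the same offset,
    so the best estimate is the mean displacement.
    """
    n = len(prev_pts)
    dx = sum(c[0] - p[0] for p, c in zip(prev_pts, curr_pts)) / n
    dy = sum(c[1] - p[1] for p, c in zip(prev_pts, curr_pts)) / n
    return dx, dy

# Hypothetical coordinates of ceiling-tile corners in two consecutive frames.
prev = [(10, 10), (50, 10), (10, 50)]
curr = [(13, 8), (53, 8), (13, 48)]
print(estimate_translation(prev, curr))  # (3.0, -2.0)
```

A real system must also handle rotation, mismatches, and scale, but the principle, tracking fixed ceiling features to recover the robot's own motion, is the same.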

Device:

Areas of application

Internet connection menu:


Leveling up our skill of poking every button and learning Korean by trial and error

Main menu:


The top two rows of actions require an Internet connection; the bottom ones work offline. It seems an account is created for you somewhere on a server (a cloud?), and there may even have been a human consultant on the other end.

It photographs you while you photograph it.


It takes a photo when you touch its hand.

With such a teacher you will quickly learn Korean.


(The big-eyed brain is a nice touch)

Oh, these Korean cultural codes:


(Memory development game)

Karaoke mode:


(While the Korean song plays, the robot drives around and waves its hands)

Reflection:



Paul Ekman's theory of lying works for robots too:



Insides



"Luke, I am your father"


What would we do without it


Under the arm is the hard drive

Touchscreen on the reverse side:



Back:



Wi-Fi module:



I didn't dig deep into its head; I took off an ear and peeked inside: motors and wires running to the camera:



IR sensor:



Funny moments


One day, with a cry of "I'm hungry, I'm going to look for a socket" (it spoke English; this is an approximate translation), the robot rushed off in search of adventure. Since it was standing on a table, we realized what was about to happen... But no: it drove to the edge, timidly leaned out a couple of centimeters, said something like "Well, forget it" (a loose translation, but the meaning is about right), and went into sleep mode.

While shoving the robot into its box, I accidentally pressed the power button. Since it takes 20-30 seconds to boot, I calmly continued packing, but then it suddenly came to life, moved its arms, and started mumbling something. My heart sank; it felt exactly like shoving a kitten into a bag.

Asimov was right: humanity will have problems with the emotional perception of robots.

Source: https://habr.com/ru/post/238887/

