At the beginning of April, Arnav Kapur, a twenty-four-year-old researcher at the Massachusetts Institute of Technology, posted a short video on YouTube. It shows him walking around campus, moving from one place to another, with a white plastic fixture attached to the right side of his face.
First he passes a row of bicycles parked beside melting snowdrifts, his lips closed, his unvoiced thoughts captioned on the screen. The caption “Time?” appears, and a male voice replies: “Ten thirty-five.” In the next scene, Kapur is shopping at a local store. The price of each item he drops into his basket (toilet paper, an Italian sandwich, canned peaches) is displayed on the screen. “The total is $10.07,” the voice responds. In the final scene, Kapur moves a cursor across a screen, by all appearances through thought alone.
Kapur came from New Delhi to MIT’s Media Lab to build wearable devices that weave technology seamlessly into daily life, so that we no longer have to reach for the phone, stand staring at a screen, or walk with our eyes lowered, dropping out of reality in order to get things done.
It sounds improbable, but AlterEgo, a device that works silently, without voice commands or headphones, and which Kapur has been developing for the past two years, now reads his thoughts so reliably that he can order an Uber without saying a single word.
The current version of the device (Kapur created it in collaboration with his brother Shreyas, a student at the same institute, several colleagues from the Fluid Interfaces group, and his mentor, Professor Pattie Maes) is a 3D-printed appliance fitted with sensors that pick up myoelectric signals. It sits snugly against the jaw on one side of the face and connects over Bluetooth to what Maes calls our computer brain: the colossal network of information that we consult up to 80 times a day through our smartphones.
The invention can be considered revolutionary because it requires nothing invasive (that is, no implants) and processes the non-verbal signals of human speech with a very high degree of accuracy. Kapur promises that in the future it will also become almost invisible to others.
*
A few months after the video was published, Kapur gave an interview to Medium in the small fifth-floor office he shares with other researchers in the Media Lab building. He is clean-shaven, neatly dressed, with a student’s lean build; his gaze hovers between sleepy and piercingly intent, and it leaves an impression. Amid the clutter of books and components, a pink ukulele is visible. Not his, he insists.
By nature Kapur is prone to verbosity, but since his invention began to attract press attention, he has clearly been sharpening his narrative. “Artificial intelligence is my passion,” he says. “I believe that the future of mankind rests on collaboration with computers.”
Since smartphones entered the market, two and a half billion people have taken to consulting the computer brain whenever they need to get somewhere, cook something, contact someone, or recall the capital of Missouri. Cognitive augmentation through technology has become an integral part of our lives. There is the organic brain, and there is the computer one. According to Kapur, they already work together, just not as efficiently as they could.
Modern devices, however, are designed in a way that distracts us more than it helps us. To find what we need in the limitless world of information always at hand, we have to give the search our full attention. Screens demand eye contact; phones demand headphones. Our devices drag us out of physical reality and into their own.
Kapur wants to perfect a device that lets people interact with artificial intelligence as intuitively as the right hemisphere interacts with the left, so that the possibilities the Internet offers can be woven into our thinking at every level. “This is what our life will look like in the future,” he says.
Early design option

While working on the AlterEgo design concept, Kapur was guided by several principles. The device should not require implanting anything in the body: in his view, that is both inconvenient and impossible to scale. Interacting with it should feel natural and go unnoticed by others, which means the device must read non-verbal signals. Keenly aware of how easily such technology could be put to unseemly uses, he also wanted user control embedded in the design itself, so that only intentional signals, never unconscious ones, are captured. In other words, the device should read your thoughts only when you yourself want to share them.
Other pioneers in this field have already built interfaces for communication between humans and computers, but there have always been limitations. To talk to Siri or Alexa you have to address the machine out loud, which feels unnatural and offers no privacy. The spread of such technology is held back by the nagging fear that with these devices you can never be sure who might overhear you, or what exactly they will hear.
Kapur had to find a way around this. What if the computer could learn to read our thoughts?
*
As a researcher who has “tried himself in different disciplines” (he once attempted to write a short bio for a website and failed; he did not want to box himself into one specialty), Kapur came to see the human body not as a set of constraints but as a conductor. He saw it this way: the brain powers a complex electrical neural network that controls our thoughts and movements. When the brain needs us to move a finger, it sends an electrical impulse down the arm to the right spot, and the muscles respond accordingly. Sensors have ways of capturing these electrical signals; it only remains to work out where and how to tap into the process.
Kapur knew that when we read to ourselves, our internal articulatory muscles are in motion, unconsciously reproducing the words we see. “When you speak out loud, the brain sends command impulses to more than a hundred muscles of the vocal apparatus,” he explains. Internal vocalization, what we do when reading silently, is the same process, only far less pronounced: the neural signals reach only the internal muscles of the vocal tract. People develop the habit when they are first learning to read, sounding out letters and then words. Later it can get in the way: speed-reading courses often devote special attention to weaning people off pronouncing words in their heads as their eyes run across the text.
These neural signals, first recorded in the mid-nineteenth century, are the only physical expression of cognitive activity known to us today.
Kapur wondered whether detectors could pick up the physical manifestations of this inner monologue, the microscopic electrical discharges emanating from the brain, through the skin of the face, even though the muscles involved lie much deeper, in the mouth and throat, and even though they never fully engage.
Identifying the points of contact

In its prototype form, AlterEgo was a frame that fastened 30 sensors to the subject’s face and jaw so they could read neuromuscular movements while the subject silently mouthed the intended messages. The team developed software to analyze the signals and translate them into specific words.
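The article never shows the team’s software, but the pipeline this paragraph implies (sample voltages from the facial sensors, reduce each window of samples to a handful of features, classify the features as words) can be sketched in a few lines. Everything below, from the window length to the choice of a random-forest classifier, is an illustrative assumption, not Kapur’s actual code:

```python
# Illustrative sketch only: sample voltages from the facial sensors,
# reduce each window to simple per-channel features, and classify the
# features as vocabulary words. All names and numbers are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_CHANNELS = 30   # one stream per sensor in the prototype (assumed layout)
WINDOW = 250      # samples per analysis window (invented value)

def extract_features(window: np.ndarray) -> np.ndarray:
    """Reduce a (WINDOW, N_CHANNELS) voltage array to per-channel features."""
    return np.concatenate([
        window.mean(axis=0),                            # baseline level
        window.std(axis=0),                             # overall activity
        np.abs(np.diff(window, axis=0)).mean(axis=0),   # high-frequency content
    ])

# Placeholder random data standing in for recordings of silent "yes"/"no".
rng = np.random.default_rng(0)
X = np.stack([extract_features(rng.normal(size=(WINDOW, N_CHANNELS)))
              for _ in range(200)])
y = rng.choice(["yes", "no"], size=200)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:1]))   # e.g. ['yes']
```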
There was just one problem: at first, the AlterEgo sensors detected nothing at all.
Having written the software and assembled the device, Kapur hoped for the best, but the myoelectric signals generated by internal speech proved extremely weak. It would have been very easy to abandon the idea at that point. “But we wanted to intercept the interaction as close as possible to the stage of pure thought,” Kapur explains. He moved the sensors to different parts of the face, made them more sensitive, reconfigured the software. Nothing worked.
One evening, the brothers tested the device in their apartment in Cambridge. Kapur put it on, and Shreyas watched the readout on a computer screen. They configured the device to stream its signals in real time, so that Shreyas could pinpoint the exact moment anything registered, if it ever did.
It was getting late. For about two hours Kapur carried on a silent conversation with the device. At that point it was programmed to interpret just two words, “yes” and “no,” and nothing significant had come of it. But then Shreyas thought he saw something. Something flickered on the screen.
“We could not believe our eyes,” says Kapur. He turned his back to his brother and repeated the procedure. “The jump in the signal came back time after time, but we thought it was just a fault in the wiring. We were sure it was all interference in the system.” Had they really seen something real? After another full hour of tests, Kapur was certain: contact had been made.
“We practically went crazy,” he says. The next day they celebrated with pizza.
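What Shreyas was watching for on the screen, a jump in a weak signal that stands out against baseline noise, is in essence threshold detection over a rolling baseline. A minimal sketch of that idea, with every number invented for illustration:

```python
# Minimal sketch of spotting a "jump" in a weak streaming signal: flag any
# sample that rises several standard deviations above the recent baseline.
# Window length and threshold are invented for illustration.
from collections import deque
import statistics

def detect_spikes(stream, baseline_len=500, n_sigmas=4.0):
    """Yield (index, value) for samples far above the rolling baseline."""
    baseline = deque(maxlen=baseline_len)
    for i, v in enumerate(stream):
        if len(baseline) == baseline_len:
            mu = statistics.fmean(baseline)
            sigma = statistics.pstdev(baseline) or 1e-9
            if (v - mu) / sigma > n_sigmas:
                yield i, v
        baseline.append(v)

# Mostly noise, with one deliberate jump at sample 800.
import random
random.seed(1)
signal = [random.gauss(0.0, 1.0) for _ in range(1000)]
signal[800] += 10.0
print(list(detect_spikes(signal)))   # expected: [(800, <value>)]
```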
*
It took Kapur and his colleagues two years to create AlterEgo’s hardware and software. To make the device comfortable to wear, the team refined the sensors and revised the contact points, keeping the shell compact and not too eye-catching. Kapur rejected headphones, which in his view disrupt the normal flow of life; instead he developed an acoustic system based on bone conduction. The device whispers answers to your requests, like some kind of superpowered guardian angel.

Once the device began to recognize myoelectric impulses, Kapur focused on gathering enough data to teach AlterEgo to match characteristic signals to particular words. It was a laborious process: he had to sit in the laboratory for hours with the device on his face, repeating the right words to himself until the computer mastered them. At the moment AlterEgo has a vocabulary of 100 words, including the numbers 1 through 9 and commands such as “add,” “subtract,” “reply,” and “call.”
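In code terms, the regime Kapur describes amounts to recording many labeled silent repetitions of each word and refitting the classifier as the vocabulary grows. Continuing the earlier sketch, with every number and name still an illustrative assumption:

```python
# Sketch of the word-by-word training loop described above, reusing the
# assumptions of the earlier sketch. Real capture hardware is stubbed out.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_CHANNELS, WINDOW = 30, 250
rng = np.random.default_rng(0)

def extract_features(window):
    """Per-channel level, spread, and high-frequency content."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

VOCABULARY = [str(d) for d in range(1, 10)] + ["add", "subtract", "reply", "call"]

X_train, y_train = [], []
for word in VOCABULARY:
    for _ in range(50):   # hours of silent repetition, compressed to a loop
        recording = rng.normal(size=(WINDOW, N_CHANNELS))  # stand-in capture
        X_train.append(extract_features(recording))
        y_train.append(word)

clf = RandomForestClassifier(n_estimators=200).fit(np.stack(X_train), y_train)
```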
In the YouTube video the device appears to be reading Kapur’s mind, so a degree of panic was predictable. “It is actually very scary that someone else can now get access to what we think,” one worried commenter wrote under an article about the technology. “With this technology, the thought police could become a reality.”
Kapur and Maes, an expert in the field of AI, are highly sensitive to such ethical questions. Kapur believes that, as the technology’s creator, he can forestall its use for immoral purposes by building safeguards directly into the design. He stresses that AlterEgo cannot literally read minds and will never gain that ability. He deliberately built a system that responds only to signals given intentionally, that is, to voluntary communication. To interact with the computer brain, you must yourself want to pass information to it. That is the difference between AlterEgo and, say, Google Glass. The device also has no camera, because Kapur wants his wearables to hold only the data you actively give them.
“By itself, artificial intelligence does not harm anyone, but we should not hush up the fact that this technology can be turned to evil,” says Kapur. “So we try to make sure our devices conform to the principles we hold. That is why we developed AlterEgo from scratch on our own: we had a definite idea of what it should be, and we wanted people to use it as intended.”
Kapur, who has worked on a number of projects with Harvard Medical School, is seeking above all to make life easier for people with health problems. People with Alzheimer’s disease, for example, could wear the device to compensate for memory impairment. And thanks to its ability to read neural microsignals, it could help those with physical impairments interact with the outside world: people who are deaf or unable to speak, stroke survivors, and people with ALS or autism.
To bring AlterEgo to a truly working state, Kapur will have to train it for a long time yet, expanding its vocabulary far beyond a hundred words. He will also need to collect enough data to be sure the device works on any head and with any internal monologue. Still, he is convinced the technology is good enough that sooner or later it will learn to synthesize information and extrapolate the meaning of new words from context.
*
In the gleaming modern offices of the Media Lab, it is easy to let yourself be carried away by the vision of a radiant future in which we use two brains without a hitch: the one we were born with, and the computer one we have voluntarily tethered ourselves to.
Maes offers a whole array of hypothetical examples of how an ideally integrated AI system could change our lives, if only programs were built to expand our capabilities rather than merely entertain us. Such technologies, she says, could fulfill many of our dreams. (She is justly regarded as an IT mentor with a utopian bent, an attitude that, among other things, draws ambitious students like Kapur to MIT.) AlterEgo could teach us foreign languages, describing the surrounding world in them in real time, or smooth over rough spots in communication by whispering the names of, and basic facts about, the people we greet.
Then, as if on cue, Maes departs sharply from the pure fusion-of-minds concept that Kapur proposes. Given channels for collecting physiological data (pulse, perspiration, body temperature), the device could predict our behavior and quietly nudge us toward our stated goals. It could notice that we are starting to doze off at work and release an invigorating scent of mint. It could correct our behavior, meeting an attempt at a third pastry with the smell of rotten eggs. It could sense that we are nervous and speak words of encouragement audible to no one else. This line of development differs markedly from what her student proposes: it is more about shaping desired behavior, and it offers more opportunities for monetization. Maes seems to suggest that if we wire AI, and all the information the worldwide network holds, into our conscious thinking, we might finally shed those extra five kilos.
It is easy to imagine Kapur’s invention becoming a billion-dollar idea within a few years, and to imagine what that would mean for the defense industry and for tech giants like Facebook and Amazon. Less obvious is another question: whose intellectual property is AlterEgo? Kapur answers evasively. If he decided to leave the institute, he says, he could take all of his work with him, but he has no such plans at the moment. He intends to stay in research and refine an invention that he believes can benefit humanity, rather than simply sell it to the highest bidder. This is his child, and he wants to see it through to the end.
But what if someone reverse-engineers his technical solutions, assembles their own version of the device, and builds the next unicorn startup without him? “I don’t know what to tell you,” Kapur says with an impenetrable expression, shrugging.