The present and future of neuroimplants and neural interfaces
A short note on the operating principles, current technical limitations, and possible future of neuroimplants in particular and neural interfaces in general.
I think it is no secret to most that neuroimplants are already a reality. The cochlear implant, for example. There are also bionic prostheses and implants for restoring vision.
There are very impressive videos of Nigel Ackland and his bionic arm. Straight out of Deus Ex :')
But on the other hand, given such obvious successes in prosthetics, why is the current state of the art for transmitting information through a neural interface still "only a few bits of information per minute"? We have learned to control a bionic prosthesis in real time (even with feedback in the form of sensations), yet we can barely convey any information. This significant difference arises because in one case we are working with the peripheral nervous system, and in the other with the central one.
In the first case, everything is relatively simple. There are bundles of nerves and a fairly good model (understanding) of how they work. With muscles, an impulse arrives and a muscle contracts, and vice versa. Transmitting sensations is also "simple": the existing nerves are used as "ports". Cochlear and visual implants work in roughly the same way.
It is not difficult to "probe" the nerves, much as one tests a circuit for continuity, and learn to read and send signals.
But this is of little use for creating a "neural interface". The trouble is that we need an additional interface, and when connecting through the peripheral nervous system there are no free "ports". It would not be especially hard to build a fairly serious neural interface right now by connecting to the nerves of a limb (exactly as bionic prostheses are connected). Learning it would take time, but in principle one could retrain and get used to it: muscle-contraction impulses would encode arbitrary information. Symbols, for example, in a kind of "blind, fingerless" typing method.
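The idea of encoding symbols as sequences of muscle-contraction impulses can be sketched with a Morse-like scheme. This is purely illustrative (the pulse alphabet and function names are my own assumptions, not anything from an actual prosthetic interface):

```python
# Hypothetical sketch: encode text as short/long muscle-contraction pulses,
# Morse-style. "." = short contraction, "-" = long one, "/" separates letters.
MORSE = {
    "s": "...",
    "o": "---",
    "e": ".",
    "t": "-",
}

def to_pulses(text):
    """Translate text into an illustrative pulse sequence."""
    return "/".join(MORSE[c] for c in text.lower() if c in MORSE)

print(to_pulses("sos"))  # .../---/...
```

A real interface would of course use whatever pulse alphabet the user can produce reliably; the point is only that arbitrary symbols can ride on a single muscle channel.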
But of course, almost no one wants to sacrifice limbs or other parts of the body for the sake of such an interface ...
Therefore, the current mainstream approach is a makeshift workaround: taking EEG readings and translating patterns of brain activity into commands or information.
By and large, this approach is either a dead end or very niche.
The fact is that the "resolution" of EEG is orders of magnitude lower than the scale at which the brain encodes information. To simplify greatly, the state of a neuron can be compared to a bit: 0, inactive; 1, active (transmitting an electrical impulse). In reality a neuron and the processes taking place in it are far more complicated, but this simplification will do for now. So, encoding an abstract idea, assuming we could read and influence the states of individual neurons, would take tens, hundreds, or thousands of neurons. It is hard to say precisely, given our limited understanding of the underlying processes, but it may well turn out that specializing tens or hundreds of neurons is enough to create a stable, specific skill. A skill at the level of "issue a unique command to the computer", with the same accuracy and reliability with which we command a finger to press a key on the keyboard, and with minimal energy. Ideally, the cost in "mental energy" of transmitting a symbol would be many times lower than pressing a key with a finger, since there would be no need to activate motor neurons and the periphery.
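The "neuron as a bit" simplification above has a direct consequence: if each of N neurons can be read as active or inactive, their joint pattern distinguishes 2^N commands. A minimal sketch under that assumption (the encoding and function names are mine, purely for illustration):

```python
# Sketch under the text's simplification: each neuron is a bit (0/1),
# so N readable neurons yield 2**N distinguishable commands.
def distinct_commands(n_neurons):
    return 2 ** n_neurons

def encode_symbol(symbol, n_neurons=8):
    """Map a character to a binary activation pattern, LSB first.

    Illustrative only: a real interface would not get to choose
    which neurons fire, it would map whatever stable patterns
    the user learns to produce.
    """
    code = ord(symbol)
    assert code < 2 ** n_neurons, "symbol does not fit in this many neurons"
    return [(code >> i) & 1 for i in range(n_neurons)]

print(distinct_commands(8))   # 256, enough for a full keyboard
print(encode_symbol("A"))     # [1, 0, 0, 0, 0, 0, 1, 0]
```

So even eight reliably readable neurons would, in this idealized picture, cover an entire character set.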
Returning to EEG: it records potentials from the scalp using 16, 24, or 32 electrodes. Feel the difference in scale: billions of neurons in the cortex versus a miserly few dozen electrodes. EEG, like MRI, gives only a very general picture of which brain areas are active. That resolution may still be enough to tell whether a person is asleep or awake, or to assess their level of attention. That is, to evaluate the total activity of a fairly large volume of cells.
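The scale mismatch is easy to make concrete. Taking the commonly cited estimate of roughly 16 billion neurons in the human cortex (a figure I am adding; the text itself gives no number), back-of-the-envelope arithmetic looks like this:

```python
# Rough order-of-magnitude comparison. The cortical-neuron count is a
# commonly cited external estimate, not a figure from this article.
cortical_neurons = 16_000_000_000
eeg_electrodes = 32  # a typical consumer/clinical cap, per the text

neurons_per_electrode = cortical_neurons // eeg_electrodes
print(neurons_per_electrode)  # 500000000
```

Half a billion neurons averaged per electrode: no wonder EEG resolves only gross states like sleep versus wakefulness.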
The obvious way out is to record the electrical activity of specific neurons. If we could stably register the individual activity of some number of cortical neurons, we could learn to send arbitrary commands and create interfaces as complex, diverse, and obedient as one's own voice or movements. For the neurons themselves there is no particular difference: they have two states, active and inactive. And on the receiving device it is not hard to map the activity of specific neurons to any machine code. Feedback is easy to organize in the usual ways, for instance through an image on a monitor.
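The receiving-device side described above, mapping specific neuron activity to machine commands, really is trivial in software. A minimal sketch (the command names and three-neuron patterns are invented for illustration):

```python
# Hypothetical lookup table: a tuple of per-neuron states (0/1) read from
# three monitored cortical neurons maps to an arbitrary machine command.
COMMANDS = {
    (1, 0, 0): "MOVE_CURSOR_LEFT",
    (0, 1, 0): "MOVE_CURSOR_RIGHT",
    (1, 1, 0): "CLICK",
    (0, 0, 1): "TYPE_SPACE",
}

def decode(neuron_states):
    """Return the command for a spike pattern, or None if unrecognized.

    Feedback (e.g. the cursor moving on a monitor) is what would let the
    user learn which activity pattern produces which command.
    """
    return COMMANDS.get(tuple(neuron_states))

print(decode((1, 1, 0)))  # CLICK
print(decode((1, 1, 1)))  # None
```

All the difficulty lives on the other side of this table: stably reading those per-neuron states in the first place.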
Conversely, if it were possible to control the activity of arbitrary neurons, one could "send information to the brain". This, however, is much harder. Unlike the first method, it would require far more knowledge about the individual structural features of each particular brain.
Incidentally, we have already learned to record the activity of individual neurons, and to act on them too. The problem is that doing so requires literally sticking a very thin electrode into the brain. For obvious reasons, this is not done on people, which is why, I think, no one has yet been able to check whether "several neurons are enough to create a neural interface". Such methods are usually used to test theories about the brain in animals.
Other drawbacks: only a few neurons can be studied at a time, over a period of weeks, and neurons can be mechanically destroyed by contact with the electrode.
For human neuroimplants, the main obstacles are:
too few electrodes;
the need for brain surgery;
insufficient knowledge.
In general, this is potentially feasible even now, but it is difficult, dangerous to health, and fraught with ethical problems.
Other potential areas:
Nanotechnology. If we could make a nanoscale sensor, attach it to a neuron, and read its impulses, there would be no need to poke an electrode into the brain through an opened skull. Besides the problem of creating such a nanobot, the problem of its targeted delivery would have to be solved. Not the near future;
Improved scanning. Learning to scan the activity of individual neurons in the cerebral cortex. Given the current size and capabilities of MRI machines, this is clearly not the near future.
So here is the reality of neuroimplants today:
Bionics (neuroimplants of the peripheral nervous system) is already impressive and will develop rapidly. Progress is fast, there are no fundamental obstacles, and existing knowledge is sufficient.
Neuroimplants of the central nervous system have moved from the category of "fiction" into the category of "real, but awaiting a different level of progress". They require improvements in medical technology and advances in our knowledge of psychophysiology.
Neural interfaces based on EEG or MRI are, on the whole, a dead-end branch unless fundamentally different scanning devices are created. As interfaces for displaying the state of consciousness and for biofeedback, however, they are fully justified.