
AI, Big Data and Disinformation Technologies



Photo: KamiPhuc / CC

Usually in our blog we talk about cloud services, hosting, and related technologies. Today we will talk about the difficulties of technology development in general, about artificial intelligence, big data, and Michael Jordan (not the basketball player).
Michael Jordan is a Distinguished Professor at the University of California, Berkeley, and an IEEE Fellow, one of the most respected and authoritative world experts in machine learning. He is convinced that the excessive hype around big data will not deliver the expected results and could lead to disasters comparable to a massive collapse of bridges.

Let's try to understand this topic a little better. Consider the definition of the term "artificial intelligence" given by John McCarthy, the creator of the Lisp language. In his article of the same name ("What Is Artificial Intelligence?"), he stressed that AI is concerned with using computers to understand how human intelligence works, but is not limited to methods observed in biology.

Of course, this interpretation is far from our futuristic image of AI. In their conversation, journalist Lee Gomes and Jordan confirm this point and note a kind of disinformation that benefits media outlets riding the wave of the topic's growing popularity.

Michael points to the history of neural network research: they have been talked about at every turn since the 1980s, while largely repeating what was already known in the 1960s. Today the dominant idea is the convolutional neural network, but it has little to do with neuroscience. People are convinced that we need to understand how the human brain processes information, learns, and makes decisions, but in reality the science is developing in a somewhat different direction.

Jordan says neuroscience will need dozens or even hundreds of years to understand the underlying principles of the brain. Today we are only approaching the beginning of understanding how neurons represent, store, and process information, and we have almost no understanding of how learning actually occurs in the brain. That said, there is a place for such analogies: people searched for metaphors related to the parallel work of the brain, and these turned out to be useful for developing algorithms, but the effort hardly went beyond a search for fresh ideas and solutions.

If we continue with the terminology, we will see that the "neurons" involved in deep learning are a metaphor (or, in Jordan's words, a "caricature" of how the brain works) used only for brevity and convenience. In reality, the mechanics of deep learning are much closer to fitting a statistical model such as logistic regression than to the work of real neurons.
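To make the comparison concrete, here is a minimal sketch (in Python, with made-up input values and weights) of what a single artificial "neuron" computes: a weighted sum of its inputs passed through a sigmoid, which is exactly the functional form of a logistic regression model.

```python
import numpy as np

def sigmoid(z):
    """Logistic function, the classic activation for a single 'neuron'."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, weights, bias):
    """One artificial 'neuron': a weighted sum passed through a sigmoid.
    This is the same form as a logistic-regression model, not a
    simulation of a biological neuron."""
    return sigmoid(np.dot(weights, x) + bias)

# Toy example: three input features, arbitrary "learned" parameters.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
b = 0.2
print(neuron(x, w, b))  # a probability-like output between 0 and 1
```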

John McCarthy, in turn, stressed that the problem is not only to create a system in the image and likeness of human intelligence, but also that scientists themselves do not agree on what intelligence is and which specific processes are responsible for it. The chance that we will "exactly recreate" this architecture and make it work in the near future is extremely small.

Big data may be another media-driven fad, one that thousands of researchers around the world have rallied around. The modern obsession with big data can lead to the uncontrolled use of conclusions drawn from data of questionable statistical significance.

In any single database, you can find a combination of columns that is completely random yet appears to answer precisely the hypothesis under consideration. Given millions of attributes per object and a virtually unlimited number of combinations of those attributes, it all starts to resemble the joke about Shakespeare, a typewriter, and a million monkeys.
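The "million monkeys" effect is easy to reproduce. Below is a minimal simulation (the sample and column counts are arbitrary) in which both the "attributes" and the "target" are pure noise, yet the best of ten thousand columns still shows a correlation that would look convincing if we forgot that we searched for it.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features = 100, 10_000          # many attributes, few observations
X = rng.normal(size=(n_samples, n_features))  # pure-noise "attributes"
y = rng.normal(size=n_samples)                # pure-noise "target"

# Correlation of every column with the target.
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])

best = np.argmax(np.abs(corrs))
print(f"best column: {best}, correlation: {corrs[best]:.3f}")
# With 10,000 random columns, the best |correlation| typically lands
# around 0.35-0.4 -- impressive-looking, although the data contain nothing.
```

Any conclusion drawn from that "best" column would be an artifact of the search itself, not a property of the data.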

Of course, there are many ideas for controlling such research, allowing one to estimate how often such hypotheses turn out to be false. But applying these mathematical and technical tools takes time, and we are still learning how to handle big data.
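One family of such controls is multiple-testing correction. Here is a minimal sketch (the significance level and test count are illustrative) of the Bonferroni adjustment: the more hypotheses we check, the stricter each individual test must be before we are allowed to call anything a discovery.

```python
def bonferroni_threshold(alpha, n_tests):
    """Bonferroni correction: to keep the overall chance of at least one
    false positive at `alpha`, each individual test must pass a much
    stricter per-test threshold."""
    return alpha / n_tests

alpha = 0.05
n_tests = 10_000  # e.g. one test per candidate column
print(bonferroni_threshold(alpha, n_tests))  # 5e-06 instead of 0.05
```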

In science and in new fields of knowledge, boundaries and constraints on research are among the elements necessary for progress. This is illustrated both by the first computer-vision systems (face recognition) and by early speech technologies (recognition of individual words).

P.S. We try to share not only our own experience of running the 1cloud virtual infrastructure service, but also to talk about research and researchers working in related fields.

Don't forget to subscribe to our blog on Habr, friends!

Source: https://habr.com/ru/post/258219/

