"Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?" So the supercomputer HAL pleads with the implacable astronaut Dave Bowman in the famous and poignant scene near the end of Stanley Kubrick's "2001: A Space Odyssey." Bowman, having nearly been sent to his death in deep space by the malfunctioning machine, calmly and coldly disconnects the memory circuits that control its artificial brain.
"Dave, my mind is going," HAL says, forlornly. "I can feel it. I can feel it."
I can feel it too. Over the past few years I've had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping my neural circuitry, reprogramming my memory. My mind isn't going - so far as I can tell - but it's changing. I'm not thinking the way I used to think. I feel it most strongly when I'm reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I'd spend hours strolling through long stretches of prose. That's rarely the case anymore. Now my concentration starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I'm always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.
I think I know what's going on. For more than a decade now, I've been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet. For me, as a writer, the Web has been like manna from heaven. Research that once required days spent among piles of newspapers or in the periodical rooms of libraries can now be done in minutes. A few queries to Google, a few quick clicks on hyperlinks, and I've got the telltale fact or the pithy quote I was after. Even when I'm not working, I'm as likely as not to be foraging in the Web's thickets of information: reading and writing e-mail, scanning headlines and blog posts, watching videos and listening to podcasts, or just tripping from link to link to link. (Unlike footnotes, to which they're sometimes likened, hyperlinks don't merely point to related works; they propel you toward them.)
For me, as for others, the Net has become a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they've been widely described and duly applauded. "The perfect recall of silicon memory," Wired's Clive Thompson has written, "can be an enormous boon to thinking." But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away at my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I swam in a sea of words. Now I zip along the surface like a water-skier.
I'm not the only one. When I mention my troubles with reading to friends and literary acquaintances, many of them say they're having similar experiences. The more they use the Web, the more they have to fight with themselves to stay focused on long pieces of writing. Some of the bloggers I follow have also begun mentioning the phenomenon.
Scott Karp, who runs a blog about online media, recently admitted that he has stopped reading books altogether. "I was a lit major in college, and used to be a voracious book reader," he wrote. "What happened?" He speculates on the answer: "What if I do all my reading on the web not so much because the way I read has changed, i.e. I'm just seeking convenience, but because the way I THINK has changed?"
Bruce Friedman, who blogs regularly about the use of computers in medicine, has also described how the Internet has altered his mental habits. "I now have almost totally lost the ability to read and absorb a longish article on the web or in print," he wrote earlier this year. A pathologist who has long served on the faculty of the University of Michigan Medical School, Friedman elaborated on his comment in a telephone conversation with me. His thinking, he said, has taken on a "staccato" quality, reflecting the way he quickly scans short passages of text from many online sources. "I can't read War and Peace anymore," he admitted. "I've lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it."
Anecdotes alone prove nothing. And we still await the long-term neurological and psychological experiments that will provide a definitive picture of how the Internet affects cognition. But a recently published study of online research habits, conducted by scholars from University College London, suggests that we may well be in the midst of a sea change in the way we read and think. As part of the five-year research program, the scholars examined computer logs documenting the behavior of visitors to two popular research sites, one operated by the British Library and one by a U.K. educational consortium, which provide access to journal articles, e-books, and other sources of written information. They found that people using the sites exhibited "a form of skimming activity," hopping from one source to another and rarely returning to any source they had already visited. They typically read no more than one or two pages of an article or book before "bouncing" out to another site. Sometimes they would save a long article, but there's no evidence that they ever went back and actually read it. The authors of the study report:
"It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of 'reading' are emerging as users 'power browse' horizontally through titles, contents pages and abstracts, going for quick wins. It almost seems that they go online to avoid reading in the traditional sense."
Thanks to the ubiquity of text on the Internet, not to mention the popularity of text messaging on cell phones, we may well be reading more today than we did in the 1970s or 1980s, when television was our medium of choice. But it's a different kind of reading, and behind it lies a different kind of thinking - perhaps even a new sense of the self. "We are not only what we read," says Maryanne Wolf, a developmental psychologist at Tufts University and the author of the book Proust and the Squid. "We are how we read." Wolf worries that the style of reading promoted by the Net, a style that puts "efficiency" and "immediacy" above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace. When we read online, she says, we tend to become "mere decoders of information." Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.
Reading, Wolf explains, is not an instinctive skill for human beings. It's not etched into our genes. We have to teach our minds how to translate the symbols we see into the language we understand. And the media or other technologies we use in learning and practicing the craft of reading play an important part in shaping the neural circuits inside our brains.
Experiments demonstrate that readers of ideograms, such as the Chinese, develop a mental circuitry for reading that is quite different from that of readers whose written language employs an alphabet. The variations extend across many regions of the brain, including those that govern such essential cognitive functions as memory and the interpretation of visual and auditory stimuli. We can expect as well that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works.
Sometime in 1882, Friedrich Nietzsche bought a typewriter, a Malling-Hansen Writing Ball. His vision was failing, and keeping his eyes focused on a page had become exhausting and painful, often bringing on crushing headaches. He had been forced to cut back the time he spent writing, and he feared that he would soon have to give it up. The typewriter rescued him, at least for a time. Once he had mastered touch-typing, he was able to write with his eyes closed, using only the tips of his fingers. Words could once again flow from his mind to the page.
But the machine had a subtler effect on his work. One of Nietzsche's friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. "Perhaps you will through this instrument even take to a new idiom," the friend wrote in a letter, noting that, in his own work, his "'thoughts' in music and language often depend on the quality of pen and paper."
"You are right," Nietzsche replied; "our writing equipment takes part in the forming of our thoughts." Under the influence of the machine, writes the German media scholar Friedrich A. Kittler, Nietzsche's prose "changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style."
The human brain is almost infinitely malleable. People used to think that our mental meshwork, the dense connections formed among the roughly 100 billion neurons inside our skulls, was largely fixed by the time we reached adulthood. But brain researchers have discovered that that's not the case. James Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study at George Mason University, says that even the adult mind "is very plastic." Nerve cells routinely break old connections and form new ones. "The brain," according to Olds, "has the ability to reprogram itself on the fly, altering the way it functions."
As we use what the sociologist Daniel Bell called our "intellectual technologies" - the tools that extend our mental rather than our physical capacities - we inevitably begin to take on the qualities of those technologies. The mechanical clock, which came into common use in the 14th century, provides a good example. In Technics and Civilization, the historian and cultural critic Lewis Mumford described how the clock "disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences." The "abstract framework of divided time" became "the point of reference for both action and thought."
The clock's methodical ticking helped bring into being the scientific mind and the scientific man. But it also took something away. As the MIT computer scientist Joseph Weizenbaum observed in his 1976 book Computer Power and Human Reason, the conception of the world that emerged from the widespread use of timekeeping instruments "remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality." In deciding when to eat, when to work, when to sleep, when to rise, we stopped listening to our senses and started obeying the clock.
The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their minds as operating "like clockwork." Today, in the age of software, we have come to think of them as operating "like computers." But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain's plasticity, the adaptation also occurs at a biological level.
The Internet promises to have particularly far-reaching effects on cognition. In a paper published in 1936, the British mathematician Alan Turing proved that a digital computer, which at the time existed only as a theoretical machine, could be programmed to perform the function of any other information-processing device. And that is what we are seeing today. The Internet, an immeasurably powerful computing system, is absorbing most of our other intellectual technologies. It is becoming our map and our clock, our printing press and our typewriter, our calculator and our telephone, our radio and our television.
When the Net absorbs a medium, that medium is re-created in the Net's image. The Net injects the medium's content with hyperlinks, pop-up ads, and other digital tinsel, and it surrounds that content with the content of all the other media it has absorbed. A new e-mail message, for instance, may announce its arrival while we're glancing over the latest headlines at a newspaper's site. The result is that our attention is scattered and our concentration diffused.
The Net's influence doesn't end at the edges of the computer screen, either. As people's minds become attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience's new expectations. Television programs add text crawls and pop-up ads; magazines and newspapers shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse snippets of information. When, in March of this year, The New York Times decided to devote the second and third pages of every edition to brief article abstracts, its design director, Tom Bodkin, explained that the "shortcuts" would give harried readers a quick "taste" of the day's news, sparing them the "less efficient" method of actually turning the pages and reading the articles. Old media have little choice but to play by the new-media rules.
Never has a communications system played so many roles in our lives - or exerted such broad influence over our thoughts - as the Internet does today. Yet, for all that has been written about the Net, little consideration has been given to how it is reprogramming us. The Net's intellectual ethic remains obscure.
About the same time that Nietzsche started using his typewriter, an earnest young man named Frederick Winslow Taylor carried a stopwatch into the Midvale Steel plant in Philadelphia and began a historic series of experiments aimed at improving the efficiency of the plant's machinists.
With the approval of the plant's owners, he recruited a group of factory hands, set them to work on various metalworking machines, and recorded and timed their every movement as well as the operations of the machines. By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions - an "algorithm," we might say today - for how each worker should work. Midvale's employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the plant's productivity soared.
More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor's tight industrial choreography - his "system," as he liked to call it - was picked up by manufacturers around the world.
Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the "one best method" of work and thereby effect "the gradual substitution of science for rule of thumb" throughout the trades. Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. "In the past the man has been first," he declared; "in the future the system must be first."
Taylor's system is still with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software developers wield over our intellectual lives, Taylor's ethic is beginning to govern the realm of the mind as well.
The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the "one best method" - the perfect algorithm - to carry out every mental movement of what we've come to describe as "knowledge work."

Google's headquarters, in Mountain View, California - the Googleplex - is the Internet's high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is "a company that's founded around the science of measurement," and it strives to "systematize everything" it does. Drawing on the terabytes of behavioral data it collects through its search engine and its other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.

The company has declared that its mission is "to organize the world's information and make it universally accessible and useful." It seeks to develop "the perfect search engine," which it defines as something that "understands exactly what you mean and gives you back exactly what you want." In Google's view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can "access" and the faster we can extract their gist, the more productive we become as thinkers.

Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. "The ultimate search engine is something as smart as people - or smarter," Page said in a speech a few years back. "For us, working on search is a way to work on artificial intelligence." In a 2004 interview with Newsweek, Brin said, "Certainly if you had all the world's information directly attached to your brain, or an artificial brain that was smarter than your brain, you'd be better off." Last year, Page told a convention of scientists that Google is "really trying to build artificial intelligence and to do it on a large scale."

Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. As a scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt's words, "to solve problems that have never been solved before," and artificial intelligence is among the hardest of those problems.

Still, their easy assumption that we'd all be "better off" if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google's world, the world we enter when we go online, there's little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.
The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet; it is the network's reigning business model as well. The faster we surf across the Web - the more links we click and pages we view - the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from page to page - the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It's in their economic interest to drive us to distraction.

Perhaps we worry in vain. Just as there's a tendency to glorify technological progress, there's a countertendency to expect the worst of every new tool or machine.

In Plato's Phaedrus, Socrates bemoaned the development of writing. He feared that, as people came to rely on the written word as a substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the dialogue's characters, "cease to exercise their memory and become forgetful." And because they would be able to "receive a quantity of information without proper instruction," they would "be thought very knowledgeable when they are for the most part quite ignorant." They would be "filled with the conceit of wisdom instead of real wisdom." Socrates wasn't wrong - the new technology did often have the effects he feared - but he was shortsighted. He couldn't foresee the many ways in which writing and reading would serve people, spreading information, spurring fresh ideas, and expanding human knowledge (and perhaps wisdom).

The arrival of Gutenberg's printing press, in the 15th century, set off another round of teeth gnashing. The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to intellectual laziness, making men "less studious" and weakening their minds. Others argued that cheaply printed books and broadsheets would undermine religious authority, demean the work of scholars and scribes, and spread sedition and debauchery. As Clay Shirky, a professor at New York University, notes, "Most of the arguments made against the printing press were correct, even prescient." But the doomsayers were unable to imagine the myriad blessings that the printed word would deliver.

So, dear reader, you have every right to be skeptical of my skepticism. Perhaps those who dismiss critics of the Internet as Luddites or nostalgists will be proved correct, and from our hyperactive, data-stoked minds will spring a golden age of intellectual discovery and universal wisdom. Then again, the Net isn't the alphabet, and although it may replace the printing press, it produces something altogether different. The kind of deep reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from the author's words but for the intellectual vibrations those words set off within our own minds. In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, we make our own associations, draw our own inferences and analogies, foster our own ideas.
Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.

If we lose those quiet spaces, or fill them up with "content," we will sacrifice something important not only in ourselves but in our culture. In a recent essay, the playwright Richard Foreman eloquently described what's at stake:

"I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense and 'cathedral-like' structure of the highly educated and articulate personality - a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. I see within us all the replacement of complex inner density with a new kind of self, evolving under the pressure of information overload and the technology of the 'instantly available.'"

As we are drained of our "inner repertory of dense cultural inheritance," Foreman concludes, we risk turning into "pancake people" - spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.

I keep coming back to that scene in Kubrick's film. What makes it so poignant, and so eerie, is the computer's emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut - "I can feel it. I can feel it. I'm afraid" - and its final reversion to what can only be called a state of innocence. HAL's outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they're following the steps of an algorithm. In the world of the film, people have become so machinelike that the most human character turns out to be a machine. That is the essence of Kubrick's dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.

This is my translation of a difficult article, and the first I have ever done, so please pardon any mistakes. Source