
Computer history: discovering interactivity




The very first electronic computers were one-off devices built for research purposes. But once computers appeared on the market, organizations quickly absorbed them into the existing data-processing culture, in which all data and processes were represented as stacks of punched cards.

Herman Hollerith developed the first tabulator, capable of reading and counting data from holes in paper cards, for the US census at the end of the 19th century. By the middle of the next century, a colorful menagerie of that machine's descendants had spread through large enterprises and government organizations around the world. Their common language was a card made up of many columns, where each column (usually) represented a single digit, which could be punched in one of ten positions representing the numbers 0 through 9.
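To make the card format concrete, here is a minimal, purely illustrative sketch (in Python, not any historical IBM software) of how such a digit-only column encoding could be modelled and decoded; the 80-column width and the "X"-for-hole convention are assumptions made for the example.

```python
# Purely illustrative sketch: model a digit-only punched card, where each of
# the 80 columns has ten row positions (0-9) and a hole in row N encodes N.

def decode_card(card_rows):
    """card_rows: 10 strings (rows 0..9) of equal length, 'X' = punched hole,
    ' ' = blank. Returns one decoded digit per column (None for blank columns)."""
    digits = []
    for col in range(len(card_rows[0])):
        punched = [n for n, row in enumerate(card_rows) if row[col] == 'X']
        digits.append(punched[0] if punched else None)
    return digits

# Example: the number 42 punched into the first two columns of an 80-column card.
rows = [[' '] * 80 for _ in range(10)]
rows[4][0] = 'X'   # hole in row 4, column 0 -> digit 4
rows[2][1] = 'X'   # hole in row 2, column 1 -> digit 2
rows = [''.join(r) for r in rows]
print(decode_card(rows)[:2])   # [4, 2]
```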

Punching input data onto cards required no complex equipment, and the work could be spread across the various offices of the organization that generated the data. When the data needed to be processed, for example to compute revenue for the sales department's quarterly report, the corresponding cards could be brought to the data center and queued for processing by suitable machines, which produced a set of output data on cards or printed it on paper. Around the central processing machines, the tabulators and calculators, clustered peripheral devices for punching, copying, sorting, and interpreting cards.

The IBM 285 tabulator, a popular punched-card device in the 1930s and 1940s.

By the second half of the 1950s, nearly all computers worked on this "batch processing" model. From the point of view of a typical end user in the sales department, little had changed. You brought in a stack of punched cards for processing and received a printout, or another stack of punched cards, as the result. Along the way the data was converted from holes in paper into electronic signals and back again, but you did not much care about that. IBM dominated the punched-card machine business and remained one of the dominant forces in electronic computing, largely thanks to its well-established customer relationships and its wide range of peripheral equipment. The company simply replaced its customers' mechanical tabulators and calculators with faster, more flexible data-processing machines.


An IBM 704 punched-card processing installation. In the foreground, a woman operates a card reader.

This punched-card processing system worked fine for decades and showed no signs of decline; quite the contrary. Nevertheless, in the late 1950s a fringe subculture of computer researchers began to argue that the whole workflow had to change: the computer, they claimed, was best used interactively. Instead of leaving it a task and coming back later for the results, the user should communicate directly with the machine and call on its capabilities on demand. In Capital, Marx described how industrial machines, which people merely tend, displaced tools that people controlled directly. Computers, however, began their existence as machines. Only later did some of their users turn them into tools.

And this transformation did not happen in data centers such as those of the US Census Bureau, the MetLife insurance company, or the United States Steel Corporation (all of which were among the first buyers of UNIVAC, one of the first commercially available computers). An organization that saw the computer chiefly as the most efficient and reliable way to run the weekly payroll was hardly going to let someone disrupt that processing by playing around with the machine. The value of being able to sit at a console and simply try things out on a computer was clearer to scientists and engineers, who wanted to study a problem, approach it from every angle until its weak point was found, and switch quickly between thinking and doing.

So such ideas came from researchers. But the money to pay for such wasteful use of a computer did not come from their department heads. A new subculture (one might even say a cult) of interactive computing grew out of a productive partnership between the military and elite universities in the United States. This mutually beneficial cooperation began during the Second World War. Atomic weapons, radar, and other wonder weapons had taught the military leadership that the seemingly incomprehensible pursuits of scientists could be of incredible importance to the armed forces. This comfortable relationship lasted about a generation before collapsing amid the political upheavals of another war, in Vietnam. But in the meantime, American scientists had access to huge sums of money, were left almost entirely alone, and could do nearly anything that could even remotely be associated with national defense.

The justification for interactive computing began with a bomb.

Whirlwind and SAGE


On August 29, 1949, a Soviet research team successfully carried out the first Soviet nuclear weapons test at the Semipalatinsk test site. Three days later, a US reconnaissance plane flying over the northern Pacific detected traces of radioactive material in the atmosphere left behind by the test. The USSR had the bomb, and its American rivals now knew it. Tension between the two superpowers had persisted for more than a year, ever since the USSR cut off the land routes to the Western-controlled sectors of Berlin in response to plans to restore Germany's economic strength.

The blockade ended in a stalemate in the spring of 1949, thanks to a massive operation mounted by the West to supply the city from the air. Tensions eased somewhat. Still, American generals could not ignore the existence of a potentially hostile power with access to nuclear weapons, especially given the ever-increasing size and range of strategic bombers. The United States had a chain of aircraft-detection radar stations built along the Atlantic and Pacific coasts during World War II. But they used outdated technology, did not cover the northern approaches through Canada, and were not tied together by any central system for coordinating air defense.

To remedy the situation, the Air Force (an independent branch of the US military since 1947) convened the Air Defense Systems Engineering Committee (ADSEC). It is remembered as the "Valley Committee," after its chairman, George Valley. He was an MIT physicist and a veteran of the wartime Rad Lab radar research group, which after the war became the Research Laboratory of Electronics (RLE). The committee studied the problem for a year, and Valley's final report was released in October 1950.

One might have expected such a report to be a dreary bureaucratic muddle ending in a carefully worded, conservative proposal. Instead, the report turned out to be a fascinating piece of creative argumentation containing a radical and risky plan of action. This owed an obvious debt to another MIT professor, Norbert Wiener, who argued that the study of living beings and machines could be unified into a single discipline, cybernetics. Valley and his co-authors began from the assumption that the air defense system is a living organism, not metaphorically but literally. The radar stations serve as its sense organs; the interceptors and missiles are the effectors through which it acts on the world. They operate under the control of a director, who uses the information from the senses to decide on the necessary actions. They argued further that a director composed entirely of humans would be unable to stop hundreds of approaching aircraft across millions of square kilometers within a few minutes, so as many of the director's functions as possible should be automated.

The most unusual of their conclusions was that the director would best be automated by means of digital electronic computers that could take over part of the human decision-making: analyzing incoming threats, directing weapons against those threats (computing interception courses and transmitting them to the fighters), and perhaps even devising an optimal response strategy. It was not at all obvious at the time that computers were suited to such a purpose. In the entire United States there were then exactly three working electronic computers, and none of them came close to the reliability requirements of a military system on which millions of lives would depend. They were simply very fast, programmable number crunchers.

Valley, however, had reason to believe in the possibility of a real-time digital computer, because he knew about the Whirlwind project. It had begun during the war in MIT's Servomechanisms Laboratory under a young graduate student, Jay Forrester. His original goal was to build a general-purpose flight simulator that could be reconfigured to support new aircraft models without being rebuilt from scratch each time. A colleague convinced Forrester that his simulator should use digital electronics to process the input parameters from the pilot and generate the output states for the instruments. Gradually, the attempt to build a high-speed digital computer outgrew and eclipsed the original goal. The flight simulator was forgotten, the war that had prompted its development was long over, and the oversight committee at the Office of Naval Research (ONR) gradually grew disillusioned with the project because of its ever-growing budget and constantly slipping completion date. In 1950, ONR drastically cut Forrester's budget for the following year, intending to shut the project down entirely after that.

For George Valley, however, Whirlwind was a revelation. The actual Whirlwind computer was still far from working order. But once it did work, it would be a computer that was not merely a mind without a body; it would be a computer with sense organs and effectors. An organism. Forrester was already considering plans to expand the project into the country's central military command-and-control system. To the computer experts at ONR, who considered computers suitable only for solving mathematical problems, the idea seemed grandiose and absurd. But it was exactly the idea Valley had been looking for, and he arrived just in time to rescue Whirlwind from oblivion.

Despite its great ambitions (or perhaps because of them), the Valley report convinced the Air Force command, and it launched an extensive new research and development program, first to work out how to build an air defense system around digital computers, and then actually to build it. The Air Force partnered with MIT for the basic research, a natural choice given the presence of Whirlwind and RLE at the Institute, as well as the history of successful cooperation on air defense going back to Rad Lab and World War II. They called the new effort "Project Lincoln" and built a new research facility, Lincoln Laboratory, at Hanscom Field, 25 km northwest of Cambridge.

The Air Force named the computerized air defense project SAGE, a typically odd military acronym standing for Semi-Automatic Ground Environment. Whirlwind was to serve as the test computer, proving the viability of the concept before the hardware went into full-scale production and deployment, a responsibility entrusted to IBM. The production version of the Whirlwind computer that IBM was to build received the far less memorable name AN/FSQ-7 ("Army-Navy Fixed Special eQuipment"; next to that acronym, SAGE looks positively precise).

By the time the Air Force had finalized the plans for the SAGE system in 1954, it comprised assorted radar installations, air bases, and air defense weapons, all controlled from twenty-three control centers: massive bunkers designed to withstand bombardment. To equip those centers, IBM would need to supply forty-six computers rather than twenty-three, at a cost to the military of many billions of dollars. The reason was that the company still used vacuum tubes in the logic circuits, and tubes burned out like incandescent bulbs. Any one of the tens of thousands of tubes in a running computer could fail at any moment. Obviously it would be unacceptable to leave an entire sector of the country's airspace unprotected while technicians made repairs, so a spare machine had to be kept on hand.


A SAGE control center at Grand Forks Air Force Base in North Dakota, with two AN/FSQ-7 computers.

In each control center, dozens of operators sat in front of cathode-ray tube screens, each tracking a portion of the airspace sector.



The computer tracked any potential airborne threats and drew them as tracks on the screen. An operator could use a light gun to call up additional information about a track and issue commands to the defense system, and the computer turned those commands into a printed message for an available missile battery or air force base.



The interactivity virus


Given the nature of the SAGE system, which involved direct real-time interaction between human operators and a digital computer through CRT screens, light guns, and consoles, it is not surprising that Lincoln Laboratory raised the first cohort of champions of interactive computing. The laboratory's entire computing culture existed in an isolated bubble, cut off from the batch-processing norms taking shape in the commercial world. Researchers used Whirlwind and its descendants by reserving blocks of time during which they had exclusive access to the machine. They grew used to employing their hands, eyes, and ears to interact with it directly through switches, keyboards, brightly lit screens, and even a loudspeaker, with no paper intermediaries.

This strange little subculture spread to the outside world like a virus, through direct physical contact. And if we treat it as a virus, then a young man named Wesley Clark should be called patient zero. Clark had left graduate school in physics at Berkeley in 1949 to become a technician at a nuclear weapons plant, but he did not like the work. After reading several articles in computer journals, he began looking for a way to break into what seemed a new and exciting field full of untapped potential. He learned from an advertisement that Lincoln Laboratory was recruiting computer specialists, and in 1951 he moved to the East Coast to work for Forrester, who had by then become head of the Digital Computer Laboratory.


Wesley Clark showing off his LINC biomedical computer, 1962

Clark joined the Advanced Development Group, a laboratory unit that embodied the relaxed state of military-university cooperation at the time. Although the group was technically part of the Lincoln Laboratory universe, it existed in a bubble within a bubble, isolated from the day-to-day needs of the SAGE project and free to pursue any computing direction that could be tied, however loosely, to air defense. Its main task in the early 1950s was to build the Memory Test Computer (MTC), intended to demonstrate the viability of a new, highly efficient and reliable method of storing digital information, magnetic-core memory, which would replace the finicky CRT-based memory used in Whirlwind.

Since the MTC had no users other than its creators, Clark had full access to the computer for many hours every day. Clark had become interested in the then-fashionable cybernetic mixture of physics, physiology, and information theory thanks to his colleague Belmont Farley, who was in contact with a group of biophysicists at RLE in Cambridge. Clark and Farley spent long hours at the MTC building software models of neural networks to study the properties of self-organizing systems. Out of these experiments Clark began to distill certain axiomatic principles of computing from which he never deviated. In particular, he came to believe that "user friendliness is the most important design factor."

In 1955, Clark teamed up with Ken Olsen, one of the MTC's developers, to draw up a plan for a new computer that could pave the way for the next generation of military control systems. By using a very large magnetic-core memory for storage and transistors for the logic, it could be made far more compact, reliable, and powerful than Whirlwind. Initially they proposed a design called the TX-1 (Transistorized and eXperimental computer, a far clearer name than AN/FSQ-7). But Lincoln Laboratory's management rejected the project as too expensive and risky. Transistors had appeared on the market only a few years earlier, and very few computers had been built with transistor logic. So Clark and Olsen came back with a smaller version of the machine, the TX-0, which was approved.


TX-0

The TX-0's usefulness as a tool for running military installations was the pretext for its creation, but that interested Clark far less than the chance to advance his ideas about computer design. In his view, interactive computing had ceased to be a fact of life peculiar to Lincoln Laboratory and had become the new norm: the right way to build and use computers, especially for scientific work. He gave MIT biophysicists access to the TX-0, even though their work had nothing to do with air defense, and let them use the machine's visual display to analyze electroencephalograms from sleep studies. And no one objected.

The TX-0 was successful enough that in 1956 Lincoln Laboratory approved a full-scale transistorized computer, the TX-2, with a massive two-million-bit memory. The project would take two years to complete. After that, the virus would escape the lab. Once the TX-2 was finished, the laboratory would no longer need the early prototype, so it agreed to lend the TX-0 to RLE in Cambridge. It was installed on the second floor, above the batch-processing computer center, and it immediately infected the computer enthusiasts and professors of the MIT campus, who began vying for time slots in which they could take complete control of the machine.

It was already clear that it was nearly impossible to write a computer program correctly on the first try. Moreover, researchers tackling a new problem often did not understand at first what the correct behavior should even be. And to get results from the computer center you had to wait hours, or even until the next day. For dozens of newly minted campus programmers, the ability to climb a flight of stairs, find a bug and fix it on the spot, try a new approach and immediately see improved results, was a revelation. Some used their time on the TX-0 for serious science or engineering projects, but the joy of interactivity also attracted more playful souls. One student wrote a text editing program he called "Expensive Typewriter." Another followed his example and wrote an "Expensive Desk Calculator," which he used to do his numerical analysis homework.


Ivan Sutherland demonstrates his Sketchpad program on the TX-2.

Meanwhile, Ken Olsen and another TX-0 engineer, Harlan Anderson, impatient with the slow progress of the TX-2 project, decided to bring a small interactive computer for scientists and engineers to market. They left the laboratory to found Digital Equipment Corporation, setting up shop in a former textile mill on the Assabet River, ten miles west of Lincoln. Their first computer, the PDP-1 (released in 1961), was essentially a clone of the TX-0.

The TX-0 and Digital Equipment Corporation began spreading the good news of a new way of using computers beyond Lincoln Laboratory. And yet, for the time being, the interactivity virus remained geographically confined to eastern Massachusetts. That, too, would soon change.

Source: https://habr.com/ru/post/452030/

