
Internet History: ARPANET - The Subnet




With ARPANET, Robert Taylor and Larry Roberts set out to unite many different research institutes, each with its own computer for whose software and hardware it was fully responsible. The software and hardware of the network itself, however, lay in a hazy middle region and belonged to none of them. Between 1967 and 1968, Roberts, head of the networking project at the Information Processing Techniques Office (IPTO), had to determine who should build and maintain the network, and where the boundary between the network and the institutions should lie.

Skeptics


The problem of how to structure the network was at least as political as it was technical. ARPA's research directors generally disapproved of the ARPANET idea. Some showed a clear unwillingness to join the network at any point; few were enthusiastic. Each center would have to make a serious effort to let others use its very expensive and very rare computer. Granting such access had obvious drawbacks (the loss of a valuable resource), while its potential advantages remained vague.

The same skepticism about resource sharing had sunk a University of California, Los Angeles networking project several years earlier. In this case, however, ARPA had far more leverage: it paid directly for all these valuable computing resources and kept its hand on the purse strings of the associated research programs. And although no direct threats were made, no "or else" voiced, the situation was perfectly clear: one way or another, ARPA was going to build its network to unite the machines that, in practice, still belonged to it.

Matters came to a head at a meeting of the principal investigators in Ann Arbor, Michigan, in the spring of 1967. Roberts presented his plan for a network connecting the various computers at each of the centers. He announced that each director would outfit his local computer with special networking software that it would use to dial up other computers over the telephone network (this was before Roberts had learned of the idea of packet switching). The response was controversy and apprehension. Among those least inclined to embrace the idea were the largest centers, which already had major IPTO-sponsored projects underway, chief among them MIT. MIT researchers, flush with the money funding the Project MAC time-sharing system and the Artificial Intelligence Laboratory, saw no advantage in sharing their hard-won resources with riffraff from the West.

And, status aside, each center cherished its own way of doing things. Each had its own unique software and equipment, and it was hard to see how they could manage even the simplest connection with one another, let alone real collaboration. Merely writing and running networking programs for their machines would consume a considerable amount of their time and computing resources.

It was ironic, yet strangely fitting, that Roberts's solution to these social and technical problems came from Wes Clark, a man who disliked both time-sharing and networks. Clark, a champion of the quixotic idea of a personal computer for every person, had no intention of sharing computing resources with anyone, and kept his own campus, Washington University in St. Louis, off the ARPANET for many years to come. So it is no surprise that it was he who devised a network design that imposed no significant load on the computing resources of the individual centers and did not require each of them to spend effort creating special software.

Clark proposed placing at each center a minicomputer that would handle all the functions directly related to the network. Each center only had to figure out how to connect to its local helper (later called an Interface Message Processor, or IMP), which would then send a message along the proper route so that it reached the corresponding IMP at the receiving site. In effect, he proposed that ARPA hand out additional free computers to each center, computers that would take on most of the burden of the network. At a time when computers were still rare and very expensive, the proposal was audacious. Just then, however, minicomputers had begun to appear, costing only a few tens of thousands of dollars instead of several hundred thousand, and so the proposal proved feasible (in the end, each IMP cost $45,000, or about $314,000 in today's money).

The IMP approach, besides alleviating the directors' concerns about the network's load on their computing power, also resolved another political problem, this one ARPA's own. Unlike the agency's other projects of the time, the network was not confined to a single research center where it could be led by a single director. And ARPA itself had no capacity to build and manage a large-scale technical project directly; it would have to hire outside companies to do so. The presence of the IMPs drew a clear line of responsibility between the contractor-controlled network and the locally controlled computers. The contractor would control the IMPs and everything between them, while the centers would remain responsible for the hardware and software of their own machines.

IMP


Roberts then needed to choose that contractor. Licklider's old-fashioned approach of simply soliciting a proposal from a favorite researcher would not do in this case. The project had to be put out for public bidding, like any other government contract.

Not until July 1968 was Roberts able to settle the final details of the request for bids. About six months had passed since the last technical piece of the puzzle fell into place, when packet switching was discussed at the Gatlinburg conference. The two largest computer makers, Control Data Corporation (CDC) and International Business Machines (IBM), declined to participate at once, since they had no inexpensive minicomputers suitable for the role of IMP.


Honeywell DDP-516

Among the remaining bidders, most chose Honeywell's new DDP-516, though some leaned toward Digital's PDP-8. Honeywell's offering was particularly attractive because it had an I/O interface designed specifically for real-time systems, for applications such as industrial equipment control. Communications, of course, demanded comparable precision: if a computer missed an incoming message while busy with other work, there was no second chance to catch it.

Toward the end of the year, after seriously considering Raytheon, Roberts entrusted the task to a growing Cambridge firm, Bolt, Beranek and Newman. The family tree of interactive computing was by that time extremely inbred, and Roberts could well be accused of nepotism for choosing BBN. Licklider had brought interactive computing to BBN before becoming the first director of IPTO, sowing the seeds of his Intergalactic Network and mentoring people like Roberts. Without Lick's influence, ARPA and BBN would have had neither the interest nor the competence to take on the ARPANET project. Moreover, a key part of the team BBN assembled to build the IMP-based network came directly or indirectly from Lincoln Laboratory: Frank Heart (team leader), Dave Walden, Will Crowther, and Severo Ornstein. It was at the Laboratory that Roberts himself had spent his graduate years, and where a chance encounter with Wes Clark had sparked Lick's interest in interactive computing.

But however much this might have looked like a conspiracy, the BBN team was in fact as well suited to real-time work as the Honeywell 516 itself. At Lincoln they had worked on computers attached to radar systems, another example of an application where the data will not wait until the computer is ready. Heart, for example, had worked on the Whirlwind computer as a student in the 1950s, joined the SAGE project, and spent a total of fifteen years at Lincoln Laboratory. Ornstein had worked on the SAGE cross-telling protocol, which passed radar tracking data from one computer to another, and later on Wes Clark's LINC, a computer designed to help scientists in the laboratory work with their data directly, online. Crowther, now best known as the author of the text game Colossal Cave Adventure, spent ten years building real-time systems, including the Lincoln Experimental Terminal, a mobile satellite-communications station with a small computer that steered the antenna and processed incoming signals.


The IMP team at BBN. Frank Heart is the middle-aged man in the center; Ornstein stands at the right, next to Crowther.

The IMP was responsible for understanding and managing the routing and delivery of messages from one computer to another. A computer could hand its local IMP up to 8,000 bytes at a time, along with the destination address. The IMP then sliced the message into smaller packets, which traveled independently to the destination IMP over the 50-kbps lines leased from AT&T. The receiving IMP reassembled the message from the pieces and delivered it to its own computer. Each IMP kept a table recording which of its neighbors offered the fastest route to every possible destination. The table was updated dynamically from information received from those neighbors, including word that a neighbor was unreachable (in which case the delay in that direction was treated as infinite). To meet the speed and throughput requirements Roberts had set for all this processing, Heart's team produced code that was a work of art: the entire operating program for the IMP occupied only 12,000 bytes; the part that handled the routing tables, only 300.
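The routing logic described above is essentially an early distance-vector scheme. A minimal sketch, under simplified assumptions (the function, variable, and node names are illustrative, not BBN's actual code):

```python
import math

def rebuild_table(link_delay, neighbour_reports):
    """Rebuild an IMP's routing table from its neighbours' reports.

    link_delay: {neighbour: delay of the direct line to that neighbour};
                an unreachable neighbour is modelled as math.inf.
    neighbour_reports: {neighbour: {destination: that neighbour's own
                        estimated delay to the destination}}.
    Returns {destination: (best_next_hop, total_estimated_delay)}.
    """
    table = {}
    for neighbour, report in neighbour_reports.items():
        for dest, delay in report.items():
            total = link_delay.get(neighbour, math.inf) + delay
            if dest not in table or total < table[dest][1]:
                table[dest] = (neighbour, total)
    return table

# Toy example: UCLA's IMP hears from its neighbours SRI and UCSB.
links = {"SRI": 1, "UCSB": 1}
reports = {
    "SRI":  {"UTAH": 2, "UCSB": 1},
    "UCSB": {"UTAH": 5, "SRI": 1},
}
table = rebuild_table(links, reports)
# Traffic for UTAH goes via SRI (total delay 1 + 2 = 3), not UCSB (1 + 5 = 6).
```

Marking a dead line as infinite delay makes all routes through it lose every comparison, so traffic automatically drains toward the surviving neighbors, just as the text describes.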

The team also took several precautions, since it was impractical to station a support crew at every IMP in the field.

First, they equipped each machine with facilities for remote monitoring and control. Besides the automatic restart that kicked in after every power failure, the IMPs were programmed so that they could restart their neighbors by sending them fresh copies of the operating software. To help with debugging and analysis, an IMP could, on command, begin taking snapshots of its current state at regular intervals. An IMP could also tag each packet for tracing, allowing more detailed logs of its operation to be produced. With all these capabilities, many problems could be resolved directly from the BBN office, which served as a control center with a view of the status of the entire network.

Second, they asked Honeywell for the military version of the 516, fitted with a heavy case that protected it from vibration and other hazards. BBN mainly wanted this as a "keep out" sign for curious graduate students, but at the same time nothing could have drawn the boundary between the local computers and the BBN-controlled subnet more clearly than this armored shell.

The first of these reinforced cabinets, about the size of a refrigerator, arrived at the University of California, Los Angeles (UCLA) on August 30, 1969, just eight months after BBN received its contract.

Hosts


Roberts decided to start the network with four hosts: besides UCLA, an IMP would be installed up the coast at the University of California, Santa Barbara (UCSB), another at the Stanford Research Institute (SRI) in northern California, and the last at the University of Utah. These were all second-tier West Coast institutions, trying in one way or another to prove themselves in scientific computing. The family ties kept working: two of the supervising scientists, Len Kleinrock of UCLA and Ivan Sutherland of the University of Utah, were also old colleagues of Roberts from Lincoln Laboratory.

Roberts gave two of the hosts additional network-related roles. Back at the 1967 directors' meeting, Doug Engelbart of SRI had volunteered to set up a network information center. Using SRI's sophisticated information-retrieval system, he intended to compile the ARPANET's directory: an organized collection of information about all the resources available at the various nodes, accessible to everyone on the network. Given Kleinrock's expertise in analyzing network traffic, Roberts designated UCLA as the Network Measurement Center (NMC). For Kleinrock and UCLA, ARPANET was to be not only a practical tool but an experiment, whose data could be extracted and generalized to improve the design of the network and of its successors.

But more important to ARPANET's development than either of these appointments was a more informal and diffuse community of graduate students called the Network Working Group (NWG). The IMP subnet let any host on the network reliably deliver a message to any other; the NWG's goal was to develop a common language, or set of languages, that the hosts could use to talk to one another. These were called the "host protocols." The name "protocol," borrowed from diplomacy, had first been applied to networks in 1965 by Roberts and Tom Marill, to describe both the data format and the algorithmic steps that determine how two computers communicate with each other.

The NWG, under the informal but de facto leadership of Steve Crocker of UCLA, began meeting regularly in the spring of 1969, some six months before the first IMP arrived. Crocker was born and raised in the Los Angeles area and attended Van Nuys High School, where he was a contemporary of two future NWG colleagues, Vint Cerf and Jon Postel. To record the results of some of the group's meetings, Crocker created one of the cornerstones of ARPANET (and later Internet) culture: the request for comments (RFC). His RFC 1, published on April 7, 1969 and distributed to all the future ARPANET sites by ordinary mail, collected the group's early discussions on the design of the host-protocol software. In RFC 3, Crocker went on to define the process for all future RFCs in deliberately loose terms:

Notes are encouraged to be timely rather than polished. Philosophical positions without examples or other specifics, specific suggestions or implementation techniques without introductory or background explication, and explicit questions without any attempted answers are all acceptable. The minimum length for a NWG note is one sentence. We hope to promote the exchange and discussion of considerably less than authoritative ideas.

Like the request for quotation (RFQ), the standard method of soliciting bids for government contracts, the RFC welcomed any response, but unlike the RFQ it also invited dialogue. Anyone in the distributed NWG community could submit an RFC, using the opportunity to debate, question, or criticize a previous proposal. Of course, as in any community, some opinions carried more weight than others, and in the early days the views of Crocker and his core group of associates enjoyed great authority. In July 1971, Crocker left UCLA, while still a graduate student, to become a program manager at IPTO. With key ARPA research grants at his disposal, he wielded, deliberately or not, undeniable influence.


Jon Postel, Steve Crocker, and Vint Cerf, schoolmates and NWG colleagues, in later years

The NWG's original plan called for two protocols. Remote login (telnet) would let one computer act as a terminal connected to the operating system of another, extending the interactive environment of any time-sharing system on ARPANET across thousands of kilometers to any user of the network. The file transfer protocol, FTP, would let one computer transfer a file, such as a useful program or data set, to or from another system's storage. At Roberts's insistence, however, the NWG added to these a third, more basic protocol, one that established the underlying connection between two hosts. It was called the Network Control Program (NCP). The network now had three layers of abstraction: the packet subnet run by the IMPs at the bottom, host-to-host connectivity provided by NCP in the middle, and the application protocols (FTP and telnet) on top.
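The three layers can be pictured as nested wrappers, each relying only on the one below it. A toy sketch (all names and the packet size are illustrative assumptions; nothing here is taken from the real NCP or IMP specifications):

```python
PACKET_SIZE = 128  # arbitrary toy value, not the real ARPANET packet size

def subnet_deliver(packets):
    """Bottom layer: the IMP subnet forwards packets and reassembles them."""
    return b"".join(packets)

def ncp_send(message):
    """Middle layer: an NCP-style host connection; splits a message into
    packets and hands them to the subnet."""
    packets = [message[i:i + PACKET_SIZE]
               for i in range(0, len(message), PACKET_SIZE)]
    return subnet_deliver(packets)

def telnet_send(line):
    """Top layer: an application protocol that simply rides on NCP."""
    return ncp_send(line.encode()).decode()

reply = telnet_send("login: alice")  # round-trips through all three layers
```

The point of the layering is the same as in the text: telnet and FTP never see packets, and NCP never cares what application data it is carrying.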

Failure?


Not until August 1971 was NCP fully defined and implemented across the network, which by then comprised fifteen nodes. Implementations of the telnet protocol soon followed, and the first stable definition of FTP appeared a year later, in the summer of 1972. Assessing the state of ARPANET at that point, a couple of years after its first launch, one could well consider it a failure compared to the resource-sharing dream that Licklider had imagined and his protégé, Robert Taylor, had set out to realize.

For a start, it was simply hard to find out what resources on the network were available to use. The network information center relied on voluntary participation: each node was supposed to supply up-to-date information on the availability of its data and programs. Although everyone would benefit from such contributions, no individual site had much incentive to advertise its resources and provide access to them, let alone to supply current documentation or consulting. So the NIC failed to become the network's directory. Probably its most important function in those early years was to host, electronically, the growing collection of RFCs.

Even if, say, Alice at UCLA knew of a useful resource at MIT, a more serious obstacle stood in the way. Telnet would get Alice to MIT's login prompt, but no further. For Alice actually to use some program at MIT, she first had to make arrangements with MIT offline to open an account on their computer, which usually meant filling out paper forms at both institutions and signing a funding agreement to pay for the use of MIT's computing resources. And because of the incompatibilities between the hardware and system software of different nodes, transferring files often made little sense, since you could not run a remote machine's programs on your own computer anyway.

Ironically, the most significant successes of resource sharing came not in interactive time sharing, for which ARPANET had been built, but in old-fashioned, non-interactive data processing. UCLA put its idle IBM 360/91 batch-processing machine on the network and offered telephone consulting to support remote users, which brought the computer center considerable income. The ARPA-sponsored ILLIAC IV at the University of Illinois and the Datacomputer at the Computer Corporation of America in Cambridge also found remote customers via ARPANET.

But none of these projects came close to using the network fully. In the fall of 1971, with fifteen hosts online, the network as a whole carried an average of 45 million bits a day through each node, or 520 bits per second on lines leased from AT&T with a capacity of 50,000 bits per second. Moreover, much of this traffic was test traffic generated by UCLA's network measurement center. Apart from the enthusiasm of a few early users (such as Steve Carr, who used the University of Utah's PDP-10 every day from Palo Alto), not much was happening on ARPANET. From the modern point of view, perhaps the most interesting event was the launch of the Project Gutenberg digital library in December 1971, organized by Michael Hart, a student at the University of Illinois.
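The utilization figure is worth checking: 45 million bits spread over a day works out to the quoted 520 bit/s, a tiny fraction of the line capacity (this assumes, as the numbers imply, that the 45 million bits is a daily average):

```python
bits_per_day = 45_000_000        # average traffic through each node
seconds_per_day = 24 * 60 * 60   # 86,400 seconds in a day
avg_rate = bits_per_day / seconds_per_day  # average bit rate, ~520.8 bit/s
line_capacity = 50_000           # bit/s of an AT&T leased line
utilisation = avg_rate / line_capacity     # ~0.0104, i.e. about 1%
```

So even a single 50-kbps line was running at roughly one percent of capacity, which underlines how little real traffic the early network carried.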

But ARPANET was soon saved from the charge of stagnation by a third application protocol: a little thing called email.

Further reading


• Janet Abbate, Inventing the Internet (1999)
• Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late: The Origins of the Internet (1996)

Source: https://habr.com/ru/post/461177/
