3 Revisiting the Origins: The Internet and its Early Governance


Abstract

This chapter sets the stage for the long-term analysis of the evolution of the field. In the early days the Internet was a rather homogeneous domain, closely linked to computer science and networking experiments. The rules designed for its management were function- and efficiency-driven. Starting in 1983, different forms of governance, combining public and private initiatives, began to take shape, largely around an active community of professionals in the ARPA network. Until the expansion and commercialization of the Internet in the mid-1990s, the predominant governance route was that of standards and protocols making networks interoperable. In a path-dependent trajectory, Internet services remained exempted from regulation.


The universe of services, business models, and innovations built on the Internet was—and continues to be—made possible by the technical architecture of the network, as well as the political commitment to its development. Both of these are tightly linked to the early history of the Internet, which is explored in this chapter. The birth of the Internet was the result of a series of relatively informal interactions, as part of an academic effort mainly driven by computer scientists contracted to work for the US Advanced Research Projects Agency (ARPA) 1 in both technical and leadership positions. The early days of the Internet encapsulate much more than prima facie efforts to create a physical network of computers able to communicate with each other. They also elucidate the origins of governance activities in this field. Various functions, performed by different coordination bodies, amounted to direct or indirect decision-making with global implications, right from the start. All of these pre-date the very concept of ‘Internet governance’ (Abbate 1999), and are key to understanding how this field of inquiry emerged.

Contrary to how it may be portrayed nowadays, the story of how the Internet came about is not without controversy. As Bing notes, despite its recent birth, the history of the Internet is ‘shrouded in myths and anecdotes’ (2009, 8) and partisan accounts have become widespread. Goldsmith and Wu talk about the Internet pioneers ‘in effect building strains of American liberalism, even a 1960s idealism, into the Universal language of the Internet’ (2006, 23). McCarthy refers to the ‘creation of an Internet biased towards a free flow of information as the product of a culturally specific American context’ (2015, 92). In this chapter, I explore the lineage of the Internet through constructivist lenses. After outlining the heterogeneity of ideas that stood at the basis of creating an interconnected network of computers, the development of problem-solving working groups is explored, followed by an analysis of the political environment that allowed for this network’s expansion. The role of the US government in subsidizing developments and encouraging the privatization of the Internet in the mid-1990s is discussed subsequently. For Abbate, the history of the Internet is ‘a tale of collaboration and conflict among a remarkable variety of players’ (1999, 3), but it is also a tale of informal governance, with key individuals and networks at the forefront, as presented here.

The global network of networks known as the Internet came out of a subsidized project by (D)ARPA and later by the National Science Foundation (NSF), which funded the ‘NSFNET’, the basis for the current backbone of the Internet. Essential Internet protocols still in use today, including the File Transfer Protocol and TCP/IP, date all the way back to the ARPANET experiment. Developments like the World Wide Web and the Border Gateway Protocol make the Internet a global network able to connect different types of systems using Internet Protocol datagrams. From laying the infrastructure to the content of web applications, the Internet has, from the start, been subject to various forms of governance, in addition to being an object of contention internationally and domestically. The latter is further illustrated by the competing projects of the different US agencies, in particular DARPA and the NSF.

To reconstruct the political dimensions of the debates around the creation and design of the Internet, I draw on a multiplicity of sources and historical accounts on both sides of the Atlantic (including scholarly publications, original documents, and personal conversations) in an attempt to provide a full(er) picture of the tensions between the different technological camps and the type of action they structured. In this chapter, I divide the Internet’s early history into two parts: first, I explore the pre-Internet developments that established the structural conditions necessary for a computer networking experiment. Second, I analyse the TCP/IP-related developments, the distinguishing protocol also known as the ‘Internet’ and delineate its different phases, from ARPANET to NSFNET, looking at the early governance practices and formalized arrangements.

Setting the Stage: Pre-Internet Developments

In the 1970s, humankind started to fulfil a long-time aspiration: a global communication network sharing, storing, and sorting the largest amount of information ever amassed. Scientists on both sides of the Atlantic were essential to the development of the features that constitute the modern Internet. Military and political support, extensive funding, light-touch management, and long-term vision were all required to make this dream a reality. In 1837, the British mathematician Charles Babbage proposed a mechanical general-purpose computer with integrated memory and conditional branching, laying the foundations for modern computers. The invention, which was program-controlled by punched cards, was called the Analytical Engine and attracted great interest in Europe, but never enough funding to be completed. Working on this with Babbage, Ada Lovelace published in 1843 the first algorithm for implementation on the engine. To show the full potential of the programming capacities of the machine, the algorithm was designed to compute Bernoulli numbers, but it was never tested during her lifetime.

Among the first to envision a central repository of human knowledge was the British futurist and science fiction author H. G. Wells (1866–1946), but the list of pioneer thinkers is long and spans various disciplines. The American librarian and educator Melvil Dewey (1851–1931) proposed a system of classification that revolutionized and unified the cataloguing of books across the network of US libraries. Still widely deployed around the world, the Dewey system uses a topic-based decimal system with further subdivisions. Card indexing for easily finding references in book storage and, later on, the idea of a ‘universal book’ are credited to the Belgian Paul Otlet (1868–1944), who elaborated on this in his 1934 ‘Traité de documentation: le livre sur le livre, théorie et pratique’. Together with Henri La Fontaine, he created the Universal Bibliographic Repertory in 1895 and later worked with Robert Goldschmidt to create an encyclopaedia printed on microfilm.

Technical developments during the Second World War also played a crucial role in the birth of the Internet. Considered the father of the modern computer, the English mathematician and cryptanalyst Alan Turing, drawing inspiration from Babbage, conceived of a universal machine capable of performing any programmable task from stored instructions, which he described in his 1936 paper on computable numbers. Working independently, the German Konrad Zuse completed the first working programmable computer, the Z3, in 1941. Turing’s theoretical work laid the foundations for the electromechanical machine known as ‘the Bombe’, employed by the British from 1940 onwards to decipher the encrypted messages of the German military.

As the war was drawing to a close, in July 1945 Vannevar Bush, then director of the US Office of Scientific Research and Development (the agency behind the Manhattan Project), called for a post-war research agenda in information management. After coordinating the work of more than 6,000 American scientists on transferring advancements from science to warfare, Bush pushed for a concerted effort to make the rapidly growing store of knowledge widely and easily accessible. In his 1945 essay ‘As We May Think’, published in the Atlantic Monthly, he elaborates on his idea of a ‘memex’, a document management system very similar to today’s personal computer.

Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and, to coin one at random, ‘memex’ will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory. (Bush 1945)

To address the concerns of a potential nuclear war, US scientists were preoccupied with finding a solution for long-distance telecommunication within the Department of Defense, primarily for linking launch control facilities to the Strategic Air Command. The Soviet launch of Sputnik I in 1957 brought new impetus for funding technological research that could better position the United States in space exploration and military command. In 1958, President Eisenhower authorized the creation of two special agencies for space research: the civilian National Aeronautics and Space Administration (NASA) and, under the Department of Defense, ARPA. ARPA’s original mandate—with an initial budget of $520 million—was ‘to prevent technological surprise like the launch of Sputnik, which signalled that the Soviets had beaten the US into space’, and thus fund universities and research institutions to conduct complex research on science and technology useful for the defence industry, though not always explicitly linked to military applications.

ARPA, Internetworking, and the Military Agenda

ARPA’s focus on space research faded out shortly after its establishment and the agency began working on computer technology. As Stephen J. Lukasik, Deputy Director and Director of DARPA between 1967 and 1974, later explained:

The goal was to exploit new computer technologies to meet the needs of military command and control against nuclear threats, achieve survivable control of US nuclear forces, and improve military tactical and management decision making. (Lukasik 2011)

For the first years of ARPA’s operation, efforts were concentrated on computer-simulated war games. This changed when Joseph C. R. (‘Lick’) Licklider (1915–90) joined ARPA in 1962 to lead its newly established Information Processing Techniques Office (IPTO). Licklider, a Harvard-trained psychologist and computer scientist, published in 1960 his famous paper ‘Man–Computer Symbiosis’, proposing technology that would ‘enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs’. In 1965, Licklider’s commissioned study ‘Libraries of the Future’ introduced the concept of digital libraries as ‘procognitive systems’. Building on Bush’s memex work, Licklider noted:

the concept of a ‘desk’ may have changed from passive to active: a desk may be primarily a display-and-control station in a telecommunication—telecomputation system—and its most vital part may be the cable (‘umbilical cord’) that connects it, via a wall socket, into the procognitive utility net. (Licklider 1965, 33)

Under Licklider’s lead at IPTO, the research focus shifted to time-sharing, computer language, and computer graphics, and cooperation with computer research centres around the United States was prioritized. Licklider referred to this cooperation as the ‘Intergalactic Computer Network’—later shortened to InterNet. For its implementation, he reached out to a private company based in Boston—Bolt, Beranek, and Newman (BBN)—to develop network technology. 2 Within the span of nine months, BBN, under the lead of Frank Heart, built a network of four computers, each operating on a different system and using the Interface Message Processors (IMPs). Licklider knew BBN well, having served as its vice-president in 1957. To a large extent, the digital direction chosen by BBN was his idea: ‘If BBN is going to be an important company in the future, it must be in computers’ (Beranek 2005, 10). Frank Heart and Licklider were both alumni of the Lincoln Laboratory. The successors of Licklider at the IPTO were hand-picked from the same academic environment. The first was Ivan Sutherland, who ran the IPTO between 1964 and 1966, when its budget was approximately $15 million (National Research Council 1999, 100). The second was Robert Taylor, who had formerly worked at NASA and headed the office from 1966 until 1969. The third was Lawrence Roberts, who came from MIT’s Lincoln Laboratory to the IPTO in 1966 and later took over its direction.

Working independently, in the early 1960s, Paul Baran at the RAND Corporation in the United States and Donald Davies at the National Physical Laboratory (NPL) in the United Kingdom developed the message block system that set the basis of modern packet switching and dynamic routing, the foundation of the Internet infrastructure today. Packet switching allowed for breaking a message into smaller blocks of data that Davies called ‘packets’ and for routing them separately (‘switching’) via the network, ready to be recomposed by the computer at the receiving end. The significance of this breakthrough was compared to the advent of the circuit-switching system used in the early days of the telephone, which enabled telephone exchanges—with human operators manually connecting calls—to create a single continuous connection between two telephones.
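To make the principle concrete, the following minimal Python sketch models packet switching in the abstract: a message is broken into numbered blocks, the blocks travel independently (here, in shuffled order), and the receiving end reassembles them. It illustrates the idea described above, not any historical protocol; the packet size, field names, and sample message are invented for the example.

```python
# Illustrative sketch only: a toy model of packet switching, not any
# historical protocol. Packet size, field names, and the message are invented.
import random

PACKET_SIZE = 8  # bytes of payload per packet (arbitrary for the demo)

def to_packets(message: bytes) -> list[dict]:
    """Break a message into numbered packets that can travel independently."""
    chunks = [message[i:i + PACKET_SIZE] for i in range(0, len(message), PACKET_SIZE)]
    return [{"seq": n, "total": len(chunks), "payload": chunk}
            for n, chunk in enumerate(chunks)]

def route(packets: list[dict]) -> list[dict]:
    """Simulate independent routing: packets may arrive in any order."""
    shuffled = packets[:]
    random.shuffle(shuffled)
    return shuffled

def reassemble(packets: list[dict]) -> bytes:
    """The receiving host restores the original order using sequence numbers."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    assert len(ordered) == ordered[0]["total"], "a packet was lost in transit"
    return b"".join(p["payload"] for p in ordered)

if __name__ == "__main__":
    message = b"A MESSAGE CROSSES THE NETWORK IN PIECES"
    received = reassemble(route(to_packets(message)))
    assert received == message
    print(received.decode())
```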

Baran’s three-fold categorization of communications networks—centralized, decentralized, and distributed networks—set the stage for future work. Baran conducted the largest part of this work while employed by RAND from 1959 to roughly 1962. Although his ideas—summarized in eleven reports and supported by mathematical evidence and graphs—were never implemented, they were later picked up by ARPA scientists (Shapiro 1967). A key advancement in computing came from MIT in 1961, when the time-sharing mechanism became operational, allowing several users to share the capacities of a single computer, which at the time was a room-sized machine. That same year, MIT’s Leonard Kleinrock completed his PhD thesis on packet switching, proposing the transmission of data by dividing messages into smaller ‘chunks’ lined up at the nodes of a communication system based on two principles: demand access and distributed control (Kleinrock 1962). Originally, advanced work like Kleinrock’s was funded through the division of mathematical sciences; as of 1970, a dedicated theoretical computer science program took shape when the NSF established its Office of Computing Activities. By 1980, the NSF already funded around 400 individual projects in computational theory. Alongside DARPA, it became the main source of funding for computing research during that decade.

ARPANET, its Alternatives and Successors

The first operational packet switching network was ARPANET, a project started with a budget of $1 million at ARPA. A plan to experiment with connecting sixteen sites (ARPANET) across the United States was revealed at the 1967 symposium of the Association for Computing Machinery (ACM) in Tennessee. A year later, ARPA funded its first graduate student conference at the University of Illinois, inviting a few students from each university working on computing research to cross-fertilize ideas. By 1968, to document the work undertaken on the ARPANET, a fast-paced experimentation network, the Network Working Group (NWG) was established under the leadership of Steve Crocker from UCLA. 3 On 7 April 1969, Crocker sent the first Request for Comments (RFC) to the other NWG participants using conventional mail. On 2 September 1969, the BBN Interface Message Processor was connected to UCLA. 4 According to Crocker (2012), the RFC was initially thought of as a temporary tool to share information, independent of the level of formality envisioned for each document.

Beyond documentation purposes, the RFCs also embedded a ‘hope to promote the exchange and discussion of considerably less than authoritative ideas’ (Crocker 1969). In December 1970, the NWG completed the first interconnection protocol, the Network Control Protocol (NCP). The protocols in use continued to be documented in the RFC series, which became the standard decision-making procedure in the Internet Engineering Task Force (IETF), a body created in 1986 to oversee the development of protocols for the first layer of internetworking. Over time, the RFC became an anchoring practice around which the community coalesced, as discussed towards the end of this chapter.

On 29 October 1969, the first ARPANET link was established between UCLA and the Stanford Research Institute. The latter remains central to the history of ARPANET, hosting the first formal coordination body, the Network Information Center (NIC) established in 1971 at the SRI Augmentation Research Center (Engelbart’s lab) in Menlo Park, California. Starting in 1972, it was led by Elizabeth J. Feinler, known as ‘Jake’, who managed it under a contract with the Department of Defense (DoD). In its early days, the NIC handled user services (via phone and conventional mail at first) and maintained a directory of people (‘white pages’), resources (‘yellow pages’), and protocols. Once the network expanded, the NIC started registering terminals and financial information, such as auditing and billing.

A number of Internet pioneers discussed the open, relaxed atmosphere of work at the outset, 5 rather unusual under contracts with the DoD. The involvement of young graduates on par with military staff indicated the importance given to the experiment. As only a small number of people had access to the project, in-built security was not a priority in the early days of the network. Notably, ARPANET was not restricted to military use. Access to the network was limited to ARPA contractors, yet those who had permission to work on it were not under rigorous scrutiny. Nonetheless, there was a clear recognition among researchers and especially among managers that what was at stake was more than the development of a research network, as Lukasik revealed:

So in that environment, I would have been hard pressed to plow a lot of money into the network just to improve the productivity of the researchers. The rationale just wouldn’t have been strong enough. What was strong enough was this idea that packet switching would be more survivable, more robust under damage to the network … So I can assure you, to the extent that I was signing the checks, which I was from 1967 on, I was signing them because that was the need I was convinced of. (Waldrop 2001, 279–80)

Despite its heavy DoD funding, ARPANET never functioned as a military network in the strict sense, with the exception of a few international connections, such as the one with Norway, limited to defence use. As Townes (2012) shows, some elements of the research conducted at the time on ARPANET were kept outside of the reports to the funding authorities. For example, the transnational spread of the network was constantly minimized in order to stay within the scope of the military mandate. The British and Norwegian nodes of the network were not represented in one of the most reproduced maps of the ARPANET published in 1985, and a footnote explained that experimental satellite connections were not shown on the map. Back in 1972, the Defense Communications Agency (DCA) established another packet switching network—WIN—used for operational command and control purposes. It was around that time that the idea of transferring control of ARPANET to a private organization consolidated (Abbate 1999).

The work environment remained open all throughout the ARPANET experiment, with scientists taking the lead on developments and funding streams. Part of this had to do with the research tradition and the technical challenges, meaning that there were frequent exchanges about what worked, what had to be fixed, and what could be improved. The developments that would come on top of this were not envisioned at that point, so the scientists working on it preferred an open format (Crocker 2012). However, political sensitivities existed; some were carefully mediated by those in charge, in an endeavour to create a community of practice that paid no attention to what was happening outside the technical space. As Elizabeth Feinler explains:

In the early days we put out the directory, which was sort of a phone book of the internet. And there were a lot of military people, there were a lot of graduate students, so there was a spectrum of users and developers. In the 1970s, there were [ … ] lots of strong feelings about the Vietnam war and what not. So I took it upon myself not to put anybody’s title in the directory, so that meant that everyone was talking to everybody and they didn’t know whom they were talking to. (Feinler 2012)

While the concept of internetworking was developed at ARPA (Leiner et al. 2009), linking computers in a network was an experiment tried in several other parts of the world, most importantly in France and the United Kingdom, where packet switching technologies were tested in the early 1970s. In 1971, plans for a European Informatics Network for research and scientific purposes under the direction of Derek Barber from NPL were announced by the European Common Market. That same year, at the French Research Laboratory IRIA, Louis Pouzin launched the Cyclades packet switched system based on datagrams. Despite concrete advancements in Pouzin’s project, the funding from the French government was discontinued at the end of 1978.

The ARPANET project provided inspiration for a number of similar projects in other parts of the world. While physical connections were only established directly with Europe (first with Norway and the United Kingdom), academic networks were set up in Australia and later in Japan. By 1980, six main networking experiments were underway 6 and by 1988 their number more than doubled.

While the overwhelming majority had an academic purpose, the networks were generally subsidized by states. A few internetworking experiments, such as USENET, EUNET, BITNET, FIDONET, and EARN, received direct user contributions.

In October 1972, scientists working on packet switching networks on both sides of the Atlantic convened at the first International Conference on Computer Communication held in Washington. ARPANET was successfully tested publicly, connecting twenty-nine sites in a demonstration organized by Robert E. Kahn of BBN (Townes 2012, 49). A group of network designers volunteered to explore how these networks could be interconnected in the framework of a newly established International Packet Network Working Group (INWG), similar to the ARPANET NWG, using the request for comments format for distributing the INWG notes. DARPA’s Larry Roberts proposed to share the notes via the ARPANET NIC, and Vint Cerf, a graduate student working on one of the first ARPANET nodes at UCLA, volunteered to be temporary chairman. The group divided into two subgroups to consider ‘Communication System Requirements’ and ‘HOST-HOST Protocol Requirements’. In June 1973, the first international node to the ARPANET was established, via satellite link, at Kjeller in Norway, in turn providing a cable link to University College London in the United Kingdom shortly after (Bing 2009). 7

In 1972, Robert Kahn joined the ARPA team to develop network technologies; he would later initiate the billion-dollar Strategic Computing Program, the largest computer research and development program funded by the US government. Kahn played a key role in the development of the ARPANET and is credited for the open-architecture networking approach and for coining the phrase ‘National Information Infrastructure’. In 1973, together with Cerf, by then an assistant professor at Stanford, Kahn developed the Transmission Control Protocol (TCP), which encapsulated and decapsulated messages sent over the network, with gateways able to read the capsule headers but not the content, which was interpreted only by the end-computers. This protocol, meant to replace the ARPANET’s original NCP, was presented in a paper published in May 1974 and entitled ‘A Protocol for Packet Network Intercommunication’. Working on the datagram network and a connectionless packet switching protocol, the French scientist Louis Pouzin joined Vint Cerf and his colleagues at INWG to propose a transport protocol across different networks. In 1975, they submitted their proposal to the standard-setting body in charge of telecommunications, the International Telegraph and Telephone Consultative Committee (CCITT).
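The encapsulation idea described above can be sketched in a few lines of Python. The sketch is purely illustrative and does not reproduce the 1974 specification: a hypothetical ‘segment’ carries a small header that gateways use for forwarding, while the payload remains opaque until it reaches the destination host. The header fields, host names, and routing table are invented for the example.

```python
# Illustrative sketch only: a toy picture of encapsulation, not the 1974 TCP
# specification. Header fields, addresses, and the routing table are invented.
from dataclasses import dataclass

@dataclass
class Segment:
    src: str        # sending host
    dst: str        # receiving host
    seq: int        # sequence number used for reassembly at the destination
    payload: bytes  # opaque to the network; only end hosts interpret it

def gateway_forward(segment: Segment, routing_table: dict[str, str]) -> str:
    """A gateway inspects only the header (the 'capsule') to pick the next hop;
    it never interprets the payload."""
    return routing_table[segment.dst]

def end_host_receive(segments: list[Segment]) -> bytes:
    """Only the destination host opens the capsules and reconstructs the data."""
    return b"".join(s.payload for s in sorted(segments, key=lambda s: s.seq))

if __name__ == "__main__":
    table = {"net2.hostB": "gateway-2"}   # hypothetical routing table
    segs = [Segment("net1.hostA", "net2.hostB", i, chunk)
            for i, chunk in enumerate([b"inter", b"network", b"ing"])]
    print([gateway_forward(s, table) for s in segs])   # forwarding decisions
    print(end_host_receive(segs).decode())             # 'internetworking'
```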

Private Initiative and Competing Protocols

In parallel with the work conducted at ARPA, major computer companies in the United States proposed their own proprietary products, such as IBM’s Systems Network Architecture, Xerox’s Network Services, or Digital Equipment Corporation’s DECNET, which were all in operation in the mid-1970s. It is around that time that IBM, Xerox, and several national European post, telephone, and telegraph organizations (PTTs)—functioning as monopolies at the national level—proposed their own packet-switched common-user data networks, for example in the United Kingdom, France, and Norway. These were based on ‘virtual circuits’, able to make use of the routines of circuit switching employed by telephone exchanges. The virtual circuits solution and TCP/IP had different architectures and were proposed by distinct groups of specialists: on the one hand, there were the engineers and scientists who worked on voice telecommunications; on the other, computer scientists who explored data traffic via the transmission control protocol. Their references and terminology were different, they attended different conferences, and they read different journals. There was scepticism in both camps regarding the technological upgrades needed to make packets communicate effectively.

In 1977, representatives of the British computer industry, supported by the US and French representatives, called for the establishment of a committee for packet switching standards within the International Organization for Standardization (ISO), an independent nongovernmental association whose work did not focus exclusively on telecommunications. The Open Systems Interconnection (OSI) committee was set in place and led by Charles Bachman, the American developer of a database management system called Integrated Data Store. After long negotiations, two camps consolidated within the OSI committee: on one side, Bachman and former members of the INWG pushed for the Pouzin-inspired connectionless protocols, whereas the IBM representatives and some of the industry delegates favoured the ‘virtual circuits’ option. The virtual-circuit interconnection solution, designed as a universal standard, was published by the CCITT as Recommendation X.25 and became the international standard. This standard required a reliable network, unlike what Cerf and Pouzin proposed: their solution did not place any substantial function on the network and ensured that processing was performed directly at the edges, on end-computers (McCarthy 2015). Work on TCP continued in parallel with the international negotiations over the adopted standards.

At the outset, the developments at ARPA and those originating in private computer labs remained completely separate. A few years passed before the important advances in different camps would converge, in particular to bridge the private–public gap. The email system was developed by Raymond Tomlinson from BBN in 1972, while the Ethernet system was the outcome of the work of Robert Metcalfe 8 and his team at Xerox’s Palo Alto Research Center in 1977. That year, the Apple II personal computer (PC) was launched at the West Coast Computer Fair, offering, for the first time, a ready-made unit, 9 easy to access and operate. The Apple II PC was accompanied by a reference manual detailing its source code and providing machine specifications. This trend for publishing the source code was also followed by IBM, when their first PC was released in 1981 (Ryan 2010). A number of other services were made available to go along with developments in PCs, including network mailing lists and multiplayer games (e.g. Adventure). The first mobile phones were also developed in the 1970s. Moreover, the UNIX operating system, with its kernel rewritten in the C programming language, was released outside AT&T’s Bell Labs in October 1973 and became widely adopted by programmers as a portable, multitasking, multi-user system.

By the mid-1970s, a number of technical breakthroughs from private labs started to be integrated into ARPANET through its contractor network. Among these, the case of the UNIX operating system is illustrative. UNIX was developed at Bell Labs in the early 1970s and quickly became widespread in universities as its source code was made available, allowing computer scientists to experiment with different features. In 1975, Ken Thompson from Bell Labs took a sabbatical as visiting professor at Berkeley, where he contributed to installing Version 6 UNIX and began a Pascal implementation project on computers bought with money from the Ingres database project. Version 6 UNIX was further developed by two graduate students, Chuck Haley and Bill Joy, and publicly released as part of the Berkeley Software Distribution (BSD) in 1978. By 1981, (D)ARPA was funding the Computer Systems Research Group at UC-Berkeley to produce a version of BSD that would integrate TCP/IP, to be released publicly in August 1983.

Similarly, the Data Encryption Standard developed at IBM for businesses received the endorsement of the National Bureau of Standards in 1977, making available to a wider public what was formerly proprietary information. Local area networks such as Ethernet and dial-up connections at a maximum speed of 64 Kbps became more widely spread in the 1980s. In 1981, IBM started selling their first PC with the following specifications: a 4.77 MHz Intel 8088 microprocessor, 16 KB of memory (expandable to 256 KB), two 160 KB floppy disk drives, and an optional colour monitor. Its price started at US$1,565 and it was the ‘first to be built from off-the-shelf parts and marketed by outside distributors’ (Bing 2009, 34).

As access to computers grew, the OSI work gained rapid traction among computer vendors like IBM and garnered political support from national governments, including from the European Economic Community. By 1985, CERN opened a ‘TCP/IP Coordinator’ position as part of a formal agreement, which restricted the use of TCP/IP to the CERN site and mandated the ISO protocol for external connections (until 1989). According to Ben Segal, who held the position until 1988, the Internet protocol had been introduced at CERN a few years before via the Berkeley UNIX system. Around that time, CERN became the Swiss backbone for USENET, the UNIX users’ network that carried most of the email and news between the US side and the European side, EUnet.

Notably, the US government was also among the first adopters of the OSI standard. In 1985, two years after the publication of the ISO 7498 international standard, the US National Research Council recommended that the ARPANET move from TCP/IP to OSI; by the same token, in 1988, the Department of Commerce requested that the OSI standard be implemented on all US government computers after August 1990.

TCP/IP and the Birth of the Internet

As of 1977, the TCP was used for cross-network connections at ARPA. The Internet Protocol (IP) was added a year later to facilitate the routing of messages. The IP solved the problem of locating computers in a network by designating them concomitantly as both ‘hosts’ and ‘receivers’. Each connected device was assigned a unique 32-bit number (represented in dotted decimal form: 92.123.44.92) that a user could employ to send a message to his or her desired destination. In the early days, each computer was also given a name, in addition to a corresponding IP address. Each computer received a copy of a database file (hosts.txt), so a user would be able to copy the numeric address into the designated header of the message before sending it. The ‘hosts.txt’ file, performing a similar function to that of a phone book, together with a list of technical parameters, was maintained at the NIC based at the Stanford Research Institute and was managed by Jon Postel at the Information Sciences Institute at the University of Southern California. This set of functions later evolved into the so-called Internet Assigned Numbers Authority (IANA) functions, playing a key role in future political disputes, as detailed in Chapter 4.
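The role played by the ‘hosts.txt’ file can be illustrated with a short Python sketch: a toy table maps host names to dotted-decimal addresses, and a helper packs an address into the single 32-bit number it represents. The entries and the exact file layout are invented for the illustration; the historical format differed.

```python
# Illustrative sketch only: a toy name table in the spirit of the early
# 'hosts.txt' file. Entries and the file layout are invented for the example.
HOSTS_TXT = """
10.0.0.1    UCLA-HOST
10.0.0.2    SRI-NIC
10.0.0.3    BBN-GATEWAY
"""

def parse_hosts(text: str) -> dict[str, str]:
    """Build a name -> dotted-decimal address map from the table."""
    table = {}
    for line in text.strip().splitlines():
        address, name = line.split()
        table[name] = address
    return table

def to_32bit(address: str) -> int:
    """Pack a dotted-decimal address into the single 32-bit number it represents."""
    a, b, c, d = (int(octet) for octet in address.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

if __name__ == "__main__":
    hosts = parse_hosts(HOSTS_TXT)
    addr = hosts["SRI-NIC"]
    print(addr, hex(to_32bit(addr)))   # 10.0.0.2 0xa000002
```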

Despite increased complexity as the network grew bigger and bigger, the tasks continued to be performed by individuals. Between 1977 and 1982, a set of technical documents entitled ‘Internet Experiment Notes’ (IENs) were released in order to discuss the implementation of Kahn–Cerf protocols, modelled on the RFC series that Crocker initiated at ARPANET. Jon Postel helped to revise the TCP/IP version in 1978 and again in 1979. The specifications of the protocol were open to everyone. In 1979, ARPA founded the Internet Configuration Control Board (ICCB) to assist with TCP/IP software creation. The editor of IENs was Jon Postel, and about 206 documents were published in the series before it was discontinued.

ARPA’s TCP/IP network became known as the ‘Internet’. In 1981, the TCP/IP was integrated into the Berkeley version of UNIX developed by Bill Joy, thus expanding the reach of the ARPA-born communication protocol. Looking back at the early days, Vint Cerf located the birth of the Internet on 1 January 1983, when the transition plan to migrate the 400 hosts of the ARPANET to TCP/IP was completed. That year, the domain name system (DNS) was invented by Paul Mockapetris, together with Jon Postel and Craig Partridge, and was announced in RFC 882. The DNS allowed numeric IP addresses to be replaced, for users, by names made of letters and words that could be easily remembered. The DNS is a hierarchical system allowing for fast database queries that turn names into numbers, with the same structure replicated at each level: a 2nd-level domain maintains a name server containing the zone file with the IP addresses for all its 3rd-level domains. In the early days, SRI was one of three hosts of the root zone file. 10 According to RFC 920 from 1984, the initial set of generic top level domains (gTLDs) included .com, .edu, .gov, .mil, .org, with .net being added later. In 1988, .int was introduced for international organizations, following a request from NATO.
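The hierarchical delegation at the heart of the DNS can be pictured as a nested table walked one label at a time, from the top-level domain downwards. The Python sketch below is only an illustration of that idea, not of the RFC 882 protocol; the zone contents and the example name are invented.

```python
# Illustrative sketch only: a toy model of hierarchical delegation in the DNS,
# not the RFC 882 protocol. Zone contents and the example name are invented.
ROOT_ZONE = {
    "edu": {                      # top-level domain zone
        "example-univ": {         # 2nd-level zone holding its own records
            "cs": "10.2.0.7",     # 3rd-level name -> IP address
            "library": "10.2.0.9",
        },
    },
    "com": {},
}

def resolve(name: str, zone: dict = ROOT_ZONE) -> str:
    """Walk the name right-to-left, one level of the hierarchy per label."""
    node = zone
    for label in reversed(name.rstrip(".").split(".")):
        node = node[label]        # delegate to the next zone down
    if not isinstance(node, str):
        raise KeyError(f"{name} is a zone, not a host")
    return node

if __name__ == "__main__":
    print(resolve("cs.example-univ.edu"))   # 10.2.0.7
```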

The expansion and growth of the ARPANET was no longer easy to contain. By 1983 it had over 100 nodes and was divided into two parts: an operational component, the military network (MILNET), to serve the operational needs of the DoD, and a research component that retained the ARPANET name. After the split, the MILNET expanded, reaching over 250 nodes within a year. In 1985, two important decisions were made: first, two-letter country-code top level domains (ccTLDs) specific to each jurisdiction were incorporated in the DNS, based on a pre-defined ISO 3166-1 list; 11 second, the adoption of the DNS was made mandatory by ARPA. A year later the general adoption of the DNS was ensured at a major congress held on the West Coast in the presence of all major network representatives (Hafner and Lyon 1999). At that point, the running cost of ARPANET was around $14 million per year (McCarthy 2015) and its decommissioning was in sight. By 1989, the early packet switching network was dismantled into smaller networks (detailed in Table 1), most of which were moved under the local administration of universities.