A strength of the ARPA style was that it not only produced artifacts that furthered its missions but also built and trained a community of researchers. In addition to holding regular meetings of principal investigators, Taylor started the "ARPA games," meetings that brought together the graduate students involved in its programs.
This innovation helped build the community that would lead the expansion of the field and the growth of the Internet in the decades that followed. During the 1960s, a number of researchers began to investigate the technologies that would form the basis for computer networking.
Most of this early networking research concentrated on packet switching, a technique of breaking up a conversation into small, independent units, each of which carries the address of its destination and is routed through the network independently. Specialized computers at the branching points in the network can vary the route taken by packets on a moment-to-moment basis in response to network congestion or link failure. One of the earliest pioneers of packet switching was Paul Baran of the RAND Corporation, who was interested in methods of organizing networks to withstand nuclear attack.
Baran proposed a richly interconnected set of network nodes, with no centralized control system—both properties of today's Internet. Of course, the United States already had an extensive communications network, the public switched telephone network (PSTN), in which digital switches and transmission lines were already being deployed. But the telephone network did not figure prominently in early computer networking.
Computer scientists working to interconnect their systems spoke a different language than did the engineers and scientists working in traditional voice telecommunications. They read different journals, attended different conferences, and used different terminology. Moreover, data traffic was and is substantially different from voice traffic. In the PSTN, a continuous connection, or circuit, is set up at the beginning of a call and maintained for the duration.
Computers, on the other hand, communicate in bursts, and unless a number of "calls" can be combined on a single transmission path, line and switching capacity is wasted. Telecommunications engineers were primarily interested in improving the voice network and were skeptical of alternative technologies. According to Taylor, some Bell Laboratories engineers stated flatly that "packet switching wouldn't work."
The reaction to Lawrence Roberts's plan for an ARPA network was positive, and Roberts issued a request for quotation (RFQ) for the construction of a four-node network. The contract to produce the hardware and software was awarded to Bolt Beranek and Newman (BBN) in December 1968. The BBN group was led by Frank Heart, and many of the scientists and engineers who would make major contributions to networking in future years participated.
The network hardware consisted of a rugged military version of a Honeywell Corporation minicomputer that connected a site's computers to the communication lines. These interface message processors (IMPs)—each the size of a large refrigerator and painted battleship gray—were highly sought after by DARPA-sponsored researchers, who viewed possession of an IMP as evidence that they had joined the inner circle of networking research. The first IMP was installed in 1969 at the University of California at Los Angeles, where Leonard Kleinrock had published some of the earliest analytical work on packet networks; a second followed at the Stanford Research Institute. Two more nodes were soon installed at the University of California at Santa Barbara, where Glen Culler and Burton Fried had developed an interactive system for mathematics education, and the University of Utah, which had one of the first computer graphics groups.
Initially, the ARPANET was primarily a vehicle for experimentation rather than a service, because the protocols for host-to-host communication were still being developed. Although a few experiments in resource sharing were carried out, and the Telnet protocol was developed to allow a user on one machine to log onto another machine over the network, other applications became more popular.
The file transfer protocol (FTP) enabled a user on one system to connect to another system for the purpose of either sending or retrieving a particular file. The concept of an anonymous user, with constrained access privileges, was quickly added to allow users to connect to a system and browse the available files. Using Telnet, a user could read remote files but could not do anything with them.
With FTP, users could now move files to their own machines and work with them as local files. This capability spawned several new areas of activity, including distributed client-server computing and network-connected file systems.
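The same pattern survives in today's tools. As a minimal sketch, the following anonymous FTP session uses Python's standard ftplib; the server name and file name are placeholders, not a real service:

```python
from ftplib import FTP

# Sketch of an anonymous FTP session: connect, log in as the anonymous user,
# browse the available files, and retrieve one as a local file.
# "ftp.example.org" and "README" are placeholders, not a real server or file.
with FTP("ftp.example.org") as ftp:
    ftp.login()                       # no arguments -> anonymous login
    ftp.retrlines("LIST")             # browse the remote directory listing
    with open("README", "wb") as fh:
        ftp.retrbinary("RETR README", fh.write)   # copy the file locally
```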
Occasionally in computing, a "killer application" appears that becomes far more popular than its developers expected. When personal computers (PCs) became available in the late 1970s, the spreadsheet (initially VisiCalc) was the application that accelerated the adoption of the new hardware by businesses. On the ARPANET, the killer application proved to be electronic mail. Ray Tomlinson of BBN had built a program that could send messages between users on different machines, adopting the @ sign to separate the user's name from the host's.
By combining the immediacy of the telephone with the precision of written communication, e-mail became an instant hit. Tomlinson's syntax (user@domain) remains in use today. Telnet, FTP, and e-mail were examples of the leverage that research typically provided in early network development.
As each new capability was added, the efficiency and speed with which knowledge could be disseminated improved. E-mail and FTP made it possible for geographically distributed researchers to collaborate and share results much more effectively.
These programs were also among the first networking applications that were valuable not only to computer scientists but also to scholars in other disciplines. The agency also supported research on terrestrial packet radio and packet satellite networks. In 1973, Robert Kahn and Vinton Cerf began to consider ways to interconnect these networks, which had quite different bandwidth, delay, and error properties than did the telephone lines of the ARPANET.
The result was the Transmission Control Protocol/Internet Protocol (TCP/IP) suite, which defined the packet format and a flow-control and error-recovery mechanism to allow hosts to recover gracefully from network errors. It also specified an addressing mechanism that could support an Internet comprising up to 4 billion hosts. Around this time, two phenomena—the development of local area networks (LANs) and the integration of networking into operating systems—contributed to a rapid increase in the size of the network. One early contribution came from Norman Abramson's group at the University of Hawaii. Like the ARPANET group, they wanted to provide remote access to their main computer system, but instead of a network of telephone lines, they used a shared radio network.
It was shared in the sense that all stations used the same channel to reach the central station. This approach had a potential drawback: if two stations attempted to transmit at the same time, then their transmissions would interfere with each other, and neither one would be received.
But such interruptions were unlikely because the data were typed on keyboards, which sent very short pulses to the computer, leaving ample time between pulses during which the channel was clear to receive keystrokes from a different user. Abramson's system, known as Aloha, generated considerable interest in using a shared transmission medium, and several projects were initiated to build on the idea. The packet satellite network demonstrated that the protocols developed in Aloha for handling contention between simultaneous users, combined with more traditional reservation schemes, resulted in efficient use of the available bandwidth.
However, the long latency inherent in satellite communications limited the usefulness of this approach. At Xerox PARC, Robert Metcalfe and his colleagues adapted the shared-medium idea to a local network built on coaxial cable, which became the Ethernet. This experiment demonstrated that using coaxial cable as a shared medium resulted in an efficient network.
Unlike the Aloha system, in which transmitters could not receive any signals, Ethernet stations could detect that collisions had occurred, stop transmitting immediately, and retry a short time later at random.
This approach improved the efficiency of the Aloha technique and made it practical for actual use. Shared-media LANs became the dominant form of computer-to-computer communication within a building or local area, although variations such as IBM's Token Ring also captured part of this emerging market.
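The retry discipline can be sketched in a few lines. The following is only a rough illustration of the idea of randomized, growing back-off; the constants are approximations rather than the actual IEEE 802.3 parameters:

```python
import random
import time

# Sketch of Ethernet-style collision handling: a station that detects a
# collision stops transmitting and waits a random number of slot times before
# retrying, and the range of the random wait doubles after each collision
# ("binary exponential backoff"). Constants are illustrative only.
SLOT_TIME = 51.2e-6      # seconds per slot (roughly the classic 10 Mb/s value)
MAX_ATTEMPTS = 16

def send_with_backoff(try_send) -> bool:
    """try_send() returns True on success, False if a collision was detected."""
    for collisions in range(1, MAX_ATTEMPTS + 1):
        if try_send():
            return True
        window = 2 ** min(collisions, 10)           # cap the growth of the window
        time.sleep(random.randint(0, window - 1) * SLOT_TIME)
    return False                                    # give up after too many tries
```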
For many years, academic computer science research groups used a variety of computers and operating systems, many of them constructed by the researchers themselves. Most were time-sharing systems that supported a number of simultaneous users. By the late 1970s, the Unix operating system, originally developed at Bell Labs, had become the system of choice for researchers because it ran on DEC's VAX line of computers, which were inexpensive relative to other systems. This standardization enabled researchers at different sites to share software, including networking software.
The Berkeley Software Distribution (BSD) of Unix, which incorporated the TCP/IP protocols, was rapidly adopted by the research community because the availability of source code made it a useful experimental tool. In addition, it ran on both VAX machines and the personal workstations offered by the fledgling Sun Microsystems, Inc. Unlike the various telecommunications networks, the Internet has no owner.
To become part of the Internet, a user need only connect a computer to a port on a service provider's router, obtain an IP address, and begin communicating. To add an entire network to the Internet is a bit trickier, but not extraordinarily so, as demonstrated by the tens of thousands of networks with tens of millions of hosts that constitute the Internet today. The primary technical problem in the Internet is the standardization of its protocols. The Network Working Group (NWG) defined the system of requests for comments (RFCs) that are still used to specify protocols and discuss other engineering issues.
Today's RFCs are still formatted as they were in 1969, eschewing the decorative fonts and styles that pervade today's Web. Joining the Internet Engineering Task Force (IETF), which now carries out much of this standards work, is a simple matter of asking to be placed on its mailing list, attending thrice-yearly meetings, and participating in the work. This grassroots group is far less formal than organizations such as the International Telecommunication Union, which defines telephony standards through the work of members who are essentially representatives of various governments.
The open approach to Internet standards reflects the academic roots of the network. The 1970s were a time of intensive research in networking, and much of the technology used today was developed during this period. Most of the work was funded by ARPA, although the NSF provided educational support for many researchers and was beginning to consider establishing a large-scale academic network.
During this period, ARPA pursued high-risk research with the potential for high payoffs. It is debatable whether a more risk-averse organization lacking the hands-on program management style of ARPA could have produced the same result. The ARPANET itself remained in operation until 1990, when it was superseded by subsequent networks. The stage was now set for the Internet, which was first used by scientists, then by academics in many disciplines, and finally by the world at large.
During the late 1970s and early 1980s, several networks were constructed to serve the needs of particular research communities. The NSF began supporting general network infrastructure with the establishment of the NSFNET. Because the NSFNET was to be an internet (the beginning of today's Internet), specialized computers called routers were needed to pass traffic between networks at the points where the networks met.
Today, routers are the primary products of multibillion-dollar companies such as Cisco Systems. The routers for the original NSFNET backbone were built by David Mills of the University of Delaware. Working with ARPA support, Mills improved the protocols used by the routers to communicate the network topology among themselves, a critical function in a large-scale network. Another important element of the growing network was the hierarchical naming of hosts. An administrative entity, such as a university department, can assign host names as it wishes; it also has a domain name, issued by the higher-level authority of which it is a part.
Thus, a host named xyz in the computer science department at UC-Berkeley would be named xyz.cs.berkeley.edu. Servers located throughout the Internet provide translation between the host names used by human users and the IP addresses used by the Internet protocols. This name-distribution scheme has allowed the Internet to grow much more rapidly than would be possible with centralized administration.
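The effect of this delegation can be seen in a toy model; all of the names and addresses below are made up for illustration. A flat host table must list every machine, whereas hierarchical naming lets each authority hold only the next level down:

```python
# Toy contrast between a single global host table and DNS-style delegation.
# All names and addresses are invented for illustration.

HOSTS_TXT = {"ucla-host": "10.0.0.1", "utah-host": "10.0.0.2"}   # flat: one entry per machine

# Hierarchical zones: each authority knows only about the level directly below it.
ZONES = {
    "edu": {"berkeley": {"cs": {"xyz": "10.1.2.3"}}},
    "org": {"acm": {"www": "10.4.5.6"}},
}

def resolve(name: str) -> str:
    """Walk the labels right to left (edu -> berkeley -> cs -> xyz)."""
    node = ZONES
    for label in reversed(name.split(".")):
        node = node[label]           # each level delegates to the next
    return node

print(HOSTS_TXT["ucla-host"])            # flat lookup: 10.0.0.1
print(resolve("xyz.cs.berkeley.edu"))    # delegated lookup: 10.1.2.3
```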
Jennings left the NSF in 1986, and Steve Wolff took over the NSFNET program. During Wolff's tenure, the speed of the backbone, originally 56 kilobits per second, was increased nearly a thousandfold, and a large number of academic and regional networks were connected. Groups of universities proposed to develop regional networks with a single connection to the NSFNET, instead of connecting each institution independently. The NSF agreed to provide seed funding for connecting regional networks to the NSFNET, with the expectation that, as a critical mass was reached, the private sector would take over the management and operating costs of the Internet.
This decision helped guide the Internet toward self-sufficiency and eventual commercialization (Computer Science and Telecommunications Board). Wolff saw that commercial interests had to participate and provide financial support if the network were to continue to expand and evolve into a large, single internet.
Instead of reworking the existing backbone, Advanced Network and Services (ANS), the nonprofit formed to operate the NSFNET backbone, added a new, privately owned backbone for commercial services. By the early 1990s, the Internet was international in scope, and its operation had largely been transferred from the NSF to commercial providers. Public access to the Internet expanded rapidly thanks to the ubiquitous nature of the analog telephone network and the availability of modems for connecting computers to this network.
Digital transmission became possible throughout the telephone network with the deployment of optical fiber, and the telephone companies leased their broadband digital facilities to the developers of the computer network for connecting routers and regional networks.
In April 1995, all commercialization restrictions on the Internet were lifted. Although still used primarily by academics and businesses, the Internet was growing rapidly, with the number of hosts reaching into the millions. Then the invention of the Web catapulted the Internet to mass popularity almost overnight.
Because the unique names (called universal resource locators, or URLs) are long, including the DNS name of the host on which they are stored, URLs are represented as shorter hypertext links in other documents. When the user of a browser clicks a mouse on a link, the browser retrieves and displays the document named by the URL.

The internetting protocols themselves had taken shape two decades earlier. Kahn first contemplated developing a protocol local only to the packet radio network, since that would avoid having to deal with the multitude of different operating systems, and continuing to use NCP, the ARPANET's original host protocol.
If any packets were lost, the protocol (and presumably any applications it supported) would come to a grinding halt. In this model, NCP had no end-to-end host error control, since the ARPANET was to be the only network in existence and it would be so reliable that no error control would be required on the part of the hosts. Thus, Kahn decided to develop a new version of the protocol that could meet the needs of an open-architecture network environment. While NCP tended to act like a device driver, the new protocol would be more like a communications protocol.
At this point he realized it would be necessary to learn the implementation details of each operating system to have a chance of embedding any new protocols in an efficient way. Thus, in the spring of 1973, after starting the internetting effort, he asked Vint Cerf (then at Stanford) to work with him on the detailed design of the protocol. Cerf had been intimately involved in the original NCP design and development and already had the knowledge about interfacing to existing operating systems.
Their approach was first described in a 1974 paper by Cerf and Kahn, and a refined version was published subsequently. Kahn had intended that the TCP protocol support a range of transport services, from the totally reliable, sequenced delivery of data (the virtual circuit model) to a datagram service in which the application made direct use of the underlying network service, which might imply occasional lost, corrupted, or reordered packets.
However, the initial effort to implement TCP resulted in a version that only allowed for virtual circuits. This model worked fine for file transfer and remote login applications, but some of the early work on advanced network applications, in particular packet voice in the 1970s, made clear that in some cases packet losses should not be corrected by TCP but should be left to the application to deal with.
This led to a reorganization of the original TCP into two protocols: the simple IP, which provided only for the addressing and forwarding of individual packets, and the separate TCP, which was concerned with service features such as flow control and recovery from lost packets.
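The split also opened the door to the User Datagram Protocol (UDP), which gives applications such as packet voice direct access to IP's datagram service without TCP's retransmissions. A small sketch with the standard sockets API illustrates the two services; the host and port below are placeholders:

```python
import socket

# The TCP/IP split separated concerns: IP forwards individual packets, while
# TCP adds flow control and loss recovery on top of it. Applications that
# prefer to handle losses themselves (e.g., packet voice) can use UDP
# datagrams directly over IP. "host.example" and 9999 are placeholders.

# Reliable, ordered byte stream: TCP retransmits lost packets for us.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(("host.example", 9999)); tcp.sendall(b"reliable stream data")

# Self-contained datagrams over IP: nothing is retransmitted or reordered.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# udp.sendto(b"one datagram, delivery not guaranteed", ("host.example", 9999))

tcp.close()
udp.close()
```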
However, while file transfer and remote login (Telnet) were very important applications, electronic mail has probably had the most significant impact of all the innovations from that era.
Email provided a new model of how people could communicate with each other, and changed the nature of collaboration, first in the building of the Internet itself (as is discussed below) and later for much of society.
A key concept of the Internet is that it was not designed for just one application, but as a general infrastructure on which new applications could be conceived, as illustrated later by the emergence of the World Wide Web. The Stanford team, led by Cerf, produced the detailed specification and within about a year there were three independent implementations of TCP that could interoperate. This was the beginning of long term experimentation and development to evolve and mature the Internet concepts and technology.
Beginning with the first three networks (ARPANET, Packet Radio, and Packet Satellite) and their initial research communities, the experimental environment has grown to incorporate essentially every form of network and a very broad-based research and development community.
When desktop computers first appeared, it was thought by some that TCP was too big and complex to run on a personal computer. David Clark and his research group at MIT set out to show that a compact and simple implementation of TCP was possible, building one first for the Xerox Alto and later for the IBM PC. That implementation was fully interoperable with other TCPs, but was tailored to the application suite and performance objectives of the personal computer, and showed that workstations, as well as large time-sharing systems, could be a part of the Internet. Earlier, in 1976, Kleinrock had published the first book on the ARPANET; it included an emphasis on the complexity of protocols and the pitfalls they often introduce, and it was influential in spreading the lore of packet switching networks to a very wide community.
This change from having a few networks with a modest number of time-shared hosts (the original ARPANET model) to having many networks resulted in a number of new concepts and changes to the underlying technology. First, it resulted in the definition of three network classes (A, B, and C) to accommodate the range of networks. Class A represented large, national-scale networks (a small number of networks, each with many hosts); Class B represented regional-scale networks; and Class C represented local area networks (a large number of networks, each with relatively few hosts).
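Because an IPv4 address is a 32-bit number (about 4.3 billion possible values, the "4 billion hosts" mentioned earlier), the class of an address could be read directly from its first octet. A small sketch of that historical rule, later superseded by classless addressing:

```python
# Historical "classful" interpretation of an IPv4 address: the leading bits of
# the first octet determined the class, and hence how many bits named the
# network versus the host.
def address_class(address: str) -> str:
    first_octet = int(address.split(".")[0])
    if first_octet < 128:
        return "A"   # 8-bit network number, 24 bits of host space
    if first_octet < 192:
        return "B"   # 16-bit network number, 16 bits of host space
    if first_octet < 224:
        return "C"   # 24-bit network number, 8 bits of host space
    return "D/E (multicast or reserved)"

print(address_class("18.0.0.1"))     # A: a large, national-scale network
print(address_class("130.132.0.9"))  # B: a regional or campus network
print(address_class("192.0.2.45"))   # C: a small local network
```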
A major shift occurred as a result of the increase in scale of the Internet and its associated management issues. To make it easy for people to use the network, hosts were assigned names, so that it was not necessary to remember the numeric addresses. Originally, there were a fairly limited number of hosts, so it was feasible to maintain a single table of all the hosts and their associated names and addresses.
The shift to having a large number of independently managed networks (e.g., LANs) meant that having a single table of hosts was no longer feasible, and the Domain Name System (DNS) was invented by Paul Mockapetris of USC/ISI. The DNS permitted a scalable, distributed mechanism for resolving hierarchical host names (e.g., www.acm.org) into an Internet address. The increase in the size of the Internet also challenged the capabilities of the routers.
Originally, there was a single distributed algorithm for routing that was implemented uniformly by all the routers in the Internet. As the number of networks in the Internet exploded, this initial design could not expand as necessary, so it was replaced by a hierarchical model of routing, with an Interior Gateway Protocol (IGP) used inside each region of the Internet and an Exterior Gateway Protocol (EGP) used to tie the regions together.
This design permitted different regions to use different IGPs, so that different requirements for cost, rapid reconfiguration, robustness, and scale could be accommodated. Not only the routing algorithm but also the size of the addressing tables stressed the capacity of the routers. New approaches for address aggregation, in particular classless inter-domain routing (CIDR), have since been introduced to control the size of router tables (a small illustration follows below). As the Internet evolved, one of the major challenges was how to propagate changes to the software, particularly the host software.
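Returning briefly to address aggregation: Python's standard ipaddress module can illustrate what CIDR makes possible. The prefixes below are documentation ranges (RFC 5737), not real allocations:

```python
import ipaddress

# CIDR lets contiguous address blocks be advertised as one aggregate route,
# shrinking router tables.
routes = [
    ipaddress.ip_network("198.51.100.0/25"),
    ipaddress.ip_network("198.51.100.128/25"),
]

# collapse_addresses merges adjacent or overlapping networks where possible.
aggregated = list(ipaddress.collapse_addresses(routes))
print(aggregated)                 # [IPv4Network('198.51.100.0/24')]

# A host route is covered by the aggregate, so a router needs only one entry.
print(ipaddress.ip_address("198.51.100.200") in aggregated[0])   # True
```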
Looking back, the strategy of incorporating Internet protocols into a supported operating system for the research community was one of the key elements in the successful widespread adoption of the Internet. The adoption of TCP/IP as a defense standard in 1980 enabled the defense community to begin sharing in the DARPA Internet technology base and led directly to the eventual partitioning of the military and non-military communities. Thus, by 1985, the Internet was already well established as a technology supporting a broad community of researchers and developers, and was beginning to be used by other communities for daily computer communications.
Electronic mail was being used broadly across several communities, often with different systems, but interconnection between different mail systems was demonstrating the utility of broad-based electronic communications between people. At the same time that the Internet technology was being experimentally validated and widely used amongst a subset of computer science researchers, other networks and networking technologies were being pursued. The usefulness of computer networking — especially electronic mail — demonstrated by DARPA and Department of Defense contractors on the ARPANET was not lost on other communities and disciplines, so that by the mid-1970s computer networks had begun to spring up wherever funding could be found for the purpose.
It took the U.K. JANET and U.S. NSFNET programs to explicitly announce their intent to serve the entire higher education community, regardless of discipline. Indeed, a condition for a U.S. university to receive NSF funding for an Internet connection was that the connection be made available to all qualified users on campus. When Steve Wolff took over the NSFNET program in 1986, he recognized the need for a wide area networking infrastructure to support the general academic and research community, along with the need to develop a strategy for establishing such infrastructure on a basis ultimately independent of direct federal funding.
Policies and strategies were adopted (see below) to achieve that end. By the time the NSFNET backbone was retired in 1995, the Internet had grown to over 50,000 networks on all seven continents and outer space, with approximately 29,000 networks in the United States.
A key to the rapid growth of the Internet has been the free and open access to the basic documents, especially the specifications of the protocols. The beginnings of the ARPANET and the Internet in the university research community promoted the academic tradition of open publication of ideas and results. However, the normal cycle of traditional academic publication was too formal and too slow for the dynamic exchange of ideas essential to creating networks. In 1969, a key step was taken by S. Crocker (then at UCLA) in establishing the Request for Comments (RFC) series of notes.
These memos were intended to be an informal, fast way to distribute and share ideas with other network researchers. At first the RFCs were printed on paper and distributed via snail mail. Jon Postel acted as RFC Editor as well as managing the centralized administration of required protocol number assignments, roles that he continued to play until his death on October 16, 1998. When some consensus, or at least a consistent set of ideas, had come together, a specification document would be prepared.
Such a specification would then be used as the base for implementations by the various research teams. The open access to the RFCs (free of charge, to anyone with any kind of connection to the Internet) promotes the growth of the Internet because it allows the actual specifications to be used as examples in college classes and by entrepreneurs developing new systems.
Email has been a significant factor in all areas of the Internet, and that is certainly true in the development of protocol specifications, technical standards, and Internet engineering. The very early RFCs often presented a set of ideas developed by the researchers at one location to the rest of the community. After email came into use, the authorship pattern changed: RFCs were presented by joint authors with a common view, independent of their locations.
Specialized email mailing lists have long been used in the development of protocol specifications and continue to be an important tool. The IETF now has in excess of 75 working groups, each working on a different aspect of Internet engineering.
Each of these working groups has a mailing list to discuss one or more draft documents under development. When consensus is reached on a draft document it may be distributed as an RFC. This unique method for evolving new capabilities in the network will continue to be critical to future evolution of the Internet. The Internet is as much a collection of communities as a collection of technologies, and its success is largely attributable to both satisfying basic community needs as well as utilizing the community in an effective way to push the infrastructure forward.
The early ARPANET researchers worked as a close-knit community to accomplish the initial demonstrations of packet switching technology described earlier. Likewise, the Packet Satellite, Packet Radio and several other DARPA computer science research programs were multi-contractor collaborative activities that heavily used whatever available mechanisms there were to coordinate their efforts, starting with electronic mail and adding file sharing, remote access, and eventually World Wide Web capabilities.
In the late 1970s, recognizing that the growth of the Internet was accompanied by a growth in the size of the interested research community and therefore an increased need for coordination mechanisms, Vint Cerf, then manager of the Internet Program at DARPA, formed several coordination bodies: an International Cooperation Board (ICB), chaired by Peter Kirstein of UCL, to coordinate activities with some cooperating European countries centered on Packet Satellite research; an Internet Research Group, which was an inclusive group providing an environment for general exchange of information; and an Internet Configuration Control Board (ICCB), chaired by Dave Clark.
In 1983, when Barry Leiner took over management of the Internet research program at DARPA, he and Clark recognized that the continuing growth of the Internet community demanded a restructuring of the coordination mechanisms.
The ICCB was disbanded and in its place a structure of Task Forces was formed, each focused on a particular area of the technology (e.g., routers, end-to-end protocols), with the Task Force chairs together forming the Internet Activities Board (IAB). It was of course only a coincidence that the chairs of the Task Forces were the same people as the members of the old ICCB, and Dave Clark continued to act as chair.
This growth was complemented by a major expansion in the community; no longer was DARPA the only major player in the funding of the Internet. In addition to NSFNet and the various US and international government-funded activities, interest in the commercial sector was beginning to grow. When Kahn and Leiner later left DARPA, Internet activity there decreased significantly; as a result, the IAB was left without a primary sponsor and increasingly assumed the mantle of leadership.
The growth in the commercial sector brought with it increased concern regarding the standards process itself, and increased attention was paid to making the process open and fair. In 1992, yet another reorganization took place: the Internet Activities Board was re-organized and re-named the Internet Architecture Board, operating under the auspices of the newly formed Internet Society.
The recent development and widespread deployment of the World Wide Web has brought with it a new community, as many of the people working on the WWW have not thought of themselves as primarily network researchers and developers.
Thus, over more than two decades of Internet activity, we have seen a steady evolution of organizational structures designed to support and facilitate an ever-increasing community working collaboratively on Internet issues. Commercialization of the Internet involved not only the development of competitive, private network services, but also the development of commercial products implementing the Internet technology.
In the early 1980s, dozens of vendors were incorporating TCP/IP into their products because they saw customers for that approach to networking. Unfortunately, they lacked both real information about how the technology was supposed to work and insight into how the customers planned on using this approach to networking. Recognizing this, Dan Lynch, in cooperation with the IAB, arranged a workshop at which vendors could learn how the technology worked and what it still could not do. The speakers came mostly from the DARPA research community, whose members had both developed these protocols and used them in day-to-day work.
About 250 vendor personnel came to listen to 50 inventors and experimenters. The results were surprising on both sides: the vendors were amazed to find that the inventors were so open about the way things worked (and what still did not work), and the inventors were pleased to listen to new problems they had not considered but that were being discovered by the vendors in the field.
Thus a two-way discussion was formed that has lasted for over a decade. In September 1988 the first Interop trade show was born, organized to let vendors whose products implemented TCP/IP demonstrate that they could interoperate; they did. The Interop trade show has grown immensely since then, and today it is held in 7 locations around the world each year, drawing audiences of hundreds of thousands of people who come to learn which products work with each other in a seamless manner, learn about the latest products, and discuss the latest technology. In parallel, vendors began attending the IETF meetings, held several times a year to discuss new ideas for extensions of the TCP/IP protocol suite. Starting with a few hundred attendees mostly from academia and paid for by the government, these meetings now often exceed a thousand attendees, mostly from the vendor community and paid for by the attendees themselves.
The reason the IETF is so useful is that it is composed of all stakeholders: researchers, end users, and vendors. Network management provides an example of the interplay between the research and commercial communities. In the beginning of the Internet, the emphasis was on defining and implementing protocols that achieved interoperation. As the network grew larger, it became clear that the sometimes ad hoc procedures used to manage the network would not scale.
Manual configuration of tables was replaced by distributed automated algorithms, and better tools were devised to isolate faults. In 1987 it became clear that a protocol was needed that would permit the elements of the network, such as the routers, to be remotely managed in a uniform way. Several protocols were proposed for this purpose, including the Simple Network Management Protocol (SNMP) and a more elaborate design from the OSI community; work on more than one went forward so that the market could choose the one it found more suitable. SNMP is now used almost universally for network-based management.
In the last few years, we have seen a new phase of commercialization. Originally, commercial efforts mainly comprised vendors providing the basic networking products and service providers offering the connectivity and basic Internet services. The Internet has since become almost a commodity service, and much of the latest attention has been on using this global information infrastructure to support other commercial services. This shift has been tremendously accelerated by the widespread and rapid adoption of browsers and the World Wide Web technology, allowing users easy access to information linked throughout the globe. Products are available to facilitate the provisioning of that information, and many of the latest developments in technology have been aimed at providing increasingly sophisticated information services on top of the basic Internet data communications.
In October 1995, the Federal Networking Council (FNC) unanimously passed a resolution defining the term "Internet." This definition was developed in consultation with members of the internet and intellectual property rights communities. The Internet has changed much in the two decades since it came into existence.
It was conceived in the era of time-sharing, but has survived into the era of personal computers, client-server and peer-to-peer computing, and the network computer.
It was designed before LANs existed, but has accommodated that new network technology, as well as the more recent ATM and frame switched services. It was envisioned as supporting a range of functions from file sharing and remote login to resource sharing and collaboration, and has spawned electronic mail and more recently the World Wide Web. But most important, it started as the creation of a small band of dedicated researchers, and has grown to be a commercial success with billions of dollars of annual investment.
One should not conclude that the Internet has now finished changing. The Internet, although a network in name and geography, is a creature of the computer, not the traditional network of the telephone or television industry. It will, indeed it must, continue to change and evolve at the speed of the computer industry if it is to remain relevant.