The History of Networking


1960s

In the 1960s, computer networking was essentially synonymous with mainframe computing and telephony services, and the distinction between local and wide area networks did not yet exist. Mainframes were typically “networked” to a series of dumb terminals with serial connections running on RS-232 or some other electrical interface. If a terminal in one city needed to connect with a mainframe in another city, a 300-baud long-haul modem would use the existing analog Public Switched Telephone Network (PSTN) to form the connection. The technology was primitive indeed, but it was an exciting time nevertheless.

The quality and reliability of the PSTN increased significantly in 1962 with the introduction of pulse code modulation (PCM), which converted analog voice signals into digital sequences of bits. DS0 (Digital Signal Zero) became the basic 64-Kbps channel, and the entire hierarchy of the digital telephone system was soon built on this foundation. Next, a device called the channel bank was introduced. It took 24 separate DS0 channels and combined them using time-division multiplexing (TDM) into a single 1.544-Mbps channel called DS1 or T1. (In Europe, 30 DS0 channels were combined to make E1.) When the backbone of the Bell system became digital, transmission quality improved and noise decreased. Digital service was eventually extended all the way to local loop subscribers using ISDN. The first commercial touch-tone phone was also introduced in 1962.
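The channel arithmetic described above is simple enough to verify directly. The following sketch (variable names are illustrative, not from any standard) derives the DS0, DS1, and E1 rates from PCM's sampling parameters:

```python
# PCM samples a voice channel 8000 times per second at 8 bits per sample,
# giving the 64-Kbps DS0; a channel bank multiplexes 24 DS0s plus one
# framing bit per frame into the 1.544-Mbps DS1 (T1).

SAMPLE_RATE_HZ = 8000      # PCM sampling rate for voice
BITS_PER_SAMPLE = 8        # 8-bit PCM samples

ds0_bps = SAMPLE_RATE_HZ * BITS_PER_SAMPLE     # 64,000 bps per channel

frame_bits = 24 * BITS_PER_SAMPLE + 1          # 24 channels + 1 framing bit = 193
ds1_bps = frame_bits * SAMPLE_RATE_HZ          # 193 bits x 8000 frames/s

print(ds0_bps)   # 64000
print(ds1_bps)   # 1544000

# The European E1 frame carries 32 time slots in total: the 30 voice
# channels mentioned above plus 2 slots for framing and signaling.
e1_bps = 32 * ds0_bps
print(e1_bps)    # 2048000
```

The extra framing bit is why T1 is 1.544 Mbps rather than the bare 1.536 Mbps that 24 x 64 Kbps would suggest.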

The first communication satellite, Telstar, was launched in 1962. This technology did not immediately affect the networking world because of the latency of satellite links compared to undersea cable communications, but satellites eventually surpassed transoceanic underwater telephone cables (the first of which, laid in 1956, could carry 36 simultaneous conversations) in carrying capacity. In fact, in 1960 scientists at Bell Laboratories transmitted a communication signal coast to coast across the United States by bouncing it off the moon. Unfortunately, the moon wouldn’t sit still! By 1965, the first commercial communication satellites (such as Early Bird) were deployed.

1970s

While the 1960s were the decade of the mainframe, the 1970s gave rise to Ethernet, which today is by far the most popular LAN technology. Ethernet was born in 1973 in Xerox’s research lab in Palo Alto, California. (An earlier experimental network called ALOHAnet was developed in 1970 at the University of Hawaii.) The original Xerox networking system was known as X-wire and worked at 2.94 Mbps. X-wire was experimental and was not used commercially, although a number of Xerox Alto workstations for word processing were networked together in the White House using X-wire during the Carter administration. In 1979, Digital Equipment Corporation (DEC), Intel, and Xerox formed the DIX consortium and developed the specification for standard 10-Mbps Ethernet, or thicknet, which was published in 1980. This standard was revised and additional features were added in the following decade.

The conversion of the backbone of the Bell telephone system to digital circuitry continued during the 1970s and included the deployment in 1974 of the first digital data service (DDS) circuits (then called the Dataphone Digital Service). DDS formed the basis of the later deployment of ISDN and T1 lines to customer premises. AT&T installed its first digital switch in 1976.

ARPANET protocols and technologies continued to evolve using the informal RFC process. In 1972, the Telnet protocol was defined in RFC 318, followed by FTP in 1973 (RFC 454). ARPANET became an international network in 1973 when nodes were added at University College London in the United Kingdom and at NORSAR in Norway. ARPANET even established an experimental wireless packet-switching radio service in 1977, which two years later became the Packet Radio Network (PRNET).

Meanwhile, in 1974 the first specification for the Transmission Control Protocol (TCP) was published. Progress on the TCP/IP protocols continued through several iterations until the basic TCP/IP architecture was formalized in 1978, but it wasn’t until 1983 that ARPANET started using TCP/IP as its primary networking protocol instead of NCP.

An important standard to emerge in the 1970s was the public-key cryptography scheme developed in 1976 by Whitfield Diffie and Martin Hellman. This scheme underlies the Secure Sockets Layer (SSL) protocol developed by Netscape Communications, which is now the predominant scheme for ensuring privacy and integrity of financial and other transactions over the World Wide Web (WWW). Without this scheme, popular e-business sites such as Amazon.com would have a hard time attracting customers.
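To make the Diffie-Hellman idea concrete, here is a toy sketch of the exchange in Python. The prime, the private keys, and all variable names are illustrative only; real deployments use primes of 2048 bits or more.

```python
# Toy Diffie-Hellman key exchange with a deliberately tiny prime.
# Both public values can be observed by an eavesdropper, yet only the
# two parties can compute the shared secret.

p = 23   # public prime modulus (toy-sized for readability)
g = 5    # public generator

a = 6    # Alice's private key (kept secret)
b = 15   # Bob's private key (kept secret)

A = pow(g, a, p)   # Alice sends g^a mod p over the open channel
B = pow(g, b, p)   # Bob sends g^b mod p over the open channel

# Each side combines the other's public value with its own private key:
alice_secret = pow(B, a, p)   # (g^b)^a mod p
bob_secret = pow(A, b, p)     # (g^a)^b mod p

assert alice_secret == bob_secret   # both derive the same shared secret
print(alice_secret)
```

The security rests on the difficulty of recovering `a` or `b` from the public values, the discrete logarithm problem, which is infeasible at realistic key sizes.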

The first personal computer, the Altair, went on the market as a kit in 1975. The Altair was based on the Intel 8080, an 8-bit processor, and came with 256 bytes of memory, toggle switches, and LED lights. While the Altair was basically for hobbyists, the Apple II from Apple Computer, which was introduced in 1977, was much more. A typical Apple II system, which was based on the MOS Technology 6502 8-bit processor, had 4 KB of RAM, a keyboard, a motherboard with expansion slots, built-in BASIC in ROM, and color graphics. The Apple II quickly became the standard desktop system in schools and other educational institutions. A physics classroom I taught in had one all the way into the early 1990s (limited budget!). However, it wasn’t until the introduction of the IBM Personal Computer (PC) in 1981 that the full potential of personal computers began to be realized, especially in businesses.

In 1975, Bill Gates and Paul Allen licensed their BASIC programming language to MITS, the manufacturer of the Altair. Their BASIC was the first programming language written specifically for a personal computer. Gates and Allen coined the name “Micro-soft” for their business partnership, and they officially registered it as a trademark the following year. Microsoft went on to license BASIC to other personal computing platforms such as the Commodore PET and the TRS-80.

1980s

In the 1980s, the growth of client/server LAN architectures continued while that of mainframe computing environments declined. However, the biggest development in the area of LAN networking in the 1980s was the evolution and standardization of Ethernet. While the DIX consortium worked on standard Ethernet in the late 1970s, the IEEE began its Project 802 initiative, which aimed to develop a single, unified standard for all LANs. When it became clear that this was impossible, 802 was divided up into a number of working groups, with 802.3 focusing on Ethernet, 802.4 on Token Bus, and 802.5 on Token Ring technologies and standards. The work of the 802.3 group culminated in 1983 with the release of the IEEE 802.3 10Base5 Ethernet standard, which was called thicknet because it used thick coaxial cable and which was virtually identical to the work already done by DIX. In 1985, this standard was extended as 10Base2 to include thin coaxial cable, commonly called thinnet.

The development of the Network File System (NFS) by Sun Microsystems in 1985 resulted in a proliferation of diskless UNIX workstations with built-in Ethernet interfaces that also drove the demand for Ethernet and accelerated the deployment of bridging technologies for segmenting LANs. Also around 1985, increasing numbers of UNIX machines and LANs were connected to ARPANET, which until that time had been mainly a network of mainframe and minicomputer systems. The first UNIX implementation of TCP/IP came in version 4.2 of Berkeley’s BSD UNIX, from which vendors such as Sun Microsystems quickly ported their own versions of TCP/IP.

IBM introduced its Token Ring networking technology in 1985 as an alternative LAN technology to Ethernet. IBM had submitted its technology to the IEEE in 1982 and it was standardized by the 802.5 committee in 1984. IBM soon supported the integration of Token Ring with its existing SNA networking services and protocols for IBM mainframe computing environments. The initial Token Ring specifications delivered data at 1 Mbps and 4 Mbps, but IBM dropped the 1-Mbps version in 1989 when it introduced a newer 16-Mbps version. Interestingly, no formal IEEE specification exists for 16-Mbps Token Ring—vendors simply adopted IBM’s technology for the product. Since then, advances in the technology have included high-speed 100-Mbps Token Ring and Token Ring switching technologies that support virtual LANs (VLANs). Nevertheless, Ethernet remains far more widely deployed than Token Ring.

Also in the field of local area networking, the American National Standards Institute (ANSI) began standardizing the specifications for Fiber Distributed Data Interface (FDDI) in 1982. FDDI was designed to be a high-speed (100 Mbps) fiber-optic networking technology for LAN backbones on campuses and industrial parks. The final FDDI specification was completed in 1988, and deployment in campus LAN backbones grew during the late 1980s and the early 1990s.

The Signaling System #7 (SS7) digital signaling system was first deployed within the PSTN in the 1980s, first in Sweden and later in the United States. SS7 made new telephony services such as caller ID, call blocking, and automatic callback available to subscribers.

The first trials of ISDN, a fully digital telephony technology that runs on existing copper local loop lines, began in Japan in 1983 and in the United States in 1987. (All major metropolitan areas in the United States have since been upgraded to make ISDN available to those who want it, but ISDN has not caught on as a WAN technology as much as it has in Europe.)

In the 1980s, fiber-optic cabling emerged as a networking and telecommunications medium. In 1988, the first fiber-optic transatlantic undersea cable was laid and increased the capacity of the transatlantic communication system manyfold.

The 1980s also saw the standardization of SONET technology, a high-speed physical layer (PHY) fiber-optic networking technology developed from time-division multiplexing (TDM) digital telephone system technologies. Before the divestiture of AT&T in 1984, local telephone companies had to interface their own TDM-based digital telephone systems with proprietary TDM schemes of long-distance carriers, and incompatibilities created many problems. This provided the impetus for creating the SONET standard, which was finalized in 1989 through a series of CCITT (known in English as the International Telegraph and Telephone Consultative Committee) standards called G.707, G.708, and G.709. By the mid-1990s, almost all long-distance telephone traffic in the United States used SONET on trunk lines as the physical interface.

The 1980s brought the first test implementations of Asynchronous Transfer Mode (ATM) high-speed cell-switching technologies, which could use SONET as the physical interface. Many concepts basic to ATM were developed in the early 1980s at the France-Telecom laboratory in Lannion, France, particularly the PRELUDE project, which demonstrated the feasibility of end-to-end ATM networks running at 62 Mbps. The 53-byte ATM cell format was standardized by the CCITT in 1988, and the new technology was given a further push with the creation of the ATM Forum in 1991. Since then, use of ATM has grown significantly in telecommunications provider networks and has become a high-speed backbone technology in many enterprise-level networks around the world. However, the vision of ATM on users’ desktops has not been realized because of the emergence of cheaper Fast Ethernet and Gigabit Ethernet LAN technologies, and because of the complexity of ATM itself.
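The fixed 53-byte cell is central to ATM's design, and its cost is easy to quantify. This quick sketch computes the "cell tax," the share of every cell consumed by the header; the 5-byte header and 48-byte payload split comes from the standardized cell format:

```python
# ATM overhead arithmetic: every 53-byte cell carries a 5-byte header,
# so a fixed fraction of link capacity is spent on headers regardless
# of what the cells contain.

CELL_BYTES = 53
HEADER_BYTES = 5
payload_bytes = CELL_BYTES - HEADER_BYTES   # 48 bytes of payload per cell

overhead_pct = HEADER_BYTES / CELL_BYTES * 100

print(payload_bytes)            # 48
print(round(overhead_pct, 1))   # 9.4 -- roughly 9.4% of capacity is header
```

This fixed overhead, on top of the complexity noted above, is one reason variable-length-frame Ethernet won out on the desktop.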

A significant milestone in the development of the Internet occurred in 1982, when the decision was made to switch the networking protocol of ARPANET from NCP to TCP/IP. On January 1, 1983, NCP was turned off permanently—anyone who hadn’t migrated to TCP/IP was out of luck. ARPANET, which connected several hundred systems, was split into two parts, ARPANET and MILNET.

The first international use of TCP/IP took place in 1984 at CERN, a physics research center in Geneva, Switzerland. TCP/IP was designed to provide a way of networking different computing architectures in heterogeneous networking environments. Such a protocol was badly needed because of the proliferation of vendor-specific networking architectures in the preceding decade, including “homegrown” solutions developed at many government and educational institutions. TCP/IP made it possible to connect diverse architectures such as UNIX workstations, VMS minicomputers, and CRAY supercomputers into a single operational network. TCP/IP soon superseded proprietary protocols such as Xerox Network Systems (XNS), ChaosNet, and DECnet. It has since become the de facto standard for internetworking all types of computing systems.

CERN was primarily a research center for high-energy particle physics, but it became an early European pioneer of TCP/IP and by 1990 was the largest subnetwork of the Internet in Europe. In 1989, a CERN researcher named Timothy Berners-Lee developed the Hypertext Transfer Protocol (HTTP) that formed the basis of the World Wide Web (WWW). And all of this developed as a sidebar to the real research that was being done at CERN—slamming together protons and electrons at high speeds to see what fragments appear!

Also important to the development of Internet technologies and protocols was the introduction of the Domain Name System (DNS) in 1984. At that time, ARPANET had more than 1000 nodes, and trying to remember them by their numerical IP address was a headache. NNTP was developed in 1987, and Internet Relay Chat (IRC) was developed in 1988.

In 1986, the National Science Foundation NETwork (NSFNET) was created. NSFNET networked together the five national supercomputing centers using dedicated 56-Kbps lines. The connection was soon seen as inadequate and was upgraded to 1.544-Mbps T1 lines in 1988. In 1987, NSF and Merit Network agreed to jointly manage the NSFNET, which had effectively become the backbone of the emerging Internet. By 1989, the Internet had grown to more than 100,000 hosts, and the Internet Engineering Task Force (IETF) was officially created to administer its development. In 1990, NSFNET officially replaced the aging ARPANET and the modern Internet was born, with more than 20 countries connected.

Cisco Systems was one of the first companies in the 1980s to develop and market routers for Internet Protocol (IP) internetworks, a business that today is worth billions of dollars and is a foundation of the Internet. Hewlett-Packard was Cisco’s first customer for its routers, which were originally called gateways.

In wireless telecommunications, analog cellular was implemented in Norway and Sweden in 1981. Systems were soon rolled out in France, Germany, and the United Kingdom. The first U.S. commercial cellular phone system, which was named the Advanced Mobile Phone Service (AMPS) and operated in the 800-MHz frequency band, was introduced in 1983. By 1987, the United States had more than 1 million AMPS cellular subscribers, and higher-capacity digital cellular phone technologies were being developed. The Telecommunications Industry Association (TIA) soon developed specifications and standards for digital cellular communication technologies.

A landmark event that was largely responsible for the phenomenal growth in the PC industry (and hence the growth of the client/server model and local area networking) was the release of the first version of Microsoft’s text-based, 16-bit MS-DOS operating system in 1981. Microsoft, which had become a privately held corporation with Bill Gates as president and chairman of the board and Paul Allen as executive vice president, licensed MS-DOS 1 to IBM for its PC. MS-DOS continued to evolve and grow in power and usability until its final version, MS-DOS 6.22, which was released in 1993. One year after the first version of MS-DOS was released in 1981, Microsoft had its own fully functional corporate network, the Microsoft Local Area Network (MILAN), which linked a DEC 2060, two PDP-11/70s, a VAX 11/750, and a number of MC68000 machines running XENIX. This setup was typical of heterogeneous computer networks in the early 1980s.

1990s

The 1990s were a busy decade in every aspect of networking, so we’ll only touch on the highlights here. Ethernet continued to dominate LAN technologies and largely eclipsed competing technologies such as Token Ring and FDDI. In 1991, Kalpana Corporation began marketing a new form of bridge called a LAN switch, which dedicated the full bandwidth of the LAN to each port instead of sharing it among all ports. Later called Ethernet switches or Layer 2 switches, these devices quickly found a niche in providing dedicated high-throughput links for connecting servers to network backbones.

The rapid growth of computer networks and the rise of bandwidth-hungry applications created a need for something faster than 10-Mbps Ethernet, especially on network backbones. The first full-duplex Ethernet products, offering speeds of 20 Mbps, became available in 1992. In 1995, work began on a standard for full-duplex Ethernet; it was finalized in 1997. A more important development was Grand Junction Networks’ commercial Ethernet bus, introduced in 1992, which functioned at 100 Mbps. Spurred by this commercial advance, the 802.3 group produced the 802.3u 100BaseT Fast Ethernet standard for transmission of data at 100 Mbps over both twisted-pair copper wiring and fiber-optic cabling.

Although the jump from 10-Mbps to 100-Mbps Ethernet took almost 15 years, a year after the 100BaseT Fast Ethernet standard was released work began on a 1000-Mbps version of Ethernet popularly known as Gigabit Ethernet. Fast Ethernet was beginning to be deployed at the desktop, and this was putting enormous strain on the FDDI backbones that were deployed on many commercial and university campuses. FDDI also operated at 100 Mbps (or 200 Mbps if fault tolerance was discarded in favor of carrying traffic on the redundant ring), so a single Fast Ethernet desktop connection could theoretically saturate the capacity of the entire network backbone.

ATM, a broadband cell-switching technology used primarily in WANs and in telecommunications environments, was considered as a possible successor to FDDI for backboning Ethernet networks, and LAN emulation (LANE) was developed to carry LAN traffic such as Ethernet over ATM. However, ATM is more difficult to install and maintain than Ethernet, and a number of companies saw extending Ethernet speeds to 1000 Mbps as a way to provide network backbones with much greater capacity using technology that most network administrators were already familiar with. As a result, the IEEE 802.3z working group developed a Gigabit Ethernet standard called 1000BaseX, which it released in 1998. Gigabit Ethernet is now widely deployed, and work is underway on extending Ethernet technologies to 10 Gbps. A competitor of Gigabit Ethernet for high-speed collapsed backbone interconnects, called Fibre Channel, was conceived by an ANSI committee in 1988 and has become a viable alternative.

The 1990s saw huge changes in the landscape of telecommunications providers and their services. “Convergence” became a major buzzword, signifying the combining of voice, data, and broadcast information into a single medium for delivery to businesses and consumers through broadband technologies such as Broadband ISDN (B-ISDN), variants of DSL, and cable modem systems. Voice over IP (VoIP) became the avowed goal of many vendors, who promised businesses huge savings by routing voice telephone traffic over IP networks. The technology works, but the bugs are still being ironed out and deployments are still slow.

The Telecommunications Act of 1996 was designed to spur competition in all aspects of the U.S. telecommunications market by allowing the RBOCs access to long-distance services. The result has been an explosion in technologies and services, with mergers and acquisitions changing the nature of the provider landscape. The legal fallout from all this is still settling.

The first public frame relay packet-switching services were offered in North America in 1992. Companies such as AT&T and Sprint installed a network of frame relay nodes across the United States in major cities, where corporate networks could connect to the service through their local telco. Frame relay began to eat significantly into the deployed base of more expensive dedicated leased lines such as the T1 or E1 lines that businesses used for their WAN solutions, resulting in lower prices for these leased lines and greater flexibility of services. In Europe, frame relay has been deployed much more slowly, primarily because of the widespread deployment of packet-switching networks such as X.25.

The cable modem was introduced in 1996, and by the end of the decade broadband residential Internet access through cable television systems had become a strong competitor with telephone-based systems such as Asymmetric Digital Subscriber Line (ADSL) and G.Lite, another variant of DSL.

In 1997, the World Trade Organization (WTO) ratified the Information Technology Agreement (ITA), which mandated that participating governments eliminate all tariffs on information technology products by the next millennium. Other WTO initiatives promise to similarly open up telecommunications markets worldwide.

The decade saw a veritable explosion in the growth of the Internet and the development of Internet technologies. As mentioned earlier, ARPANET was replaced in 1990 by NSFNET, which by then was commonly called the Internet. At the beginning of the 1990s, the Internet’s backbone consisted of 1.544-Mbps T1 lines connecting various institutions, but in 1991 the process of upgrading these lines to 44.736-Mbps T3 circuits began. By the time the Internet Society (ISOC) was chartered in 1992, the Internet had grown to an amazing 1 million hosts on almost 10,000 connected networks. In 1993, the NSF created the Internet Network Information Center (InterNIC) as a governing body for DNS. In 1995, the NSF stopped sponsoring the Internet backbone and NSFNET went back to being a research and educational network. Internet traffic in the United States was routed through a series of interconnected commercial network providers.

The first commercial Internet service providers (ISPs) emerged in the early 1990s when the NSF removed its restrictions against commercial traffic on the NSFNET. Among them were Performance Systems International (PSI), UUNET, MCI, and Sprintlink. (The first public dial-up ISP was actually The World, whose URL was www.world.std.com.) In the mid-1990s, commercial online networks such as AOL, CompuServe, and Prodigy provided gateways to the Internet to subscribers. Later in the decade, Internet deployment grew exponentially, with personal Internet accounts proliferating by the tens of millions around the world, new technologies and services developing, and new paradigms evolving for the economy and business. It’s almost too early to write about these things with suitable perspective—maybe I’ll wait until the next edition.

Many Internet technologies and protocols have come and gone quickly. Archie, an FTP search engine developed in 1990, is hardly used today. The WAIS protocol for indexing, storing, and retrieving full-text documents, which was developed in 1991, has been eclipsed by Web search technologies. Gopher, which was created in 1991, grew to a worldwide collection of interconnected file systems, but most Gopher servers have been turned off. Veronica, the Gopher search tool developed in 1992, is obviously obsolete as well. Jughead later supplemented Veronica but has also become obsolete. (There was never a Betty.)

The most obvious success story among Internet protocols has been HTTP, which, together with HTML and the system of URLs for addressing, has formed the basis of the Web. Timothy Berners-Lee and his colleagues created the first Web server (whose fully qualified DNS name was info.cern.ch) and Web browser software using the NeXT computing platform that was developed by Apple pioneer Steve Jobs. This software was ported to other platforms, and by the end of the century more than 2 million registered Web servers were running.

Lynx, a text-based Web browser, was developed in 1992, and I personally know that it was still used in some rural areas with slow Internet connections as late as 1996. Mosaic, the first graphical Web browser, was developed in 1993 by Marc Andreessen for the X Window System on UNIX while he was a student at the National Center for Supercomputing Applications (NCSA). At that time, there were only about 50 known Web servers, and HTTP traffic amounted to only about 0.1 percent of the Internet’s traffic. Andreessen left school to start Netscape Communications, which released its first version of Netscape Navigator in 1994. Microsoft Internet Explorer 2 for Windows 95 was released in 1995 and rapidly became Netscape Navigator’s main competition. In 1995, Bill Gates announced Microsoft’s wide-ranging commitment to support and enhance all aspects of Internet technologies through innovations in the Windows platform, culminating in 1998 in Internet Explorer being completely integrated into the Windows 98 operating system. Another initiative in this direction was Microsoft’s announcement in 1996 of its ActiveX technologies, a set of tools for active content such as animation and multimedia for the Internet and the PC.

In wireless telecommunications, the work of the TIA resulted in 1991 in the first standard for digital cellular communication, the TDMA Interim Standard 54 (IS-54). Digital cellular was badly needed because the analog cellular subscriber market in the United States had grown to 10 million subscribers in 1992 and 25 million subscribers in 1995. The first tests of this technology, which was based on Time Division Multiple Access (TDMA), took place in Dallas, Texas, and in Sweden, and were a success. This standard was revised in 1994 as TDMA IS-136, which is commonly referred to as Digital Advanced Mobile Phone Service (D-AMPS).

Meanwhile, two competing digital cellular standards also appeared. The first was the CDMA IS-95 standard for CDMA cellular systems based on spread spectrum technologies, which was first proposed by QUALCOMM in the late 1980s and was standardized by the TIA as IS-95 in 1993. Standards preceded implementation, however; it wasn’t until 1996 that the first commercial CDMA cellular systems were rolled out.

The second system was the GSM standard developed in Europe. (GSM originally stood for Groupe Spécial Mobile.) GSM was first envisioned in the 1980s as part of the movement to unify the European economy, and the final air interface was determined in 1987 by the European Telecommunications Standards Institute (ETSI). Phase 1 of GSM deployment began in Europe in 1991. Since then, GSM has become the predominant system for cellular communication in over 60 countries in Europe, Asia, Australia, Africa, and South America, with over 135 mobile networks implemented. However, GSM implementation in the United States did not begin until 1995.

In the United States, the FCC began auctioning off portions of the 1900-MHz frequency band in 1994. Thus began the development of the higher-frequency Personal Communications System (PCS) cellular phone technologies, which were first commercially deployed in the United States in 1996.

Establishment of worldwide networking and communication standards continued apace in the 1990s. For example, in 1996 the Unicode character set, a character set that can represent any language of the world in 16-bit characters, was created, and it has since been adopted by all major operating system vendors.

In client/server networking, Novell in 1994 introduced Novell NetWare 4, which included the new Novell Directory Services (NDS), then called NetWare Directory Services. NDS offered a powerful tool for managing hierarchically organized systems of network file and print resources and for managing security elements such as users and groups.

In other developments, the U.S. Air Force launched the twenty-fourth satellite of the Global Positioning System (GPS) constellation in 1994, making possible precise terrestrial positioning using handheld satellite communication systems. RealNetworks released its first software in 1995, the same year that Sun Microsystems announced the Java programming language, which has grown in a few short years to rival C/C++ in popularity for developing distributed applications. Amazon.com was launched in 1995 and has become a colossus of cyberspace retailing in a few short years. Microsoft WebTV was introduced in 1997 and is beginning to make inroads into the residential Internet market.

Finally, the 1990s were, in a very real sense, the decade of Microsoft Windows. No other technology has had as vast an impact on ordinary computer users as Windows, which brought to homes and workplaces the power of PC computing and the opportunity for client/server computer networking. Version 3 of Microsoft Windows, which was released in 1990, brought dramatic increases in performance and ease of use over earlier versions, and Windows 3.1, released in 1992, quickly became the standard desktop operating system for both corporate and home users. Windows for Workgroups 3.1 quickly followed that same year. It integrated networking and workgroup functionality directly into the Windows operating system, allowing Windows users to use the corporate computer network for sending e-mail, scheduling meetings, sharing files and printers, and performing other collaborative tasks. In fact, it was Windows for Workgroups that brought the power of computer networks from the back room to users’ desktops, allowing them to perform tasks previously only possible for network administrators.

In 1992, Microsoft released the first beta version of its new 32-bit network operating system, Windows NT. In 1993 came MS-DOS 6, as Microsoft continued to support users of text-based computing environments. That was also the year that Windows NT and Windows for Workgroups 3.11 (the final version of 16-bit Windows) were released. In 1995 came the long-awaited release of Windows 95, a fully integrated 32-bit desktop operating system designed to replace MS-DOS, Windows 3.1, and Windows for Workgroups 3.11 as the mainstream desktop operating system for personal computing. Following in 1996 was Windows NT 4, which included enhanced networking services and a new Windows 95–style user interface. Windows 95 was superseded by Windows 98, which included full integration of Web services.

And finally, at the turn of the millennium came the long-anticipated successor to Windows NT, the Windows 2000 family of operating systems, which includes Windows 2000 Professional, Windows 2000 Server, Windows 2000 Advanced Server, and the soon-to-be-released Windows 2000 Datacenter Server. Together with Windows CE and embedded Windows NT, the Windows family has grown to encompass the full range of networking technologies, from embedded devices and personal digital assistants (PDAs) to desktop and laptop computers to heavy duty servers running the most advanced, powerful, scalable, business-critical enterprise-class applications.
