Last time we discussed the birth and growth of personal computers for the consumer. By 1987 it was clear that, in spite of IBM's major fumbles (betting on old processors, its own OS/2 operating system, and the MCA bus architecture), the IBM PC and PC-clone architecture was going to be around for a while. However, it was by no means clear that the "other guys" (Apple, Atari, Commodore, and others) were doomed. IBM clearly ruled the business world because of solid, if overpriced and boring, machines, combined with excellent software support in the form of WordPerfect (a tremendously successful word processor that clobbered WordStar and Microsoft's Word), Lotus 1-2-3 (an equally successful spreadsheet), dBase (a database program), and similar programs. However, most of these programs ran under DOS and were text-based. If one wanted to play games, use graphics, or compose music, an Apple or a Commodore Amiga was a better choice unless one had hundreds or thousands of dollars for specialty PC hardware and software. Yet by the early to mid-1990s, many of these smaller computer companies were failing to exploit their advantage over IBM; Apple was having internal political problems, while Atari and Commodore seemed unwilling or unable to invest in marketing and newer technologies.
Bill Gates was not stupid. He began developing his own graphical user interface (GUI), called "Windows," using at least some elements that he may have "borrowed" from Apple under a licensing agreement that was later disputed. However, there was a problem: it simply did not work well, at least not at first. Windows was announced in 1983, and early versions shipped in 1985 and 1987, but they were generally ignored by consumers and programmers alike.
It would be easy to say, as some do, that the reason was Microsoft's poor programming ability. This would probably be unfair. It would be better to say that Bill Gates (and Windows) was limited by the hardware that IBM had chosen to build its system around. While Atari, Apple, and Commodore had built graphics chips into their motherboards, IBM had not. In addition, the Intel 8088, 8086, and 80286 chips could not multitask very well, further limiting what Gates could do. (Multitasking allows several programs to run at once, and if one crashes the others should not.) The change, however, came with the Intel 80386 chip (introduced in 1985 and common in PC clones by 1987), the one that IBM did not fully appreciate. It had multitasking support and other features built in. Further, IBM and Microsoft were parting company over the OS/2 mess, freeing Gates to really push development of Windows.
In 1990, Microsoft shipped Windows 3.0, which offered a real graphical user interface, better memory usage, and much more usable graphics. It also worked, at least most of the time. Two years later Windows 3.1 shipped, which crashed less often. It sold a million copies within two months of release.
But while Windows 3.1 looked nice and ran nicer-looking versions of WordPerfect, Word, Excel, and other programs, IBM PC hardware had also progressed. By the early 1990s, a computer could be considered "multimedia ready" if it had at least a 16MHz 80386SX processor, 2MB of RAM, a 30MB hard drive, a 256-color VGA monitor, Windows 3.0 or better, a sound card, and a CD-ROM drive that took up no more than 40% of the CPU power when running(!). This was made possible by advances in graphics cards and the introduction, in 1989, of the Sound Blaster, probably the first widely available and affordable sound card for the PC.
However, one of the things that really pushed the IBM-clone multimedia market was…GAMES! Wolfenstein 3D (1992) by id Software was one of the first games to put the player in the shoes of a game character, seeing through that character's eyes. While Wolfenstein could run (sort of!) on an 80286 computer, it ran better on higher-end units with sound cards. The game grossed $3 million in its first year. id Software, not being stupid, released Doom in 1993. The same year Cyan's Myst appeared, published by Brøderbund. These games, which sold millions of copies, demonstrated that the PC could also be a game machine. They also inspired thousands of people to upgrade their CPUs, video cards, memory chips, and sound cards, something they would not have needed to do at all if they had just been typing business reports! Note, too, that the gamer market appears to have been largely unanticipated by most of the IBM PC clone-makers.
Another surprise driver of computers was e-mail. Prior to 1991 the NSF and DARPA generally limited e-mail access to research and educational groups. Individuals and companies wishing e-mail-like communication had to set up their own systems, and many did: FidoNet was one national system run by hobbyists from about 1984 to 1990, and the PLATO system (developed at the University of Illinois and commercialized by Control Data) was widely used by a number of universities from the mid-1970s through much of the 1980s. However, in 1991, NSF's restriction against private access was officially dropped, allowing private companies to connect to the NSFNET backbone that had grown out of ARPANET. Many people then set up small internet service providers, but they were mainly catering to the computer-literate. However, once again graphics and ease of use appeared as a major force, this time in the form of an under-appreciated online service called America Online, which got its start running online services for Apple II and Macintosh users as early as 1989 and a few years later was offering a version that worked with Microsoft Windows. AOL's graphics-laden program was widely mocked by "real" computer programmers and aficionados…but AOL ended up laughing all the way to the bank as demand for its service exploded:
1993: 500,000 members
1994: 1 million
4.5 million, 10 million, 15 million, 20 million, and 25 million in the years that followed
(Source: http://www.corp.aol.com/whoweare/history.html)
This, however, leads us to the other major innovation: the advent of the public internet.
The internet began as a program of the Defense Advanced Research Projects Agency (DARPA) during the 1960s. The reason usually given for DARPA's interest in a computer network was data communications in the event of nuclear war. Key to this system were its seemingly random multiple links between various sites; this way, in case several cities were destroyed, there would be alternative routes for data traffic. By 1972 forty nodes of ARPANET had been built. To supply data to test the system, research universities (which could generate tons of data) were allowed to use the system to share their data with other universities. Soon people figured out how to send personal messages (e-mail) over the system, something the system had not been designed to do. Eventually TCP/IP was developed around the idea that a common protocol should allow different brands and types of computers to talk to each other over the network. In 1984 the Domain Name System (DNS) was introduced to hide the hard-to-remember numeric IP addresses behind friendlier names (207.46.245.156 becomes microsoft.com, for example).
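To make the DNS idea concrete, here is a minimal sketch in Python of the same name-to-address lookup that happens behind the scenes every time you type a web address. (This is a modern illustration rather than the 1984 software; "example.com" is just a placeholder hostname, and the addresses returned will vary.)

    import socket

    # Forward lookup: ask DNS which IP address is registered for a hostname.
    ip = socket.gethostbyname("example.com")
    print("example.com resolves to", ip)

    # Reverse lookup: ask which name, if any, maps back to that address.
    # Many addresses have no reverse (PTR) record, so this can fail.
    try:
        name, aliases, addresses = socket.gethostbyaddr(ip)
        print(ip, "maps back to", name)
    except socket.herror:
        print("no reverse DNS entry for", ip)

The point is simply that people remember names, while the network routes packets by number; DNS is the lookup service in between.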
Around 1990-91 DARPA wound down its participation in the network and turned major operations over to the National Science Foundation, with contributions also coming from NASA and the Department of Energy. This is also when the national "Information Superhighway" idea was floated and funded through the High Performance Computing Act of 1991, championed by then-Senator Al Gore and later promoted heavily by the Clinton-Gore administration.
Okay, so now we have some sort of national computer network up and running, and people are using e-mail and sharing research data. Files are moving around with network services like FTP (File Transfer Protocol), Usenet ("newsgroups"), and others. Universities are also experimenting with making libraries of information available via systems like Gopher, VERONICA, Minerva, and others. However, to use most of these, you need to have some idea of what you are looking for when you start…and most everything is either plain text or "ASCII art." What is missing is a web server and a browser.
The concept of a web came from a man named Tim Berners-Lee, working at CERN, the European particle physics laboratory in Geneva, Switzerland. Around 1989 he was thinking about the way people think and read. Often they don't really pick up a book and read it cover-to-cover; instead, they flip around the way they do with a magazine. He became interested in the notion of hypertext (a term coined by Ted Nelson in 1965), in which a string of words can be highlighted and linked to another document. Within two years he had coded a simple web browser and web server and had coined the term "World Wide Web"; CERN later made the underlying technology publicly available for free.
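At its core, what that first browser did was ask a server for a document and read back the text, with the hypertext links marked inside it. As a rough modern sketch of that request-and-read cycle in Python (example.com is just a placeholder demonstration site, not anything from the original project):

    import urllib.request

    # Request a single page over HTTP and read back the raw HTML,
    # which is essentially what a browser does before rendering it.
    with urllib.request.urlopen("http://example.com/") as response:
        html = response.read().decode("utf-8")

    # Hyperlinks appear in the text as <a href="..."> tags pointing to
    # other documents; following a link is just another request like this.
    print(html[:300])

Everything else a browser does (graphics, layout, bookmarks) is built on top of this simple fetch-a-document step.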
A group of researchers at the National Center for Supercomputing Applications (NCSA) at the University of Illinois heard about the web project. One of them was Marc Andreessen. According to reports he was actually not much of a programmer; instead he was more interested in starting a business of some sort. However, he did have a friend, Eric Bina, who was very good at coding. Together they produced the Mosaic web browser, with Bina doing the actual programming and Andreessen doing promotion and handling customer support and bug tracking. (Hmmm… where have we seen this pattern before…?) The browser worked pretty darn well, and Andreessen got most of the credit for it, even though he didn't write much of the code.
What happens next is a little confusing. In 1994 Andreessen teamed up with Jim Clark, one of the founders of Silicon Graphics. They start Mosaic Communications, change the name to Netscape, rewrite the code for the browser, and hire most of the people who had worked on the project at NCSA. NCSA was annoyed by all the people jumping ship but didn't really have an interest in pursuing the matter, so it licensed its web technology to a company named Spyglass. This marks the start of the browser wars, with two small companies fighting it out…because larger companies, including Microsoft, think that the web is an academic toy. After all, how can you make money off a program that you give away for free, right? Microsoft licenses the Mosaic code from Spyglass but doesn't pay much attention to it. After all, in 1994 there are only about 3,000 websites.
In 1995 Netscape has one of the most spectacular IPOs in history. Even more importantly, proprietary online services like AOL, CompuServe, and Prodigy begin offering gateways to the internet and World Wide Web, allowing their millions of users to "surf the internet" for the first time, to look at one of some 25,000 websites! By 1995-96, Bill Gates realized that by dismissing the Web as a toy, he had made an even worse mistake than introducing "Microsoft Bob." But how could he catch up, when Netscape is the browser of choice and pulling ahead, and Microsoft only licenses its browser from Spyglass?
Bill's brutally effective solution: take the browser code licensed from Spyglass, a company with only about 20 programmers, and assign roughly 1,000 Microsoft programmers to "help!" The resulting browser, Internet Explorer, improves rapidly. Gates also starts bundling Internet Explorer with Windows 95 and its successors (Windows 98, ME, 2000, etc.), as well as including fairly good web server software with Windows NT. By 1999, Internet Explorer takes the lead over Netscape, helped in part by Netscape's being bought out (and mismanaged) by AOL, and also by Netscape's attempt to sell its browser and server programs (about $30 for the browser and $300 to $1,300 for server software) while Microsoft was giving both away for free.
So…where are we now? Currently Microsoft’s Internet Explorer program rules the web browser world on most personal computers. Netscape/Mozilla is a niche player—though it is the browser of choice on Linux machines, and is now given away for free. We see the same thing with web servers, which send the data to the browser: the choice is Microsoft’s IIS (bundled with NT and later server operating systems), or Apache, which runs on Unix and Linux machines and is free. Here it appears that Apache has the lead over Microsoft. (Netscape’s server software has almost vanished.)
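For a sense of how little machinery "sending the data to the browser" really requires, here is a minimal web server sketch using Python's standard library. (This is only an illustration of the basic request/response loop; the port number and setup are arbitrary, and real servers such as Apache and IIS add configuration, security, and performance on top of this.)

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Listen on port 8000; each browser request (e.g. GET /index.html) is
    # answered by streaming the matching file from the current directory.
    server = HTTPServer(("", 8000), SimpleHTTPRequestHandler)
    server.serve_forever()

Point a browser at http://localhost:8000/ while this runs and it will list and serve whatever files sit in that directory.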
Attention has recently turned more to computer operating systems, with Linux challenging Microsoft's Windows products. So far, however, Linux has remained a niche player in the consumer market, though it is competitive for servers. More attention is now directed at the "search engine wars," with Google currently dominant. However, Yahoo! and Microsoft would each like to dominate this arena as well, and there are many smaller players, such as AltaVista, Lycos, and Harvest.
Of course, there is a big question: What is the World Wide Web really good for, anyway?
Wasting time! For computer hobbyists and non-computer people alike, web surfing, window shopping, and other activities can consume large amounts of time.
Gaming (another form of time-wasting?), now with networked games that let people across the net play against each other. The MUD ("Multi-User Dungeon") was an early form of this.
Communication, mainly via e-mail and websites but more recently through instant messaging and weblogs, or "blogs" (such as LiveJournal). Other, older modes include newsgroups and IRC ("Internet Relay Chat"). The web is also a favorite medium for advocates of odd causes and interests to promote themselves, as it costs next to nothing to create a website potentially visible to millions of people.
One very powerful early model was “B2B,” where businesses would use the internet for wholesale buying and selling. While this received a tremendous amount of attention at the time, it did not work as well as many hoped.
Attention then shifted to "B2C," or Business-to-Consumer. The results here have also been uneven, with unexpected winners and losers. The biggest hits appear to be:
eBay (whose amazing success with online auctions was completely unexpected)
Financial services (banking, stock trading, ValueLine and other financial information services, stock quotes, etc.)
“Vice” services such as online gambling and especially online pornography.
Online dating services also appear to be on the rise, with services like match.com, Yahoo! Personals, and eHarmony growing in popularity.
Finally, there are still a few weirdos out there who use the web for library and academic research… what the web was originally intended for in the first place! "Online learning" is also a rapidly growing field, and is viewed by many as having great potential for both students and educational institutions. Students, especially working students, gain opportunities that work schedules and distance formerly put out of reach, while schools can market their services to people who would not otherwise be able to attend. However, this area is still problematic as schools figure out the best ways to teach online, and as accrediting bodies try to figure out how best to grant credit for online classes. Schools that initially thought online learning would be a quick way to make money, and students who thought it would be an easy way to earn credit, have both been surprised at how difficult it is to make these systems work. Completely online degree programs remain somewhat rare as a result, and many students have trouble finding the time to finish their degrees.
This is not to say that the Great Internet Gold Rush has been without bumps and setbacks. Here are some of the more notable downsides that have accompanied the growth of the internet and Web:
The most recent was the "Dot Bomb" fiasco of 2000-2001, when many of the internet and computer companies that had overbuilt on credit collapsed. When the bottom fell out, many newly minted computer professionals (and even some old-timers) found themselves laid off from highly paid jobs.
Academic cheating (Grrrr!!!) has become a greater problem. Many students, either through gross ignorance or laziness, have taken to cutting and pasting material from websites into their papers. Others have taken to purchasing term papers from 'termpapers.com'-style sites. While these are not new problems, ready access to the 'net appears to have increased the number of occurrences.
A similar problem has been copyright violation, as people upload and download music (especially MP3s) via networks such as Napster (now shut down in its original form), Kazaa, LimeWire, and others. Many are concerned that the copying of DVD movies will be next.
Personal cheating may be on the rise as well, as people in unhappy relationships find it easier to contact other people in the same situation, as portrayed in the 1998 movie "You've Got Mail."
Many, especially parents, are concerned that the internet may be an avenue for predators, such as child molesters, to contact children and lure them away. Others are concerned that the web offers children easy access to violence and pornography.
One of the greatest potential problems may be data-mining by either government or industry. In 2002 and 2003 the U.S. government found itself embroiled in a controversy when DARPA (the same agency that sponsored the core of the internet) began a program called "Total Information Awareness." The goal of this program was to create a computer system that would automatically monitor credit card transactions, internet transactions, e-mails, telephone calls, airline reservations, and medical and police records, looking for patterns of suspicious activity in order to prevent terrorist attacks in the United States. Congress defunded the program in 2003 because of public concerns about privacy and about who would have access to the information. However, research is almost certainly being quietly continued both by the U.S. government (see http://www.csmonitor.com/2004/0223/dailyUpdate.html?s=entt ) and by private companies. (Read the user license for the Google Toolbar closely, for example!) The trick will be to balance individual privacy with the needs of national security.