"That which can be digitized, will be digitized" (Tanguay, 1997). It appears that Tanguay's prediction is becoming a reality judging by the number of computer products appearing on the market and the incorporation of the computer into the education system. In fact, Tanguay predicts the "Virtual English Classroom" by the year 2001, in which students visit virtual cities and locations of the world through the Internet, and experience realistic encounters.
This prediction may be somewhat ambitious, but the computer has undoubtedly become a prevalent part of the education system. Many courses are being placed on-line, for the simple reason that it is more efficient: once in place, a course can be used again and again, is easy to revise, and need not limit enrollment so strictly. However, some of these courses have serious problems which have yet to be overcome.
English as a Second Language (ESL) and English as a Foreign Language (EFL) instruction has also begun to make use of the Internet, although for different purposes. The Internet is useful to ESL and EFL because of the ease of exchanging information and, most importantly, the ease of importing authentic language materials into the class. Innumerable resources are available in one place--at the click of a button--eliminating much of the need for teachers to search for, scrounge and save materials.
The Internet is, at its simplest, a large collection of interconnected computers. Some of these computers, called hosts, store information and make it available; others, called clients, connect to the hosts and request that information.
Software programs run continuously on the host, making it a server for different applications. For example, the FTP (File Transfer Protocol) server program is always running on the host machine, making information available, via FTP, to the client machine. The client must also have the appropriate software to request and receive the information--in this case a program such as WS_FTP or CuteFTP.
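For the technically curious reader, this request-and-receive exchange can be sketched in a few lines of Python (a scripting language); this is only an illustration of what a client program such as WS_FTP or CuteFTP does behind the scenes, and the server name and file below are hypothetical:

    # A minimal sketch of an FTP client session; the host and file
    # names are invented for illustration.
    from ftplib import FTP

    ftp = FTP("ftp.example.com")   # connect to the FTP server on the host
    ftp.login()                    # log in anonymously
    ftp.cwd("/pub")                # move to a directory on the host
    with open("readme.txt", "wb") as f:
        # request the file and save what the server sends back
        ftp.retrbinary("RETR readme.txt", f.write)
    ftp.quit()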
The host computer has numbered ports, each port corresponding to a service. Many ports exist, but there are seven main ones: electronic mail, FTP, Telnet, Usenet news, Gopher, WAIS (Wide Area Information Services), and the World Wide Web.
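As a small illustration, the standard port numbers behind several of these services can be looked up with Python. The sketch below asks the operating system's table of well-known services, so it assumes a system where that table is available (electronic mail travels over smtp, Usenet news over nntp, and the World Wide Web over http):

    import socket

    # Each well-known service name corresponds to a fixed port number.
    for service in ["smtp", "ftp", "telnet", "nntp", "gopher", "http"]:
        print(service, socket.getservbyname(service))

    # Typical output: smtp 25, ftp 21, telnet 23, nntp 119, gopher 70, http 80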
The words host and server are often used interchangeably, although this is not technically correct. Consider an analogy adapted from Engst, Low and Orchard (1995, p. 26). Your computer (the client) is in a restaurant waiting to be served. Finally, your computer is seated in the section it wants--perhaps http or FTP (the port). The waiter (the server) attending this section takes your order. The waiter then goes to the kitchen (the host), where all the food (the files) is stored, gets the order, and brings it to your computer. In summary, the server performs the task, and the host stores all the files.
To request information, you must know where the information is located. A Uniform Resource Locator (URL) gives the exact location of a directory or file. For example, take the URL http://www.quasar.ualberta.ca/~apaton/homepage.html. The http:// indicates that the requested information is a hypertext document. The next part, between the slashes, is the Internet address of the host. The next portion, ~apaton, is the directory in the path; the tilde is a server convention that points directly to the web directory of the user apaton, eliminating the need to type the entire path. The final part, homepage.html, is the actual file name.
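For readers who would like to see these parts separated programmatically, Python's standard urllib.parse module splits a URL along exactly these lines:

    from urllib.parse import urlparse

    parts = urlparse("http://www.quasar.ualberta.ca/~apaton/homepage.html")
    print(parts.scheme)   # http -- the protocol (a hypertext document)
    print(parts.netloc)   # www.quasar.ualberta.ca -- the Internet address
    print(parts.path)     # /~apaton/homepage.html -- the path and file name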
Each host machine has an Internet address consisting of four numbers, each less than 256 and separated by periods. This is certainly not an easy thing to remember, so the Domain Name Server program was developed to translate between the words now used in addresses and the numbers computers use. The corresponding domain-style address can have between two and five words, again separated by periods. Each word between the periods is called a domain.
Moving from right to left, each component of the domain address gives a more specific location. For the domain address quasar.ualberta.ca, the component ca is the top-level domain, and the most general. There are a limited number of general top-level domains in use: com (commercial), edu (educational), gov (government), mil (military), net (network resources), org (organization), and int (international organizations), along with two-letter country codes such as ca for Canada.
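The translation a Domain Name Server performs can be demonstrated in one line of Python, using the host from the example above. The numbers returned depend on the actual records for that host, so the output shown is illustrative only:

    import socket

    # Ask the resolver to translate the domain-style address into
    # the numeric Internet address computers use.
    print(socket.gethostbyname("quasar.ualberta.ca"))
    # prints four numbers, each less than 256, e.g. 129.128.98.11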
It cannot be determined precisely when the Internet began, because it evolved through a series of events and a string of technological advances. A common misconception is that the Internet was formed in response to the threat of nuclear war during the Cold War: as a security measure, it was desirable to have a medium through which information would reach its destination even if sections of the path were disrupted. This was not exactly the case, as is described by the key developers of the Internet (Leiner, Cerf, Clark, Kahn, Kleinrock, Lynch, Postel, Roberts, & Wolff, 1998).
The first paper on packet switching theory was published at the Massachusetts Institute of Technology (MIT) in 1961 by Leonard Kleinrock. Packet switching was certainly not proposed for the aforementioned defence purposes; rather, it was developed along a scientific line, as an efficient way to exchange information between institutions. RAND (Research and Development) also developed packet switching theory, in 1964, for national defence purposes; however, this work took place independently of MIT's research.
In 1965, Thomas Merrill and Lawrence Roberts connected two computers through the phone line, creating the first small wide-area network, and Roberts went on to develop the plan for ARPANET at the Advanced Research Projects Agency (ARPA). UCLA was chosen to be the first node on ARPANET, and Stanford Research Institute (SRI) the second; the first message between them was successfully sent in 1969. By the end of that year, two more nodes had been added--UC Santa Barbara and the University of Utah--to make a total of four hosts. More hosts were quickly added, and this original configuration grew into the network that we know today as the Internet.
The network has seen a few major changes since the first message was sent, though. In 1972, the Network Control Protocol (NCP) was implemented on ARPANET so users could develop their own applications. Also in 1972, the concept of the "open architecture network" was introduced. This meant that individual networks (LANs) could be designed and could function independently, according to the needs of their users. To accommodate the open architecture network, a new protocol was developed, called Transmission Control Protocol/Internet Protocol (TCP/IP), and ARPANET switched over to it in 1983. Shortly afterwards, in 1984, the Domain Name System was introduced, which provided hierarchical host names in order to accommodate the growing number of hosts. Thus, the Internet was now equipped to handle many different, and even new, applications (Leiner et al., 1998).
In 1979, Usenet (User's Network) was created between Duke University and the University of North Carolina (UNC). BITNET (Because It's Time Network) soon followed in 1981; it began as a cooperative network at the City University of New York, providing information through electronic mail to mailing lists (Zakon, 1997).
Many more events occurred to make the Internet what it is today. ARPANET ceased to exist in 1990. In 1991, WAIS, a set of databases containing information on numerous topics, was released (Engst, Low & Orchard, 1995). In the same year, the University of Minnesota released Gopher, a menu-based system for searching Internet information, and CERN released the World Wide Web. (CERN does not stand for anything, although it was once an acronym [Engst, Low & Orchard, 1995].) Agencies, governments and educational institutions began connecting to the Internet in 1993, and communities followed in 1994. By 1995, network providers were appearing, and Internet-related companies, such as Netscape, went public (Zakon, 1997). From that point on, the Internet has expanded rapidly, with new services, software and hardware coming on the market at an astonishing speed.
The Internet has been growing at an exponential rate, with the number of hosts approximately doubling each year. In 1992, the number of hosts passed 1,000,000, and by January 1, 1998, the count was 29,670,000 hosts and 2,450,000 web sites (Internet Valley, Inc., 1998). In 1979, there were only 3 Usenet sites; by 1994, there were about 190,000 (Zakon, 1997). Today, there are undoubtedly many more Usenet sites, and thousands of newsgroups in existence. Unfortunately, more recent figures were not available to me.
This is but a brief and terribly simplified overview of the Internet, but sufficient for the needs of most teachers wishing to use it. I felt it important to include this because (a) it makes the workings of the Internet less of a mystery, (b) students may ask questions you should know how to answer, and (c) in my opinion, it is important to know something about the tools you use to teach. For a more comprehensive look at using the Internet, I suggest visiting the site Zen and the Art of the Internet (http://www.iprolink.co.nz/zen-1.0/zen-1.0_toc.html or http://sunland.gsfc.nasa.gov/info/guide/Top.html). There are many links to excellent in-depth articles at Internet: Historia y Origenes Remotos (http://www.reuna.cl/consorcio/Internet/Rhistori.html). The site is written in Spanish, but the majority of the articles linked to it are in English. Another good site to visit has the transcripts of the video Understanding the Internet; it is located at http://www.pbs.org/uti/transcripts.html.