Documentation

Windows 2000 Server

Windows 2000 Professional, Windows 2000 Server, Windows 2000 Advanced Server and Windows 2000 Datacenter Server make up the Windows 2000 family of products. These products feature server management, security, file systems, networking, multiprocessing, clustering and user interface improvements. Active Directory is also featured in this family of products, although not in Windows 2000 Professional. Windows 2000 Server offers easier manageability, greater reliability and support for new devices. The four layers of the Windows 2000 architecture are the Hardware Abstraction Layer (HAL), the Kernel layer, System Services and Environment Subsystems.

Windows 2000 operates in Kernel Mode and User Mode. Several decisions must be made prior to installing Windows 2000. These decisions include choosing the type of network, disk partitions, file system, licensing, hardware requirements, network details, type of installation, installation media and the Windows 2000 components to be installed. A program called Setup is used to perform a Windows 2000 Server installation. Windows 2000 Server can be installed as an upgrade from an earlier Microsoft operating system, as an attended installation or as an unattended installation. The Active Directory feature of Windows 2000 Server stores resources as objects. It has both a logical and a physical structure. The logical structure consists of domains, trees, forests, organizational units and the global catalog. The physical structure consists of domain controllers and sites.

Disks and File Systems

A hard disk includes tracks, sectors and clusters. Two types of disks found in Windows 2000 are basic and dynamic. Additional disks can be added and hard disks can be removed. Basic disks contain basic volumes and dynamic disks contain dynamic volumes. A basic disk can be converted into a dynamic disk. Dynamic volumes can be created from a dynamic disk. Dynamic volumes may be simple volumes, spanned volumes, striped (RAID-0) volumes, mirrored (RAID-1) volumes or RAID-5 volumes.

File systems supported by Windows 2000 include FAT, FAT32, NTFS 4.0 and NTFS 5.0. The NTFS 5.0 file structure includes a Windows 2000 boot sector, a Windows 2000 Master File Table, Master File Table attributes and NTFS volumes. Careful thought must be given to upgrading to Windows 2000 from earlier operating systems and to multi-booting with Windows 2000, because issues such as hardware compatibility and NTFS implementation may arise. Security permissions are a feature of the NTFS file system used to control user actions on file and folder objects. NTFS offers Windows 2000 a set of predefined permissions.

Disk quotas are another feature of the NTFS file system. This feature allows limitations to be placed on the disk space used and can be enabled on NTFS volumes. FAT volumes do not support disk quotas. NTFS also features folder and file encryption and compression. Compression allows data to occupy less space than on uncompressed volumes. In Windows 2000, compression is done using the Compact utility or through the file or folder properties sheet. A public key-based cryptographic scheme is used by the Encrypting File System to encrypt and decrypt files or folders. Windows 2000 Server also features the Distributed File System and the File Replication Service. The Distributed File System combines file systems that are spread over a network into a single file system. The File Replication Service distributes multiple synchronized copies of data to multiple servers on a Windows 2000 network.

Hardware Devices

Computer hardware includes all physical parts that can be touched. Internal and external hardware exists. In Windows 2000, hardware is installed and uninstalled using the Add/Remove Hardware Wizard. This wizard is used to add and troubleshoot devices as well as uninstall and unplug devices. Windows 2000 also features the Plug and Play technology. This feature is used to automatically detect and configure devices. In Windows 2000, hardware devices are managed through the Add/Remove Hardware Wizard, Device Manager snap-in, driver signing, hardware profiles and service packs. Device Manager is used to obtain detailed information about installed hardware and troubleshoot problems with devices.

Driver signing is a feature of Windows 2000 that allows unsigned drivers to always be installed or never be installed. It can also be set to prompt the user at installation time to allow or disallow driver installation. At startup, Windows 2000 loads drivers for devices according to a specified hardware profile. Microsoft supplies service packs that include a collection of patches, bug fixes and minor upgrades. A UPS is a power source that allows work on a computer to continue even through a power outage. It uses a serial port link to communicate notification messages to the computer. The Power Options dialog box is used to install a UPS.

Network Adapters

A network adapter is a device providing connectivity between a computer and the network. Some network adapters are Plug and Play. Windows 2000 automatically detects this type of network adapter and loads the driver for it. Device Manager is used to verify that the network adapter is installed correctly. The Properties sheet of the network adapter is used to configure settings such as performance tuning. Additional information on the network adapter, such as whether the adapter is working properly, can also be obtained from this properties sheet.

A network adapter can be disabled rather than removed. However, prior to disabling the network adapter, its drivers should be uninstalled. The protocols, network connectivity services and network clients that must be installed to enable network connections are set through the Local Area Connection properties.

User Accounts and Groups

A user account requires a user name and a password. Local user accounts, domain user accounts and built-in user accounts are the three types of user accounts in Windows 2000. The Local Users and Groups snap-in is used to create local accounts. The Active Directory Users and Computers snap-in on a domain controller is used to create domain accounts. A user account can be searched for and located in Active Directory by its set of attributes. Managing user accounts involves resetting passwords, disabling or enabling user accounts, deleting user accounts and renaming user accounts.

A user profile maintains customized user settings. It is applied when a user logs on to the computer. Three types of user profiles are roaming user profiles, mandatory user profiles and local user profiles. For easier network administration, a collection of user accounts is placed into a group. Domain-level groups are created on Windows 2000 domain controllers. Local groups are created on all Windows 2000 computers other than domain controllers. Domain-level groups include domain local groups, global groups and universal groups.

Group Policies

Group policies are a set of configuration settings that are applied to a computer hierarchically at start-up. A parent policy is applied before a child policy. However, if the two policies clash, the child policy overrides the parent policy. Group policies are configured according to user settings and computer settings. These policies are applied to computers and Active Directory objects. They are used to enhance, restrict and secure a user environment. Group Policy Objects contain group policy settings. A group policy is created through the Active Directory Users and Computers snap-in or the Active Directory Sites and Services snap-in. Permissions are assigned to a group policy in the same way as they are assigned to files or folders. Permissions assigned to a user or group pertain to the actions allowed on a Group Policy Object.
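
The parent-before-child application order can be sketched in Python. This is an illustration only, not a real Group Policy engine; the setting names and values below are invented for the example.

```python
# Illustrative sketch: policies are applied from parent to child, so when the
# same setting appears in both, the child's (later) value wins.

def apply_policies(policy_chain):
    """Apply policies in order from parent to child; later values override."""
    effective = {}
    for policy in policy_chain:  # ordered parent -> child
        effective.update(policy)
    return effective

# Hypothetical settings: the domain (parent) policy hides the Run command;
# the OU (child) policy re-enables it and adds a wallpaper setting.
domain_policy = {"hide_run_command": True, "password_min_length": 8}
ou_policy = {"hide_run_command": False, "wallpaper": "corp.bmp"}

effective = apply_policies([domain_policy, ou_policy])
print(effective["hide_run_command"])  # False: the child policy overrides
```

Settings the child does not mention, such as the password length here, are inherited from the parent unchanged.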

Network Protocols and Services of Windows 2000

Windows 2000 protocols include TCP/IP, NWLink, AppleTalk, NetBEUI, IrDA, Data Link Control and Asynchronous Transfer Mode. A four-layer hierarchy defines the Windows 2000 TCP/IP suite. IP is the Internet Protocol. A MAC address is a mandatory, globally unique and permanent address allocated to a network adapter. It is different from the IP address. The DHCP service assigns an IP address and supplies additional configuration information to a requesting computer. A DHCP scope defines a pool of valid IP addresses that can be leased to a client. DHCP options can be set at three levels: server, scope and client reservation. DNS maps network names to network addresses. WINS maps NetBIOS names to network addresses. Both services provide name resolution for computers. Three types of DNS zones are standard primary, standard secondary and Active Directory-integrated. Windows 2000 Server features dynamic DNS for automatic updates of DNS records. The DNS snap-in and nslookup are used for troubleshooting the DNS service.
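
The name-to-address mapping that DNS performs can be demonstrated with the resolver built into Python's standard library. This is a generic illustration of name resolution, not of the Windows 2000 DNS service itself.

```python
# DNS maps host names to IP addresses. socket.getaddrinfo consults the
# system resolver; this sketch collects the unique addresses a name maps to.
import socket

def resolve(hostname):
    """Return the sorted unique IP addresses a name resolves to."""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return []  # the name could not be resolved
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))  # typically includes 127.0.0.1 and/or ::1
```

A name that cannot be resolved simply yields an empty list rather than raising an error, which is convenient for troubleshooting scripts.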

Routing and Remote Access Services in Windows 2000

Routing and Remote Access Service, or RRAS, provides remote connectivity to users on Windows 2000. Windows 2000 RRAS services include Remote Access Server (RAS), Network Router, Virtual Private Network (VPN) Server and Internet Connection Server. RRAS features Active Directory integration, multiple authentication protocols, Remote Access Policies, tunneling protocols and account lockout policies. Authentication protocols included with the Remote Access Server are PAP, CHAP, MS-CHAP, MS-CHAP v2 and SPAP. These protocols enhance remote access security. Security is also increased through the use of data encryption, dial-in permissions, account lockout, callback, secure cards and secure hosts. RAS management is done through the RRAS snap-in.

A Virtual Private Network (VPN) connection is created between two Local Area Networks through a public network such as the Internet. A VPN connection can also connect a remote user to a Local Area Network. A VPN connection requires the use of a VPN server, a VPN client, a tunnel, tunneling protocols, tunneled data and a public network. VPN protocols include the Point-to-Point Tunneling Protocol (PPTP), the Layer Two Tunneling Protocol (L2TP) and IPSec. VPN server installation is done through the RRAS snap-in. Managing a VPN involves managing users, managing addresses and managing authentication.

Printing in Windows 2000

Network printing involves remotely accessing a printer across a network from a computer. A network printing environment includes a print server, a print device, a printer and a printer driver. Factors such as the printing load, the operating system for the print server, the amount of RAM required for the print server and the amount of disk space required for the print server affect the configuration of such an environment. The Add Printer Wizard is used to install a printer. This printer can be shared to make it available to users on a network. Printer management includes assigning permissions to users; pausing, resuming and canceling print jobs; changing the ownership of a printer; redirecting documents; setting the priority, time and notification for documents; assigning paper trays; and setting a separator page. Printers can be administered through a web browser. A printer pool is used to decrease the amount of time required by the print server to service print requests. The priority of a printer can be set to finish print jobs more quickly.

Server Monitoring and Tuning

Windows 2000 supplies tools and services used to optimize and monitor performance. Such tools include Task Manager, Performance Console, Network Monitor, SNMP, Check Disk utility and Disk Defragmenter utility. System processes are monitored through Task Manager. This tool also displays the resources each process uses. The Applications tab of Task Manager is used to end a task, switch to a task and start a new task. The Processes tab of Task Manager displays the processes that are running on a computer. It is also used to end a process or process tree. Performance Console is used to monitor and log usage of disks, memory, CPU and network. Performance Console includes the System Monitor snap-in and Performance Logs and Alerts snap-in. Network Monitor is used to capture and display network traffic. SNMP is used to monitor, control and manage network devices. The Check Disk utility checks a hard disk for errors and corrects them. The Disk Defragmenter utility regains disk performance by rearranging files to occupy contiguous clusters on the disk. This process is referred to as disk defragmentation.
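
A minimal example of the kind of check these monitoring tools automate is watching free disk space. The sketch below uses Python's standard library rather than any Windows 2000 tool, and the 10% threshold is an arbitrary choice for illustration.

```python
# Monitoring sketch: report the percentage of free space on the volume
# containing a given path, in the spirit of a Performance Console counter.
import shutil

def free_space_percent(path="/"):
    """Return free space on the volume holding `path`, as a percentage."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.free / usage.total

# Alert when free space drops below a threshold (10% here, chosen arbitrarily).
if free_space_percent("/") < 10.0:
    print("WARNING: low disk space")
```

In practice such a check would be scheduled to run periodically and its output logged, much as Performance Logs and Alerts records counter values over time.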

Security Management

Two types of Windows authentication are Interactive Logon and Network Logon. Windows 2000 authenticates using the Kerberos protocol. This default authentication protocol uses tickets to authenticate users. Data on a digital certificate is used to validate the identity of a user, computer or service. Windows 2000 uses local and non-local security policies applied to different areas. Active Directory Users and Computers is used to configure these security policies and apply them to Group Policy Objects. A security template having predefined security configurations can also be used. The settings on a security template can be reconfigured to suit changing needs. An audit policy is used to track particular events. When auditing is set up on a folder, the files and subfolders within that folder inherit auditing. The Event Viewer snap-in is used to monitor and log events that occur on the computer. Event Viewer includes an Application log, a System log and a Security log.

Backup and Fault Tolerance

Data can be backed up on a Digital Audio Tape (DAT), a Digital Linear Tape, a Magneto Optical disk and a CD-R. Windows 2000 supports a differential backup, an incremental backup and a full backup. The Windows 2000 Backup utility is used to back up data, restore data and create emergency disks. An operating system that is fault tolerant is able to run in spite of error conditions. Fault tolerance is achieved in Windows 2000 using RAID and mirroring. A basic disk must first be converted to a dynamic disk to create RAID and mirrored volumes. Mirroring uses two disks to create a mirrored volume. Two copies of the same data are maintained on a mirrored volume. RAID-5 requires a minimum of three hard disks. Data and parity information is written across all disks. If a single disk in the array fails, parity information is used to recreate the data.
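
The parity recovery that RAID-5 performs rests on the XOR operation: the parity block is the XOR of the data blocks in a stripe, so any single lost block equals the XOR of everything that survives. The two-byte blocks below are invented for the demonstration; a real stripe uses much larger blocks across at least three disks.

```python
# RAID-5 sketch: parity = XOR of the data blocks, so one lost block can be
# recreated by XOR-ing the surviving blocks with the parity block.

def xor_blocks(*blocks):
    """Byte-wise XOR of equal-length byte blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

d0, d1 = b"\x0f\xf0", b"\x55\xaa"   # two data blocks in a stripe
parity = xor_blocks(d0, d1)          # parity block written to a third disk

# Simulate losing d1: it is recoverable from the surviving block and parity.
recovered = xor_blocks(d0, parity)
assert recovered == d1
```

This is also why RAID-5 tolerates only a single disk failure: with two blocks missing, the remaining XOR equation no longer has a unique solution.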

Internet Information Services 5.0

Web publishing involves creating web site content and uploading it to an existing web site. Internet Information Services 5.0 (IIS 5.0) is used to publish information on the Internet or an intranet. IIS 5.0 allows a web server, an FTP server, an SMTP server and an NNTP server to be configured. In Windows 2000 Server, IIS 5.0 is installed by default. Setting up a web environment consists of creating web sites, creating home directories, creating virtual directories, redirecting requests, creating virtual sites and using dynamic web pages. Common IIS 5.0 administration tasks include the configuration of site properties, performance properties, ISAPI filters, home directory properties, document properties and directory security properties. WebDAV is used to set up a publishing directory in which multiple developers can change files in one place.

Performance Tuning of IIS 5.0

Identifying and fixing trouble areas in a server is a continual process. Trouble areas include hardware, web applications and the network. Some common problems in IIS 5.0 include access violation errors by inetinfo.exe and HTTP errors. IIS supplies the Web Server Certificate Wizard, the CTL wizard and the Permissions Wizard to help configure security settings. IIS 5.0 security is explained in reference to authentication, access control, encryption, certificates and auditing. The performance of a server is measured by comparing the actual value of related performance counters with their ideal values.

Tools such as System Monitor, Event Viewer, Task Manager and Network Monitor are used to monitor the performance of a server. The Microsoft IIS Lockdown tool is used with IIS to turn off features that are unnecessary for the functioning of the server. Disk Management in Windows 2000 is used to create, extend or mirror a volume. ISAPI, the Internet Server API, is a set of Windows APIs used to help write web server applications. Events generated during the processing of web requests are answered by a special ISAPI DLL file called an ISAPI filter. Specific network event-related entries are recorded in log files. IIS supports the W3C Extended Log File Format, the Microsoft IIS Log Format and the NCSA Common Log File Format.
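
The W3C Extended Log File Format is simple enough to read with a short script: a "#Fields:" directive names the columns, and each non-comment line records one request. The sketch below is a simplified reader, and the sample log lines are invented for illustration.

```python
# Simplified reader for the W3C Extended Log File Format: the "#Fields:"
# directive names the columns; other "#" lines are directives to skip.

def parse_w3c_log(lines):
    fields, records = [], []
    for line in lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]  # column names after the directive
        elif line.startswith("#") or not line.strip():
            continue  # other directives and blank lines carry no records
        else:
            records.append(dict(zip(fields, line.split())))
    return records

sample = [
    "#Fields: date time c-ip cs-method cs-uri-stem sc-status",
    "2000-05-01 12:00:01 10.0.0.5 GET /default.htm 200",
]
records = parse_w3c_log(sample)
print(records[0]["cs-uri-stem"], records[0]["sc-status"])
```

A real reader would also handle fields containing encoded spaces and directives such as "#Date:", but the column-naming mechanism is the essential idea.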

Active Directory

Although Windows 2000 makes use of the same core technologies as Windows NT 4.0, Windows 2000 also has many new features. Based on NT technology, Windows 2000 is a layered and modular operating system. The two major layers of the Windows 2000 architecture are user mode and kernel mode. Windows 2000 Professional, Windows 2000 Server, Windows 2000 Advanced Server and Windows 2000 Datacenter Server are all part of the Windows 2000 operating system family, sharing essentially the same architecture. The Windows 2000 user mode layer performs the function of an application support layer. Examples of the application environments this layer supports are OS/2, POSIX and Win32.

The Windows 2000 kernel mode layer has access to system data and hardware. Components of this layer are executed in an isolated area of memory that has direct access to that memory. In the kernel mode layer, the microkernel is the core of the operating system. The microkernel manages thread scheduling and multitasking. Windows 2000 is a 32-bit operating system. It can address approximately 4 GB, or 2^32 bytes, of memory. Two gigabytes are used by the operating system; this is known as kernel memory. The other two gigabytes are reserved for applications; this is known as user memory. Seldom-used memory pages are moved to the hard disk by the Virtual Memory Manager (VMM) as part of physical memory management. To ensure that sufficient memory is supplied to the operating system for its operations, a certain amount of physical memory is reserved for kernel processes. This physical memory is allocated to the kernel in the form of paged pool memory and non-paged pool memory.
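
The address-space figures above can be verified with a line of arithmetic: a 32-bit address space covers 2^32 bytes, which is 4 GiB, split evenly into 2 GiB of kernel memory and 2 GiB of user memory.

```python
# Quick check of the 32-bit address space split described above.
address_space = 2 ** 32   # bytes addressable with 32-bit addresses
gib = 2 ** 30             # one gibibyte in bytes

print(address_space // gib)        # 4 (GiB total)
print(address_space // 2 // gib)   # 2 (GiB each for kernel and user memory)
```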

Fundamental network operating system functions are supplied by a directory service. The directory service helps in defining and maintaining an organization's network infrastructure, performing system administration and controlling a user's overall experience of a company's information systems. It also helps simplify management by providing a single, consistent point of management for users, applications and devices. A directory service helps strengthen security and provides users with a single point of logon for access to network resources. Tightly coupled with the management and security mechanisms of the operating system, a directory service provides the tools for managing security for internal desktop users, remote dial-up users and external e-commerce customers. Active Directory is the directory service for Microsoft Windows 2000 Server.

Active Directory Concepts

Active Directory was introduced as the directory service for Windows 2000. It provides a modeling structure designed to meet the requirements of any organization. This information structure has logical and physical components. A global catalog is a searchable master index with information about all objects in Active Directory. It enables users to find Active Directory information regardless of the physical location of the data. The Active Directory schema contains definitions and properties of Active Directory objects. These objects are recognized by name. Naming conventions used by Active Directory include distinguished names, relative distinguished names, globally unique identifiers and user principal names. Active Directory offers the benefits of scalability, extensibility, interoperability, security, data replication, locator services and ease of administration. Organizations currently using a Microsoft Exchange Server directory can implement Active Directory by using directory synchronization to populate Active Directory with Exchange Server user attributes and objects.
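
A distinguished name is a comma-separated chain of attribute=value pairs, ordered from the most specific component to the root of the namespace. The sketch below parses the simple case; it ignores escaped commas, and the example DN is invented for illustration.

```python
# Simplified parser for a distinguished name such as
# "CN=Jane Doe,OU=Sales,DC=example,DC=com". Escaped commas are not handled.

def parse_dn(dn):
    """Split a DN into (attribute, value) pairs, most specific first."""
    pairs = []
    for rdn in dn.split(","):
        attr, _, value = rdn.strip().partition("=")
        pairs.append((attr, value))
    return pairs

dn = "CN=Jane Doe,OU=Sales,DC=example,DC=com"
parts = parse_dn(dn)
# The relative distinguished name is the first component: ("CN", "Jane Doe").
print(parts[0])
```

The relative distinguished name is just the leading component, which is why it only has to be unique within its parent container rather than across the whole directory.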

DNS and Active Directory

Active Directory domains in Windows 2000 use DNS naming conventions. Active Directory resources are located through the use of DNS services. Upon starting up, a domain controller registers its identity and the services it provides with the DNS server database using SRV resource records. These SRV resource records in DNS can then be used to locate Active Directory domain controllers. Active Directory-integrated DNS zones feature secure dynamic updates and incremental zone transfers. Active Directory cannot function properly without the DNS infrastructure in place, because Active Directory must use DNS services to locate domain controllers. Windows 2000 provides utilities for monitoring and troubleshooting DNS. These tools include nslookup, ipconfig, Event Viewer and a DNS log.
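
SRV record names follow the _service._protocol.domain pattern; for example, a domain controller registers an LDAP service record under _ldap._tcp.&lt;domain&gt;. The helper below only builds the record name, and the domain name used is illustrative.

```python
# Build an SRV record owner name of the form _service._protocol.domain.

def srv_name(service, protocol, domain):
    """Return the DNS owner name for an SRV record."""
    return f"_{service}._{protocol}.{domain}"

# A domain controller for the (illustrative) domain example.com would be
# found through a lookup of this name:
print(srv_name("ldap", "tcp", "example.com"))  # _ldap._tcp.example.com
```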

Implementing Active Directory

Domains and organizational units are the main elements of an Active Directory infrastructure. Additional domains, and OUs within a domain, can be created to expand the infrastructure. Active Directory domain name spaces can be designed by geography, by organizational structure or as a hybrid of both. The Active Directory database stores information about objects such as users, computers, printers and shared folders in a centralized location on the network. A completed installation of Active Directory may be verified using various methods. After installation, Active Directory can be integrated with a DNS zone so that the DNS databases are stored and replicated by Active Directory. Active Directory-integrated zones allow SRV records on DNS servers to be updated automatically. The nested groups and universal security groups features of Active Directory can only be used when the domain is in native mode.

Active Directory Administration Tools

Windows 2000 provides the Microsoft Management Console (MMC) for management purposes. The MMC can be configured and customized according to the responsibilities of the administrator. In addition, Windows 2000 offers a set of pre-configured MMC consoles for viewing and managing Active Directory objects. The Windows 2000 Task Scheduler is used to perform many administrative tasks. Tasks can be run at scheduled intervals or unattended when computer resources are idle. The "runas" command is used for executing administrative tasks with the administrator account while logged on with a standard user account. Active Directory can be administered remotely using the Windows 2000 Administration Tools. These tools can be installed on any Windows 2000 computer that is not a domain controller to manage Active Directory remotely from that computer.

Understanding Active Directory Schema

The Active Directory schema specifies the structure of the various types of objects that are stored in Active Directory. The schema also controls the types of objects that can be created in an Active Directory. Definitions of classes and attributes are stored in components of this schema. Structure rules, content rules and syntax rules control how the classes and attributes are used, the kinds of values they can hold and the relationships they have with each other. The classes and attributes are themselves treated as Active Directory objects, known as schema definition objects. These objects have attributes of their own.

Schema modifications need to be made when existing classes or attributes are not suitable for an organization. Active Directory Service Interfaces (ADSI) Edit and the Active Directory Schema snap-in are used to make these modifications to the Active Directory schema. Because the schema is common to the entire forest and is stored on all domain controllers in the forest, changes made to the schema affect the entire network. Careful consideration must be given before modifying the schema, since these modifications have many implications and the changes cannot be reversed.

Managing Operations Masters

Active Directory performs updates to certain objects in a single-master fashion to prevent replication conflicts. This single-master model ensures that only one domain controller in the entire directory is allowed to make updates. These domain controllers are called operations masters. The five different operations master roles defined in Active Directory are the schema master, domain-naming master, primary domain controller (PDC) emulator, relative identifier (RID) master and infrastructure master. Each domain must have an RID master, PDC emulator and infrastructure master. Each forest must have a schema master and a domain-naming master.

For better fault tolerance and performance in an environment with multiple domain controllers, each operations master role can be placed on a separate domain controller. Doing this minimizes the impact on the network of the failure of any domain controller holding a particular operations master role. An operations master role may be transferred, or moved from one domain controller to another with the cooperation of the original role holder. A role may also be seized, or forcefully moved from an operations master domain controller that has failed to another working domain controller. The "ntdsutil" command is used for seizing an operations master role.

Creating and Managing Trees and Forests

A tree is a group of domains that share the same schema, configuration and contiguous namespace in a Windows 2000 network. A forest contains one or more sets of trees that do not form a contiguous namespace. Trees and forests are created and used to administer and secure an entire organization. A trust relationship allows users in one domain to access resources in another domain. All objects in all domains of the tree are available to all the other domains in the tree through a trust relationship. The domain containing the resources is the trusting domain. The domain from which users are accessing resources is the trusted domain. A domain trust allows a domain controller in one domain to authenticate users in another domain. Windows 2000 has two-way transitive trusts, one-way transitive trusts and shortcut trusts. The domains in a tree are joined transparently through two-way transitive trust relationships.
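
Transitivity is what lets every domain in a tree reach every other: if A trusts B and B trusts C, then A effectively trusts C. The sketch below models trusts as directed edges in a graph and computes transitive trust as reachability; the domain names are invented for the example.

```python
# Transitive trust sketch: model each direct trust as a directed edge and
# answer "does source trust target?" by graph reachability.

def trusts(trust_edges, source, target):
    """Return True if `source` reaches `target` through the trust graph."""
    seen, stack = set(), [source]
    while stack:
        domain = stack.pop()
        if domain == target:
            return True
        if domain in seen:
            continue
        seen.add(domain)
        stack.extend(trust_edges.get(domain, []))
    return False

# A small tree: both child domains share a two-way trust with the root, so
# they trust each other transitively without a direct trust between them.
edges = {
    "sales.example.com": ["example.com"],
    "eng.example.com": ["example.com"],
    "example.com": ["sales.example.com", "eng.example.com"],
}
print(trusts(edges, "sales.example.com", "eng.example.com"))  # True
```

A shortcut trust corresponds to adding a direct edge between two distant domains, which shortens the path the authentication request must travel.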

Configuring Active Directory

Organizational Units (OUs) are used to organize objects within a domain. An OU contains user accounts, groups, computers, printers, applications and file shares. An OU may also contain other organizational units from the same domain. An OU is used for delegating control of part of a domain to a user or a group of users. An OU can also be used to scope group policy so that it applies to a separate group of users or computers. The Delegation of Control Wizard is used to delegate management of an OU to an individual user or group. The Active Directory Users and Computers snap-in is used for creating organizational units.

Windows 2000 features a database called the Global Catalog that stores information on all Active Directory objects in a tree or forest for easy retrieval. A user can locate any Active Directory object in a forest using one or more attributes of the object via the Global Catalog. Users can find this information regardless of the physical location of the data. During network logon, the Global Catalog provides the user principal name-to-domain mapping and the universal group membership information to the domain controller that is trying to authenticate the user. The Global Catalog maintains all universal group memberships in the forest. If the Global Catalog is unavailable, the user gets a local logon with cached credentials.

User and Group Account Administration

Windows 2000 features local user accounts for peer-to-peer networking, domain user accounts for accessing domain resources and built-in user accounts such as guest and administrator. Domain accounts exist in Active Directory. A user account is the unique credentials of a user. User accounts and computer accounts are created to give users a means to log on to the network and access resources. The bulk import process is used to create multiple users and populate the user properties. The csvde and ldifde utilities are used to bulk import user accounts. These utilities require a text file containing information about the user accounts to be created. In addition to user objects for each network user, Active Directory contains computer objects for each computer in the domain.
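
The text file fed to a bulk-import utility such as csvde is a header line of attribute names followed by one line per object. The sketch below generates such a file in memory; the attribute set, DNs and account names are a minimal invented example, not a complete csvde template.

```python
# Bulk-import sketch: build a CSV in the general shape csvde consumes,
# with a header of attribute names and one row per user object.
import csv
import io

users = [
    ("Jane Doe", "jdoe"),
    ("John Smith", "jsmith"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["DN", "objectClass", "sAMAccountName"])
for name, logon in users:
    # Illustrative DN; the csv module quotes it because it contains commas.
    dn = f"CN={name},OU=Staff,DC=example,DC=com"
    writer.writerow([dn, "user", logon])

print(buf.getvalue())
```

Generating the file programmatically from an existing personnel list is exactly the situation where bulk import beats creating accounts one at a time in the snap-in.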

Users with common departmental functions or operational roles can be collected into a group. These groups are used for assigning permissions and group policies. Security groups and distribution groups are the two types of groups in Windows 2000. Domain local groups, global groups and universal groups in Windows 2000 have different scopes of operation. As part of a group usage strategy, administrators can create new groups as well as use the built-in groups to protect directory objects. Groups are used to manage users and computer objects more effectively.

Administering a Group Policy

The risk of incorrectly configuring a user environment is reduced through the use of group policy. Group policies are also used to centrally manage, configure and control workstations and servers that are part of an Active Directory domain. Types of group policy settings include administrative templates, scripts, folder redirection, security settings and software installation. Administrative templates are registry-based and are used to control a user's environment. Script settings include the automatic execution of startup scripts, shutdown scripts, logon scripts and logoff scripts. Startup scripts are executed before a user logs on to the computer. After logging on, the logon script is executed. When a user logs off, the logoff script is executed. The shutdown script is executed after the user session has terminated. Folder redirection causes data to be saved to a location on a server. Folders that can be redirected are Application Data, Desktop, My Documents, My Pictures and Start Menu. Software installation reduces the total cost of ownership by deploying software to remote desktops at the site, domain and OU levels.

The two nodes in a group policy are User Configuration and Computer Configuration. Files that make up the various functions that a Group Policy Object (GPO) performs are contained in a Group Policy Template (GPT). The Group Policy Template folder contains the subfolders Adm, Machine, User and Scripts. The rights allowing users to access and modify Group Policy Objects include per-container rights that allow for the creation and deletion of Group Policy Container objects, per-container rights granting permission to read and write the gpLink attribute and per-container rights granting permission to read and write the gpOptions attribute. Security areas that can be configured for computers are account policies, local policies, the event log, restricted groups, system services, the registry, the file system, public key policies and IP security policies.

Securing Network Resources

Windows 2000 offers the feature of shared folders for providing access to data folders, user home folders and network application folders. The three types of shared folder permissions in Windows 2000 are read, change and full control. Four ways to gain access to a shared folder on another computer are mapping a network drive using the Map Network Drive wizard, adding a network place using the Add Network Place wizard, connecting using the Run command and connecting using My Network Places. In addition to share permissions, Windows 2000 provides NTFS permissions to control the access that a user, group or application has to individual files and folders. Share permission security is only effective when a file is accessed over a network. NTFS permission security is effective whether a file is accessed interactively at a computer or over a network.

NTFS folder permissions are list folder contents, read, write, read & execute, modify and full control. NTFS file permissions are read, write, read & execute, modify and full control. When combining shared folder permissions and NTFS permissions, the most restrictive permission is always the effective permission. Distributed File System (DFS) provides users with easy navigation to the shared folders they need through a hierarchical file structure. Through DFS, administrators can simplify network administration by logically organizing resources and optimizing access to resources. DFS maintains replicas of shared files on other servers to provide fault tolerance. Two types of Distributed File System are standalone DFS and fault-tolerant DFS.

Publishing Resources and Delegating Administrative Control

Information that must be easily accessible to users should be published in Active Directory. A resource is published when an object is created in the directory to make it visible in Active Directory and searchable by LDAP queries. Shared folders are resources that are frequently accessed by network users. Thus, publishing shared folders makes it easy for users to locate them. Published shared folders can be easily searched in the directory using My Network Places. Although all Windows 2000 print servers that are part of a domain automatically publish the printers in Active Directory, clearing the List in the Directory check box after sharing it prevents this action. Non-Windows 2000 print servers do not automatically publish the printers attached to them.

The Active Directory Users and Computers snap-in is used to manually publish the printers after creating and sharing them. If Windows Scripting Host is installed, these printers can also be published using the pubprn.vbs script. When a shared folder is published, Active Directory treats the published object and the shared folder as two different objects. Each of these objects has its own DACL. The DACL on the published shared folder is used to control the ability of the user to read the contents of the folder. The DACL on the shared folder is used to control the ability of a user to modify the folder.

A higher administrative authority can delegate specific administrative control for portions of Active Directory to other users or groups. Granular control is provided over objects through the Active Directory security model. This model helps to provide access to the whole object or separately to each of the attributes of the objects. Active Directory provides a standard set of rights allowing users to have access to Active Directory objects. Each object in Active Directory has a unique Security Descriptor. This Security Descriptor contains the access control rights to that object. Access tokens are created for users when they log on to the domain and contain attributes such as user and group SIDs. The access token authenticates the user and contains the permissions assigned to the user on Active Directory objects and network resources.

In Windows 2000, the access control entries (ACEs) that are set in the security descriptor of a parent object are propagated from that parent object to the corresponding child objects. Every Active Directory object has an owner that has full control over the object. By default, the person who creates the object becomes the owner of that object and can control the permissions set on that object. The Delegation of Control Wizard is used to delegate administrative control of objects without manually modifying the access control entries of every object.

Administering A Security Configuration

Security areas configured for group policy are account policies, local policies, event log, restricted groups, system services, registry, file system, public key policies and IP security policies. Text files that are used to configure security in a Windows 2000 system are called security templates. Windows 2000 provides predefined templates that are based on the role of a computer. Common security scenarios of these templates range from low-security domain clients to highly secure domain controllers. The Security Configuration and Analysis tool helps to configure security for a Windows 2000 system, perform periodic analysis of the system to ensure that the configuration has not changed and make necessary changes according to requirements. The process of tracking both user and Windows 2000 activities is called auditing. Both successful and failed events can be audited. Tracking successful events can be used for resource planning. Tracking failed events alerts the administrator to possible security breaches.

Events that can be audited by Windows 2000 include account logon, account management, directory service access, logon events, object access, policy change, privilege use, process tracking and system events. Active Directory objects are audited to track access to them. An audit policy must be configured to audit specific objects. Audit policies are set in the Group Policy snap-in. The Audit Object Access event category is used to audit user access to files and folders. Auditing can be used to track access to sensitive printers. In order for users to perform specific actions, such as backing up files and directories, they must have the proper rights. User rights in Active Directory can be applied at the domain or OU level or at the local computer.

Active Directory Replication

The process of transferring and managing the Active Directory database between the domain controllers on a network is known as replication. In Windows 2000, the multimaster replication model is used to replicate a read-write copy of the Active Directory database to all the domain controllers on the network. A domain controller initiates the replication process when a new object is created in Active Directory, the properties of an existing object are modified, the name or parent of an existing object is modified or an object is removed from Active Directory. Whenever a change is made in a domain controller database, an originating update is made at the domain controller at which the change was first made.

A replication update occurs when changes are made to other domain controllers through replication. Whenever a change has occurred, a change notification is sent to all domain controllers informing them that a change has been made and will soon be replicated to their databases. When a domain controller receives this notification, it sends an update request to the domain controller that has the originating update. The change is then replicated to the requesting domain controllers. The period of time that passes before the updates made in one database are replicated to all other databases is called replication latency. Replication conflicts caused by multimaster updates include simultaneous changes to different property values of an object at different domain controllers; deletion of an object at one domain controller while another domain controller creates a new object under it; and moving an object into a container into which another domain controller is simultaneously moving an object with the same relative distinguished name, resulting in a sibling name conflict.

Active Directory replicates data changes at the property level instead of at the object level. Doing this minimizes replication conflicts. However, conflicts can still arise if two administrators make changes to the same property at two different domain controllers. Active Directory creates a Globally Unique Stamp during an originating update such as an add, move, modify or delete operation. The Globally Unique Stamp resolves conflicts and achieves consistency in the database of domain controllers. Active Directory optimizes replication through update sequence numbers (USNs) and propagation dampening. Active Directory replicates only the changes and not the entire database every time an update occurs.

Doing this optimizes replication traffic. Each domain controller stores a USN in the property of each object and maintains its values. A USN identifies the data that has changed and must be replicated. Domain controllers can find out if replication has already occurred on other domain controllers through a process called propagation dampening. If it is found that replication has already taken place in a domain controller, the data is not replicated again. The order in which a particular domain controller replicates to other domain controllers is controlled through a process called replication topology. The Active Directory database is logically divided into a schema partition, a configuration partition and a domain partition.

Direct replication partners and transitive replication partners are two types of replication partners. A domain controller becomes a direct replication partner when it receives a replication update from another domain controller with the originating update. A domain controller becomes a transitive replication partner when it receives an update from another domain controller with a replicated update. Active Directory uses a process called the Knowledge Consistency Checker (KCC) to establish direct or transitive replication. KCC implements a set of connection objects that dictate the replication topology.

Managing Active Directory Replication

Active Directory updates are shared between domain controllers through replication. Domain controllers that are members of the same domain and are connected by high-speed, low-cost links are grouped together by the site object; sites are used to optimize and reduce replication traffic. A replication topology is implemented through the creation of sites, site links, subnets and site link bridges. Once sites and domain controllers are set up, various operations can be performed to increase the efficiency of replication in the Active Directory environment. The efficiency of inter-site replication can be improved through the creation of connection objects and the designation of bridgehead servers. Causes of replication problems include poor network connectivity, unstable server hardware and slow replication.

Setting Up an RIS Server

RIS, or Remote Installation Services, is the remote operating system installation feature provided by Microsoft Windows 2000. It is an optional component of the Windows 2000 Server operating system that allows administrators to deploy the Windows 2000 Professional operating system throughout the enterprise without physically visiting each client computer. RIS servers are used to manage and deploy the Windows 2000 Professional operating system to target workstations. RIS can be installed either during the initial installation of Windows 2000 or from the Add/Remove Programs option in Control Panel.

During the initial boot phase, RIS client computers connect to the RIS server to download the Windows 2000 Professional operating system from the RIS server. The Windows 2000 Professional operating system is downloaded by the RIS client in the form of an image stored on the RIS server. For RIS to function properly, the DHCP service, DNS service and Active Directory must be available. Also, the BINL, SIS and TFTP services must be running on the RIS server. It is recommended that the RIS server run on a Pentium 166 MHz or higher processor, have a 100 Mbps or faster network adapter and have 64 MB of RAM or more. It is also recommended that the RIS server have 2 GB of hard disk space with two partitions, one for the operating system and the other for the RIS images, depending on the size and number of images stored on the server. The partition that holds the RIS images should be formatted with NTFS.

It is recommended that the RIS client run on a Pentium 166 MHz or higher processor with 64 MB of RAM, 800 MB of hard disk space, a configurable BIOS and a 100 Mbps network adapter with either a PXE DHCP-based remote boot ROM version .99c or later or a PCI network adapter supported by an RIS boot disk. RIS images are created and configured through the Remote Installation Services Setup wizard. Two methods of installing clients using RIS are a direct network startup and an RIS boot disk. Once the RIS server is set up, it must be authorized in Active Directory before it can respond to client computers.

The RIS server is authorized using the DHCP management utility. Users must have permission to create computer accounts in Active Directory before an operating system image can be installed using RIS. Unauthorized client computers can be prevented from connecting to the RIS server and obtaining the RIS image by pre-staging the client computer. Pre-staging a computer ensures that only the computer with the attached GUID can use the account, prevents users from connecting unauthorized client computers to the RIS server and obtaining the RIS image, and helps with load balancing.

Managing Active Directory

Every request made to Active Directory such as adding, modifying or deleting an object or attribute is treated as a single transaction. A very important part of maintaining Active Directory is backing up the database through the Backup utility. If Active Directory ever gets corrupted or destroyed due to hardware or software faults, it must be restored. The Ntdsutil utility can be used to move the Active Directory database from one location to another location on the disk, preferably to a bigger partition. Active Directory performance has to be monitored to see how it affects the rest of the Windows 2000 operating system services and components. Some of the tools used to administer Active Directory, known as Active Directory support tools, have to be installed separately.
Linux

Linux, like all operating systems, performs the following functions: interpreting commands from the user, managing processes, allocating memory, managing input/output (I/O) operations and peripherals, and managing files. Linux is a multi-user system, as opposed to a single-user system such as DOS, which means that more than one user can use its resources at a time.

The Linux architecture consists of a kernel, shells, utilities, and application programs. The main features and utilities of the Linux operating system are: multiprogramming, time-sharing, multitasking, virtual memory, Samba, the cron scheduler, licensing, and a web server. Some of the commonly available shells in Linux are: the Bourne shell, the C shell, the Korn shell, the restricted shell, the Bash shell, the Tcsh shell, the ash shell, and the Z shell.

You can use the telnet command to connect to a computer running Linux from another computer running a Windows operating system. You can use the passwd command to change the password of a user. You can use the exit or logout command to end the current Linux session. The root user in Linux has permissions to control, modify, and configure all system resources.

Installing Red Hat Linux

A protocol is a set of rules for data transfer between two applications or devices. You can create partitions on the hard disk and install Linux on one partition and other operating systems on other partitions.

In Linux you have to mount a file system before you can use it. By mounting the file system you make it available in the root tree structure. Linux uses a swap partition for virtual memory management. You can install Red Hat Linux from the following media: local CD-ROM, NFS image, FTP, or a hard disk. Red Hat Linux includes three different classes of installations: workstation, server, and custom. LILO is an operating system loader that is commonly used to boot Red Hat Linux on Intel platforms. A custom-class installation lets you control partitioning-related issues and the packages that you want to install.

Managing Files and Directories

Files and directories are stored and managed in Linux by using a file system. There are three types of files in Linux: ordinary files, directory files, and special files. There are four types of users in Linux: system administrator or root user, file owners, group owners, and other users.

You can use the pwd command to display the complete path of the current directory. You use the cd command to change the current directory to the specified directory. You use the cd .. command to move to the parent directory of the current directory. You use the mkdir command to create directories. You use the rmdir command to remove a specified directory. You use the ls command to display the names of files and subdirectories in a directory. You use the cat command to display the contents of a specified file on the screen. You use the head command to display the specified number of lines from the beginning of a file. You use the tail command to display the specified number of lines from the end of a file. You use the cp command to copy files from one location to another. You use the rm command to delete files or directories. You use the mv command to move files and directories from one location to another. You can also use the mv command to rename files or directories. You use the more and less commands to display the contents of a file, one screen at a time. You use wildcard characters to perform a set of operations on multiple files. In Linux, you can use the following wildcard characters: * matches zero or more characters, ? matches exactly one character, and [] matches exactly one of a specified set of characters.
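The commands above can be sketched as a short session; the directory and file names here (demo, notes.txt) are arbitrary examples, and the results are captured in variables for inspection:

```shell
cd "$(mktemp -d)"               # start in a fresh temporary directory
mkdir demo                      # create a directory
cd demo                         # change into it
echo "first line" > notes.txt   # create a small file
cp notes.txt copy.txt           # copy a file
mv copy.txt renamed.txt         # rename (move) a file
files=$(ls *.txt)               # the * wildcard matches both .txt files
contents=$(cat notes.txt)       # read the file's contents
rm notes.txt renamed.txt        # delete the files
cd ..                           # move back to the parent directory
rmdir demo                      # remove the now-empty directory
here=$(pwd)                     # the current directory's complete path
```

Note that rmdir only succeeds because the directory was emptied first; rmdir never removes a directory that still contains files.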

Creating Files Using the VI Editor

An editor is a program used to create and edit text files. Some of the most commonly used editors are: vi, vim, emacs, ed, red, joe, and pico. The vi editor can be started using the vi command. The vi editor works in two modes, edit and command. The Esc key is used to switch from the edit mode to the command mode. The emacs editor is another very popular editor available for Linux. The joe editor can be invoked with the joe command.

Managing Documents

You use the find command to locate a file. You can redirect input, output, and errors to a file other than the standard files by using file descriptors with the redirection symbols > and <. You can append the output and error(s) of a command to another file by using the >> symbol. You use the grep filter to search for a particular pattern of characters in the standard input or a file, and display all lines that contain that pattern. You use the wc filter to count the number of lines, words, and characters in a disk file or in the standard input. You use the cut filter to extract specified columns from the output of certain commands or files. You use the tr filter to translate one set of characters to another. You use the pipe feature in Linux to send the standard output of a command or a user program as the standard input to another command or user program. You use the tee command to write the standard input to both the standard output and the specified file(s).
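A minimal sketch of these filters, redirection symbols, pipes, and tee; the file contents are arbitrary sample data:

```shell
f=$(mktemp)                                # a scratch file to filter
printf 'apple\nbanana\napricot\n' > "$f"   # > redirects output to the file
matches=$(grep '^a' "$f")                  # lines beginning with "a"
lines=$(wc -l < "$f")                      # < feeds the file to standard input
firsts=$(cut -c1 "$f")                     # first column (character) of each line
upper=$(tr 'a-z' 'A-Z' < "$f")             # translate lowercase to uppercase
count=$(grep '^a' "$f" | wc -l)            # pipe: grep's output feeds wc
m=$(mktemp)                                # file for tee to write
grep '^a' "$f" | tee "$m" > /dev/null      # tee writes to screen and file
```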

Securing Files in Linux

File access permissions refer to the permissions associated with a file with respect to the following: the file owner, the group owner, and other users. The permissions that can be granted or revoked are symbolically represented by the letters r, w, and x, where: r indicates the read permission and can be represented by the number 4, w indicates the write permission and can be represented by the number 2, and x indicates the execute permission and can be represented by the number 1. You use the ls -l command to view file access permissions. You use the chmod command to modify file access permissions. You can use the chmod command in the symbolic and absolute modes. You use the umask command to view or modify the umask value applied to new files and directories.
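The absolute and symbolic chmod modes can be sketched as follows; the file is a throwaway example created with mktemp:

```shell
f=$(mktemp)                        # a scratch file to experiment on
chmod 640 "$f"                     # absolute mode: 6 (rw-) owner, 4 (r--) group, 0 (---) others
chmod u+x "$f"                     # symbolic mode: add execute for the owner
perms=$(ls -l "$f" | cut -c1-10)   # the first ten columns show the permissions
mask=$(umask)                      # display the current umask value
```

After both chmod calls the permission string reads -rwxr-----: rwx (4+2+1) for the owner, r-- (4) for the group, and nothing for others.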

Automating Tasks Using Shell Scripts

The echo command is used to display messages on the screen. Shell scripts allow you to manipulate variables and use iteration constructs for programming. Comment entries can be included in a shell script by prefixing statements with the # symbol. When a variable is referenced, only the shell that created it is aware of the variable. The export command can be used to pass the parent shell variables to the child shell. Some of the environment variables are HOME, PATH, PS1, PS2, LOGNAME, SHLVL, and SHELL. The grave accent is used in command substitution. The expr command is used to evaluate arithmetic expressions. You can enclose an expression in $((…)) and calculate its value. You can calculate arithmetic expressions by using command substitution.
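The variable, export, comment, and arithmetic facilities described above can be sketched in a few lines; the variable names and values are arbitrary examples:

```shell
greeting="hello"        # a shell variable, known only to the shell that created it
export greeting         # export makes it visible to child shells
# a comment entry: lines prefixed with # are ignored by the shell
sum=`expr 3 + 4`        # command substitution with grave accents; expr evaluates 3 + 4
total=$((sum * 2))      # arithmetic expansion with the $(( ... )) form
echo "$greeting $sum $total"   # echo displays the results on the screen
```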

Using Conditional Execution in Shell Scripts

You can use the test and [] commands to evaluate a condition. You can use the if construct for conditional execution of commands in shell scripts. You can also use the if…elif construct for conditional execution of commands in the shell. You can use the test command with the if construct to test the numeric value of variables by using arithmetic tests. You can use the test command with the if construct to test strings by using string tests. You can use the test command with the if construct to check the status of files by using the file tests. You can use the exit command to terminate the execution of a shell script. You can use the case … esac construct to perform a specific set of instructions, depending on the value of a variable.
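The if, test, and case ... esac constructs above can be sketched as follows; the variable values are arbitrary examples:

```shell
n=5
if [ "$n" -gt 3 ]; then            # arithmetic test with the [ ] command
    size="big"
elif [ "$n" -eq 3 ]; then          # elif handles a second condition
    size="three"
else
    size="small"
fi
name="readme"
if test "$name" = "readme"; then   # string test with the test command
    kind="docs"
fi
case "$name" in                    # case ... esac on the variable's value
    readme) label="documentation" ;;
    *)      label="other" ;;
esac
```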

Managing Repetitive Tasks Using Shell Scripts

The while, until, and for constructs are used to create shell scripts that perform repetitive tasks. The break command is used to terminate a loop. The continue command is used to start a new iteration. You can pass the following parameters from the command line to a shell script: $0 through $9, $*, and $#. You use the shift command to assign the value of a positional parameter to the previous positional parameter.
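A brief sketch of the loop constructs and positional parameters; set -- is used here to simulate arguments passed on the command line:

```shell
set -- one two three      # simulate arguments, so $1=one, $2=two, $3=three
count=$#                  # $# holds the number of positional parameters
first=$1
shift                     # each parameter shifts down: $2 becomes $1, and so on
now_first=$1
total=0
for i in 1 2 3 4 5; do    # for iterates over a list of values
    total=$((total + i))
done
n=0
while [ "$n" -lt 3 ]; do  # while repeats as long as the test succeeds
    n=$((n + 1))
done
```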

Controlling Process Execution

A process can be sent to the background by using the ampersand (&) sign. The process status (ps) command is used to generate a one-line entry for each process that is currently active. The kill command is used to terminate a process. The fg command is used to execute a process in the foreground. The time command is used to determine the time elapsed between the start and end of a command. The crontab utility instructs cron to execute commands on a specific date and at a specific time. The user names that appear in the cron.allow file will have access to the cron utilities, and the user names that appear in the cron.deny file will be denied access to cron. The at utility can schedule tasks to be run only once.
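Background execution with &, ps, and kill can be sketched as follows (crontab and at are omitted here because they change system state); the sleep duration is an arbitrary example:

```shell
sleep 5 &                  # & runs the command in the background
pid=$!                     # $! holds the PID of that background process
ps -p "$pid" > /dev/null && running=yes   # ps confirms the process is active
kill "$pid"                # terminate the process by PID
wait "$pid" 2>/dev/null || true           # collect its exit status
```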

Backing up, Restoring, and Compressing Files

Making backups regularly is necessary to secure data against accidental loss. A good backup should be reliable, easily available, and fast and easy to use. While deciding on the medium, you must consider the cost, reliability, speed, and availability of the backup medium. Backups are of two types, full backups and incremental backups. When you do a full backup, you back up all the files and directories that you specify. When you do an incremental backup, you back up only the files that have been modified since the previous backup.

You must back up all the user files and only those system files that you have modified to configure the system. You use the mount command to access the contents of a file system associated with removable media. You use the umount command to unmount a file system associated with removable media. In Linux, you can use the following utilities to make backups: tar, cpio, and dump. You use the tar utility to store, back up, transport, and archive files. You use the cpio utility to copy files to or from a cpio or tar archive. You use the dump utility to back up files from a file system. You can use the restore utility to restore files from a backup into a file system.
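A full backup and restore with tar can be sketched as follows; the paths and file contents are arbitrary examples:

```shell
src=$(mktemp -d)                       # a directory to back up
echo "important data" > "$src/file1"
mkdir "$src/subdir"
echo "more data" > "$src/subdir/file2"
arc=$(mktemp -d)/backup.tar            # path for the archive file
tar -cf "$arc" -C "$src" .             # c = create an archive, f = archive file name
listing=$(tar -tf "$arc")              # t = list the archive contents
dest=$(mktemp -d)                      # restore into a different directory
tar -xf "$arc" -C "$dest"              # x = extract the archived files
```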

In Linux, you can use various utilities to compress files. These are: compress, gzip, and tar. You use the compress utility to compact a disk file into a file of smaller size. You can restore the file to its original form by using the uncompress utility. You can use the gzip utility to compress files. The corresponding utilities to decompress files are gunzip, gzip -d, and zcat. You can also invoke gzip and gunzip by using the -z option with the tar command.
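Compressing and decompressing with gzip can be sketched as follows (gzip -dc is used here to view the compressed file, which is what zcat does on Linux); the file contents are arbitrary:

```shell
f=$(mktemp)
printf 'hello hello hello\n' > "$f"   # repetitive sample data compresses well
gzip "$f"                             # replaces the file with $f.gz
viewed=$(gzip -dc "$f.gz")            # view the contents without expanding on disk
gzip -d "$f.gz"                       # decompress; gunzip "$f.gz" is equivalent
restored=$(cat "$f")                  # the original file is back
```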

Using Basic Networking Commands in Linux

You use the mesg command to grant or revoke permission for other users to send messages to your terminal. You use the who -T command to display a single-line entry for each user currently logged on, along with the status of their mesg permission. You use the talk command to send messages from one terminal to another terminal. You use the write utility to chat with another user on the network.

You use the wall command to send messages to all users connected to the Linux server. You use the finger utility to display the status of all users currently logged on to the Linux system. You use the chfn utility to change the information that is displayed by the finger command. You use the ping command to verify whether or not a particular IP address exists and can accept requests. You use the traceroute command to determine the path that a packet takes to reach a destination from a source. You can use the ssh command to start a secure session on a remote Linux system.

You can use the ftp and ncftp commands to download and upload files in Linux. There are two components of the Linux e-mail system, the MUA and the MTA. The MTA, or Mail Transport Agent, manages the process of sending and receiving e-mail. Sendmail and fetchmail are commonly used MTAs. The MUA, or Mail User Agent, is the user interface of the mail software. Some examples of MUAs are the pine, elm, and mail utilities. Pine, or Program for Internet News and E-mail, is a menu-driven and easy-to-use MUA.

Working in GNOME

The X Window System is a windowing system that provides a graphical user interface to Linux. The X system follows a client-server architecture. The windowing system uses window managers to keep track of open windows, their sizes, their status, and their movements on the desktop. The Desktop Environment is a collection of X clients and X utilities.

GNOME is a user-friendly desktop environment that can be run on multiple operating systems. The GNOME Desktop Guide displays all the virtual consoles and desktops that are configured on the system. The GNOME Main Menu is used as the starting point to launch all GNOME applications.

Red Hat Linux uses Sawfish as the default window manager for the GNOME Desktop Environment. The GNOME Control Center utility allows you to configure various aspects of the Linux system. GNOME has a graphical file manager that can be used to manage the files and directories on the system.

K Desktop Environment

The K Desktop Environment is a GUI for Linux that offers various utilities. The KDE desktop consists of the KDE Panel and the desktop area. The Application Starter menu is used to start the applications available in KDE. The desktop area provides icons that enable you to access frequently used applications. KDE provides multiple utilities and applications, such as: Konqueror, KOffice, KOrganizer, KSnapshot, and KMail. You can configure the KDE environment by using KDE Control Center. KDE provides the following additional utilities: KJots, KDiskFree, Menu Editor, Personal Time Tracker, and Archiver.

Installing Packages

The Red Hat Package Manager (RPM) is a very powerful and advanced package management tool. RPM packages have the following advantages over the traditional form of package management: high reliability, the ability to upgrade old versions to newer releases of the software, easy uninstallation of packages, package verification and querying, and protection from tampering (signatures).

Package file names comprise the package name, the version and release of the package, and the architecture or platform on which the package can be installed, followed by an .rpm extension. The following are some useful options that you can use with the rpm command: -i to install a package, -U to upgrade to a newer release of a package, -F to freshen a package, -q to query a package, and -V to verify a package.

You can use the GnoRPM utility under the GNOME desktop environment for an easy interface to work with RPM packages. GnoRPM has menu options and controls on the main window from which you can install, uninstall, upgrade, query, and verify packages. You can install RPMs from the local machine or use the Web find option to download and install packages from the Internet. You can customize GnoRPM from the Preferences option.

Administering Printers

The various components of the print services in Linux are: the printer device file, the spooler, the print queue, and the printer capability database. The following are some of the printing commands in Linux: lpr is used to print files, lpq is used to check the print queue, lprm is used to remove print jobs from the print queue, lpc is used to control the printers, and pr is used to format files for printing. You can configure the following types of printers on a Linux system: local printer, UNIX printer (lpd queue), Windows printer (SMB share), Novell printer (NCP queue), and JetDirect printer. The printconf-gui utility is a menu-based utility in Red Hat Linux that allows users to manage and configure printers.

Computers

Microcomputers have four basic functions: input, output, processing, and storage of data. The most popular input/output devices are the printer, monitor, mouse, and keyboard. The motherboard, or system board, is the most important part inside the case. The motherboard contains the central processing unit (CPU), a microprocessor, and provides access to other circuit boards and peripheral devices. All communications between the CPU and other devices must pass through the motherboard.

Data and instructions are stored in binary code in a computer, which uses only two states for data: on, which is 1, and off, which is 0. A ROM BIOS microchip is a hybrid of hardware and software containing programming embedded in the chip. These chips are referred to as firmware.
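As a quick illustration of how patterns of the two states represent numbers, shell arithmetic can convert a binary pattern to its decimal value (the base#digits notation used here is a bash feature, not part of every shell):

```shell
ten=$((2#1010))           # the on/off pattern 1010 equals decimal 10
byte_max=$((2#11111111))  # eight "on" bits: 255, the largest value in one byte
echo "$ten $byte_max"
```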

All hardware devices need a method to talk to the CPU, software to control how it will work, and electricity to give it power. Ports on the case give hardware outside the case a way of connecting to the motherboard. A circuit board in an expansion slot on the motherboard can be used to provide an interface between the motherboard and the peripheral device, or can be a peripheral itself such as an internal modem. Some of the system’s processing demands by the CPU are relieved by the chipset on the motherboard. The chipset also controls many components on the motherboard.

The CPU uses RAM to temporarily store data and instructions while it is processing both. RAM chips are mounted on memory modules called SIMMs and DIMMs. Cache memory is used on the motherboard as fast RAM to improve processing speed. The motherboard uses buses to communicate data, instructions, and electrical power to components on the board. The system clock is used to synchronize activity on the motherboard by sending continuous pulses over the bus to different components.

The CMOS chip on the motherboard is used to store setup or configuration information for the PC, which can also be set by means of jumpers and switches. Electricity is supplied to components both inside and outside the computer case by the power supply inside the case. Some components external to the case get their power from their own electrical cables.

Secondary storage is a lot slower than primary storage, but it is also permanent storage. The most common examples of secondary storage devices are the floppy disk and the hard drive.

Three types of software are BIOS, the operating system, and applications software. BIOS is used before and after startup by the operating system to provide software control of hardware devices. Applications software relates to the operating system, which relates to BIOS and device drivers to control the hardware. Operating systems use BIOS to manage secondary storage and primary storage, help determine problems with hardware and software, interface between hardware and software, and perform various cleanup tasks. When a computer is first turned on, the startup BIOS is in control; it later loads the operating system and then turns control over to it.

Users interact with the operating system through a command-driven, menu-driven, or icon-driven interface. The best-known operating systems for personal computers are DOS, Windows, UNIX, Mac OS, and OS/2. True multitasking is not possible on CPUs built before the Pentium. DOS has been displaced as the most popular operating system by graphical user interfaces such as Windows, but decisions made when DOS was designed still affect Windows 9x today. The three types of logical memory in DOS are conventional memory, upper memory, and extended memory.

Software manages memory by means of memory addresses that point to locations in RAM; the number of memory addresses is partly limited by the number of wires on the bus devoted to these addresses. The size of the data segment that software can access at a single time is determined by the number of wires on the bus assigned for the data path.
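
The relationship between address lines and addressable memory can be sketched in a few lines of Python; the 20-line and 32-line cases below are the classic real-mode and 386-era protected-mode figures.

```python
# Sketch: how the number of address wires limits addressable memory.
# Each wire carries one bit, so n wires can form 2**n distinct addresses.

def addressable_bytes(address_lines: int) -> int:
    """Return how many byte addresses n address lines can express."""
    return 2 ** address_lines

# A 20-line address bus (real mode, 8086 era) reaches 1 MB:
assert addressable_bytes(20) == 1024 * 1024
# A 32-line address bus (386 and later, protected mode) reaches 4 GB:
assert addressable_bytes(32) == 4 * 1024 ** 3
```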

Real mode, used by DOS, is limited to single tasking, a 16-bit data path, and 1024K of memory addresses. Protected mode allows more than one program to run at a time, can use a 32-bit data path, and can address more than 1024K of memory. In protected mode, the operating system manages access to RAM and does not allow a program direct access to it.

Virtual memory is “fake” memory where data is stored in a swap file on the hard drive. The operating system makes applications think that they are using real memory. When an operating system gets the command to execute a software program, it must follow explicit rules as to where it looks to find the program file for the software. A program must first be loaded into memory before the operating system can execute it.

Protecting Data, Software, and Hardware

The four system resources that help in the communication between hardware and software are I/O addresses, IRQs, DMA channels, and memory addresses. An IRQ is a line on a bus that a device uses to alert the CPU that it needs service. A DMA channel provides a shortcut for a device to send data directly to memory, bypassing the CPU. A memory address is a hex number, often written in segment/offset form, assigned to RAM and ROM so the CPU can access both. When the CPU wants to initiate communication with a device, it sends the device's I/O address over the address bus.
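
The segment/offset notation mentioned above maps to a single 20-bit physical address; a minimal sketch of the conversion:

```python
def segment_offset_to_physical(segment: int, offset: int) -> int:
    """Convert a real-mode segment:offset pair to a 20-bit physical address.
    The segment is shifted left 4 bits (multiplied by 16) and the offset added."""
    return (segment << 4) + offset

# The video text buffer at B800:0000 sits at physical address 0xB8000:
assert segment_offset_to_physical(0xB800, 0x0000) == 0xB8000
# Different pairs can name the same byte; both of these point at 0x12345:
assert segment_offset_to_physical(0x1234, 0x0005) == 0x12345
assert segment_offset_to_physical(0x1230, 0x0045) == 0x12345
```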

The startup BIOS performs a power-on self test (POST) that surveys and tests the hardware, checks setup information, and assigns system resources to the hardware. The startup BIOS then begins loading the OS. When the OS is loaded from the hard drive, the first program executed is the master boot record (MBR), which in turn executes the DOS boot record, which then attempts to find IO.SYS and MSDOS.SYS on the hard drive. The kernel, or core, of DOS is made up of IO.SYS and MSDOS.SYS; COMMAND.COM provides the command interpreter. The files used to customize the OS load process are CONFIG.SYS and AUTOEXEC.BAT. AUTOEXEC.BAT can contain the WIN command, which executes Windows 3.x after DOS is loaded. Windows 9x uses Plug and Play to help install and assign resources to devices during the boot process. Windows 9x contains the text file MSDOS.SYS, which is used to customize the boot.

The 8-bit ISA bus used on early PCs was later improved to the 16-bit ISA bus, which is still found in today's PCs. To find out how resources have been allocated on your Windows 9x system, use Device Manager. When a hardware device signals an IRQ to the CPU, a hardware interrupt occurs; when software sends an interrupt number (INT) to the CPU, a software interrupt occurs. Setup data is stored on the CMOS chip on the system board and by DIP switches or jumpers. To back up your setup information, save a copy of your CMOS settings to a setup disk; many utility programs, such as Nuts & Bolts and Norton Utilities, can do this. Always back up important information on your hard drive, keep your documentation in a safe place, and, for safety's sake, protect your computer against static electricity and power surges.

Understanding and Managing Memory

Memory is usually viewed as both physical memory installed on the system board and expansion boards and as logical memory managed by the operating system. There are two kinds of physical memory, RAM and ROM. In order for ROM or RAM to be used by the computer, memory addresses must be assigned to it. System BIOS is stored on ROM chips on the system board. Expansion boards sometimes have ROM chips on them, holding BIOS programming to manage a device.

The CPU uses memory in two ways: as main memory and as a memory cache. SRAM (static RAM) is fast and is used as a memory cache, which speeds up overall computer performance by temporarily holding data that the CPU may need in the near future. Dynamic RAM, or DRAM, is slower than SRAM because it requires constant refreshing, which SRAM does not. DRAM is mounted on two kinds of miniboards: SIMMs and DIMMs.

SIMM memory modules can use either EDO or FPM technology. EDO is faster and only slightly more expensive than FPM, but the system board must support this type of memory to make use of its increased speed. DIMM memory modules can use either BEDO or synchronous DRAM (SDRAM).

Direct Rambus DRAM and Double Data Rate SDRAM (DDR SDRAM) are two technologies that are contending to be the next DRAM technology standard. Flash memory holds data permanently until it is overwritten, and it is commonly used on Flash ROM chips and memory cards for laptop computers. Synchronous DRAM (which moves in sync with the memory bus) is a faster kind of memory than the less expensive asynchronous DRAM (which does not move in sync with the memory bus) found on SIMM memory modules.

When buying memory, beware of remanufactured and re-marked memory chips, which have been refurbished or re-marked before resale. SRAM comes as either synchronous or asynchronous memory; synchronous is faster and slightly more expensive than asynchronous memory. Synchronous SRAM can come as either burst or pipelined burst memory.

COAST is a cache memory module holding pipelined burst SRAM chips. Logical memory is divided into conventional memory, upper memory, and extended memory, according to the memory addresses assigned to it. Upper memory is traditionally used to hold BIOS and device drivers. Video RAM and video BIOS normally fill the A, B, and C ranges of upper memory addresses (hex addresses beginning with A, B, and C). The beginning of extended memory is called the high memory area and can hold a portion of DOS. Expanded memory is located on an expansion board and is accessed through page frames given upper memory addresses. Windows can emulate expanded memory by taking some RAM and presenting it to applications software as expanded memory.
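
The division of logical memory by address range can be made concrete with a small sketch; the hex boundaries below are the classic DOS memory map.

```python
# Sketch: the classic DOS logical memory map, as (start, end) hex address pairs.
KB = 1024
memory_map = {
    "conventional": (0x00000, 0x9FFFF),    # first 640 KB
    "upper":        (0xA0000, 0xFFFFF),    # 384 KB for video RAM, BIOS, drivers
    "high memory":  (0x100000, 0x10FFEF),  # first ~64 KB of extended memory
}

def region_size_kb(name: str) -> float:
    """Size of a logical memory region in kilobytes."""
    start, end = memory_map[name]
    return (end - start + 1) / KB

assert region_size_kb("conventional") == 640
assert region_size_kb("upper") == 384
```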

The practice of copying BIOS from slower ROM chips to faster RAM chips for processing is called shadowing ROM. The area of RAM holding the BIOS is called shadow RAM. Virtual memory is space on the hard drive that is used by the OS as pseudo-memory. A RAM drive is space in memory that is used as a pseudo-hard-drive.

DOS and Windows 9x use the device driver HIMEM.SYS to manage extended memory. DOS uses EMM386.EXE to make more efficient use of upper memory addresses and to emulate expanded memory. An upper memory block (UMB) is a group of upper memory addresses made available to TSRs. Storing device drivers and TSRs in upper memory is called loading high. DOS can load device drivers into upper memory blocks by using the DEVICEHIGH command in CONFIG.SYS. DOS can load a TSR high by using the LOADHIGH command in the AUTOEXEC.BAT file.
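
As an illustration of the commands above (the driver and TSR paths here are hypothetical examples, not a prescribed configuration), a CONFIG.SYS and AUTOEXEC.BAT pair that loads the memory managers and then loads a driver and a TSR high might look like this:

```
REM --- CONFIG.SYS: load memory managers, then load a driver high ---
DEVICE=C:\WINDOWS\HIMEM.SYS
DEVICE=C:\WINDOWS\EMM386.EXE NOEMS
DOS=HIGH,UMB
DEVICEHIGH=C:\DOS\ANSI.SYS

REM --- AUTOEXEC.BAT: load a TSR into upper memory ---
LOADHIGH C:\DOS\DOSKEY.COM
```

DOS=HIGH,UMB places part of DOS in the high memory area and makes upper memory blocks available; the NOEMS switch provides UMBs without emulating expanded memory.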

The OS uses a swap file on the hard drive as virtual memory. Windows NT uses an approach to memory management that is altogether different from that of DOS and Windows 9x. Conventional, upper, and extended memory concepts do not exist in Windows NT. Memory modules must be installed on a system board in the slots of a memory bank according to the rules specified in the system-board documentation. There are a fixed number of memory configurations that a board supports.

Introduction to How Data is Physically Stored on a Disk

The floppy disk remains a popular storage device for two reasons: cost and convenience. Data is stored on a floppy disk in concentric circles called tracks; vertically aligned tracks on both surfaces form a cylinder. Each track is divided into sectors, and each sector holds 512 bytes of data. The smallest unit of space allocated to a file is called a cluster. On a 3 1/2-inch high-density floppy, one cluster is the same size as one sector, 512 bytes.
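
The geometry figures above multiply out to the familiar floppy capacity; a quick sketch:

```python
def floppy_capacity_bytes(tracks_per_side: int, sectors_per_track: int,
                          sides: int, bytes_per_sector: int = 512) -> int:
    """Total capacity: tracks x sectors x sides x bytes per sector."""
    return tracks_per_side * sectors_per_track * sides * bytes_per_sector

# A 3 1/2-inch high-density floppy: 80 tracks per side, 18 sectors per
# track, 2 sides -- the familiar "1.44 MB" disk:
assert floppy_capacity_bytes(80, 18, 2) == 1_474_560
```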

When a floppy disk is formatted, the formatting process creates tracks and sectors on the disk, and then the boot record, file allocation table, and root directory are written to it. In a DOS working environment, two hidden files and the file COMMAND.COM must also be written for the disk to be a bootable system disk.

Installing a floppy disk drive in a personal computer is a fairly basic task. The steps are: anchor the drive in the floppy bay, connect the data and power cables, and enter the new drive information into the computer's CMOS setup if needed. If two floppy drives exist in the same machine, a twist in the data cable identifies which drive is drive A.

Introduction to Hard Drives

Most hard drives in use today use IDE technology, which uses a complex method of organizing the tracks and sectors on the hard disk. Older hard drives used either MFM or RLL technology, which placed the same number of sectors on every track of the drive. The term SCSI hard drive refers to the bus the drive uses rather than to the technology of the drive itself.

There are many types of SCSI buses and bus devices, including SCSI-1, SCSI-2, Wide SCSI, Ultra SCSI, and Ultra Wide SCSI. Every SCSI bus subsystem requires a host adapter with a SCSI controller, and SCSI IDs must be assigned to each device, including the host adapter. A terminating resistor is required at each end of a SCSI bus; termination can be implemented in either hardware or software.

The number of heads, tracks, and sectors on the disk determines the capacity of a hard drive. Each sector on the disk holds 512 bytes of data. The operating system views a hard drive through the file allocation table (FAT), which lists the clusters on the drive and how each is allocated. FAT16 uses 16-bit entries and FAT32 uses 32-bit entries to hold cluster numbers. A hard drive is partitioned into logical drives, or volumes; a table of partition information is contained in the master boot record of the hard drive. Each logical drive contains a boot record, a FAT, and a root directory.
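
Both points above can be sketched numerically; the 16-head, 1024-cylinder, 63-sector geometry below is a hypothetical example (it happens to be the classic CHS ceiling), and the cluster-size function simply shows why FAT16's 16-bit entries force larger clusters on larger partitions.

```python
def drive_capacity_bytes(heads: int, cylinders: int, sectors_per_track: int,
                         bytes_per_sector: int = 512) -> int:
    """Capacity from geometry: heads x cylinders x sectors x bytes/sector."""
    return heads * cylinders * sectors_per_track * bytes_per_sector

# Hypothetical logical geometry: 16 heads, 1024 cylinders, 63 sectors/track.
assert drive_capacity_bytes(16, 1024, 63) == 528_482_304   # ~504 MB

# FAT16's 16-bit entries allow at most 65,536 clusters, so the cluster
# size must grow with the partition size:
def min_cluster_size(partition_bytes: int, max_clusters: int = 65_536) -> int:
    size = 512
    while partition_bytes / size > max_clusters:
        size *= 2
    return size

# A 2 GB FAT16 partition needs 32 KB clusters:
assert min_cluster_size(2 * 1024**3) == 32 * 1024
```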

The physical geometry of the hard drive is the organization of heads, tracks, and sectors on the drive. The logical geometry is the head, track, and sector information that the hard drive controller BIOS presents to the system BIOS and the operating system. The logical and physical geometry may not be the same, but they should produce the same capacity when calculations are made. The system BIOS and software use normal (CHS) mode, large mode, or LBA mode to manage a hard drive; the size and manufacturer of the drive determine which mode is used.
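
The standard translation from a cylinder/head/sector triple to the linear sector numbering used by LBA can be sketched as follows (the 16-head, 63-sector geometry is a hypothetical example):

```python
def chs_to_lba(cylinder: int, head: int, sector: int,
               heads_per_cylinder: int, sectors_per_track: int) -> int:
    """Translate a CHS triple to a linear block address.
    Sectors are numbered from 1, so 1 is subtracted from the sector."""
    return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

# With a hypothetical geometry of 16 heads and 63 sectors per track:
assert chs_to_lba(0, 0, 1, 16, 63) == 0      # the very first sector
assert chs_to_lba(0, 1, 1, 16, 63) == 63     # first sector of the next head
assert chs_to_lba(1, 0, 1, 16, 63) == 1008   # first sector of the next cylinder
```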

The directories on a hard drive contain information about each file stored on the drive. The root directory is the main directory, created when the drive is first formatted. Commands for managing a hard drive include those that create and delete directories, modify the attributes of a file, and list the paths the OS can use to find software. Most operating systems, including DOS, Windows 3.x, and Windows 9x, include commands or menu options to perform these tasks.

You can optimize drive space and access speed by reducing disk fragmentation, scanning the disk for errors, compressing the drive, and using disk caching. There are also removable hard drives; the most popular include Zip drives, Jaz drives, and SuperDisk drives. Many factors affect the price of a removable drive, including drop height, half-life of the data, interface to the CPU, and other features. Some removable drives use a USB port, a parallel port, or a SCSI port to communicate with the CPU.

Hard Drive Installation and Support

Installing a hard drive involves several steps, such as setting jumpers on the drive and installing an adapter card, cable, and the drive itself. You must also enter the drive information in CMOS setup and then partition and format the drive before you can install software on it. An IDE (Integrated Drive Electronics) hard drive can be set up as a master drive or a slave drive, or used as the single drive on a system. When installing a hard drive, it is very important to guard against static electricity, which can ruin the drive.

The EIDE standard supports a primary and a secondary connection; with these extra connections it is possible to run up to four devices on a system. Under the EIDE standard, hard drives, CD-ROM drives, and other drives qualify as IDE devices. For large-capacity drives, LBA mode must be set in CMOS in order for the BIOS to support the drive. With today's technology, autodetection recognizes the hard drive and reports its information to the BIOS.

A drive needs a primary partition in order to boot. It can also contain an extended partition, so the drive can be divided into more than one logical drive. Using more than one partition can reduce cluster size on hard drives larger than 2 GB. To install an upgrade version of Windows 9x, DOS and Windows 3.x must already be installed on the system. To back up a partition table, use the MIRROR command, Nuts & Bolts, Norton Utilities, or Partition Magic. A SCSI drive installation involves installing a host adapter and terminating resistors, setting SCSI IDs, and configuring the SCSI system. You can dual boot between Windows 9x and DOS with Windows 3.x.

Never keep a PC in a high-humidity environment, smoke near a PC, or leave a PC turned off for long periods of time. A few types of software, such as Lost and Found and Norton Utilities, can help you quickly recover lost data. Using ScanDisk or CHKDSK, you can recover allocation units that were lost because the system was not shut down correctly. If you ever lose data, do not write anything to that disk until after you have recovered the lost data.

Troubleshooting Fundamentals

It is important to protect yourself and your equipment while working on a computer. Never work on a machine while it is still powered on; always be sure it is unplugged, and always protect components from electrostatic discharge (ESD). The tools you will want include a repair kit, a bootable disk, and diagnostic hardware and software. Also remember the two fundamental rules of troubleshooting: first, eliminate unnecessary hardware and software; second, trade components you know are good for those you suspect are bad.

There are also some things you can do personally. Learn to ask good questions, using good manners and diplomacy, to help you understand the history behind the problem. A good way to solve intermittent problems is to keep a log of when they occur.

Problems with computers can be divided into two groups: either the computer boots or it does not. Remember that diagnostic cards give error codes based on POST errors. Diagnostic software performs many tests on a PC; some of these programs use their own proprietary operating systems. Utility software can update and repair device drivers and applications, and some utility software downloads these updates from the Internet.

It is a must that you keep hard copies of important information. Keep bootable disks containing the root directory files on your system. Keep backups of hard drive data and software. Protect documentation by keeping it in a safe place. Keep a written record of CMOS setup or save it to a disk.

Supporting I/O Devices

A new device added to a computer requires the installation of new hardware and software. In addition, resource conflicts often occur because most hardware devices require similar system resources, including an IRQ, a DMA channel, and I/O addresses. To determine which devices are causing conflicts on a Windows 9x computer, use Device Manager; to do the same under MS-DOS, use MSD. Device drivers are loaded differently depending on the mode in which the computer is running. In 32-bit protected mode, registry entries cause drivers to be loaded automatically by Windows 9x; in 16-bit real mode, drivers must be loaded by command lines in CONFIG.SYS.

In addition to device drivers, a new device requires the use of a port. While most computers provide only two serial ports and one parallel port, newer system boards also provide one or two USB ports. System boards can also have up to four PCI slots. On older system boards, a general-purpose I/O card provides the serial and parallel ports used by devices. One use of a serial port is to connect two computers by a cable for a null modem connection. Data bits passing through a parallel port can sometimes lose their relationship with the byte they represent; because of this problem, parallel cables should not exceed fifteen feet in length.

The three types of parallel ports are standard, EPP, and ECP. Serial ports are controlled by a UART chip, and the ECP parallel port uses a DMA channel. A keyboard can connect through a USB, PS/2, or DIN connector. Even though an LCD monitor yields better quality and takes up less desktop space, a CRT monitor costs less.

The USB port uses a separate bus for connecting USB devices, and this bus uses only one set of system resources. IRQs are assigned to PCI slots during startup; however, on a system board that supports PCI bus IRQ steering, these IRQs can be reassigned by the operating system after booting in order to resolve conflicts. The bus slot an adapter will use must be considered when selecting a SCSI host adapter. Other factors to consider are the device standard used by the host adapter, single-ended versus differential SCSI, SCAM compliance, and whether the adapter offers bus mastering. Some inexpensive SCSI host adapters support only one or two SCSI devices and are often sold bundled with a device. A video card is rated by the bus it uses.

Multimedia Technology

Multimedia devices make better use of sight and sound. When you are looking to purchase a multimedia device, it is important to gather all the information you can beforehand, in order to get the right device for your needs. Multimedia PCs and devices are among the latest technology on offer.

Converting analog data to digital data makes it possible to record the data onto CDs and other types of recordable media, where it can be stored and transported easily and securely. When converting from analog to digital, the greater the number of samples and the more accurate each sample, the better the digital data represents the original analog signal.

The standard for transmitting and storing synthesized sound is MIDI (Musical Instrument Digital Interface). A sound card uses pulse code modulation, or PCM, a sampling method, to convert analog sound to digital. The two methods of synthesizing sound are FM and wavetable; the wavetable method is more expensive and more accurate than FM.
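
PCM's two steps, sampling at fixed intervals and quantizing each sample to an integer, can be sketched in a few lines; the 440 Hz tone and CD-quality parameters below are illustrative.

```python
import math

def pcm_sample(frequency_hz: float, sample_rate_hz: int, bits: int,
               duration_s: float = 0.001) -> list:
    """Sample a pure sine tone the way PCM does: measure the wave at fixed
    intervals and quantize each measurement to a signed integer of `bits` bits."""
    max_level = 2 ** (bits - 1) - 1
    n = int(sample_rate_hz * duration_s)
    return [round(math.sin(2 * math.pi * frequency_hz * i / sample_rate_hz) * max_level)
            for i in range(n)]

# CD-quality parameters: 44,100 samples per second, 16 bits per sample.
samples = pcm_sample(440, 44_100, 16)
assert len(samples) == 44                       # ~1 ms of audio
assert all(-32_767 <= s <= 32_767 for s in samples)
```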

The MMX Pentium chip improves the speed of processing graphics, video, and sound, using improved methods. SSE on the Pentium III further improves MMX technology. In order to take full advantage of MMX or SSE technology, software must be written to use its specific capabilities.

CD-ROMs are read-only media with data physically embedded into the surface of the disc. A CD-ROM drive slows the disc's rotation as the laser beam moves from the inside to the outside of the disc. The most common interface for CD-ROM drives is IDE using the ATAPI standard, an extension of the IDE/ATA standard developed for tape drives and CD-ROM drives so that they can be treated like another drive on the system. CD-ROM drives can have an IDE or SCSI interface, or they can connect to the system bus through a proprietary expansion card or through a connection on a sound card. Data is written only to the bottom of a CD-ROM, which should be protected from damage.

If you have installed Windows 95 from CD, be sure that your Windows 95 emergency startup disk has the necessary real-mode drivers on it to support a CD-ROM drive when this disk is used as the boot device. Windows 98 normally puts these drivers on the rescue disk for you. Installing a sound card includes physically installing the card, then installing the sound card driver and sound applications software. Windows 9x supports multimedia sound without using other applications software, but applications that usually come with sound cards enhance the ability to control various sound features.

Digital cameras use light sensors to detect light and convert it to a digital signal stored in an image file using JPEG format. A DVD can store a full-length movie and uses an accompanying decoder card to decode the MPEG-compressed video data and Dolby AC-3 compressed audio. Video capture cards can be used to capture video images from VCRs, camcorders, and TVs for storage and manipulation on your PC.

Electricity and Power Supplies

Voltage is a measure of the potential electrical pressure in a system. Electrical current is measured in amps, and electrical resistance is measured in ohms. One volt drives a current of one amp through a resistance of one ohm, delivering one watt of power.
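
The relationships above are Ohm's law and the power equation; a quick sketch with an illustrative 12-volt example:

```python
def current_amps(volts: float, ohms: float) -> float:
    """Ohm's law: I = V / R."""
    return volts / ohms

def power_watts(volts: float, amps: float) -> float:
    """Power: P = V * I."""
    return volts * amps

# One volt across one ohm drives one amp, dissipating one watt:
i = current_amps(1.0, 1.0)
assert i == 1.0
assert power_watts(1.0, i) == 1.0

# A 12 V supply feeding a 3-ohm load drives 4 A, i.e. 48 W:
assert power_watts(12.0, current_amps(12.0, 3.0)) == 48.0
```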

Microcomputers require DC current, which the PC's power supply inside the case converts from AC current. A multimeter is a device that can measure volts, amps, ohms, and continuity in an electrical system. Before replacing a damaged system board in a PC, first measure the output of the power supply to make sure it did not cause the damage. A faulty power supply can cause memory errors, data errors, system hangs, or reboots, and it can damage a system board or other components.

The U.S. Environmental Protection Agency has established Energy Star standards for electronic devices to reduce energy consumption. Devices that are Energy Star-compliant go into a sleep mode in which they use less than 30 watts of power. PCs that are Energy Star-compliant often have CMOS settings that control the Energy Star options available on the PC.

Devices that condition the electricity to a computer include surge suppressors, line conditioners, and UPSs. A surge suppressor protects a computer against damaging spikes in electrical voltage. A line conditioner levels out the AC current to reduce brownouts and spikes. A UPS provides enough power to perform an orderly shutdown during a blackout. There are two kinds of UPSs: the true, or inline, UPS and the standby UPS. The inline UPS is more expensive because it provides continuous power; the standby UPS must switch from one circuit to another when a blackout begins. An intelligent UPS can be controlled and managed through utility software, either from a remote computer or from a computer connected to the UPS through a serial cable. Data line protectors are small surge suppressors designed to protect modems from spikes on telephone lines.

Supporting Windows 3.x and Windows 9x systems

Within Windows 3.x, most configuration information is stored in .ini files, and some is stored in the registry. Files with an .ini extension are organized into sections, key names, and values, and can be edited in a text editor such as Notepad. Lines in an .ini file that begin with a semicolon are comments and are ignored by the operating system. In Windows 3.x, the registry file REG.DAT stores information about file associations, OLE information, and data about supporting programs and software.
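
For illustration, here is a fragment in the style of a Windows 3.x .ini file, showing sections, key names, values, and a semicolon comment (the section and key names echo WIN.INI, but treat the exact entries as illustrative):

```ini
; lines beginning with a semicolon are comments and are ignored
[windows]
load=
run=

[Desktop]
Wallpaper=(None)
TileWallpaper=0
```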

Before installing a new operating system, first scan and repair the hard drive, removing lost clusters and bad chains, and defragment the drive with utilities such as ScanDisk and Disk Defragmenter. Backing up the AUTOEXEC.BAT and CONFIG.SYS files on a Windows 9x system is also recommended. Documentation files created during installation usually have a .wri, .txt, or .log file extension and can be used to troubleshoot a bad or incorrect installation.

The installation of new applications can cause conflicts within the operating system because a new application may overwrite an existing .dll file with an older version, and that older .dll file may not work correctly with previously installed applications. On a Windows 9x system these files are located in the \Windows\System directory. Certain utility software can track changes made to the \Windows\System directory, to .ini files, and to the registry, so you can be sure no important files are overwritten.

Windows manages memory using five different memory heaps. PCs can operate in two modes: real mode or protected mode. Real mode limits programs to the first MB of memory, allows direct access to I/O devices, and uses a 16-bit data path. Protected mode gives programs access to memory addresses above 1 MB, prevents direct access to I/O devices, and uses a 32-bit data path. Protected mode is also faster than real mode and is generally preferred. A GPF, or general protection fault, can represent many different software errors and memory violations. Windows application programs can give insufficient-memory errors when some part of a memory heap is unavailable, even though not all physical RAM is used. A memory leak occurs when a program does not release all of its memory addresses back to the heap when it unloads.

The core components of a Windows 9x system are the kernel, user, and GDI processes. Windows 9x, as well as Windows NT, uses the virtual machine concept to protect against faults in currently running software. Memory paging is the Windows 9x method of allocating a different set of memory addresses to different virtual machines.

Windows 9x can be customized by entries in the text file MSDOS.SYS. When Windows 9x starts, static VxDs are loaded in real mode; the OS then switches to protected mode, in which the dynamic VxDs are loaded. Plug and Play requires the use of 32-bit dynamic VxDs. Pressing F8 while a Windows 9x system is booting displays the Windows 9x startup menu, which is helpful in troubleshooting Windows problems.
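
As an illustration of the MSDOS.SYS entries mentioned above (the paths are hypothetical, and the exact set of options varies), a minimal Windows 9x MSDOS.SYS might look like this; BootMenu=1 displays the startup menu automatically, without pressing F8:

```ini
[Paths]
WinDir=C:\WINDOWS
WinBootDir=C:\WINDOWS
HostWinBootDrv=C

[Options]
BootGUI=1
BootMenu=1
Logo=0
```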

Plug and Play is a group of architectural standards designed to automate the installation of new hardware devices on PCs. For a PC to be fully Plug and Play compatible, the system BIOS, all hardware devices, and the OS must support Plug and Play. The four components of the OS portion of Plug and Play are the configuration manager, the hardware tree, the bus enumerator, and the resource arbitrator. Legacy cards are not Plug and Play compliant and must be configured manually.

Windows 9x uses 32-bit drivers stored in extended memory, although it still supports older 16-bit drivers stored in the first MB of memory. Windows loads these 16-bit drivers from the CONFIG.SYS or AUTOEXEC.BAT files. The Windows 9x registry is organized into six major branches, or keys.

Understanding and Supporting Windows NT Workstation

Windows NT comes in two versions, one for workstations and one for servers. Both versions can operate on standalone or networked PCs. Windows NT Server is distinctive in that it can operate as a domain controller on a domain. Unlike Windows 95 or 98, Windows NT does not claim to be fully backward-compatible with legacy hardware and software.

Windows NT requires at least a 486DX Intel-based CPU, with 12MB of RAM, and 120MB of hard drive space. Windows NT is written for different CPU types and the installation for three different types of CPUs is included on the Windows NT CD-ROM. Microsoft maintains a hardware compatibility list (HCL). This is a list of devices that Microsoft assures are compatible with Windows NT.

Windows NT can use the FAT16 and NTFS file systems. NTFS offers more security and features than FAT16 but, unlike FAT16, is not accessible from other operating systems. This compatibility is important because a PC can be configured to dual boot between Windows NT and DOS, Windows 95, or Windows 98.

Microsoft designed Windows NT with a modular approach. This design provides the ability for the operating system to be easily ported to other architectures. The two architectural modes of NT are user mode and kernel mode. Kernel mode is further broken down into executive services and the hardware abstraction layer, also known as HAL.

Each program runs as a process, which contains one or more threads, or system-controlled units of execution. NT provides a component called an NTVDM to create a DOS-like environment for DOS and legacy Windows applications. Windows 3.x programs run inside an NTVDM using an environment called WOW (Windows on Windows).

Windows NT brings new networking concepts. Machines can belong to a workgroup, a group of computers and users that share resources, in which each machine controls the resources it provides. Machines can also belong to a domain, a group of computers and users managed by a centralized computer called the primary domain controller, or PDC. Windows NT is unique in that it requires a username and password to log on or use any resources. The main user account is the Administrator account, which has full access to every resource on the local machine, or, if the computer is part of a domain, every resource on the domain.

Four disks are important for recovering from failed NT startups. Three of the disks are required to boot the Windows NT system, and the fourth is the emergency repair disk. The emergency repair disk, or ERD, is used to recover critical system files on the hard drive, such as those that store the registry.

Operating Systems

There are four main memory management techniques used in early operating systems: single-user systems, fixed partitions, dynamic partitions, and relocatable dynamic partitions. All four have three things in common: they require that the entire running program (1) be loaded into memory, (2) be stored contiguously, in adjacent memory locations, and (3) remain in memory until the job is completed. On the downside, each puts severe restrictions on the size of jobs, because a job can only be as large as the biggest partition in memory. These memory schemes were sufficient for the first three generations of computers, which processed jobs in batch mode. Turnaround time was measured in hours, or sometimes days, but in that period users expected such delays between submitting their jobs and picking up the output.

As users became able to submit their jobs from remote job entry stations, new methods of memory management were needed to accommodate the increased load on the central processor. The memory allocation schemes that followed had two things in common. First, programs no longer had to be stored in contiguous memory locations: they could be divided into segments of variable size or pages of equal size, and each page or segment could be stored wherever there was an empty block of memory big enough to hold it. Second, not all of the pages or segments had to be loaded into memory for the program to execute.
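
The key idea of paging, that an address splits into a page number and an offset, and pages can live in any free frame, can be sketched in a few lines; the 4 KB page size and frame numbers below are illustrative.

```python
PAGE_SIZE = 4096  # an illustrative 4 KB page

def translate(logical_address: int, page_table: dict) -> int:
    """Split a logical address into page number and offset, then map the
    page number through the page table to a physical frame."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]          # pages may live in any free frame
    return frame * PAGE_SIZE + offset

# Pages 0..2 of a program scattered into non-contiguous frames 5, 1, and 9:
page_table = {0: 5, 1: 1, 2: 9}
assert translate(0, page_table) == 5 * 4096
assert translate(4100, page_table) == 1 * 4096 + 4   # page 1, offset 4
```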

The memory manager has the task of allocating memory to each job to be executed and reclaiming that memory when the job's execution is complete. The memory manager is only one of four managers that make up the operating system. Once jobs are loaded into memory using a memory allocation scheme, the processor manager must allocate the processor to each job in the most efficient manner possible, dividing the CPU among all the system's users. This is different from job scheduling, which selects incoming jobs based on their characteristics; process scheduling instead allocates the CPU to job processes from moment to moment.
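The moment-to-moment CPU allocation described above can be illustrated with a round-robin sketch in Python. The job names, service times, and time quantum are invented for illustration:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate round-robin CPU allocation: each job runs for at most
    one quantum, then returns to the back of the ready queue."""
    queue = deque(jobs.items())          # (name, remaining_time) pairs
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)               # job gets the CPU for one quantum
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))  # not finished: back in line
    return order

# Three jobs needing 3, 2, and 1 time units, with a quantum of 1.
schedule = round_robin({"A": 3, "B": 2, "C": 1}, quantum=1)
print(schedule)  # ['A', 'B', 'C', 'A', 'B', 'A']
```

Each job makes steady progress rather than one job monopolizing the processor, which is the essence of process scheduling as opposed to job scheduling.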

Every operating system must dynamically allocate a limited number of resources while avoiding the two extremes of deadlock and starvation. There are several different methods of handling deadlock: prevention, avoidance, detection, and recovery. Deadlocks can be prevented by not allowing the four conditions of a deadlock to occur in the system at the same time. By eliminating at least one of the four conditions (mutual exclusion, resource holding, no preemption, and circular wait), the system can be kept deadlock free. The disadvantage of a preventive policy is that each of these conditions is vital to different parts of the system at least some of the time, so prevention algorithms are complex and routinely executing them involves high overhead.

Deadlocks can be avoided by clearly identifying safe states and unsafe states and requiring the system to keep enough resources in reserve to guarantee that all jobs active in the system can run to completion. The disadvantage of an avoidance policy is that the system's resources aren't allocated to their fullest potential. If a system doesn't support prevention or avoidance, then it must be prepared to detect and recover from the deadlocks that occur. Unfortunately, this option usually relies on the selection of at least one "victim", a job that must be terminated before it finishes execution and restarted from the beginning.
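A safe-state check in the style of the banker's algorithm can be sketched in a few lines of Python. This simplified version tracks a single resource type, and the allocation and need figures are invented for illustration:

```python
def is_safe(available, allocation, need):
    """Return True if some ordering lets every job finish (a safe state).
    available: free units of one resource type.
    allocation: units currently held by each job.
    need: additional units each job still requires to finish."""
    work = available
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and need[i] <= work:
                work += allocation[i]   # job i can finish and release its units
                finished[i] = True
                progress = True
    return all(finished)

# Three jobs and 3 free units: safe, because job 1 (need 2) can finish
# first, releasing enough units for the others in turn.
print(is_safe(3, allocation=[4, 2, 5], need=[5, 2, 4]))  # True
```

If no completion ordering exists, the function returns False, signaling an unsafe state in which granting further requests could lead to deadlock.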

Multiprocessor systems have two or more CPUs that must be synchronized by the processor manager. Each processor must communicate and cooperate with the others. These systems can be configured in a variety of ways; from the simplest to the most complex, they are master/slave, loosely coupled, and symmetric. By definition these are multiprocessing systems. Multiprocessing also occurs in single-processor systems between interacting processes that obtain control of the CPU at different times. The success of any multiprocessing system depends on its ability to synchronize the processors or processes and the system's other resources.

The concept of mutual exclusion helps keep processes holding allocated resources from becoming deadlocked. Mutual exclusion is maintained with a series of techniques including test-and-set, WAIT and SIGNAL, and semaphores (P, V, and mutex). Hardware and software mechanisms are used to synchronize the many processes, but they must be designed carefully to avoid the typical problems of synchronization: missed waiting customers, the synchronization of producers and consumers, and the mutual exclusion of readers and writers.
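The producer/consumer synchronization mentioned above can be sketched with counting semaphores and a mutex in Python. The buffer size and item values are illustrative only:

```python
import threading

# Producer/consumer synchronized with semaphores: `empty` counts free
# buffer slots, `full` counts filled slots (the P and V operations map
# to acquire and release), and a lock provides mutual exclusion.
BUFFER_SIZE = 2
buffer = []
empty = threading.Semaphore(BUFFER_SIZE)  # P(empty) before producing
full = threading.Semaphore(0)             # P(full) before consuming
mutex = threading.Lock()
consumed = []

def producer(items):
    for item in items:
        empty.acquire()          # wait for a free slot
        with mutex:              # enter critical section
            buffer.append(item)
        full.release()           # signal a filled slot

def consumer(count):
    for _ in range(count):
        full.acquire()           # wait for a filled slot
        with mutex:
            consumed.append(buffer.pop(0))
        empty.release()          # signal a free slot

t1 = threading.Thread(target=producer, args=([1, 2, 3, 4],))
t2 = threading.Thread(target=consumer, args=(4,))
t1.start(); t2.start(); t1.join(); t2.join()
print(consumed)  # [1, 2, 3, 4]
```

The semaphores prevent the "missed waiting customer" problem: a consumer that arrives before any item is produced simply blocks on `full` instead of missing the signal.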

Device Management

Operating systems are the basis upon which a computer performs the function of managing data. Part of the operating system is the Device Manager, which attempts to effectively manage the various components either inside the computer or attached to it. Managing the devices falls into four areas: tracking the status of each device, such as tape drives, disk drives, printers, and terminals; allocating each device; scheduling each device; and deallocating the device when a job is finished.

File Management

File system management is the software responsible for creating, deleting, modifying, and controlling access to files. The file manager keeps track of each file, uses a policy to easily locate and access a specific file, provides security by authorizing access to files only to those with clearance, and removes files, returning their space to storage when usage is completed. Files are stored in directories, groupings of like files, and these are then subdivided into subdirectories to further separate like groupings, such as an employee directory subdivided into addresses, job descriptions, and payroll. Each file in a directory or subdirectory must have a unique name, with file size limited only by the available storage space.

Network Organization Concepts

Computer networks are a means to connect multiple client computers so that all computers on the network can exchange data with each other, restricted only by security permissions: the president of a company can access all files, while the payroll clerk can access only payroll data. Networks may be local area networks (one organization), metropolitan area networks (such as a large university with multiple buildings), or wide area networks (covering a country or the world). The size of the network will determine its physical configuration and the hardware and software required.

Management Of Network Functions

Modern network operating systems are referred to as distributed operating systems: a collection of all the components of a computer network, controlled by a central computer or group of computers. A large number of independent client computers send their data to the central computer for storage and processing.

Systems Management

An operating system, whether a stand-alone system or a complex network operating system, must control all operations of the system, both hardware and software. It must provide for memory management, processor management, device management, and file management. An operating system is also responsible for system security and protection from computer viruses.

MS-DOS Operating System

MS-DOS has been popular for many years. It was written to serve users of several generations of hardware, from the earliest IBM PCs up to and including the more sophisticated stand-alone machines now in use. The major strength of MS-DOS was that it was the first operating system adopted as a standard among most PC manufacturers. Its weakness was that it was designed only for single-user, single-task systems; therefore it cannot handle or support more sophisticated applications that require multiple users and multitasking.

Windows NT Operating System

Windows NT was released as a robust system incorporating a graphical user interface and easy-to-use operation with the technical ability to work across several existing platforms. Designed to evolve over time, the operating system can migrate easily to newer, more sophisticated hardware platforms. To make it a more international operating system, effort was put into a single binary capable of accommodating the characters of many different languages.

The major benefit of this system over previous ones was a significant improvement in security, providing consistent protection for data and applications. The authentication model supports new interfaces ranging from bank teller machines to retinal and fingerprint scanners. This system also allowed the implementation of different security architectures, such as the Kerberos model, on top of the existing OS, thus extending NT's reach into the marketplace.

UNIX Operating System

UNIX was written for programmers by programmers, and it is very popular with users fluent in the ways of programming. The reasons its proponents cite for its popularity include its user interface, device independence, and portability. In the market today there are versions of UNIX that operate very large multi-user systems, single-user systems, and everything in between.

Those who do not like the system cite its terse system commands and the many existing versions of UNIX with varying degrees of compatibility. Brief commands and the lack of a friendly human/computer interface also make some users shy away from this type of system. UNIX systems have moved away from the trend of hiding the system from the user; this leaves the user with a better understanding of the system's internal functions, producing more efficiency and productivity.
WAN

Networking can be defined as the technology that connects multiple computers and enables them to exchange information. Data transmission can occur by two mechanisms: broadcast transmission and point-to-point transmission. The basic network topologies are bus, ring, star, tree, mesh, and hybrid. In the bus topology, all the computers are connected to a single cable that acts as the backbone of the network. In the ring topology, each computer has two dedicated links with its two immediate neighbors on either side.

In the star topology, each computer has a dedicated point-to-point transmission that links to a central controller called a hub. The tree topology uses multiple hubs to connect the nodes. In the mesh topology, each computer is connected to every other computer through a dedicated point-to-point transmission link. A network based on the hybrid topology comprises subnetworks with different topologies.

A LAN refers to a network that is generally located within a building. A MAN refers to a network that has nodes located within a city. A WAN provides a communication link between networks located across large geographical areas. A dial-up connection is a temporary network connection that can be obtained by dialing up a distant computer or service provider through a local telephone line.

In a switched connection, a limited number of transmission channels are shared between multiple senders and receivers. A dedicated connection is a permanent connection between the sender and receiver. Cabled circuits can be classified into the following types: coaxial cable, twisted-pair cable, and fiber-optic cable. One-way communication of data is known as simplex data transmission. In half-duplex transmission, data can travel in both directions, but not simultaneously.

In full-duplex transmission, data can simultaneously travel in both directions. Analog data transmission refers to the transfer of data using varying electromagnetic waves. Digital data refers to a set of values, in 0s and 1s, gathered from an analog signal by sampling it at specific intervals.

Data encoding refers to the conversion of data to a format that is compatible with the destination device, and data decoding is the reverse conversion performed at the receiving device. Data encryption refers to the modification of data into a format that is understandable only to the intended recipient.

DTE devices are the source and destination devices between which the data needs to be communicated. A DCE device translates data and makes it compatible with the receiving device. A DSU is used to transmit and receive data signals. A CSU is used to connect user devices, such as computers and phones, to the service provider's telephone network. Multiplexing can be defined as transmitting multiple signals over a single channel. The various types of multiplexing techniques are Frequency-Division Multiplexing (FDM), Time-Division Multiplexing (TDM), and Wavelength-Division Multiplexing (WDM).
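The slot interleaving behind TDM can be shown with a short Python sketch. The stream contents are invented, and real TDM works on fixed time slots rather than Python lists, but the round-robin interleaving is the same idea:

```python
def tdm_multiplex(streams):
    """Time-division multiplexing: take one unit from each input stream
    per frame, round-robin, onto a single shared channel."""
    channel = []
    for frame in zip(*streams):   # one time slot per stream, per frame
        channel.extend(frame)
    return channel

# Three senders share one link; slots alternate A1 B1 C1 A2 B2 C2 ...
line = tdm_multiplex([["A1", "A2"], ["B1", "B2"], ["C1", "C2"]])
print(line)  # ['A1', 'B1', 'C1', 'A2', 'B2', 'C2']
```

The receiver de-multiplexes by reading every third slot back into the corresponding output stream.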

Internetworking and Internetworking Devices

Internetworking can be defined as the connection between multiple networks. The OSI reference model has seven layers: physical, data link, network, transport, session, presentation, and application. Many devices are used for internetworking, including repeaters, hubs, bridges, routers, brouters, switches, and gateways. There are three layers in the Cisco hierarchical model: access, distribution, and core.

Switching

Switching is a mechanism through which multiple devices in a network can be connected to each other. Switches operate in three modes: store-and-forward, cut-through, and fragment-free. There are three methods of switching: circuit switching, packet switching, and message switching. In circuit switching, a direct physical connection is established between two devices. The two methods of circuit switching are space-division and time-division.

A crossbar is a space-division switch that connects the inputs and outputs in a matrix formation. Datagrams and virtual circuits are the two approaches used in packet switching. Message switching uses the store-and-forward technique. Layer-2 switching is associated with layer 2 of the OSI model and uses MAC addresses for switching. Layer-3 switching, associated with layer 3 of the OSI reference model, is made up of three components: packet switching, routing, and network services.

Data-Link layer and Medium Access Sublayer Protocols

A protocol is a set of procedures and rules that govern data communication. A standard ensures compatibility between products from different manufacturers. The IEEE has defined three standards for LANs: Ethernet, Token Bus, and Token Ring. Ethernet uses the CSMA/CD access method for data transmission. The Token Bus standard is used in factory automation and manufacturing environments. The Token Ring standard uses the mechanism of token passing to access the media.

In character-oriented protocols, a frame is interpreted as a series of characters. In a byte-count-oriented protocol, the frame header has a byte count that denotes the number of bytes in the data. In a bit-oriented protocol, a frame is represented as a sequence of bits. SDLC, HDLC, and LAP are examples of bit-oriented protocols.

The LLC sublayer is responsible for error detection, flow control, and framing. The MAC sublayer performs hardware addressing and media access. ALOHA is a multiple access protocol that is of two types: pure ALOHA and slotted ALOHA. Bitmap protocol and limited contention are other examples of multiple access protocols. MACA is a multiple access protocol used in wireless LANs. PPP is a protocol stack that is implemented at the data-link and physical layers of the OSI reference model.

FDDI is a LAN protocol that uses fiber cable as the medium for data transmission. Flow control is a mechanism to regulate the flow of data so that the receiver is not bombarded with excess data from the sender. Stop-and-wait and sliding window are two methods used to achieve flow control. The stop-and-wait method requires the recipient to acknowledge each frame that is sent. The sliding window method uses an imaginary window at both the sending and receiving ends to achieve flow control.
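The stop-and-wait method can be sketched as a small Python simulation. The frame names and loss model are invented; a real protocol would also use timers and sequence numbers:

```python
import random

def stop_and_wait(frames, loss_rate=0.0, rng=None):
    """Stop-and-wait flow control: send one frame, wait for its ACK,
    and retransmit if the frame or ACK is lost (simulated by chance)."""
    rng = rng or random.Random(0)
    delivered, transmissions = [], 0
    for frame in frames:
        while True:
            transmissions += 1
            if rng.random() >= loss_rate:   # frame and ACK both arrive
                delivered.append(frame)
                break                        # ACK received: send next frame
            # otherwise: timeout, retransmit the same frame

    return delivered, transmissions

delivered, sent = stop_and_wait(["f1", "f2", "f3"], loss_rate=0.0)
print(delivered, sent)  # ['f1', 'f2', 'f3'] 3
```

Because only one frame is ever outstanding, the link sits idle while the sender waits for each acknowledgment; sliding window improves on this by allowing several unacknowledged frames in flight at once.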

Upper-Layer Protocols and VLANs

TCP/IP is a layered group of protocols, of which the most important are TCP and IP. Telnet is an application layer protocol that allows a user to log on to a remote computer. FTP is a TCP/IP protocol used to copy files from one host to another. SMTP is the TCP/IP protocol used to transfer e-mail between hosts, while SNMP provides a framework for managing devices on a TCP/IP network.

DNS is an application layer protocol that helps identify each host on the Internet with the help of a user-friendly name. BootP provides configuration information during start-up. DHCP is an extension of BootP that dynamically assigns configuration information. TCP is a connection-oriented, reliable protocol operating in the host-to-host layer of the TCP/IP suite. UDP is a connectionless protocol.

IP is a network layer protocol that provides connectionless transmission. An IP address is made up of two parts: the network ID and the host ID. Subnetting allows an additional hierarchy by dividing a network into smaller subnetworks. The shortest path is defined as the path with the least number of hops in it.
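The network ID/host ID split and subnetting can be demonstrated with Python's standard `ipaddress` module. The address block below is an illustrative private range, not taken from the original text:

```python
import ipaddress

# Splitting 192.168.1.0/24 into four /26 subnets: two extra bits of
# network ID are borrowed from the host ID portion of the address.
network = ipaddress.ip_network("192.168.1.0/24")
subnets = list(network.subnets(new_prefix=26))
for subnet in subnets:
    print(subnet, "-", subnet.num_addresses, "addresses")
# 192.168.1.0/26 - 64 addresses
# 192.168.1.64/26 - 64 addresses
# 192.168.1.128/26 - 64 addresses
# 192.168.1.192/26 - 64 addresses
```

Each /26 subnet carries 6 host bits, giving 64 addresses (62 usable hosts once the network and broadcast addresses are excluded).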

In link state routing, each router creates an LSP, which is sent to every other router. In distance vector routing, each router creates a table with information about the network and shares it with its neighbors. In static routing, the information about the routes is entered manually into the routing table. In default routing, packets with no specific route are sent to a designated next-hop router. In dynamic routing, routing protocols are used to update the routing table.
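One round of a distance vector table update can be sketched in Python. The router names and link costs are invented, and a real protocol such as RIP would repeat this exchange periodically with every neighbor:

```python
def distance_vector_update(my_table, neighbor, neighbor_table, link_cost):
    """One round of distance vector routing: adopt a neighbor's route
    whenever going through that neighbor is cheaper than what we know."""
    updated = dict(my_table)
    for dest, cost in neighbor_table.items():
        via_neighbor = link_cost + cost
        if dest not in updated or via_neighbor < updated[dest][0]:
            updated[dest] = (via_neighbor, neighbor)   # (cost, next hop)
    return updated

# Router A reaches B directly at cost 1; B advertises that it can
# reach C at cost 2, so A learns a route to C at total cost 3 via B.
table_a = {"B": (1, "B")}
table_a = distance_vector_update(table_a, "B", {"C": 2}, link_cost=1)
print(table_a)  # {'B': (1, 'B'), 'C': (3, 'B')}
```

Repeating this update across all routers converges each table toward the shortest path, measured here as the lowest total cost.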

RIP and IGRP are the two most important protocols used in IP routing. SMDS is a high-speed packet-switched technology offered by telephone companies for metropolitan area networks. DQDB is the protocol used by SMDS. VLANs are logical segments of a LAN that enable computers to communicate in such a way that the segments behave like individual LANs.

X.25

X.25 is a packet-switched protocol developed by CCITT. X.25 uses virtual circuits to establish a link between two devices that are a part of the network. Virtual circuits are logical communication channels established between the source and destination devices.

The X.25 protocol is a three-layered protocol. The three layers are the physical layer, the frame layer, and the packet layer. The physical layer defines the standards for using the physical medium between DTE and DCE. The frame layer provides flow control and error control between the user device and the network interface. The packet layer creates packets and manages the connection to the network.

Each layer of X.25 supports different protocols. Three steps must be followed to make devices communicate on an X.25 network. First, virtual circuits are established between the user devices on the network. Second, packets are exchanged between user devices through the intermediate nodes on a network path. Third, packets are converted to and from a bit stream at every node on the network.

Frame Relay

Frame relay is a packet-switching technology. It uses only the physical and data-link layers of the OSI reference model and provides services to higher-layer protocols. Frame relay uses both permanent and switched virtual circuits.

The DLCI identifies the virtual circuits in frame relay. The BECN and FECN bits of the address field are used for congestion notification. Frame relay traffic depends on different factors, such as access rate and CIR. The building blocks of a frame relay network are virtual circuits, the layers of the frame relay protocol, the frame relay frame format, the Local Management Interface (LMI), and the network provider.

A frame relay network can use different methodologies for congestion control. The congestion control methodologies can be differentiated into the following two categories: open loop and closed loop. Traffic control methods help avoid congestion problems. Traffic control methods use parameters that define the traffic carrying capacity of a network for a specific connection.

Frame relay can be configured on a Cisco router to support either a point-to-point connection or a point-to-multipoint connection. LMI is one of the building blocks of frame relay networks; it was developed by Cisco Systems, StrataCom, Northern Telecom, and Digital Equipment Corporation. The Cisco IOS provides a number of EXEC commands that help monitor the various frame relay connections on a network.

Asynchronous Transfer Mode (ATM)

With the use of ATM technology, data can be transmitted at high speed among the nodes on a WAN. An ATM network consists of endpoints, interfaces, and connections. A connection between two endpoints is established through a transmission path, VPs, and VCs. A pair of VCI and VPI values is used to identify a virtual connection.

ATM has a layered architecture containing three layers: AAL, ATM, and physical. The AAL is divided into four types: AAL1, AAL2, AAL3/4, and AAL5. ATM services are categorized into two broad classes: real-time services and non-real-time services.

Integrated Services Digital Network (ISDN)

ISDN was an attempt by the ITU-T to replace the existing analog telephone system with a digital one that could be used for both voice and data. ISDN services are divided into three classes. The first is bearer services, in which the network does not modify the information content. Next is teleservices, in which the network may modify or process the information. Finally, supplementary services cannot be used independently; they must be used with bearer services or teleservices.

ISDN provides various types of channels. The most important are the B-channel, which works as the basic user channel; the D-channel, which provides control signaling and low-rate data transfer; and the H-channel, which is used for the transfer of data at a high rate. BRI and PRI are the two types of interfaces used to access ISDN. BRI uses two B-channels and one D-channel, whereas PRI uses either 23 B-channels and one D-channel or 30 B-channels and one D-channel.
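The aggregate bandwidth of the BRI and PRI interfaces follows directly from the standard channel capacities, which a short worked calculation makes concrete:

```python
# Standard ISDN channel capacities in kbps: each B-channel is 64 kbps;
# the D-channel is 16 kbps on BRI and 64 kbps on PRI.
B, D_BRI, D_PRI = 64, 16, 64

bri = 2 * B + D_BRI       # BRI: 2B + D
pri_t1 = 23 * B + D_PRI   # PRI: 23B + D (North American T1 framing)
pri_e1 = 30 * B + D_PRI   # PRI: 30B + D (European E1 framing)
print(bri, pri_t1, pri_e1)  # 144 1536 1984
```

So BRI carries 144 kbps of usable signal, while the two PRI variants carry 1,536 kbps and 1,984 kbps respectively.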

The devices that use the services of ISDN are divided into functional groups. Network terminations, terminal equipment, and terminal adapters are the three types of functional groups. The ISDN architecture consists of three planes, and each plane is made up of the seven layers of the OSI model. These planes are the user plane, the control plane, and the management plane. The user and control planes share the same physical layer. In the data-link layer, the B-channel uses the LAPB protocol, whereas the D-channel uses LAPD, which is similar to HDLC. In the network layer, the data packet of the D-channel is called a message. The four fields of the message are protocol discriminator, call reference, message type, and information elements.

To implement ISDN with the help of Cisco routers, you need a TA and a router with a built-in NT1. The dial-on-demand feature introduced by Cisco allows users to bring up network links on demand, effectively reducing WAN costs. B-ISDN, using fiber-optic media, fulfills the needs of users who require data rates of up to about 622 Mbps. B-ISDN offers two types of services: interactive services and distributive services. The three access methods available for B-ISDN are 155.520 Mbps full-duplex; 155.520 Mbps outgoing and 622.080 Mbps incoming, asymmetric full-duplex; and 622.080 Mbps full-duplex.

SONET

SONET is a set of standards that defines the rates and formats of optical networks. Synchronous Transport Signal (STS) is the basic component of SONET that makes communication possible between the nodes of the network. STS comprises two portions: STS payload and STS overhead. SONET uses three basic devices for transmission: Multiplexers/de-multiplexers, regenerators, and add/drop multiplexers. The various levels of connections in a SONET communication system are called lines, paths, and sections.

Virtual tributaries provide backward compatibility to the SONET communication systems. The SONET standard consists of four layers: the photonic layer, section layer, line layer, and the path layer. Each SONET frame consists of 6480 bits. Interleaving is a process that eliminates the need to de-multiplex the higher rate signal. Concatenation is a process by which multiple STS level signals are combined and transmitted at a higher STS data transfer rate. Payload pointers and bit stuffing are the methods that help in the alignment and time synchronization of the frames.
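The 6,480-bit frame size quoted above follows from the STS-1 frame layout, which can be verified with a short calculation (the 9-row-by-90-column byte layout and 8,000 frames per second are the standard STS-1 parameters):

```python
# An STS-1 frame is 9 rows of 90 bytes, transmitted 8,000 times per
# second, which yields both the frame size and the STS-1 line rate.
rows, columns, bits_per_byte = 9, 90, 8
frames_per_second = 8000

frame_bits = rows * columns * bits_per_byte
line_rate = frame_bits * frames_per_second       # bits per second
print(frame_bits, line_rate)  # 6480 51840000
```

The result, 51.84 Mbps, is the familiar STS-1/OC-1 base rate; higher STS levels are integer multiples of it.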

Configuring Cisco Routers

Cisco IOS is used to run and configure Cisco routers. You can connect to a router through its console port, auxiliary port, or a Telnet session. A router can be configured in either setup mode or CLI mode. Setup mode provides an easy, step-by-step method to configure a router. CLI mode allows you to configure each parameter separately and provides more flexibility. A Cisco router uses five types of passwords: enable, enable secret, auxiliary, console, and Telnet.

Banners are used to display messages to the user. They can be of the following types: message of the day, exec, incoming, and login. The interface command is used to choose a specific interface of a router for configuration. The ip address command is used to configure an IP address on an interface. The hostname command is used to specify the name of a router. The ping command is used to test the connectivity of a device on a network. The trace command is used to trace the path that a packet takes to reach its destination on a network. The show interface command is used to verify the details of the interfaces on a router.
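A minimal configuration session tying these commands together might look like the sketch below. The hostname, password, interface name, and addresses are illustrative placeholders, not values from any real network:

```
Router> enable
Router# config t
Router(config)# hostname Lab1
Lab1(config)# enable secret class
Lab1(config)# interface ethernet 0
Lab1(config-if)# ip address 192.168.1.1 255.255.255.0
Lab1(config-if)# no shutdown
Lab1(config-if)# exit
Lab1(config)# exit
Lab1# ping 192.168.1.2
```

The prompt changes (`>`, `#`, `(config)#`, `(config-if)#`) show which mode each command runs in: user EXEC, privileged EXEC, global configuration, and interface configuration respectively.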

Managing a Cisco Internetwork

The configuration register value can be changed to modify and view how a router boots and runs. The copy flash tftp command is used to back up the Cisco IOS to a TFTP host. If the router configuration is copied to a TFTP host as a second backup, you can restore the configuration using the copy tftp running-config command. Hostnames can be resolved to IP addresses in one of the following ways: building a host table or using DNS. You can quit a Telnet session to a router or switch by typing exit. CDP enables administrators to collect information about Cisco devices that are locally and remotely attached.

Configuration and Access Lists of Novell IPX

Novell Internetwork Packet Exchange (IPX) is derived from the Xerox Network Systems (XNS) protocol. In the interface provided by Novell, communication can occur between client and server or between server and server. The Service Advertising Protocol (SAP) is used by NetWare servers to broadcast their services; this broadcast is known as the SAP broadcast. The Routing Information Protocol (RIP) is used to share the routing information for data packets that may need to be transmitted to a specific node of the network. The IPX protocol stack refers to the set of protocols that constitute the family of IPX protocols used by Novell NetWare. The IPX protocol stack includes NCP, SAP, RIP, NLSP, NetBIOS, SPX, IPX, and medium access protocols.

An IPX address is a unique address provided to each node by Novell NetWare. IPX encapsulation can be defined as the process of building frames from IPX datagrams or packets received from the upper layer protocols and transmitting them across the network. To enable IPX on Cisco routers, you need to: enable IPX routing and enable IPX routing on the interface.

To enter the global configuration mode, specify the config t command at the router prompt in HyperTerminal. To enable IPX routing, specify the ipx routing command in the global configuration mode of the router. To enter the interface mode, specify the int [interface type] command. To enable IPX routing on an interface, specify the ipx network number [encapsulation encapsulation-type] [frame] command. To configure the secondary address, specify the ipx network number [encapsulation encapsulation-type] [secondary] command in the interface configuration mode. To specify the interface for which the sub-interface needs to be configured, specify the interface [interface type] [port] [sub-interface number] command. To configure the sub-interface, specify the ipx network number [encapsulation encapsulation-type] [secondary] command. Press Ctrl+Z to exit the sub-interface configuration mode.

An access list is a list of conditions maintained by the router that can be used to monitor both inbound and outbound traffic. Access lists can be configured for the IP and IPX protocols. Standard IP access lists filter packets by checking only the source IP address of each packet that uses the IP protocol. To add an entry to a standard IP access list, specify the access-list access-list-number [deny | permit] source-address [source-wildcard] command at the router prompt. To add an entry to a standard IPX access list, specify the access-list access-list-number [deny | permit] source-address [.source-node [source-node-mask]] destination-address [.destination-node [destination-node-mask]] command at the router prompt.
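Put together, a standard IP access list that blocks one host and permits everyone else might be configured as in the sketch below. The list number, host address, and interface are illustrative placeholders:

```
Router(config)# access-list 10 deny 192.168.1.5 0.0.0.0
Router(config)# access-list 10 permit any
Router(config)# interface ethernet 0
Router(config-if)# ip access-group 10 in
```

Entries are evaluated top to bottom, and an implicit "deny all" ends every list, which is why the explicit `permit any` line is needed; the ip access-group command then applies the list to inbound traffic on the chosen interface.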

To add an entry in the extended IP access list, specify the access-list access-list-number [deny | permit] [protocol | protocol-keyword] [source-address source-wildcard-mask | any] [destination-address destination-wildcard-mask] [log] [time-range time-range-name] command. To add an entry in the IPX access list, specify the access-list access-list-number {deny | permit} protocol [source-network] [[[source-node] source-node-mask] | [.source-node source-network-mask.source-node-mask]] [source-socket] [destination.network] [[[.destination-node] destination-node-mask] | [.destination-node destination-network-mask.destination-node-mask]] [destination-socket] [log] command at the router prompt. To add an entry in the IPX SAP filter, specify the access-list access-list-number [deny | permit] network [.node] [network-mask node-mask] [service-type [server-name]] command.

Configuring Cisco Catalyst 1900 Switches

Cisco Catalyst 1900 switches can be categorized into two types, standard edition and enterprise edition. Cisco has manufactured the following two models of the Cisco Catalyst 1900 switches: Cisco Catalyst 1912 and Cisco Catalyst 1924. There are two types of operating systems that are used by the Cisco switches: IOS-based and set-based. The tasks performed for configuring network settings for the 1900 Catalyst switches are: setting a password, setting a host name, setting IP information, configuring switch interfaces, configuring port security, and modifying the LAN switch type.

You can also view the status information of the switch by pressing the mode button. To enter the enable mode, you need to type the enable command. To enter the global configuration mode, you need to specify the config t command. To set the user and enable mode passwords, you need to specify the enable password command in the global configuration mode. To set the host name, you need to specify the hostname command in the global configuration mode. To set the IP configuration, you need to specify the ip address command in the global configuration mode. To set the default gateway, you need to specify the ip default-gateway command in the global configuration mode. To view the different interfaces, you need to specify the show interface command in the global configuration mode.

To assign a name to an interface, specify the description command in the interface configuration mode. To modify the number of hardware addresses supported by the switch, specify the port secure max-mac-count command. To view the configured VLANs, use the show vlan command. To configure trunks on a Fast Ethernet port, specify the trunk [parameter] command. To verify the trunk ports, specify the show trunk command. To configure a VTP domain, set the following information on a switch: operating mode, domain name, password, and the pruning capabilities of the switch. To delete a VTP database, specify the delete vtp command.
Networking

A computer network provides the following advantages: access to remote programs and databases, communication facilities, resource sharing, and backup and failover. Based upon the geographical area they span, networks can be classified into three types: local area networks, metropolitan area networks, and wide area networks. The various media used to connect devices on a network are categorized into two types: cabled and wireless. The most commonly used network topologies are bus, ring, star, mesh, and hybrid.

Basic internetworking concepts include internetworking design and its components, network backbone services, and local-access services. The most commonly used hardware devices on a network are network interface cards, cables, hubs, bridges, routers, switches, firewalls, and servers. A corporate networking infrastructure consists of the following essential services: messaging services, Web services, and database services.

The standard-creation process involves four components. The first is the participation of a large number of special interest groups and organizations that belong to mainstream business. The second is representatives from government, vendors, academic institutions, and consultancies. The third is that standard creation takes place under the governance of an expert panel consisting of industry associations and trade groups. The fourth is a chairperson who presides over the standard-creation process. The process itself involves introducing a proposal by a member of the SIG; amending the proposal on the basis of suggestions and testing done by the concerned SIG members at their respective ends; creating a compiled draft and submitting it to the SIG's parent group; publishing the proposal and circulating it as an official standard; and reviewing the standard annually to validate its applicability in the changing technological scenario.

Some of the most important standards-creating organizations are: the American National Standards Institute (ANSI), the Comité Consultatif International Télégraphique et Téléphonique (CCITT), the Electronic Industries Alliance (EIA), the Internet Architecture Board (IAB), the Institute of Electrical and Electronics Engineers (IEEE), the International Organization for Standardization (ISO), the Object Management Group (OMG), and The Open Group (TOG).

Network Design

Network design involves understanding various networking needs, mapping the business needs with networking goals, and trying to satisfy them in an optimal manner. The tasks involved in network design are: understanding the requirements, choosing suitable network architecture and NOS, choosing a suitable LAN, and choosing suitable WAN standards.

The various types of requirements are: business requirements, requirement for future growth of the network, requirement for working with existing infrastructure, Internet and intranet infrastructure, and security and accessibility. The various business requirements are: the cost and budget requirements, the type of applications to be supported, and the connectivity requirements.

You must choose the appropriate network architecture, either client/server or peer-to-peer. Then you must choose an appropriate network operating system. After that, you must choose from various LAN standards, such as Ethernet, Token Ring, FDDI, or a combination of these. Then you must choose appropriate networking media.

You must design the cabling layout based on EIA/TIA standards. Then you must choose an appropriate WAN technology, and finally you must choose an appropriate WAN topology: peer-to-peer, ring, star, full mesh, or partial mesh.

Choosing Suitable Technology

Computers on a network follow certain sets of rules while communicating with each other or when transferring information across different networks. These sets of rules are known as protocols. There are two commonly used LAN protocols; these are TCP/IP and IPX/SPX. There are six commonly used WAN protocols; they are PPP, ISDN, HDLC, ATM, X.25, and frame relay. There are three commonly used wireless protocols; they are 802.11a, 802.11b, and 802.11g. There are three commonly used network management protocols; they are CMIP, SNMP, and SNMPv2.

Network Traffic Management

According to the OSI Model, network management covers the following functional areas: managing faults, managing configuration, managing security, managing accounting, and managing performance. The need for network traffic management arises for different reasons. These reasons include: an increase in the user response time, an increase in the time taken to download files from the Internet, poor accessibility to certain Web sites, and poor network resource performance.

Network traffic can be classified on the basis of: the type of Internet service used by the organization, the source and destination of the network traffic, users and service groups, and the direction of the traffic. There are five basic types of network traffic. These basic types are: bursty traffic, interactive traffic, latency-sensitive traffic, non-real-time traffic, and recreational traffic. When monitoring the network, you should collect the following network-related data: network availability, network latency, bandwidth utilization, network errors, protocol distribution, and host identification.

The router uses a priority filter to segregate network traffic into the following priority queues: highest, high, medium, and normal. There are eight commonly used techniques for controlling network traffic. These techniques are: priority queuing, CBQ, TCP rate control, ATM GCRA, routers, ABC, protocol filtering, and multicasting and IGMP. A protocol analyzer is a portable tool that enables you to capture and analyze all traffic sent over the network. EtherPeek is a tool that monitors traffic load on Ethernet networks.
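The priority-queuing technique above can be sketched in Python; the four queue names mirror the priority queues described, while the packet labels and the sequence-number tie-breaking scheme are invented for illustration.

```python
import heapq

# Hypothetical sketch of priority queuing: packets tagged with a
# priority class are dequeued highest-priority-first. Class names
# mirror the four queues above; the packet data is invented.
PRIORITY = {"highest": 0, "high": 1, "medium": 2, "normal": 3}

def make_queue():
    return []

def enqueue(queue, priority, seq, packet):
    # seq breaks ties so packets in the same class stay FIFO
    heapq.heappush(queue, (PRIORITY[priority], seq, packet))

def dequeue(queue):
    return heapq.heappop(queue)[2]

q = make_queue()
enqueue(q, "normal", 0, "bulk-transfer")
enqueue(q, "highest", 1, "voip-frame")
enqueue(q, "medium", 2, "web-request")

print(dequeue(q))  # voip-frame leaves first despite arriving later
```

A real router implements this in hardware or in the forwarding path, but the ordering logic is the same: the priority class, not the arrival order, decides which packet is serviced next.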

Technology and Service: A Comparison

Packet switching is a process that breaks down data into smaller segments called packets. In circuit switching, a dedicated physical connection is established between the sender and the receiver for the entire duration of a transmission. In message switching, each message is treated as an independent unit and includes its own destination and source address. When an intermediate device receives the message, it stores it until the next device is free to receive it. Cell switching breaks a data message into cells, which are then positioned on separate lines that are shared by numerous streams.

A public network is a network of networks in which the user at any one computer can, given the required access and permissions, obtain information from any other computer. A private network is a network of networks in which the user at any one computer can, given the required access and permissions, obtain information from any other computer within a specified domain.

Video conferencing is an audio-visual link between two or more sites that are geographically separated from each other. VoIP is a system for transmitting voice calls over data networks such as the ones that make up the Internet. There are various network access methods. These are: Ethernet, Gigabit Ethernet, Token Ring, and FDDI. Whenever you buy any computer or network-related product, such as servers, modems, routers, switches, and hubs, it is highly recommended that the company or organization you represent get an annual maintenance contract (AMC) for that product. ISPs provide on-ramp access as well as a wide range of services such as e-mail, Web site hosting, corporate firewalls, and VPNs.

Access and Backbone Network Design

The access network is a part of the telecommunication network, and it connects the subscribers or users with the terminal exchange. The simplest way to link the user equipment with your switch is by using a pair of copper conductors. You can lay down some big cables so that nearby subscribers can utilize one of the spare pairs. This multi-pair cable is terminated in a distribution point box, or DP box.

You can group a number of DPs and serve them with a common cable. You need to set up a cross connection point (CCP) by bringing a large cable from the exchange and then serving all the DPs from that CCP. Access networks are flexible, easy to maintain, cost-effective, and take less time to build. There are two types of forecasting: demand forecast and traffic forecast.

Routing services allow IP and IPX routing. Telecommuting services connect devices over a telephone network using ARA, SLIP, CSLIP, PPP, and XRemote. There are many advantages of integrated access solutions. First, these solutions are cost-effective, flexible, and efficient. Second, only one T1 line can be used instead of two T1 lines. Third, these solutions increase the profit margins of the carriers and improve their relative positions. Fourth, these solutions provide a wide range of services that allow network operators to bundle different services according to the requirements of their customers. Finally, these solutions provide modular services so that customers can easily and cost-effectively migrate from one kind of service to another.

A high-speed network connecting several networks in a single organization is known as a backbone network. There are four different types of backbone networks: routed backbones, bridged backbones, collapsed backbones, and VLANs. The ideal backbone network should combine the use of layer-2 and layer-3 Ethernet switches. Layer-2 backbone networks are used when you need a cost-effective network; they provide very high performance and fast recovery in case of a failure. Layer-3 backbone networks are used when you need high performance for handling multimedia applications based on IP unicast and multicast. There are four areas that affect the efficiency of the network and its associated throughput: optimizing the frame and packet size, limiting segmentation, minimizing device delay, and sizing a window.

Addressing and Routing Design

There are five classes of IP addresses. These are class A, class B, class C, class D, and class E. There are three blocks of IP addresses that are reserved for private networks. These are 10.0.0.0 through 10.255.255.255, 172.16.0.0 through 172.31.255.255, and 192.168.0.0 through 192.168.255.255.

Subnetting creates multiple logical networks from a single class A, B, or C network. The subnet mask is a 32-bit number that distinguishes the network address, the subnet address, and the host address. There are two types of IP routing methods. These are direct routing and indirect routing. To route datagrams, IP uses the IP routing table.
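As a sketch of subnetting, Python's ipaddress module can split a network into subnets and show the resulting mask; the 192.168.1.0/24 network is an invented example, and the final line checks membership in the reserved private blocks listed above.

```python
import ipaddress

# A subnetting sketch with Python's ipaddress module. The example
# network 192.168.1.0/24 is invented for illustration.
net = ipaddress.ip_network("192.168.1.0/24")

# Borrow 2 bits from the host portion to create four /26 subnets.
subnets = list(net.subnets(prefixlen_diff=2))
print([str(s) for s in subnets])
# ['192.168.1.0/26', '192.168.1.64/26', '192.168.1.128/26', '192.168.1.192/26']

# The subnet mask of a /26 network:
print(subnets[0].netmask)  # 255.255.255.192

# Checking membership in the reserved private address blocks:
print(ipaddress.ip_address("172.20.0.5").is_private)  # True
```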

Novell IPX supports multiple logical networks on an individual interface, and each network requires a single encapsulation type. CIDR works by assigning an organization the number of bits that it requires for a network address. CIDR supports route aggregation, which enables routers to combine multiple routes into a single entry in the routing table. The NAT mechanism maps the internal IP addresses of a network to the officially assigned external addresses. NAT was developed as a temporary solution to the problem of IP address exhaustion.
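Route aggregation under CIDR can be sketched the same way: four contiguous /24 routes collapse into a single /22 entry, so the routing table needs one entry instead of four (the 10.1.x.0 prefixes are invented).

```python
import ipaddress

# Route aggregation sketch: four contiguous /24 networks collapse
# into a single /22 supernet.
routes = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(4)]
aggregated = list(ipaddress.collapse_addresses(routes))
print([str(n) for n in aggregated])  # ['10.1.0.0/22']
```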

Operations and Network Management

Documenting a network includes three things. These things are documenting network hardware, documenting network software, and documenting network design. There are many basic network administration tasks. Three of these tasks are: managing network accounts, managing network performance, and managing network security.

Some of the basic network administration guidelines include: document the network and its resources, track all IP addresses and the network-addressing scheme, educate the network users about the proper use of the network and its resources, perform all the steps necessary to protect the network from internal and external security threats, ensure that any computing component to be installed on the network is compatible with the existing network, and provide instant support for any network fault. Some of the commonly used remote management protocols are: Simple Network Management Protocol (SNMP), Desktop Management Interface (DMI), and Web-Based Enterprise Management (WBEM).

Network Security and Backup

You need to secure networks from various security threats. Some of the mechanisms for securing the network are: authentication, access control, server logs, antivirus software, firewalls and proxy servers, intrusion detection systems, and backup and off-site storage.

Backing up is the process of creating a copy of data. Various types of backup media are: hard disk drives, magnetic tapes, and CD-RW. Backup modes are: full backup, incremental backup, and differential backup. Off-site storage of backups prevents any loss arising out of local disasters.

Security Management

Network security means the security of data, network devices, and computer systems on the network. Physical security refers to the safety and protection of building sites, equipment, information, and software from theft, natural disasters, and accidents. SSL protects data traveling between two computers on a public network, such as the Internet. Most of the popular browsers, such as Internet Explorer and Netscape Navigator, support the SSL protocol. TLS, a successor to SSL, ensures the integrity and privacy of data transmission between two applications.

Cryptography is the mechanism of changing a message into an unreadable form. The process that changes plaintext content to an unreadable form is called encryption. A key is a word, number, or phrase that you use to encrypt and decrypt messages. Symmetric key encryption is an encryption method that uses a single key for both encryption and decryption. Asymmetric key encryption is an encryption method that uses a pair of keys for encryption and decryption. Both symmetric and asymmetric key encryption methods ensure basic integrity of data. A message digest is a process that changes a message of any length into a fixed-length value. A message digest helps users verify the integrity of messages by comparing the hash of a received message to the hash of the original.
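A minimal sketch of a message digest using Python's hashlib (the messages are invented): the digest has a fixed length, and any change to the message changes the digest, which is how integrity is verified.

```python
import hashlib

# Message-digest sketch: the same input always hashes to the same
# fixed-length value, and any change to the message changes the
# digest. The messages are invented.
original = b"transfer 100 dollars"
tampered = b"transfer 900 dollars"

digest_original = hashlib.sha256(original).hexdigest()
digest_received = hashlib.sha256(original).hexdigest()
digest_tampered = hashlib.sha256(tampered).hexdigest()

print(digest_original == digest_received)  # True: message is intact
print(digest_original == digest_tampered)  # False: message was altered
print(len(digest_original))                # 64 hex chars, fixed length
```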
Database

A database is a collection of related and structured data that can be easily accessed and retrieved. Data in a database is stored in tables, which consist of rows and columns. A DBMS is software that allows you to create, modify, delete, and add data to database files. In addition, a DBMS helps you to generate reports by using this data. A DBMS has the following advantages: it enables centralized data management, avoids data redundancy, prevents unauthorized access to a database, establishes data integrity, and reduces costs in an organization. A DBMS provides the following functions for managing data in a database: data definition, data manipulation, optimization, data security, data recovery, data concurrency, and data dictionary.

A DBMS architecture consists of three levels: external-level view, conceptual-level view, and internal-level view. The external-level view of a DBMS is the view visible to the DBMS users. The conceptual-level view of a DBMS lists the database components, the relationship between the components, and the data types for the data to be stored. The internal-level view consists of the structure that depicts how data is actually stored in a database.

A DBMS resolves any differences between the external-, conceptual-, and internal-level views by following the process of mapping. Data independence refers to the concept where an application program running on a database can function independently of the data storage structure or data access techniques specified for the data. Data independence can be of two types: logical data independence and physical data independence. DBMS models can be of two types: the file-based model and the database model. Database models can be further classified as: the network model, the hierarchical model, and the relational model.

Data Models

A database is an organized collection of data. The different types of databases that are used to design complex applications are: object-oriented databases, relational databases, and object-relational databases. A data model is a collection of conceptual tools that you can use to describe the structure of a database. The data models are broadly classified into two types according to the concepts they use to design the database structure. They are: object-based logical models and record-based logical models.

The E-R model is based on how users perceive objects in the real world. The various components of the E-R model are: entities, attributes, relationships, subtypes and supertypes, mapping constraints, and keys. The different types of attributes that exist in an E-R model are: simple and composite, single-valued and multivalued, stored and derived, null values, and key attributes. The two main types of relationship constraints are the mapping cardinalities and the existence dependencies.

The different features provided by the object-oriented data model are: objects, object types, object identity, object containment, inheritance, methods, object tables, object views, the REF data type, and collections. The three most widely used record-based logical models are the hierarchical, the network, and the relational models. The relational model represents the database as a collection of relations. The basic components of a relational model are: relation, tuple, attribute, and domain. The fundamental relational operators present in relational algebra are: selection, product, project, division, join, intersection, difference, and union. Set functions are predefined functions used in select specifications.

Relational Database Concepts

Dr. E.F. Codd published a list of twelve rules that define an ideal relational database. These are: the information rule, the guaranteed access rule, the systematic treatment of null values, the active online catalog based on the relational model, the comprehensive data sublanguage rule, the view updating rule, high-level Insert, Update, and Delete rule, physical data independence, logical data independence, integrity independence, distribution independence, the nonsubversion rule.

Integrity refers to the correctness or accuracy of data in a database. The different integrity constraints specified on a relational database are: domain constraints, referential integrity, assertion, triggers, and functional dependencies. A database depicted in an E-R diagram is represented as a collection of tables in the relational model.

An attribute should be able to identify its entity, refer to another entity, and describe its owner entity. If two entities have common attributes, you need to merge them to form a single entity. The primary key is used to identify the rows of a table.

Normalization is the process of decomposing tables into smaller tables without any loss of data. For a table to be in 1NF, every field must hold an atomic value and there should be no repeating groups. A table is said to be in 2NF if it is in 1NF and every non-key field depends on the entire primary key rather than on only part of it. A table is said to be in 3NF if it is in 2NF and none of the non-key columns have any dependency on any other non-key columns. BCNF is a stricter version of 3NF. Whenever a relational table has multiple candidate keys, the keys are composite, or the candidate keys overlap each other, there is a need for BCNF. Denormalization is the process of moving from a higher to a lower level of normalization to enhance the performance of a database.
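A normalization sketch using Python's built-in sqlite3 module (the table and column names are invented): storing the department name once, in its own table, removes the redundancy that a single flat table would repeat on every employee row.

```python
import sqlite3

# 3NF sketch: the department name lives in one place instead of being
# repeated on every employee row. Table and column names are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE department (
        dept_id   INTEGER PRIMARY KEY,
        dept_name TEXT NOT NULL
    );
    CREATE TABLE employee (
        emp_id  INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        dept_id INTEGER REFERENCES department(dept_id)
    );
    INSERT INTO department VALUES (1, 'Networking');
    INSERT INTO employee VALUES (10, 'Alice', 1), (11, 'Bob', 1);
""")

# Renaming the department now touches one row instead of many:
con.execute("UPDATE department SET dept_name = 'Infrastructure' WHERE dept_id = 1")
rows = con.execute("""
    SELECT e.name, d.dept_name FROM employee e
    JOIN department d ON e.dept_id = d.dept_id
    ORDER BY e.emp_id
""").fetchall()
print(rows)  # [('Alice', 'Infrastructure'), ('Bob', 'Infrastructure')]
```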

An Overview of SQL

SQL is a high-level programming language that provides an interface to communicate with relational databases. SQL has a set of commands that are used for: data definition, data retrieval, data manipulation, access control, embedded SQL, dynamic SQL, and transaction control.

There are three languages embedded in SQL. These are: DDL, DML, and DCL. DDL is a language that defines how data is stored in a database. The commands available in DDL are: CREATE, ALTER, DROP, TRUNCATE, and COMMENT. The CREATE command is used to create tables and domains.

The following are the different constraints that can be applied on tables: the PRIMARY KEY constraint, the UNIQUE constraint, the FOREIGN KEY constraint, the CHECK constraint, and the DEFAULT constraint. In addition to the CHECK and DEFAULT constraints, rules and defaults are used to perform restrictive operations on columns.
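These constraints can be sketched with SQLite through Python's sqlite3 module; SQLite is used here as a stand-in for any SQL database, and the table and column names are invented.

```python
import sqlite3

# DDL sketch showing the five table constraints (names invented).
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
    CREATE TABLE course (
        course_id INTEGER PRIMARY KEY,              -- PRIMARY KEY
        title     TEXT UNIQUE NOT NULL,             -- UNIQUE
        credits   INTEGER DEFAULT 3                 -- DEFAULT
                  CHECK (credits BETWEEN 1 AND 6)   -- CHECK
    );
    CREATE TABLE enrollment (
        student   TEXT NOT NULL,
        course_id INTEGER NOT NULL
                  REFERENCES course(course_id)      -- FOREIGN KEY
    );
    INSERT INTO course (course_id, title) VALUES (101, 'Networking');
""")

# The CHECK constraint rejects an out-of-range value:
try:
    con.execute("INSERT INTO course VALUES (102, 'Databases', 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```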

Data Manipulation and Control Using SQL

DML consists of a set of commands that you can use to manipulate data. By using DML, you can: retrieve data, insert data, modify data, and delete data. The different commands in DML are: SELECT, INSERT, UPDATE, and DELETE. The clauses used in the DML commands are: WHERE, ORDER BY, and GROUP BY.

The three types of operators are: conditional operators, logical operators, and arithmetic operators. You can use the following functions in DML: SUM( ), COUNT( ), MAX( ) and MIN( ), and AVG( ).

You use the keyword JOIN in a SELECT command to select data from two or more tables in a database with a single statement. The three types of joins are: inner join, outer join, and self join. A view is a virtual table consisting of a number of columns from one or more tables.
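The DML commands, clauses, aggregate functions, and the JOIN keyword described above can be sketched together with sqlite3 (the schema and data are invented):

```python
import sqlite3

# DML sketch: INSERT, UPDATE, and SELECT with an inner JOIN plus
# COUNT/AVG aggregates and GROUP BY. Schema and data are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dept  (dept_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE staff (staff_id INTEGER PRIMARY KEY, name TEXT,
                        salary INTEGER, dept_id INTEGER);
    INSERT INTO dept  VALUES (1, 'Support'), (2, 'Admin');
    INSERT INTO staff VALUES (1, 'Ann', 400, 1), (2, 'Raj', 500, 1),
                             (3, 'Kim', 450, 2);
""")
con.execute("UPDATE staff SET salary = salary + 50 WHERE name = 'Kim'")

rows = con.execute("""
    SELECT d.name, COUNT(*), AVG(s.salary)
    FROM staff s JOIN dept d ON s.dept_id = d.dept_id
    GROUP BY d.name ORDER BY d.name
""").fetchall()
print(rows)  # [('Admin', 1, 500.0), ('Support', 2, 450.0)]
```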

A stored procedure is a manageable group of SQL commands that is executed as a single block. A trigger is a block of code that is activated automatically in response to certain actions. DCL is a part of SQL that provides intrinsic security mechanisms. The commands available in DCL are: COMMIT, ROLLBACK, GRANT, and REVOKE.
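A trigger can be sketched in SQLite via sqlite3; the table, trigger name, and audit scheme are invented. The trigger fires automatically when a price is updated, with no explicit call from the application.

```python
import sqlite3

# Trigger sketch: an audit row is written automatically whenever a
# price changes. Table and trigger names are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY, price INTEGER);
    CREATE TABLE audit (product_id INTEGER, old_price INTEGER,
                        new_price INTEGER);
    CREATE TRIGGER log_price_change AFTER UPDATE OF price ON product
    BEGIN
        INSERT INTO audit VALUES (OLD.id, OLD.price, NEW.price);
    END;
    INSERT INTO product VALUES (1, 100);
""")
con.execute("UPDATE product SET price = 120 WHERE id = 1")
print(con.execute("SELECT * FROM audit").fetchall())  # [(1, 100, 120)]
```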

Transaction Processing

Transaction processing is a term used in connection with large multi-user systems. Transaction processing systems use large databases that concurrently execute multiple transactions. A transaction is a unit of work that includes operations to access a database. There are two types of transaction operations: read and write. To save or undo changes made by a transaction, you can execute the following statements: COMMIT and ROLLBACK. There are five transaction states: active, partially committed, failed, aborted, and committed. Transactions should possess certain properties, which should be enforced by the database system. These properties are: atomicity, consistency, isolation, and durability.

The system maintains a transaction log to recover from failures. The transaction log keeps track of all the transactions on the database. When multiple transactions execute concurrently, database consistency cannot be guaranteed even if each individual transaction executes correctly. To ensure consistency, you need to ensure that concurrent transactions execute as if they were serial.
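The COMMIT and ROLLBACK behavior above can be sketched with sqlite3; the account data and the simulated failure are invented. Because the failed transfer is rolled back as a unit, the database stays consistent.

```python
import sqlite3

# COMMIT/ROLLBACK sketch: a transfer that fails midway is rolled
# back, so the debit is undone too. The account data is invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO account VALUES ('A', 100), ('B', 0)")
con.commit()

try:
    con.execute("UPDATE account SET balance = balance - 100 WHERE name = 'A'")
    # ... the matching credit to B would go here, but a failure occurs first
    raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    con.rollback()  # atomicity: the debit above is undone

print(con.execute("SELECT balance FROM account ORDER BY name").fetchall())
# [(100,), (0,)] — both balances unchanged
```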

Installing and Configuring SQL Server 2000

SQL Server 2000 meets the scalability and reliability requirements of most demanding enterprises. It is also appropriate for a broad range of applications, such as e-commerce and data warehousing. Depending on your requirement, there are different editions of SQL Server 2000 available in the market. These include: the Enterprise edition, the Standard edition, the Personal edition, the Developer edition, the Evaluation edition, the Desktop edition, and the Windows CE edition.

Similar to the installation of most Microsoft products, the installation of SQL Server is simple and easy. To install SQL Server, you must understand each option and the implications of each option selected during installation. After SQL Server is installed on your computer, you must configure it.

The data and objects in a database system are stored as a set of files. The files that are used to store database objects include: the primary data file, secondary data files, and the transaction log file. To create a new database, use the CREATE DATABASE statement. To modify the size of a database, use the ALTER DATABASE statement. If you want to view database information, execute the sp_helpdb stored procedure. To rename a database, execute sp_renamedb 'old_name', 'new_name'. To delete a database from the system, use the DROP DATABASE statement.

Data Security

Security refers to the protection of data against any unauthorized access. The database administrator is the central authority for maintaining databases and is responsible for the overall security of a database. In multiuser systems, the database system should allow only those users or user groups that have permission to access the database to do so.

The most common security mechanism used in database systems is discretionary access control. Under discretionary access control, the security scheme of a database system is based on four concepts: users, roles, database objects, and privileges. The other security mechanisms that are used include: mandatory access control, audit trails, views, and data encryption.

A security scheme can be implemented by using the following SQL statements: GRANT and REVOKE. In mandatory access control, the data and users are classified based on security levels. Each database object has a classification level, such as: top secret, secret, confidential, and unclassified.

A database audit reviews the transaction log to examine all the operations performed by various users during a period. If an illegitimate activity is found, the database administrator can determine the account responsible for the activity. Views can be used to hide sensitive information from users and thus enhance security. Data encryption is used for highly sensitive data. Data encryption stores and transmits data in an encrypted form.

Accessing Databases from Applications

The acronym ODBC refers to Open Database Connectivity. It is a specification for creating a database API. ODBC is implemented as a call-level interface. There are different ODBC drivers for different databases. There are four main components of ODBC: the application, the Driver Manager, the driver, and the data source.

A DSN is used to define the name of a data source. ADO is a data access technology that is independent of any language or OLE DB provider. ADO creates a layer between applications and an OLE DB provider. The ADO object model consists of nine objects and four collections. The objects are: Connection, Error, Command, Recordset, Record, Parameter, Field, Property, and Stream. The collections of the ADO object model are: Fields, Properties, Parameters, and Errors.

The steps to access data from a database using ADO are: set up a reference to an ADO Connection object, define the connection string to be used, open the Connection object, check the State property of the Connection object, execute SQL statements, create a Recordset object, and close the Recordset and Connection objects.
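ADO itself is a COM-based technology, but the connect, execute, walk-the-recordset, close flow above can be sketched by analogy with Python's sqlite3 module (the data is invented; this is an analogy, not actual ADO code):

```python
import sqlite3

# The ADO steps (connect, execute, iterate a recordset, close)
# sketched with sqlite3 standing in for an OLE DB provider.
con = sqlite3.connect(":memory:")        # open the Connection object
con.execute("CREATE TABLE city (name TEXT)")
con.executemany("INSERT INTO city VALUES (?)", [("Tulsa",), ("Reno",)])

cur = con.execute("SELECT name FROM city ORDER BY name")  # the Recordset
names = [row[0] for row in cur]          # iterate the records
print(names)                             # ['Reno', 'Tulsa']

cur.close()                              # close recordset and connection
con.close()
```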

XML is a markup language that you can use to develop applications that operate over the Internet. XML provides a way to define meaningful data structures and eases the process of data exchange on the Web. You can type an XML document in Notepad and view its output using an Internet browser. An XML document consists of tags and elements arranged in a hierarchy.

The rules for creating an XML document are: every start tag should have a closing tag called the end tag, tags should not overlap because an XML document follows a hierarchy, there can be only one root element, and XML is a case-sensitive language. You use the FOR XML clause to integrate SQL statements with XML. Using this clause enables an XML document to access data from a SQL database.
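The well-formedness rules can be demonstrated with Python's xml.etree.ElementTree parser; the sample document is invented. A conforming document parses into a single-rooted hierarchy, while a case mismatch between tags is rejected.

```python
import xml.etree.ElementTree as ET

# Well-formedness sketch: one root element, every start tag closed,
# no overlapping tags, case-sensitive names. The document is invented.
doc = """<school>
    <student id="1"><name>Alice</name></student>
    <student id="2"><name>Bob</name></student>
</school>"""

root = ET.fromstring(doc)
print(root.tag)                                     # school
print([s.get("id") for s in root.iter("student")])  # ['1', '2']

# A case mismatch (<Name> closed by </name>) is not well-formed:
try:
    ET.fromstring("<a><Name>x</name></a>")
except ET.ParseError:
    print("not well-formed")
```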

The basic modes of the FOR XML clause are: RAW, AUTO, and EXPLICIT. You use the OPENXML clause to insert and update data in a database from an XML document. This function uses two procedures: sp_xml_preparedocument and sp_xml_removedocument. There are five distinct parameters of the OPENXML function: the idoc parameter, the rowpattern parameter, the flags parameter, the SchemaDeclaration parameter, and the TableName parameter.

Implementing an RDBMS

There are different kinds of architecture in which a DBMS can be implemented. The architecture can be classified as follows: centralized DBMS architecture, client/server DBMS architecture, parallel DBMS architecture, and distributed DBMS architecture.

The two DBMS models that initially adopted the client/server DBMS architecture are the relational DBMS and the object-oriented DBMS. The parallel DBMS architecture consists of different models for implementing parallel processing in an organization. These models are: the shared-memory architectural model, the shared-disk architectural model, the shared-nothing architectural model, and the hierarchical architectural model.

Database replication is the process of making copies of a database available to different database users spanning different locations. Database replication helps in: data sharing, reducing network traffic, and database backup. There are different database replication models available for making replicas of the main database. These models are: the snapshot model, the transactional model, and the merge model.

Performance tuning comprises various measures for enhancing the performance of a database. The performance tuning measures can be categorized as follows: creating indexes, creating large tables, and defining SQL queries. There are tools available for monitoring the performance of servers. These tools help database users analyze the functioning of a server and take appropriate steps to enhance server performance. These tools are Performance Monitor and Query Analyzer.
Capstone Project

In the Network Capstone Project class, an entire school network was to be designed and built by each student according to specifications given by the instructor, Mr. Nick Awwad. Mr. Awwad used his skills and experience as a network consultant to expose each student to the demanding regimen of network consulting as well as prepare each student to assume the role of a network administrator in the industry. The class was divided into several teams of students. Each team comprised four to five individuals possessing technical computer skill levels ranging from novice to expert. The purpose of this was to simulate a possible work environment that each student might face following graduation. This project required a recollection of computer networking skills acquired at ITT Technical Institute since the first day of enrollment as well as the learning of new computer skills needed to stay competitive in today's job market.

The project requirements included not just the designing and actual building of the school network, but also the writing of full documentation detailing the entire process. This documentation was also to include diagrams showing the layout of the network as well as other information pertinent to its growth and maintenance. In addition to working together as a team to build a single network, it was recommended that each student also replicate the work of the other team members himself to facilitate his own individual learning. By doing this, each student learned about all of the intricacies involved in building a network from the ground up regardless of the team to which he belonged.

The network to be built was for a fictitious school called TTI Technical Institute. This school had branch offices in various locations across the United States. Each team was responsible for building the network for a particular branch office location specified by the instructor. In addition, Mr. Awwad assigned a particular network ID to each team. Thus, each team operated on a separate network. The network was heterogeneous in nature, consisting of Windows NT Domain Controllers, Windows 2000 Servers, a Linux Web Server, a NetWare 5.1 Server and workstations running either Windows 2000 Professional or Windows NT 4.0. Each student installed a software program called VMware on his own computer. This program allowed each student to install and run multiple operating systems on a single computer as virtual machines. This program was used to conserve the physical resources available at the school for each student. Each "virtual machine" behaved as if it were a separate computer connected to the network.

The first aspect of the project involved the installation of Windows NT 4.0 Server to function as a Primary Domain Controller on a specified domain. Upon installation, a service pack was installed, all necessary drivers were updated and the web browser was upgraded. The TCP/IP network settings were configured according to the network ID that was assigned by the instructor to each team. The DHCP, DNS and WINS services were then installed and configured on this computer. The DHCP Server was not activated. The DNS and WINS settings were configured to include the address of this DNS and WINS Server. Three more installations of Windows NT 4.0 Server were done to function as Backup Domain Controllers. The Primary Domain Controller had to be installed first so that the Backup Domain Controllers could join the domain during installation. A service pack was also installed on each Backup Domain Controller, all necessary drivers were updated and the web browser of each was upgraded. The TCP/IP network settings were configured to ensure that each Backup Domain Controller was on the same network as the Primary Domain Controller. The DNS and WINS settings were configured to include the address of the DNS and WINS Server on the Primary Domain Controller.

The next aspect involved the installation of Windows 2000 Server on a computer. This computer was used as the file server on which various applications were stored for later use. Windows 2000 Server was used rather than Windows 2000 Professional so that an unlimited number of users could connect to this server simultaneously. Windows 2000 Professional only allows ten concurrent users. Each team member decided which applications would be needed for the project and had to be stored on the file server. These applications were then downloaded from the instructor's file server. For the remainder of the project, each team was dependent on its own file server for the applications it needed. Once all the needed applications were downloaded, the TCP/IP network settings were configured so that the file server would be on the same network as the domain controllers. The DNS and WINS settings were configured to include the address of the DNS and WINS Server on the Primary Domain Controller. After that, the network identification properties of the computer were changed so that the newly constructed file server could join the specified domain. A service pack was installed, all necessary drivers were updated and the web browser was upgraded.

Once the domain controllers and the file server were on the network, Red Hat Linux 7.3 was installed. It was important that the Apache and Samba services were installed during the Linux installation. The TCP/IP network settings were configured to place the Linux machine on the same network as the other computers and to point it at the DNS server on the Primary Domain Controller. Once the Apache and Samba services were configured correctly and started, a designated user could upload an HTML file (or other file) to the Linux Web Server from a Windows NT or Windows 2000 computer. That new file could then be viewed through a web browser from any computer on the network. The Linux machine was configured so that the Apache and Samba services would start automatically whenever the computer started.
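A minimal Samba configuration along these lines might have looked like the fragment below. The share name, path, workgroup and user are hypothetical; the actual values depended on the team's domain:

```
# /etc/samba/smb.conf -- hypothetical minimal share for uploading web files
[global]
   workgroup = TEAMDOMAIN
   security = user

[www]
   comment = Apache document root
   path = /var/www/html
   valid users = webuser
   writable = yes
```

Because the share points at Apache's default document root on Red Hat 7.3 (/var/www/html), a file copied into the share from a Windows machine becomes immediately visible over HTTP. Automatic startup could be arranged with Red Hat's service tools, for example `chkconfig httpd on` and `chkconfig smb on`.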

A NetWare 5.1 Server was installed to support legacy applications. This server was configured during installation to be on the same network as the other computers using only the IP protocol. For this project, no further modifications were made to this server beyond installing it. The other team members also installed NetWare servers, joining them during installation to the NetWare Server that was installed first. A NetWare 5.1 Client was installed on every Windows NT computer and configured to use only the IP protocol.

Windows NT 4.0 Workstation was installed on one computer and Windows 2000 Professional was installed on another. Both installations were configured with a static IP address so that they could join the specified domain. The DNS and WINS settings were configured to include the address of the DNS and WINS Server on the Primary Domain Controller. A service pack was installed, all necessary drivers were updated and the web browser was upgraded.

A Norton Antivirus Server, including all management utilities, was installed on the Windows 2000 File Server. A folder that would contain all of the virus updates was created on the hard drive of the file server and shared. The Norton Antivirus Server was configured so that the Norton Antivirus software would download virus updates from a website and store the downloaded files in this folder. Further configurations were made to ensure that the Norton Antivirus software on any other computer in the domain would download virus updates from this shared folder on the File Server. Through the server's management console, the Norton Antivirus software was then deployed to all the other computers in the domain. These computers were restarted, and the Norton Antivirus software was started on each and tested to confirm that it downloaded virus updates from the Norton Antivirus Server rather than from the website.

All of the computers in the domain were disconnected from the school network and connected to a single hub that was not connected to the school network. The TCP/IP settings of the Windows NT Workstation computer and the Windows 2000 Professional computer were changed from using a static IP address to obtaining an address automatically. The DHCP server on the Primary Domain Controller was activated. The Windows NT Workstation computer and the Windows 2000 Professional computer were then rebooted to test whether the computers would obtain an IP address from the new DHCP Server.
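On the client side, the new lease could be verified from a command prompt. The transcript below is an illustrative sketch (the server address is hypothetical); a `DHCP Server` entry matching the Primary Domain Controller's address confirms that the lease came from the new DHCP Server:

```
C:\> ipconfig /release
C:\> ipconfig /renew
C:\> ipconfig /all
        ...
        DHCP Enabled . . . . . . : Yes
        DHCP Server  . . . . . . : 192.168.10.1
```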

A Ghost Server was installed on a Backup Domain Controller. A folder that would contain all computer images was created on the hard drive of the computer. A Windows NT Workstation computer was started with a Ghost Boot Disk. This Ghost Client and the Ghost Server were configured so that an image of the Windows NT computer would be taken and stored in the folder on the Backup Domain Controller. After the image was taken, the Windows NT Workstation computer was started again with the Ghost Boot Disk, and the client and server were configured so that the previously stored image was restored to the Ghost Client.
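Ghost could also be driven from its command line on the boot disk; the switches below sketch the equivalent create and restore operations (the server name, share and image file name are hypothetical, and the exact switch syntax varied between Ghost versions):

```
REM Create an image of the first local disk and store it on the server share
ghost.exe -clone,mode=create,src=1,dst=\\BDC1\images\ntwks.gho

REM Restore the stored image back onto the first local disk
ghost.exe -clone,mode=restore,src=\\BDC1\images\ntwks.gho,dst=1
```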

Internet Information Server 4.0 was also installed on this Backup Domain Controller. The optional NNTP News Server component of Internet Information Server was installed, and a newsgroup that required a login password was set up through the Internet Information Server.
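Since NNTP is a plain-text protocol, the password requirement could be checked with a raw session on port 119. The exchange below is an illustrative sketch following the standard NNTP AUTHINFO response codes; the server name, newsgroup and credentials are hypothetical:

```
telnet nntp-server 119
200 NNTP Service ready
GROUP team.general
480 Authentication required
AUTHINFO USER student
381 Password required
AUTHINFO PASS secret
281 Authentication accepted
GROUP team.general
211 0 1 0 team.general
```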

Microsoft Visio Professional 2000 was used to create a network diagram of the fictitious TTI Technical Institute branch office. Microsoft Word 2000 was used to write the documentation for the project. VMware was used to obtain screen captures used in the documentation. Blank data CDs were used to create backup copies of applications used in this project. MediaFace 4.0 by Fellowes was used to create CD labels.