A terminology is a technical definition. Here are some terms used in computing, arranged alphabetically. You can either scroll down the page until you reach the term you want, or, more easily, click the term you want in the list below. I hope you like this page and make good use of it. There will also be updates in the future, so you may want to visit regularly.


Letter C : C - Certificate Authority - Cabinet (CAB) File - Cache Memory - CDR - CD-ROM - Celeron - CGI - Chassis - Chip - Chipset - Client/Server - Clock Speed - Cluster - CMOS - COBOL - COM - Command Interpreter - Compiler - C++ - CPU - Cyber

 

  C


   C is a structured, procedural programming language that has been widely used both for operating systems and applications and that has had a wide following in the academic community. Many versions of UNIX-based operating systems are written in C. C was standardized by the American National Standards Institute (ANSI), and its standard library also forms part of the Portable Operating System Interface (POSIX).

   With the increasing popularity of object-oriented programming, C is rapidly being displaced as "the" programming language by C++, a superset of C that adds object-oriented programming concepts, and by Java, a language similar to but simpler than C++ that was designed for use in distributed networks.
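
   To give a flavor of the language, here is a minimal sketch of a complete C program in the structured, procedural style described above (the function and the values are purely illustrative):

    #include <stdio.h>

    /* A small procedure: compute the average of an array of numbers. */
    double average(const double *values, int count)
    {
        double sum = 0.0;
        int i;

        for (i = 0; i < count; i++)
            sum += values[i];
        return count > 0 ? sum / count : 0.0;
    }

    int main(void)
    {
        double data[] = { 2.0, 4.0, 6.0 };
        printf("average = %.1f\n", average(data, 3));   /* prints 4.0 */
        return 0;
    }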

 

  CA (Certificate Authority)


   A CA (certificate authority) is an authority in a network that issues and manages security credentials and public keys for message encryption and decryption. As part of a public key infrastructure (PKI), a CA checks with a registration authority (RA) to verify information provided by the requestor of a digital certificate. If the RA verifies the requestor's information, the CA can then issue a certificate.

   Depending on the public key infrastructure implementation, the certificate includes the owner's public key, the expiration date of the certificate, the owner's name, and other information about the public key owner.

 

Selected Links
Marc Branchaud's paper, A Survey of Public Key Infrastructures, includes a tutorial on how public key cryptography works and compares several PKI approaches.
IBM's Security Technologies site describes the X.509 certificate and the Public Key Infrastructure.
VeriSign is the leading certificate authority, providing over 125,000 Web sites with SSL server certificates, mainly for use in e-commerce.

 

  Cabinet (CAB) File


   In Microsoft program development, a cabinet is a single file created to hold a number of compressed files. A related set of cabinet files can be contained in a folder. During installation of a program, the compressed files in a cabinet are decompressed and copied to an appropriate directory for the user. A cabinet file usually has the file name suffix of ".cab".

   Microsoft uses cabinet files in distributing its own products, such as PowerPoint, Microsoft Office for Windows, and Microsoft Money. Cabinet files save space and time during software distribution. A large file can be compressed into and spread across more than one cabinet file, each of which logically points to the next, with all of them contained in a logical folder.

   Development accountability for cabinet files is ensured by providing a signed digital certificate with the cabinet file. One "signature" covers all the files in a cabinet file. Cabinet files are created using Lempel-Ziv compression.

 

  Cache Memory


   Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. As the microprocessor processes data, it looks first in the cache memory and if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming reading of data from larger memory.

   Cache memory is sometimes described in levels of closeness and accessibility to the microprocessor. A level-1 (L1) cache is on the same chip as the microprocessor. (For example, the PowerPC 601 processor has a 32-kilobyte level-1 cache built into its chip.) Level-2 (L2) cache is usually a separate static RAM (SRAM) chip. The main RAM is usually a dynamic RAM (DRAM) chip. SRAM does not have to be continually refreshed as DRAM does, but it takes more transistors per bit of storage and is therefore more expensive. A popular SRAM (or cache memory) size is 1,024 kilobytes (1 megabyte). Typical DRAM sizes are 4 megabytes to 32 megabytes.

   In addition to cache memory, one can think of RAM itself as a cache of memory for hard disk storage since all of RAM's contents come from the hard disk initially when you turn your computer on and load the operating system (you are loading it into RAM) and later as you start new applications and access new data. RAM can also contain a special area called a disk cache that contains the data most recently read in from the hard disk.
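
   The look-first-in-the-cache behavior can be sketched in C. This is a toy software model of a direct-mapped cache, not real hardware; the size and the structure are illustrative assumptions:

    #include <stdio.h>

    #define CACHE_LINES 256   /* toy cache with 256 lines */

    /* One cache line: a tag identifying which address is stored there. */
    struct cache_line {
        unsigned long tag;
        int valid;
    };

    static struct cache_line cache[CACHE_LINES];

    /* Returns 1 on a cache hit, 0 on a miss (after which the line is filled). */
    int cache_lookup(unsigned long address)
    {
        unsigned long index = address % CACHE_LINES;
        unsigned long tag = address / CACHE_LINES;

        if (cache[index].valid && cache[index].tag == tag)
            return 1;               /* hit: no slow main-memory read needed */

        cache[index].tag = tag;     /* miss: read from main memory, then   */
        cache[index].valid = 1;     /* keep a copy for the next access     */
        return 0;
    }

    int main(void)
    {
        printf("%d\n", cache_lookup(42));   /* first access: miss (prints 0) */
        printf("%d\n", cache_lookup(42));   /* repeat access: hit (prints 1) */
        return 0;
    }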

 

Selected Links
Tom's Hardware Page includes The RAM Guide.
Kingston Technology offers The Ultimate Memory Guide.

 

  CDR


   CDR (Computed Dental Radiography) is a technology for capturing a computerized image or radiograph of your teeth and gums that requires 90% less radiation than conventional x-ray film. CDR images are captured by sensors placed in the mouth that relay data to an attached computer, where the images are stored for viewing. Unlike conventional dental x-rays, CDR images can also be enlarged up to 300 times normal size to aid in diagnosis. They can also be manipulated for contrast and the colors can be adjusted. Images are captured instantly and do not require film developing and mounting.

 

Selected Links
Schick Technologies offers detailed information at their Web site.

 

  CD-ROM


   In computers, CD-ROM (compact disc, read-only memory) technology is a format and system for recording, storing, and retrieving electronic information on a compact disc that is read using an optical drive. A CD-ROM player or drive does not allow writing to the disc.

   A WORM (write once, read many) device is used to write information to a master disc from which CD-ROM discs are replicated.

 

  Celeron


   Celeron is the low-end (and low-cost) member of the family of microprocessors from Intel that is based on its P6 architecture. Although it is based on the same architecture as the Pentium II, it lacks some high-performance features of the Pentium II line. Most notably, the first Celerons lacked level-2 cache; later models added a smaller level-2 cache on the chip itself. With clock speeds up to 466 MHz, these processors look attractive to power users at first glance, but they should be compared with the Pentium II's computing power to get an idea of their useful application.

   In iCOMP Index 2.0 benchmark tests, the Celeron processor generally achieved about 70% of the performance of a comparable Pentium II: a Celeron 300 MHz had an iCOMP score of 226, while the Pentium II 300 MHz had an iCOMP score of 332. Intel markets the processor as a chip for the basic PC, providing performance good enough for home and business users doing word processing and Internet surfing. Power users and serious gamers will want to think about spending more for the Pentium II's top performance.

 

Selected Links
Intel provides a Celeron Home Page.

 

  CGI


   The common gateway interface (CGI) is a standard way for a Web server to pass a Web user's request to an application program and to receive data back to forward to the user. When the user requests a Web page (for example, by clicking on a highlighted word or entering a Web site address), the server sends back the requested page. However, when a user fills out a form on a Web page and sends it in, the form data usually needs to be processed by an application program. The Web server typically passes the form information to a small application program that processes the data and may send back a confirmation message. This method or convention for passing data back and forth between the server and the application is the common gateway interface; it is used together with the Web's HTTP protocol, though it is a server convention rather than part of HTTP itself.

   If you are creating a Web site and want a CGI application to get control, you specify the name of the application in a URL that you code in an HTML file. This URL can be specified as part of the FORM tag if you are creating a form. For example, you might code:

   <FORM METHOD=POST ACTION="http://www.mybiz.com/cgi-bin/formprog.pl">

and the server at "mybiz.com" would pass control to the CGI application called "formprog.pl" to record the entered data and return a confirmation message. (The ".pl" indicates a program written in Perl but other languages could have been used.)

   The common gateway interface provides a consistent way for data to be passed from the user's request to the application program and back to the user. This means that the person who writes the application program can make sure it gets used no matter which platform the server runs on (PC, Macintosh, UNIX, OS/390, or others).

   Because the interface is consistent, a programmer can write a CGI application in a number of different languages. The most popular languages for CGI applications are: C, C++, Java, and Perl.
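
   As a sketch of what such an application looks like in C (one of the languages named above), the following program echoes POSTed form data back in a confirmation page. The CONTENT_LENGTH environment variable and the standard-input convention are part of CGI itself; everything else is an illustrative simplification (a real program, like the "formprog.pl" above, would decode the URL-encoded name=value pairs and record them):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* CGI passes the length of POSTed form data in CONTENT_LENGTH. */
        char *len_str = getenv("CONTENT_LENGTH");
        long len = len_str ? atol(len_str) : 0;
        long i;

        /* Every CGI response starts with a header and a blank line. */
        printf("Content-type: text/html\r\n\r\n");

        printf("<HTML><BODY><P>You sent: ");
        for (i = 0; i < len; i++) {
            int c = getchar();
            if (c == EOF)
                break;
            putchar(c);   /* echo the raw form data back as confirmation */
        }
        printf("</P></BODY></HTML>\n");
        return 0;
    }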

   An alternative to a CGI application is Microsoft's Active Server Page (ASP), in which a script embedded in a Web page is executed at the server before the page is sent.

 

  Chassis


   A chassis (pronounced TCHA-see or SHA-see) is the physical frame or structure of an automobile, an airplane, a desktop computer, or other multi-component device. Case is very similar in meaning, but tends to connote the protective aspect of the frame rather than its structure; people tend to choose one term or the other. The rest of this definition uses chassis, but it applies as well to the term case. Both terms (and casing) are derived from the Vulgar Latin word for box.

   In a computer, the chassis houses the main electronic components, including the motherboard, with places to insert or replace microchips for the main and possibly specialized processors and random access memory (RAM), and with places for adding optional adapters (for example, for audio or video capabilities). Typically, room is provided for a hard disk drive and a CD-ROM drive.

   The IBM PC chassis for its XT computers set an early de facto standard for a chassis configuration (sometimes referred to as the form factor). The desktop computer has since evolved through the AT model, the mini-AT, and the small-footprint PC. A later development was the vertical or tower chassis configuration, designed to be placed under a desk. The outer dimensions of a chassis are said to form its footprint.

   The term is not usually applied to mobile and notebook computers perhaps because the hardware components have to be more tightly integrated. Some communications devices such as terminal servers have chassis especially designed to handle many combinations of hardware add-ons. Such a chassis is described as modular.

 

Selected Links
Chassis Plans is one of a number of companies that make standard and custom-size computer chassis.

 

  Chip


   "Chip" is short for microchip, the incredibly complex yet tiny modules that store computer memory or provide logic circuitry for microprocessors. Perhaps the best known chips are the Pentium microprocessors from Intel. The PowerPC microprocessor, developed by Apple, Motorola, and IBM, is used in Macintosh personal computers and some workstations. AMD and Cyrix also make popular microprocessor chips.

   There are quite a few manufacturers of memory chips. Many special-purpose chips, known as ASICs (application-specific integrated circuits), are being made today for automobiles, home appliances, telephones, and other devices.

   A chip is manufactured from a silicon (or, in some special cases, a sapphire) wafer: circuits and electronic devices are etched onto the wafer, which is then cut into individual chips. The electronic devices use CMOS technology. The current stage of micro-integration is known as Very Large-Scale Integration (VLSI). A chip is also sometimes called an IC or integrated circuit.

 

Selected Links
In addition to Intel, other large manufacturers of microchips include Texas Instruments, Hitachi, and IBM.

 

  Chipset


   A chipset is a group of microchips designed to work together, and sold as a unit, to perform one or more related functions. A typical chipset is the Intel 430HX PCIset for the Pentium microprocessor, a two-chip set that provides a PCI bus controller and is designed for a business computer that "optimizes CPU, PCI and ISA transactions for faster, smoother multimedia performance in video conferencing, playback, and capture applications." This chipset includes support for the Universal Serial Bus (USB).

 

Selected Links
The Intel chipset home page leads to details on all of Intel's chipsets.
For a comprehensive list of today's leading chipsets, see The Chipset Guide in Tom's Hardware Guide.

 

  Client/Server


   Client/server describes the relationship between two computer programs in which one program, the client, makes a service request of another program, the server, which fulfills the request. Although the client/server idea can be used by programs within a single computer, it is a more important idea in a network, where the client/server model provides a convenient way to interconnect programs that are distributed efficiently across different locations.

   Computer transactions using the client/server model are very common. For example, to check your bank account from your computer, a client program in your computer forwards your request to a server program at the bank. That program may in turn forward the request to its own client program, which sends a request to a database server at another bank computer to retrieve your account balance. The balance is returned to the bank data client, which in turn serves it back to the client in your personal computer, which displays the information for you.

   The client/server model has become one of the central ideas of network computing. Most business applications being written today use the client/server model, as does the Internet's main protocol suite, TCP/IP. In marketing, the term has been used to distinguish distributed computing by smaller dispersed computers from the "monolithic" centralized computing of mainframe computers, but this distinction has largely disappeared as mainframes and their applications have also turned to the client/server model and become part of network computing.

   In the usual client/server model, one server, sometimes called a daemon, is activated and awaits client requests. Typically, multiple client programs share the services of a common server program. Both client programs and server programs are often part of a larger program or application. Relative to the Internet, your Web browser is a client program that requests services (the sending of Web pages or files) from a Web server (technically, a Hypertext Transfer Protocol or HTTP server) in another computer somewhere on the Internet. Similarly, your computer with TCP/IP installed allows you to make client requests for files from File Transfer Protocol (FTP) servers in other computers on the Internet.
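
   The daemon pattern just described can be sketched in C with the BSD sockets interface. This is a bare-bones illustration only: error checking is omitted, the client side is not shown, and the port number and canned reply are assumptions made for the example.

    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int listener, client;
        struct sockaddr_in addr;
        char request[1024];
        const char *reply = "balance: 100.00\n";   /* canned response */

        /* Create a socket, bind it to a port, and await client requests. */
        listener = socket(AF_INET, SOCK_STREAM, 0);
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8000);               /* arbitrary port */
        bind(listener, (struct sockaddr *) &addr, sizeof(addr));
        listen(listener, 5);

        /* The daemon loop: each pass serves one client's request. */
        for (;;) {
            client = accept(listener, NULL, NULL);
            read(client, request, sizeof(request));  /* the client's request  */
            write(client, reply, strlen(reply));     /* the server's response */
            close(client);
        }
    }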

   Other program relationship models include master/slave, with one program being in charge of all other programs, and peer-to-peer, with either of two programs able to initiate a transaction.

 

Selected Links
The Software Engineering Institute at Carnegie-Mellon University offers Client/Server Software Architectures - An Overview.

 

  Clock Speed


   In a computer, clock speed refers to the number of pulses per second generated by an oscillator that sets the tempo for the processor. Clock speed is usually measured in MHz (megahertz, or millions of pulses per second). A typical computer clock runs at several hundred megahertz. The clock speed is determined by a quartz-crystal circuit, similar to those used in radio communications equipment.

   Computer clock speed has been roughly doubling every year or two. The Intel 8088, common in computers around 1980, ran at 4.77 MHz. Today's computers run at several hundred megahertz.

   Clock speed is one measure of computer "power," but it is not always directly proportional to the performance level. If you double the speed of the clock, leaving all other hardware unchanged, you will not necessarily double the processing speed. The type of microprocessor, the bus architecture, and the nature of the instruction set all make a difference. In some applications, the amount of RAM (random access memory) is important, too.

   Some processors execute only one instruction per clock pulse. More advanced processors can perform more than one instruction per clock pulse, and the latter type will work faster at a given clock speed than the former. Similarly, a computer with a 32-bit bus will work faster at a given clock speed than a computer with a 16-bit bus. For these reasons, there is no simple, universal relation among clock speed, "bus speed," and millions of instructions per second (MIPS).
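
   A rough worked example makes the point; the clock speed and instructions-per-pulse figures below are illustrative assumptions, not measurements of real processors:

    #include <stdio.h>

    int main(void)
    {
        /* Two hypothetical processors driven by the same 300 MHz clock. */
        double clock_mhz = 300.0;
        double ipc_basic = 0.5;   /* one instruction per two clock pulses */
        double ipc_super = 2.0;   /* superscalar: two instructions per pulse */

        /* MIPS = millions of pulses per second x instructions per pulse. */
        printf("basic:       %.0f MIPS\n", clock_mhz * ipc_basic);   /* 150 */
        printf("superscalar: %.0f MIPS\n", clock_mhz * ipc_super);   /* 600 */
        return 0;
    }

At the same clock speed, the second processor does four times the work, which is why clock speed alone is a poor yardstick.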

   Excessive clock speed can be detrimental to the operation of a computer. As the clock speed in a computer rises without upgrades to any of the other components, a point will be reached beyond which a further increase in frequency renders the processor unstable. Some computer users deliberately increase the clock speed, hoping this alone will result in a proportional improvement in performance, and are disappointed when things don't work out that way.

 

  Cluster


   1) In personal computer storage technology, a cluster is the logical unit of file storage on a hard disk; it's managed by the computer's operating system. Any file stored on a hard disk takes up one or more clusters of storage. A file's clusters can be scattered among different locations on the hard disk. The clusters associated with a file are kept track of in the hard disk's file allocation table (FAT). When you read a file, the entire file is obtained for you and you aren't aware of the clusters it is stored in.

   Since a cluster is a logical rather than a physical unit (it's not built into the hard disk itself), the size of a cluster can be varied. The maximum number of clusters on a hard disk depends on the size of a FAT entry. Beginning with DOS 4.0, FAT entries were 16 bits in length, allowing for a maximum of 65,536 clusters. Beginning with the Windows 95 OSR2 service release, a 32-bit FAT entry is supported, allowing an entry to address enough clusters to support up to two terabytes of data (assuming the hard disk is that large!).

   The tradeoff in cluster size is that even the smallest file (and even a directory itself) takes up an entire cluster. Thus, a 10-byte file will take up 2,048 bytes if that's the cluster size. In fact, many operating systems set the cluster size default at 4,096 or 8,192 bytes. Until the FAT32 support in Windows 95 OSR2, the largest partition a 16-bit FAT could support was 2 gigabytes, and only with wastefully large 32-kilobyte clusters; larger hard disks had to be divided into multiple partitions.
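
   The arithmetic is easy to check. This C sketch uses the illustrative figures from the paragraph above (a 10-byte file and 2,048-byte clusters):

    #include <stdio.h>

    int main(void)
    {
        long cluster_size = 2048;   /* bytes per cluster */
        long file_size = 10;        /* actual bytes of data in the file */

        /* A file always occupies whole clusters, rounded up. */
        long clusters = (file_size + cluster_size - 1) / cluster_size;
        long on_disk = clusters * cluster_size;

        printf("a %ld-byte file occupies %ld bytes on disk (%ld wasted)\n",
               file_size, on_disk, on_disk - file_size);

        /* 16-bit FAT entries allow at most 65,536 clusters per partition. */
        printf("largest FAT16 partition at this cluster size: %ld bytes\n",
               65536L * cluster_size);   /* 134,217,728 bytes = 128 megabytes */
        return 0;
    }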

   2) In information technology marketing and infrastructure terminology, a cluster is a group of terminals or workstations attached to a common control unit or server or a group of several servers that share work and may be able to back each other up if one server fails. As of mid-1997, a two-server Windows NT cluster in which each system could back up the other in case of failure was priced at about $23,000. (The cost of writing failure scripts, considered to be a sophisticated programming task, would be extra.)

 

  CMOS


   CMOS (complementary metal-oxide semiconductor) is the semiconductor technology used in the transistors that are manufactured into most of today's computer microchips. Semiconductors are made of silicon and germanium, materials which "sort of" conduct electricity, but not enthusiastically. Areas of these materials that are "doped" by adding impurities become full-scale conductors of either extra electrons with a negative charge (N-type transistors) or of positive charge carriers (P-type transistors). In CMOS technology, both kinds of transistors are used in a complementary way to form a current gate that is an effective means of electrical control. CMOS transistors use almost no power when they are not switching. As they switch more rapidly, however, the transistors become hot, a characteristic that tends to limit the speed at which microprocessors can operate.

 

  COBOL


   COBOL (Common Business Oriented Language) was the first widely-used high-level programming language for business applications. Many payroll, accounting, and other business application programs written in COBOL over the past 35 years are still in use and it is possible that there are more existing lines of programming code in COBOL than in any other programming language. While the language has been updated over the years, it is generally perceived as out-of-date and COBOL programs are generally viewed as legacy applications.

   COBOL was an effort to make a programming language that was like natural English: easy to write and easy to read after you'd written it. The earliest versions of the language, COBOL-60 and COBOL-61, were sponsored by the Conference on Data Systems Languages (CODASYL) and evolved into the COBOL-85 standard.

   Since the year 2000 (Y2K) problem is common in many business applications and most of these are written in COBOL, programmers with COBOL skills have become sought after by major corporations and contractors. A number of companies have updated COBOL and sell development tools that combine COBOL programming with relational databases and the Internet.

 

  Command Interpreter


   A command interpreter is the part of a computer operating system that understands and executes commands that are entered interactively by a human being or from a program. In some operating systems, the command interpreter is called the shell.
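
   On a UNIX-like system, the core of a command interpreter can be sketched in C: read a command, start a process to execute it, and wait for it to finish. Real shells add argument parsing, pipes, and much more; the fixed-size buffer and missing error handling here are deliberate simplifications:

    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        char command[256];

        /* The interpreter's loop: prompt, read a command, execute it. */
        for (;;) {
            printf("> ");
            if (fgets(command, sizeof(command), stdin) == NULL)
                break;                                /* end of input */
            command[strcspn(command, "\n")] = '\0';   /* strip newline */
            if (strcmp(command, "exit") == 0)
                break;

            if (fork() == 0) {
                /* Child process: run the command (no arguments handled). */
                execlp(command, command, (char *) NULL);
                exit(1);    /* reached only if the command was not found */
            }
            wait(NULL);     /* parent: wait for the command to finish */
        }
        return 0;
    }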

 

  COM


   COM (Component Object Model) is Microsoft's framework for developing and supporting program component objects. It is aimed at providing similar capabilities to those defined in CORBA (Common Object Request Broker Architecture), a framework for the interoperation of distributed objects in a network that is supported by other major companies in the computer industry. Whereas Microsoft's OLE provides services for the compound document that users see on their display, COM provides the underlying services of interface negotiation, life cycle management (determining when an object can be removed from a system), licensing, and event services (putting one object into service as the result of an event that has happened to another object).

COM includes COM+, DCOM, and ActiveX interfaces and programming tools.

 

Selected Links
Microsoft has a COM Home Page.

 

  Compiler


   A compiler is a special program that processes statements written in a particular programming language and turns them into machine language or "code" that a computer's processor uses. Typically, a programmer writes language statements in a language such as Pascal or C one line at a time using an editor. This file contains what are called the source statements. The programmer then runs the appropriate language compiler, specifying the name of the file that contains the source statements.

   When executing (running), the compiler first parses (or analyzes) all of the language statements syntactically one after the other and then, in one or more successive stages or "passes", builds the output code, making sure that statements that refer to other statements are referenced correctly in the final code. Traditionally, the output of the compilation has been called object code or sometimes an object module. (Note that the term "object" here is not related to object-oriented programming.) The object code is machine code that the processor can process or "execute" one instruction at a time.
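
   As a toy illustration of the parsing stage, the following C sketch parses and evaluates one-digit arithmetic expressions such as "2+3*4". A real compiler handles vastly more syntax and emits machine instructions rather than computing a value; the tiny grammar here is an assumption chosen for brevity:

    #include <stdio.h>

    /* A toy recursive-descent parser for expressions like "2+3*4".     */
    /* Grammar: expr = term { '+' term } ; term = digit { '*' digit } . */

    static const char *p;   /* cursor into the source text */

    static int parse_digit(void) { return *p++ - '0'; }

    static int parse_term(void)
    {
        int value = parse_digit();
        while (*p == '*') {
            p++;
            value *= parse_digit();
        }
        return value;
    }

    static int parse_expr(void)
    {
        int value = parse_term();
        while (*p == '+') {
            p++;
            value += parse_term();
        }
        return value;
    }

    int main(void)
    {
        p = "2+3*4";
        printf("2+3*4 = %d\n", parse_expr());   /* prints 14 */
        return 0;
    }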

   More recently, the Java programming language, an object-oriented language, has introduced the possibility of compiling output (called bytecode) that can run on any computer system platform for which a Java virtual machine or bytecode interpreter is provided to convert the bytecode into instructions that can be executed by the actual hardware processor. Using this virtual machine, the bytecode can optionally be recompiled at the execution platform by a just-in-time (JIT) compiler.

   Traditionally in some operating systems, an additional step was required after compilation: resolving the relative locations of instructions and data when more than one object module was to be run at the same time and the modules cross-referred to each other's instruction sequences or data. This process was sometimes called linkage editing, and the output was known as a load module.

   A compiler works with what are sometimes called 3GL, 4GL, and 5GL languages. An assembler works on programs written using a processor's assembler language.

 

  C++


   C++ is an object-oriented programming language that is now generally viewed as the best language for creating large-scale application programs. C++ is a superset of the C language.

   A related programming language, Java, is based on C++ but optimized for the distribution of program objects in a network such as the Internet. It is somewhat simpler than C++ and has characteristics that give it other advantages over C++.

 

Selected Links
For those learning object-oriented programming, MIT offers an Introduction to Object-Oriented Programming Using C++.

 

  CPU


   CPU (central processing unit) is an older term for processor and microprocessor, the central unit in a computer containing the logic circuitry that performs the instructions of a computer's programs.

 

  Cyber


   "Cyber" is a prefix used to describe a person, thing, or idea as part of the computer and information age. Taken from kybernetes, Greek for "steersman" or "governor," it was first used in cybernetics, a word coined by Norbert Wiener and his colleagues. Common usages include cyberculture, cyberpunk, and cyberspace. Terms with this prefix are being coined so rapidly that there will soon be a need for a cyberdictionary.

 

______________________________________________________________

Designed By Wessam Sherif, All Rights Reserved.