What is a Hard Disk?

Your hard disk, sometimes called a hard drive, is the main storage space inside your PC. Unlike RAM (system memory), it is permanent storage. A computer can function without a hard disk, but it would be basically useless to you, as you would have no operating system and no programs to run.

Hard disks store data on circular, rigid platters. The platters are manufactured in pristine condition with a mirror-like finish and are sealed inside a steel casing, because unclean air can easily ruin a hard disk. This is why you should never remove the casing from a hard disk; it is very unlikely you will be able to reassemble it as a working component.

Above you can see a labelled diagram of a hard disk. The model is a SCSI (Small Computer System Interface) drive. You can see the platters stacked on top of each other, with a set of arms holding the read/write heads. The speed of the arm is truly amazing, as is the accuracy of the head, which can read and write with precision on a platter rotating at around 7,200 rpm. The hard disk looks like a very simple idea, and in principle it is, but a lot goes on before data is actually written to the disk itself. We will explain a little more later in the article.

How does the hard disk store data?

On each of the platters there is a thin layer of magnetic film. Data storage on a hard disk is very similar to that on a cassette tape. Data is stored as a long series of 1s and 0s, and these binary digits are arranged in different patterns to represent different characters. When they are read back by the head, the data is retrieved and processed.
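The idea above can be sketched in a few lines of code. This is only an illustration of the principle: a character becomes a pattern of 1s and 0s on the way in, and the same pattern becomes the character again on the way out (the encoding choice here is plain 8-bit ASCII, an assumption for the example).

```python
def char_to_bits(ch: str) -> str:
    """Return the 8-bit binary pattern for a single ASCII character (the 'write' step)."""
    return format(ord(ch), "08b")

def bits_to_char(bits: str) -> str:
    """Recover the character from its 8-bit pattern (the 'read' step)."""
    return chr(int(bits, 2))

pattern = char_to_bits("A")
print(pattern)                # "A" is stored as the pattern 01000001
print(bits_to_char(pattern))  # reading the bits back gives "A" again
```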

File Systems

A file system is the way in which your computer organises data on the hard disk. The most common file systems are FAT16 on older computers, FAT32 and NTFS. FAT stands for File Allocation Table; NTFS stands for NT File System. Each has advantages and disadvantages. FAT16 was a very limited file system in that it stored data very inefficiently: every file took up a minimum of 32 KB, as this was the minimum cluster size in a FAT16 system, and it could only use hard disks up to 2 GB in size. FAT32 solved these problems by reducing the cluster size to 4 KB, which saved a lot of wasted space, and by allowing disk sizes up to 2 terabytes. NTFS is widely considered a far better file system than either of the FATs. Its cluster size can be set as low as 512 bytes, which means almost no wasted space on the disk, and its theoretical maximum volume size is enormous (on the order of 10¹⁹ bytes). NTFS also adds security features and better protection against file loss.
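The wasted-space point is easy to see with a small calculation. A file always occupies whole clusters, so a tiny file on a big-cluster file system wastes most of its allocation. This sketch uses the cluster sizes quoted above; the file size is just an example.

```python
import math

def allocated_size(file_size: int, cluster_size: int) -> int:
    """Space actually consumed on disk: files occupy whole clusters."""
    if file_size == 0:
        return 0
    return math.ceil(file_size / cluster_size) * cluster_size

# A 1 KB file under each file system's minimum cluster size:
file_size = 1024
for name, cluster in [("FAT16", 32 * 1024), ("FAT32", 4 * 1024), ("NTFS", 512)]:
    used = allocated_size(file_size, cluster)
    print(f"{name}: {used} bytes allocated, {used - file_size} bytes wasted")
```

On FAT16 the 1 KB file consumes a full 32 KB cluster, while on NTFS with 512-byte clusters it consumes exactly 1 KB.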

Measuring the Speed of a Hard Disk

There are various ways of measuring the speed of the hard disk. The main ones are the maximum data transfer rate, the spindle rotation speed and the seek time.

Maximum Transfer Rate - This is the largest amount of data that can be transferred per second. Most consumer hard disks use the ATA interface, and the number in the name gives the rating: an ATA100 disk can transfer a maximum of 100 MB/s, and an ATA66 disk a maximum of 66 MB/s.

Spindle Rotation Speed - The rotation speed of the disk really is the basis of the other two factors of hard disk speed. The faster the rotation speed, the more data can be written per second and the quicker it is to find the correct data on the platter.

Seek Time - The seek time of a hard disk is the average time it takes for the heads to find the data you need on the platters. A fast-spinning, highly accurate and responsive disk will have a shorter seek time and will perform much better, especially when the data is scattered around the disk.
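These three figures can be combined into a rough back-of-the-envelope estimate of how long a read takes: one seek per contiguous chunk of the file, plus the raw transfer time. The disk figures below (ATA100 rating, 9 ms average seek) are illustrative assumptions, not measurements.

```python
def read_time_ms(file_size_mb: float, transfer_rate_mbs: float,
                 seek_time_ms: float, fragments: int = 1) -> float:
    """Rough time to read a file: one seek per fragment plus raw transfer time."""
    transfer_ms = file_size_mb / transfer_rate_mbs * 1000
    return fragments * seek_time_ms + transfer_ms

# 100 MB file on a hypothetical ATA100 disk with a 9 ms average seek:
print(read_time_ms(100, 100, 9))                 # contiguous file: one seek
print(read_time_ms(100, 100, 9, fragments=50))   # scattered file: 50 seeks
```

Notice how the seek time, not the transfer rate, is what punishes a scattered file, which is exactly why fragmentation (next section) hurts performance.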

Disk Fragmentation

All versions of Windows come with a disk defragmenter. But what exactly is fragmentation? As you use your computer, files are constantly written to and deleted from the disk, either by you or by the operating system creating and removing temporary files. This process leaves the disk in no particular order, and when new files are written they start to fill the gaps left behind. Because a single file can end up written in several different parts of the disk, the heads have to jump around reading those parts instead of streaming the data straight off the disk. This is called fragmentation. The defragmenter in Windows sorts the files back into contiguous order so the disk performs faster.
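A simple way to picture fragmentation is to look at the list of clusters a file occupies and count how many contiguous runs it is split into. This is a toy model (real file systems track this in the allocation table), with invented cluster numbers:

```python
def count_fragments(clusters: list[int]) -> int:
    """Number of contiguous runs in a file's cluster list; 1 means unfragmented."""
    runs = 1
    for prev, cur in zip(clusters, clusters[1:]):
        if cur != prev + 1:
            runs += 1  # a gap means the heads must seek to a new spot
    return runs

print(count_fragments([10, 11, 12, 13]))      # contiguous file: 1 run
print(count_fragments([10, 11, 40, 41, 90]))  # scattered file: 3 runs
```

Defragmenting is, in these terms, rewriting the file so its cluster list becomes one unbroken run again.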

Connection Types

There are currently three connection types for hard disks: IDE (or ATA), SCSI and Serial ATA. The most common is the IDE interface, which connects to most standard motherboards via a wide ribbon cable, and you can't normally go wrong buying an IDE drive for your machine. SCSI connections often require extra hardware unless SCSI is built into your motherboard; SCSI hard disks are often faster, but more expensive than their IDE counterparts. The final type is the newest. Serial ATA does away with parallel data transfer and its problems of large cables and electrical interference; the Serial ATA standard is more reliable and uses smaller, unobtrusive cables. Smaller cables also mean better airflow in your case.

What is a Motherboard?

The motherboard is the main circuit board inside your PC. Every component at some point communicates through the motherboard, either by plugging directly into it or through one of the motherboard's ports. The motherboard is one big communication highway: its purpose is to provide a platform for all the other components and peripherals to talk to each other.

Types of Motherboards

The type of a motherboard depends on the CPU it was designed for, so you can categorise motherboards by their socket type, e.g. Socket A, Socket 478 and so on. The type of motherboard you buy is very important, as it has to house your CPU, and socket types are not interchangeable. A motherboard's listing will always tell you which socket type it has.

What to look for when buying a motherboard

As everything in your PC relies on the motherboard at some point, you need to consider your other components when buying one. For example, if you have a lot of PCI devices you wish to use, there is little point buying a motherboard that only offers three PCI slots. Likewise with memory: make sure there are enough slots for the amount of memory you have or plan to have.

The motherboard also needs the correct interfaces for your memory, graphics card, hard disks and other items. Most motherboards offer everything you need, but it is worth checking before you buy. It is especially important to pay attention to the details if you want to use older components, which a new motherboard may or may not support.

The major difference between motherboards that support the same CPU is the chipset they use (more on the chipset later). Different chipsets offer different performance and different features in terms of memory support, AGP port speed, multiplier settings, bus speeds and much more.

Measuring the speed of a motherboard

Motherboards have to be one of the hardest components to measure the speed of. Performance can really only be measured by benchmarking the same set of components in several motherboards of the same type. You will often find that motherboards with the same chipset have roughly the same performance in real-world tests; the minor differences that do occur come down to the quality of the materials and of the manufacturing.

The motherboard speeds quoted on the box are the maximum supported speeds for other components. For example, a motherboard will quote its maximum FSB (Front Side Bus) speed, but without a CPU that also supports that speed, it will never be reached. Likewise for the quoted maximum memory speed: memory of that speed has to be present.

What is a Motherboard Chipset?

A motherboard chipset controls all the data that flows through the data channels (buses) of the motherboard. Its primary function is to direct this data to the correct areas of the motherboard, and therefore to the correct components.

Components of a Motherboard

The motherboard contains connections for all types of components: expansion slots such as ISA, PCI and AGP, and DIMM sockets for memory. It also carries external connections for your onboard sound card, USB ports, serial and parallel ports, PS/2 ports for your keyboard and mouse, and network and FireWire connections.

So the motherboard has a massive part to play in the workings of your PC. Every component you buy relies on the motherboard having the correct connections available and working. It is best to buy a decent motherboard, especially if you plan on buying extras in the future.

What does your Motherboard Chipset actually do?

We have all heard about the latest chipsets from Intel, VIA, Nvidia and others, and how much better each one is. But what does the chipset actually do? We know what the CPU does, we know what the graphics card is for and why we have a hard disk, but not many people know much about the chipset. Hopefully we can shed a little light on how chipsets work and why they differ from one to the next.

North and South?

The chipset consists of two major microchips, known as the North Bridge and the South Bridge. The North Bridge handles data for the AGP port and the main memory, which includes the FSB (Front Side Bus). Although both chips are required for the PC to work, the North Bridge handles the most important tasks, such as the connection between the CPU and main memory. The South Bridge handles data from the PCI and ISA slots and can also contain integrated components such as audio codecs.

The North and South Bridges have different chip names, even though they are very often paired together under the collective name of the chipset. Below is a diagram of the KT600 chipset from VIA Technologies, showing how the components of your PC connect to the chipset. [Chipset diagram]

Catch the Bus

The function of a chipset is to manage data throughput. All the data your components require or produce needs to be transported, and it is transported by what's known as a bus. The bus carries the data to where it needs to go via the chipset. The exception to the rule is the BSB (Back Side Bus), the bus between the CPU and its cache memory. Today's CPUs have the cache memory on-chip, so there is no need to go through the chipset.

The BSB is not to be confused with the main memory bus. The BSB only dictates the speed between the CPU and the cache memory; the main memory sits on a different bus whose speed can be changed independently. Excluding the BSB, all other buses go through the chipset, which directs each piece of data to where it needs to go. Because of the amount of data passing through it, it is important that the chipset is up to speed.

What Chipset?

Since we now know that the chipset handles an incredible amount of data, it is important to see which chipsets perform best. First, choose a chipset that supports your CPU: you obviously can't use a chipset designed for an Intel CPU if you're using an Athlon XP. Then the best way to compare chipsets is to look at benchmarks on various sites. A slow chipset can be as damaging to your system's overall speed as a slow CPU or slow memory; the slowest component always dictates the overall speed at any given time. If you have a poorly performing chipset, then any time your computer is sending or receiving data from the graphics card or main memory, the system is struggling.

Does it affect your graphics speed?

As stated above, the chipset is responsible for directing data on the AGP bus, so it does affect the graphics performance of your machine. But it also affects it in another way. You may notice that a graphics card states which AGP transfer modes it can use, i.e. 1x, 2x, 4x or 8x (I'm sure more will follow in the future). This is how many transfers the AGP bus can make per clock cycle when moving data between the graphics card and main memory. Chipset support for 8x allows the graphics card to transfer far more data per second. The chipset does not alter the clock speed of the bus itself, which stays at the AGP standard of 66 MHz.
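Putting those numbers together shows why the multiplier matters so much. The sketch below uses the rounded 66 MHz figure from the article and AGP's 32-bit (4-byte) bus width, so the results come out slightly below the commonly quoted 266/533/1066/2133 MB/s figures, which are based on the exact 66.66 MHz clock.

```python
AGP_CLOCK_MHZ = 66    # AGP base clock, per the article (rounded)
BUS_WIDTH_BYTES = 4   # AGP is a 32-bit bus

def agp_bandwidth_mbs(multiplier: int) -> int:
    """Peak AGP bandwidth in MB/s for a given transfer multiplier."""
    return AGP_CLOCK_MHZ * BUS_WIDTH_BYTES * multiplier

for mult in (1, 2, 4, 8):
    print(f"AGP {mult}x: ~{agp_bandwidth_mbs(mult)} MB/s")
```

The clock never changes; only the number of transfers squeezed into each cycle does, which is why an 8x card on a chipset that only supports 4x runs at the chipset's speed.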

What is a CPU?

This is a beginner's guide to CPUs; please use the articles menu if you require more in-depth articles. CPU stands for Central Processing Unit, and it is often called the main processor of your PC. When you buy a PC from a high-street store, the main selling feature is always the speed of the CPU: you will see it advertised as a 2.4 GHz or 3 GHz PC. The truth is that this figure describes only the CPU and says nothing about what else is inside the machine.

What does the CPU look like?

The CPU is likely to be the largest chip on the motherboard inside your PC. If you bought the PC new, the CPU will be covered by a heatsink and fan. These are extremely important: without them the CPU would get too hot to work and could burn out. Be very careful to replace the fan if you remove it to have a look at your CPU.

What does the CPU do?

The CPU is the main processor of your PC; everything that goes on in your PC at some point passes through it. If the PC were a human body, the CPU would be the brain: it is where all the logic is applied. As a very basic example, computer code is essentially mathematics. To calculate 2 + 7 you need an input, an output and a processor to apply the logic, which in this case is simple addition. You type 2 + 7 on the keyboard, the input is registered and sent to the CPU, and the CPU sees that addition is required, applies its built-in addition logic and sends back the answer, 9.
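The input-logic-output flow above can be sketched as a toy model. The "instruction set" here is invented purely for illustration; a real CPU applies the same idea in hardware through its arithmetic logic unit.

```python
def tiny_cpu(operation: str, a: int, b: int) -> int:
    """Apply the requested logic to two inputs, the way a CPU's ALU applies addition."""
    logic = {
        "add": lambda x, y: x + y,   # the built-in addition logic
        "sub": lambda x, y: x - y,   # another piece of built-in logic
    }
    return logic[operation](a, b)

# The keyboard input "2+7" arrives as an operation plus two operands:
print(tiny_cpu("add", 2, 7))  # the CPU sends back the answer, 9
```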

How do you measure the speed of a CPU?

A CPU's speed is measured in MHz (megahertz) or, more recently, GHz (gigahertz). A chip rated at 900 MHz can complete 900 million clock cycles every second. Don't be deceived by this figure alone, though: it only shows how many clock cycles the CPU completes in a second, and how much work gets done in each cycle is another matter. I urge you to check out some CPU benchmarks before you decide that the chip with the faster rating has the best performance. Unfortunately, the race for higher clock ratings has driven the CPU industry to chase this one number without much regard for the real performance of the chips.
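The "work per cycle" point can be made concrete with one multiplication. Throughput is the clock rate times the instructions completed per cycle (IPC); the two hypothetical chips below are invented to show that a lower clock can still win.

```python
def instructions_per_second(clock_hz: float, ipc: float) -> float:
    """Throughput = clock cycles per second x instructions completed per cycle."""
    return clock_hz * ipc

# Hypothetical chips: a higher clock does not guarantee more work done.
chip_a = instructions_per_second(900e6, 1.0)  # 900 MHz, 1.0 instructions/cycle
chip_b = instructions_per_second(700e6, 1.5)  # 700 MHz, 1.5 instructions/cycle
print(chip_a, chip_b)  # chip_b does more work despite the lower clock rating
```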

Types of CPU

There are two main desktop CPU manufacturers: Intel and AMD. Each company offers a performance CPU and a budget CPU. The performance CPUs are the Pentium 4 from Intel and the Athlon XP from AMD; the budget CPUs are the Celeron from Intel and the Duron from AMD. Price is a big factor between them. The current CPUs are listed below.

AMD Athlon 64
AMD Athlon XP (T-Bred or Barton)
AMD Duron
Intel Pentium 4
Intel Celeron

Socket Types

Each range of CPUs fits into a specific socket on your motherboard. Motherboards are designed with one socket type and cannot be made to take another. Current AMD CPUs use a Socket A connection; Pentium 4 CPUs use Socket 478 (so called because the CPU has 478 pins connecting it to the motherboard).

Socket Type - Compatible Processors
Socket 7 - Original Pentiums, Cyrix 686, Cyrix MII, AMD K6, AMD K6-2 and K6-III
Socket 370 - Intel Celeron, Intel PIII (not cartridge), Cyrix III
Slot 1 - Intel PII, Intel PIII (cartridge only)
Slot A - AMD Athlon (cartridge only)
Socket A - AMD Athlon Thunderbird (not cartridge), AMD Duron, AMD Athlon XP
Socket 423 - Intel P4
Socket 478 - Intel P4 (2nd gen)
754-pin Socket - Athlon 64
940-pin Socket - Athlon 64 FX

The CPU's Cache

The cache on the CPU is a small amount of very fast memory situated on the CPU itself. Cache memory is very expensive, which is why it comes in very limited amounts: it ranges from about 64 KB to 512 KB, and 1 MB cache chips are on the way.

Central Processing Unit Cache Memory

What is the CPU Cache?

The cache on your CPU has become a very important part of today's computing. The cache is a very high-speed, very expensive piece of memory used to speed up the memory retrieval process. Because of its expense, CPUs come with a relatively small amount of cache compared with the main system memory, and budget CPUs have even less; cutting cache is the main way the top processor manufacturers take cost out of their budget CPUs.

How does the CPU Cache work?

Without cache memory, every time the CPU requested data a request would go to main memory, and the data would be sent back across the memory bus to the CPU. In computing terms this is a slow process. The idea of the cache is that this extremely fast memory stores any data that is frequently accessed and, where possible, the data around it, to give the CPU the quickest possible response time. It is a matter of playing the percentages: if a certain piece of data has been requested five times before, it is likely to be required again, so it is kept in the cache memory.

Let's take a library as an example of how caching works. Imagine a large library with only one librarian (the standard single-CPU setup). The first person comes in and asks for Lord of the Rings. The librarian follows the path to the bookshelves (the memory bus), retrieves the book and hands it over. The book is returned once the reader has finished with it. Without a cache, the book goes straight back to its shelf in the stacks, so when the next person asks for Lord of the Rings the whole process happens again and takes the same amount of time.

If the library had a cache system, the returned book would instead be put on a shelf at the librarian's desk. When the second person asks for Lord of the Rings, the librarian only has to reach down to the shelf, which significantly reduces the time it takes to retrieve the book. It is the same idea in computing: data in the cache is retrieved much more quickly, and the computer uses its logic to determine which data is most frequently accessed and keeps those books on the shelf, so to speak.

That is a one-level cache system, as used in most hard drives and other components. CPUs, however, use a two-level cache system, with the same principles. The level 1 cache is the smallest and fastest memory; the level 2 cache is larger and slightly slower, but still smaller and faster than main memory. Back in the library, when Lord of the Rings is returned this time it goes on the desk shelf. The library then gets busy, lots of other books are returned, and the shelf soon fills up. Lord of the Rings hasn't been taken out for a while, so it is moved off the shelf into a bookcase behind the desk, which is still much closer than the rest of the library and still quick to get to. Now when the next person asks for Lord of the Rings, the librarian first checks the shelf, sees the book isn't there, and then checks the bookcase. CPUs do the same: they check the L1 cache first and then the L2 cache for the data they require.
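The two-level lookup order can be sketched directly in code. This is a minimal model of the search order only (real caches work on memory lines with hardware tags, not dictionaries), and the sizes and addresses are invented for illustration.

```python
def lookup(address, l1, l2, main_memory):
    """Return (data, where_found), checking the fastest store first."""
    if address in l1:
        return l1[address], "L1"       # the shelf at the librarian's desk
    if address in l2:
        return l2[address], "L2"       # the bookcase behind the desk
    return main_memory[address], "memory"  # the full trip to the stacks

main_memory = {addr: f"data-{addr}" for addr in range(100)}
l1 = {5: "data-5"}                # smallest, fastest
l2 = {5: "data-5", 9: "data-9"}   # larger, slightly slower

print(lookup(5, l1, l2, main_memory))   # found in L1
print(lookup(9, l1, l2, main_memory))   # L1 misses, found in L2
print(lookup(42, l1, l2, main_memory))  # both miss: main memory
```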

Is more Cache always better? 

The answer is mostly yes, but certainly not always. The main problem with having too much cache memory is that the CPU always checks the cache before main memory. Looking at our library again: suppose 20 different people come in, all after different books that haven't been taken out in quite a while, but the library has been busy, so the shelf and the bookcase are both full. Each time someone asks for a book, the librarian checks the shelf, then the bookcase, before realising the book must be in the main library and trotting off to fetch it. In this instance a library with no cache system would actually be quicker, because the librarian would go straight to the book in the main library instead of checking the shelf and the bookcase first.

In practice, though, no-cache systems only win in circumstances like these, so for most applications CPUs are definitely better off with a decent amount of cache. Applications such as MPEG encoders make poor use of the cache, because they work on a constant stream of completely different data.

Does cache only store frequently accessed data?

If the cache memory has space, it will also store data that sits close to the frequently accessed data. Back in our library: if the first person of the day takes out Lord of the Rings, an intelligent librarian might place Lord of the Rings Part II on the desk shelf as well, because when the reader brings back the first book there is a good chance they will ask for the second. Since this happens more often than not, it is well worth the librarian fetching the sequel in advance in case it is required.

Cache Hit and Cache Miss

Cache hit and cache miss are simple terms for the accuracy of what goes into the CPU's cache. When the CPU looks in its cache for data, it will either find it or it won't. If the CPU finds what it's after, that is a cache hit; if it has to go to main memory, that is a cache miss. The percentage of hits out of all cache requests is called the hit rate, and you want this as high as possible for the best performance.
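The hit rate is just a ratio, so it is easy to compute; the 950-out-of-1000 figure below is an invented example.

```python
def hit_rate(hits: int, total_requests: int) -> float:
    """Fraction of cache requests satisfied without going to main memory."""
    return hits / total_requests

# 950 hits out of 1000 cache requests gives a 95% hit rate:
print(f"{hit_rate(950, 1000):.0%}")
```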

CPU Temperatures and Cooling

Cooling the CPU is one of the most important jobs in a PC, and choosing the right cooling method for your particular CPU is essential. Every CPU has an ideal working temperature, and you may or may not be surprised to learn that the warmer a CPU becomes, the slower it performs, until eventually it stops altogether. Bad cooling can also lead to permanent damage to your CPU and motherboard. So what are the ways of cooling your CPU and other components? We will start with the two simplest and most popular: the heatsink and the standard fan.

The Heatsink

The heatsink is a simple piece of copper or aluminium that sits on top of the processor chip. The idea is that the heatsink draws heat away from the CPU, GPU or even the chipset and disperses it into the air. Heatsinks are normally paired with a fan to help remove the excess heat.

When fitting a heatsink, always use thermal paste (sometimes called goop). The paste fills the microscopic imperfections present in all solid surfaces; in this case the imperfections in the heatsink and the CPU are filled, which greatly improves heat transfer. If you find that your heatsink actually gets hotter when you use thermal paste, that is exactly right: the paste has improved the heat transfer, so the heat is now in the heatsink rather than the CPU. That is much better for the performance of the system as a whole.

CPU and Case Fans

CPU, GPU and chipset fans are almost always attached to heatsinks. The heatsink removes the heat from the chip, and the fan blasts the heatsink with the surrounding air to cool it down. The reason you don't normally have the fan blowing air off the heatsink, as you might expect, is that the hot air would simply be blasted into the case, heating up the other components. If you do want a setup where air is pulled away from the heatsink, fit a duct from the fan to the outside of the case; this is a very effective way of removing hot air from the case altogether. Case fans (sometimes called exhaust fans) perform a similar function: they expel the warm air generated by all the components, and cooler air is drawn in to replace it. Again, this is healthier for the system overall.

Water Cooling

When we talk about advanced cooling, we are talking about water cooling. Those of you into overclocking and high-performance systems may already have heard of it; those who haven't may well be thinking what a bad idea it is to have water running around an electrical system. I am not going to claim that water in your system is not dangerous, because it obviously is. However, when water cooling you should use de-ionised water, which is a very poor conductor of electricity and should give your system a chance of survival should a little leak onto the circuitry.

Water is far better at absorbing heat than air, so passing water over the heatsink means the water picks up much more of the excess heat than blown air would. The water is then carried away from the system to cool down and pumped back to the CPU. This method of cooling is very efficient. If you are thinking of using a Peltier cooler (see below), I would definitely recommend setting up a water cooling system alongside it.

Expense does play a part in water cooling: the parts are still quite pricey, although not as bad as they once were. If you are serious about a performance system or into overclocking your components, this is the way to go.

Peltier Cooling

Peltier cooling is definitely the odd one of the bunch, because a Peltier cooling system actually creates heat. Slightly odd, you may rightly think, but the reason is that the Peltier cooler uses electricity. It looks like a standard heatsink, except the plate has two sides: heat is electronically "pumped" from one side of the plate to the other, away from the CPU. The hot side can get very hot indeed, a great deal hotter than a standard heatsink, which is why we suggest pairing a Peltier system with water cooling. The upside of all this heat is that the part you actually want to cool, the CPU, stays very cold. "Efficient" is probably the wrong word for a Peltier system: it consumes power and creates heat, so it is not really efficient, but it is excellent at the job it is supposed to do, which is keeping the CPU cool.

Time for the disclaimer. As with all computer modding and overclocking, things can go wrong if you are not careful, and I cannot be held responsible for actions you take when using these cooling systems. With Peltier systems in particular, make sure hardware monitoring is available: any component breakdown could fry your whole system. Done properly, these systems can greatly increase performance. Always follow standard safety procedures.

Why Cool your CPU?

Why does your CPU need cooling? You know it gets hot without it, but do you know what happens if you don't use adequate cooling? Different CPUs fail in different ways, some annoying, some expensive. The basics are that heat means slower CPU performance and possible permanent damage.

Older CPUs were made almost entirely of transistors; modern CPUs contain transistors, capacitors and resistors. Resistors produce a lot of heat, which needs removing as quickly as possible. If excess heat remains on the chip, electromigration or oxide breakdown can occur, leading to crashes and CPU failure. As a rule of thumb, 10 degrees of extra heat on a CPU halves its lifespan, and a further 10 degrees halves it again. You may think you don't keep your CPUs long enough for that to matter, but lifespan is only one issue; performance and stability are another.
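That halving rule of thumb is a simple exponential, which a couple of lines make clear. This only models the rule as stated in the article; real failure rates depend on the specific chip.

```python
def relative_lifespan(extra_degrees: float) -> float:
    """Lifespan multiplier under the rule of thumb that every extra
    10 degrees C of heat halves a CPU's life."""
    return 0.5 ** (extra_degrees / 10)

print(relative_lifespan(0))   # baseline: full lifespan
print(relative_lifespan(10))  # 10 degrees over: half the lifespan
print(relative_lifespan(20))  # 20 degrees over: a quarter of the lifespan
```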

Overclocking: how does it affect the heat issue?

Overclocking your CPU does cause extra heat, and how much depends on the type of overclocking you do. If you increase the CPU's frequency by raising the FSB, the extra heat increases roughly linearly. If you also have to increase the voltage, the extra heat grows with the square of the voltage increase. Simply put, raising the voltage creates far more heat than raising the bus speed alone; however, many overclockers find that increasing the voltage is what keeps the system stable.
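The two scaling behaviours can be compared with the standard dynamic-power approximation (power scales linearly with frequency and with the square of voltage); the 10% figures below are example overclocks, not recommendations.

```python
def relative_heat(freq_factor: float, voltage_factor: float) -> float:
    """Heat output relative to stock, using the common approximation
    that dynamic power scales as frequency x voltage squared."""
    return freq_factor * voltage_factor ** 2

print(relative_heat(1.10, 1.00))  # +10% FSB only: roughly 10% more heat
print(relative_heat(1.10, 1.10))  # +10% FSB and +10% voltage: roughly 33% more
```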

I have a fan what else can I do?

First, look at improving your cooling hardware. That could simply mean a bigger and better fan, or a heatsink that spreads heat more effectively. You may also want to consider case fans, and tidying the wires in your case to allow better airflow.

The other option is software. Programs like Rain, Waterfall and CPUIdle help keep your CPU cool by sending it to sleep: they issue the HLT (halt) instruction, which puts the CPU into suspend mode, saving power and giving the heatsink and fan extra time to disperse the heat. The instruction is only issued during empty CPU cycles, so performance is not compromised.

There is debate over whether these programs actually do your CPU any good, since the program has to reissue the instruction constantly, arguably putting undue stress on the CPU and reducing its life. My personal view is that I can see the argument, but the benefit of the heat being dispersed greatly outweighs the cost of a single instruction being sent repeatedly; other background items such as the system clock use many CPU cycles without consuming much power. I use Waterfall myself, see an average of 75% power saving, and have had no bad experiences. There will always be pros, cons and debate; this is one you will have to decide for yourself.

CPU Maximum Temperatures

Following on from the cooling article, we look at the maximum temperatures of CPUs. Even with fans and heatsinks, CPU temperatures can still rise if the chip runs at full power for a long period. Running a CPU at high temperatures can cause system crashes in the short term and greatly reduce its life in the long term; in extreme cases the CPU can burn out or melt onto the motherboard, usually when a fan breaks down unnoticed. Today's motherboards come with temperature-monitoring software and hardware that shuts the computer off if the CPU gets too hot, but even these are not 100% fail-proof. The only way to be sure is to check your fans and other cooling equipment regularly, and to use CPU thermometers to confirm your CPU temperature is stable and not rising over time. CPUs have a rated maximum temperature, sometimes called the critical temperature. What this boils down to (quick pun :} ) is the maximum temperature at which the manufacturer states the CPU will operate. You do not want your CPU running at this temperature, as it will be borderline between working and burning out; always try to stay at least 20°C below it if you can. Below is a table showing the critical temperatures for most of the CPUs in use today.

Please be aware that as faster models are released, even under the same name, the thermal requirements may change. This table is meant as a guide only.

CPU - Critical Temperature

AMD Athlon Series
AMD Athlon (socket) up to 1GHz - 90°C
AMD Athlon (slot) all speeds - 70°C
AMD Athlon Thunderbird 1.1GHz+ - 95°C
AMD Athlon MP 1.33GHz+ - 95°C
AMD Athlon XP 1.33GHz+ - 90°C
AMD Athlon XP T-Bred up to 2100+ - 90°C
AMD Athlon XP T-Bred over 2100+ - 85°C
AMD Athlon XP Barton - 85°C
AMD Athlon 64 - 70°C

AMD Duron Series
AMD Duron up to 1GHz - 90°C
AMD Duron 1GHz+ - 90°C
AMD Duron Applebred - 85°C

AMD K6 Series
AMD K6/K6-2/K6-III (all except below) - 70°C
AMD K6-2/K6-III (model number ending in X) - 65°C
AMD K6-2+/K6-III+ - 85°C

Intel Pentium III Series
Pentium III Slot 1 500-866MHz - 80°C
Pentium III slot and socket 933MHz - 75°C
Pentium III Slot 1 1GHz - 60-70°C depending on model
Pentium III Slot 1 1.13GHz - 62°C

Intel Celeron Series
Intel Celeron 266-433MHz - 85°C
Intel Celeron 466-533MHz - 70°C
Intel Celeron 566-600MHz (Coppermine) - 90°C
Intel Celeron 633-667MHz - 82°C
Intel Celeron 700MHz+ - 80°C

Intel Pentium II
Pentium II 1st generation - 72-75°C
Pentium II 2nd generation 266-333MHz - 65°C
Pentium II 350-400MHz - 75°C
Pentium II 450MHz - 70°C

 

 

 

Pentium 4

If you are wondering where the stats are for the Pentium 4 CPUs, I simply did not put them in because I believe any information I find on these would not be accurate. The reason is that the P4 chips come with a clever step-down facility which allows the CPU to slow down when it gets hot: the hotter it gets, the slower it runs. The theory is that the CPU will never burn out, even without a heatsink/fan; it will just run at an unbelievably slow pace. However, the developer's papers show that the maximum temperature is 80°C. Again, the theory is that this temperature should never be reached unless the safety step-down feature fails or is disabled.
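The step-down idea can be sketched as a simple loop. This is an illustration only, not Intel's actual throttling algorithm; the 60°C onset and the clock figures are made-up assumptions, with only the 80°C maximum taken from the article.

```python
# Sketch (NOT Intel's real algorithm) of thermal step-down:
# the hotter the CPU gets, the lower the clock it may run at.
MAX_TEMP_C = 80.0        # maximum quoted in the developer's papers
BASE_CLOCK_MHZ = 2000.0  # hypothetical base clock

def throttled_clock(temp_c: float) -> float:
    """Reduce the clock linearly once the CPU passes 60°C,
    dropping to a crawl as it approaches the 80°C maximum."""
    if temp_c <= 60.0:
        return BASE_CLOCK_MHZ
    if temp_c >= MAX_TEMP_C:
        return BASE_CLOCK_MHZ * 0.05   # the "unbelievably slow pace"
    fraction = (MAX_TEMP_C - temp_c) / (MAX_TEMP_C - 60.0)
    return BASE_CLOCK_MHZ * max(0.05, fraction)

for t in (50, 65, 75, 85):
    print(t, throttled_clock(t))
```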

To illustrate this, Tom's Hardware Guide made a short video. This really is a must-see. It shows you what happens to CPUs when the heatsink/fan is removed. The results are very interesting. Get the video here (9.63Mb)

What is RAM?

RAM, short for Random Access Memory, is the short-term storage area of your PC. Often just called memory or system memory, RAM is volatile electronic storage that loses all its data once the power has been removed. RAM is used by your operating system and by other programs and games in order to store data that is required at speed. Computers with a large amount of memory often perform faster simply because data can be kept in memory, so the CPU is not left idle waiting for it to be retrieved from slower components such as the hard drive.

Why is RAM considered Random Access?

With RAM, the computer can access any piece of data in any cell of the memory bank directly. There is no need to go through the entire memory bank to get to the data you require. The opposite of random access is serial access. SAM, or Serial Access Memory, needs to be read from the start to get to the data you require. An example of SAM storage would be a cassette tape. Of course, there are things which fit in between these two examples, such as a CD or DVD. Because these have tracks, they can be randomly accessed to a point, but then need serial access to get to the exact spot required.
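The distinction above can be illustrated with plain Python containers; the list and generator here are just a toy model of RAM versus a tape, not how real hardware is addressed.

```python
# A list models RAM: any cell is reachable in one step by its address.
ram = ["a", "b", "c", "d", "e"]
print(ram[3])  # direct jump to cell 3 -> 'd'

# A generator models a cassette tape (serial access): to reach item 3
# you must read past items 0, 1 and 2 first.
def tape(cells):
    for cell in cells:
        yield cell

t = tape(ram)
for _ in range(3):   # wind past the first three cells
    next(t)
print(next(t))       # only now do we reach 'd'
```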

How does RAM store data?

RAM consists of many capacitors and transistors. A capacitor and a transistor are paired together to make a memory cell. The capacitor represents one "bit" of data; the transistor is able to change the state of the capacitor to either a 0 or a 1. The zeros and ones, when read in a sequence, represent the code which the computer understands. This is called binary data because there are only two states that the capacitor can be in.
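To see how sequences of 0s and 1s become something meaningful, here is a minimal sketch: eight bits (eight memory cells) encode one ASCII character.

```python
# Two 8-bit groups, as they might sit in eight memory cells each.
bits = "01001000 01101001"

# Read each group as a binary number, then as an ASCII character.
text = "".join(chr(int(group, 2)) for group in bits.split())
print(text)  # -> 'Hi'

# And back again: a character to the bit pattern the cells would hold.
print(format(ord("H"), "08b"))  # -> '01001000'
```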

In order for a capacitor to achieve a value of 1 it needs to be filled with electrons. To achieve a value of 0 the capacitor needs to be emptied of electrons. You may have noticed when purchasing RAM that many types you buy are called DRAM or SDRAM. DRAM, or Dynamic Random Access Memory, has a small problem with the capacitors holding a value of 1. It is called dynamic RAM because its state of 1 or 0 needs to be constantly refreshed in order to stay correct. For a demonstration of this, and a much more detailed look into the workings of RAM, a good site is How Stuff Works.

There is a type of RAM that doesn't have to be refreshed constantly. This is called SRAM, or Static RAM. Static RAM uses a type of flip-flop to hold the data in the cell. This can take around six transistors on a chip per cell rather than just the one. The consequence is that Static RAM needs more chips per MB than DRAM and is therefore much more expensive.

Types of RAM

Down the years the face of RAM has changed dramatically. In the early days of computing we had the SIMM, the Single Inline Memory Module. The next logical step was the DIMM, the Dual Inline Memory Module. The introduction of DIMMs brought with it new speeds and sizes to give computers more power than ever before. The big advance in recent years has been the introduction of DDR RAM (Double Data Rate). DDR effectively doubled the speed at which RAM could transfer data without actually increasing the MHz. E.g. a stick of PC133 (133MHz) RAM with DDR would effectively transmit data at 266MHz but still have a bus speed of 133MHz.
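The PC133 example above is just a doubling, which can be written out as a tiny calculation:

```python
# DDR transfers data on both the rising and falling edge of each clock
# cycle, so the effective rate is double the bus frequency.
def effective_rate_mhz(bus_mhz: float, ddr: bool = True) -> float:
    return bus_mhz * (2 if ddr else 1)

print(effective_rate_mhz(133))             # PC133 with DDR -> 266
print(effective_rate_mhz(133, ddr=False))  # plain SDR      -> 133
```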

The other big memory type of the recent past is RAMBUS memory. Rambus memory uses a memory module called a RIMM (RAMBUS Inline Memory Module). RIMMs are special high-speed memory modules working at up to 800MHz. They require special motherboard support and are priced higher than standard memory modules.

Measuring the Speed of RAM

Like CPUs, RAM is measured in MHz. The higher the MHz, the greater the speed of the RAM. To keep this calculation simple when dealing with DDR RAM, retailers often simply quote the effective MHz rating. For example, a 133MHz DDR module would be advertised as 266MHz. That is the basic speed rating of the memory module. However, there is another important factor in high-performance memory, and that is CAS latency.

CAS stands for Column Address Strobe. The basics of CAS latency are that it is the number of clock cycles it takes for the memory to respond to a query. Shorter is obviously better. CAS-2 is a 2-cycle delay and CAS-3 a 3-cycle delay.
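Because CAS latency is counted in clock cycles, the real-time delay also depends on the bus frequency. A minimal sketch of that conversion:

```python
# Real-time CAS delay: cycles divided by frequency.
# cycles / MHz gives microseconds per million cycles; *1000 -> nanoseconds.
def cas_delay_ns(cas_cycles: int, bus_mhz: float) -> float:
    return cas_cycles / bus_mhz * 1000

print(cas_delay_ns(2, 133))  # CAS-2 at 133MHz -> ~15.0 ns
print(cas_delay_ns(3, 133))  # CAS-3 at 133MHz -> ~22.6 ns
```

So a CAS-2 module at a given clock always answers one cycle sooner than a CAS-3 module at the same clock.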


What is a Graphics Card?

The graphics card is responsible for delivering the image you see on your PC monitor. Its GPU (Graphics Processing Unit) processes the machine code and changes it into a signal for the monitor. There are many factors to a graphics card, and choosing one can be a tricky business these days, as there is so much technology that differs with each new graphics card release. More on these later in the article.

Graphics Acceleration

When PCs first came out, and for some time after, the graphics card's purpose was only to display the image on the screen. The amount of memory you got on a graphics card was very small, and not much was needed. Today's graphics cards do more than just display an image; they help the processor with the job of processing when it comes to graphics. The graphics card in effect accelerates the process of displaying the image on screen.

This was needed when the 3D gaming world took centre stage. The speed required to process the images on screen at 60 frames per second and process the code for the game itself was simply too much for a CPU to handle on its own, and so games would simply crawl along at a very slow pace. The graphics card would use some of its own built-in instruction logic to add things such as textures, lighting effects, fog effects and bump mapping to give a far more detailed picture. The speeds of graphics cards have also improved a great deal in order to let these effects be used without the frame rate dropping.

AGP or PCI

Two types of graphics card available today are the AGP and PCI versions. The AGP (Accelerated Graphics Port) is the most common because it offers the greatest bandwidth to and from the main memory. The PCI (Peripheral Component Interconnect) version is usually used by people who either do not have an AGP slot on their motherboard, or who have some sort of dual-display setup with two graphics cards and choose on boot-up which to use. AGP, however, looks like it will remove the need for PCI graphics cards in the near future.

How do you measure the speed of a graphics card?

Measuring the speed of the graphics card is a lot more difficult than with the CPU or RAM or even the hard disk. There are many factors which affect how quickly the graphics card can do its job. Many of these only come into play when the graphics card is undertaking certain tasks.

Core clock speed - Much the same as the way you measure the speed of a CPU. The core speed of the graphics card is measured in MHz and represents the number of clock cycles the graphics processor can perform per second. This is a good, but not definitive, way of telling how fast the graphics card is.

Memory clock speed - Exactly the same as the core clock speed, except of course that it is for the memory of the graphics card and not the core. This is just as important as the core speed, as the memory contains the textures that need to be applied to the pixels.

Pixel Pipelines - The number of pixel pipelines a graphics card has can have a great impact on the speed of image rendering. This is all about pixel-pushing power. A card with 8 pipelines can process twice as many pixels as a card with the same core speed and 4 pipelines.
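The pipeline point above is just a multiplication: theoretical pixel fill rate is core clock times pipeline count. The 300MHz figure below is an illustrative assumption, not a specific card's spec.

```python
# Theoretical fill rate: core clock (MHz) * pixel pipelines
# = megapixels per second.
def fill_rate_mpixels(core_mhz: float, pipelines: int) -> float:
    return core_mhz * pipelines

# Same core speed, different pipeline counts: the 8-pipe card
# pushes twice the pixels of the 4-pipe card.
print(fill_rate_mpixels(300, 4))  # -> 1200 Mpixels/s
print(fill_rate_mpixels(300, 8))  # -> 2400 Mpixels/s
```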

Textures per pipeline - This only comes into effect when multiple textures are needed on one pixel. Simply put, if multiple textures are needed, then a graphics card with more textures per pipeline will be quicker. On single-textured pixels, the number of textures per pipeline has no effect.

There are other, smaller things, such as T&L technology, anti-aliasing and various other quality-increasing and speed-increasing technologies that different cards have. I won't go into them all here, as there are a great many across all the cards on the market.

Memory Bandwidth

The memory bandwidth is the rate at which data is transferred between the GPU and the graphics memory. This has been one of the biggest bottlenecks in graphics cards for years. Newer cards are overcoming this problem with expensive memory solutions at high speeds. The higher the memory bandwidth, the faster the graphics card will be able to retrieve data and textures from the graphics memory. As this is a real bottleneck, it is a really important feature of the graphics card.
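Peak memory bandwidth is effective memory clock times bus width. A minimal sketch; the 600MHz and bus-width figures are illustrative assumptions, not a particular card's spec.

```python
# Peak bandwidth = effective memory clock * bus width in bytes.
def bandwidth_gb_s(mem_clock_mhz: float, bus_width_bits: int) -> float:
    bytes_per_transfer = bus_width_bits / 8
    return mem_clock_mhz * 1e6 * bytes_per_transfer / 1e9

print(bandwidth_gb_s(600, 128))  # 600MHz effective, 128-bit bus -> 9.6 GB/s
print(bandwidth_gb_s(600, 256))  # doubling the bus width doubles it
```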

CPU Comparison

This article is a simple page put together so you can see the differences between the CPUs in production today. If you wish to check the prices, we have added a deep link to Kelkoo, which compares prices from various retailers, at the end of each row.

Below is a table containing data from the individual product review pages about the CPUs' bus speeds and cache etc. You can click on the name of a CPU to go to its review page, or click on a heading to go to the Jargon page explaining its meaning.

| CPU | Bus Speed (FSB) | L1 Cache | L2 Cache | Micron Technology | Transistors | Form Factor | Voltage | Price |
|---|---|---|---|---|---|---|---|---|
| AMD Athlon (original) | 200MHz (100*2) | 128K (64K instructions, 64K data) | 512K (running at 1/2, 2/5 or 1/3 of CPU frequency depending on CPU speed) | .25/.18 | 22 million | Slot A | 1.6v | No longer for general sale |
| AMD Athlon Thunderbird | 200MHz (100*2) | 128K (64K instructions, 64K data) | 256K on chip | .18 | 37 million | Socket A | 1.75v | Compare |
| AMD Athlon XP | 200/266MHz (100*2/133*2) | 128K (64K instructions, 64K data) | 256K on chip | .18 | 37.5 million | Socket A | 1.75v | Compare |
| AMD Athlon XP T-Bred | 266/333MHz (133*2/166*2) | 128K | 256K | .13 | 37.2/37.6 million (higher speeds) | Socket A | 1.5-1.65v | Compare |
| AMD Athlon Barton | 333/400MHz | 128K | 512K | .13 | 54.3 million | Socket A | 1.65v | Compare |
| AMD Athlon 64 | 400MHz (HyperTransport link) | 128K | 1MB | .13 | 105.9 million | 754-pin socket | 1.5v | Compare |
| AMD Athlon 64 FX | 400MHz (HyperTransport link), 128-bit | 128K | 1MB | .13 | 105.9 million | 940-pin socket | 1.5v | Compare |
| Intel P4 | 400MHz (100*4) | 20K (12K instructions, 8K data) | 256K on chip | .18 | 42 million | Socket 423 / Socket 478 | 1.7v | Compare |
| Intel P4 Northwood | 400-800MHz (100-200*4) | 20K (12K instructions, 8K data) | 512K on chip | .13 | 55 million | Socket 478 | 1.5-1.525v | Compare |
| Intel P4 Prescott | 800MHz | 28K (12K instructions, 16K data) | 1MB on chip | .09 | 100 million | Socket 478 | 1.2v~ | Compare |
| Intel PIII | 100/133MHz | 32K (16K instructions, 16K data) | 512K off chip for 450-600B, 256K on chip for 500E upwards | .25 (Katmai, original) / .18 (Coppermine) | 9.5 million (Katmai) / 28 million (Coppermine) | Slot 1 (original) / Socket 370 (FC-PGA) | 2v (Katmai) / 1.65v (Coppermine) | Compare |
| Intel Celeron | 66MHz (now 100MHz at higher speeds) | 32K (16K instructions, 16K data) | 128K on chip | .18 | 28 million | Socket 370 (FC-PGA) | 1.5v-1.65v (CPU speed dependent) | Compare |
| Celeron Tualatin | 100MHz | 32K (16K instructions, 16K data) | 256K on chip | .13 | ? | Socket 370 (FC-PGA2) | 1.45v | Compare |
| Celeron P4 | 400MHz | 20K (12K instructions, 8K data) | 128K on chip | .18 | 42 million | Socket 478 | 1.75v | Compare |
| Cyrix III | 100/133MHz | 128K (64K instructions, 64K data) | 64K | .18 | 11.2 million | Socket 370 | 1.9v | No longer for general sale |
| AMD Duron | 200MHz (100*2) | 128K (64K instructions, 64K data) | 64K | .18 | 25 million | Socket A | 1.6v | Compare |

If you are looking for a CPU chart to compare the relative speeds of these CPUs, there is a great article over at Tom's Hardware Guide which compares chips from the original Pentiums up to the Barton-core Athlon XP. CPU Chart

Just looking at these figures doesn't really tell you too much about the speed and capabilities of the processors, but it does give us a look at how the budget chips differ from the high-end power chips. The main difference is usually the cache. As this is quite expensive to put into CPUs, it is often a factor that decides the price.

Looking at the L2 cache on some of the chips, we see that the Athlon, P4 and PIII all have 256K or more of L2 cache. These are the power chips. If we look at the budget chips, we see that the Celeron has 128K while the Cyrix III and the Duron both have only 64K. The reason the Celeron has more is that it is really a PIII with half the cache disabled because of a fault.

We can also see that the bus speeds get faster as the chips improve. In most cases the system bus remains at 100MHz, but because of DDR transfers in the case of the Athlon and quad-pumping in the case of the P4, the bus speed between the cache and the CPU can be 200MHz or 400MHz respectively. This can give a major performance boost when using cache-intensive utilities. Since updating this page, CPUs are now coming out with a base 200MHz system bus, and quad-pumping pushes this to an 800MHz FSB on the new P4.

Micron technology is basically the size of the transistors. The smaller they are, the more you can fit on the CPU die. Making them smaller makes the die smaller, so less power is required and less heat is produced. However, because of the increasing complexity of CPUs, the number of transistors is on the increase, which evens things out a little.

CPU Instruction sets

The CPU's instruction set is the set of codes, or instructions, that the CPU can use to process its data. The more it has, the more efficient it is likely to be. However, not all CPUs use the same instructions to process their data: Intel and AMD went their separate ways when adding more instructions to the basic instruction set. First to come out was MMX from Intel. MMX stood for MultiMedia eXtensions, and it added extra instructions to the original set recognised by the original IBM 8086 CPUs.

Below is a table of the instruction sets each of the latest CPUs use, as well as some of their features and clock frequency ranges.

| CPU | Instruction Set(s) | Chip Frequencies | Other Info |
|---|---|---|---|
| AMD Athlon (original) | MMX, 3DNow! | 500MHz - 1GHz | Used a cartridge instead of a chip. Had DDR functionality between CPU and cache. |
| AMD Athlon Thunderbird | MMX, Enhanced 3DNow! | 650MHz - 1.4GHz | AMD went back to the socket after finding ways to increase the speed of these chips. Also uses DDR. Cache ran at full speed, unlike the original Athlon. |
| AMD Athlon XP | MMX, 3DNow! Professional* | 1500+ (1.3GHz) - 2800+ (1.73GHz) | A new core produced more power whilst making less heat. AMD also moved away from the MHz rating in favour of model numbers; in other words they used an old-style PR rating based on the Thunderbird. |
| AMD Athlon 64 | SSE2, 3DNow! Professional | 3200+ (more to be introduced) | The Athlon 64 is the first CPU by AMD to use 64-bit operations. This chip is specifically designed to use 64-bit operating systems and applications. It also uses a HyperTransport link to access the memory instead of the normal FSB, because its on-board memory controller allows it to bypass the chipset en route to the main memory. |
| Intel P4 | MMX, SSE2 | Socket 423: 1.3GHz - 2GHz; Socket 478: 2GHz - 2.4GHz (now higher) | The first CPU to use a 400MHz system bus for its chip. Changed to Socket 478 to allow extra grounding pins, allowing it to go beyond 2GHz. |
| Intel P4 Northwood | MMX, SSE2 | 2GHz - 3.2GHz | The Northwood drops the micron size down to .13 and adds an extra 256K of L2 cache over the previous version of the P4. As the chips got faster they also got an FSB boost from 400MHz (100*4) to 533MHz (133*4). |
| Intel P4 Prescott | PNI (Prescott New Instructions), also called SSE3 | 3.2GHz+ | Once again the new Pentium 4 is produced on an even smaller micron technology, .09 in this instance. Like the Northwood, the Prescott adds more cache: an extra 8K of L1 data cache and an extra 512K of L2 cache, taking it to a massive 1MB of on-chip cache. SSE3 offers an extra 13 instructions over SSE2; you can find out about them at Geek.com. |
| Intel PIII | MMX, SSE | 450MHz - 1.4GHz | Went from the Slot 1 form factor back to the socket, just like AMD did. This was Intel's longest-running CPU in terms of the breadth of MHz in its chips. |
| Intel Celeron | MMX, SSE | Celeron PII: 266MHz - 533MHz; Celeron PIII: 533MHz - 1.2GHz | The Celeron has gone through many stages, from the PII to the latest P4. The Celeron was always based on the power CPU with the cache cut down and the system bus knocked down as well (except in the case of the P4 Celeron, where the system bus was kept at 400MHz). This was either due to a fault meaning it could not be sold as the higher CPU, or by design. |
| Intel Celeron Pentium 4 | MMX, SSE2 | 1.7GHz - 1.8GHz | |
| Celeron Tualatin | MMX, SSE | 1GHz - 1.2GHz | |
| Cyrix III | MMX, 3DNow! | 500MHz - 700MHz | VIA was quick to drop to the .15 micron process. Unfortunately, as with all the VIA CPUs, the FPU prevented it from being anything more than a cheap, reliable business-type machine. |
| AMD Duron | MMX, 3DNow! (SSE in the Morgan core) | Duron Spitfire: 600MHz - 950MHz; Duron Morgan: 1GHz - 1.5GHz | The Duron has to be one of the best chips available for all concerned. It was an Athlon with 64K cache. The Morgan core even had SSE built in, like the Athlon XP. These chips are very reasonably priced and fast considering they are aimed at the budget market. |

* 3DNow! Professional was labelled as such by AMD; the actual technology is identical to SSE and is recognised as such by programs that support it.

The differences between the CPUs are getting smaller as each company sees the advantages of its rival's technology and tries to either use or emulate it.

How does a CD Burner work?

The CD burner, more commonly known as the CD writer, has become a standard part of the PC today. It's rare to see a PC without the capacity to create customized CDs. It takes only a few minutes these days to back up your work onto CD or create customized music CDs from CDs you already own. But how does the CD writer actually work? First we will need to take a closer look at how the standard CD reader works. This will help us understand the burning process a little better.

The construction of a CD

The CD itself is made up of one continuous track about 0.5 microns wide and around 5km in length. This track is a small groove spiralling round and round the CD from the centre to the edge. From the top down, a CD is made of the label, then a layer of acrylic, a layer of aluminium, and finally a thicker layer of plastic to protect the CD.

When manufacturing a production CD, like the ones you buy in the shops, a very powerful laser beam is used to burn into the acrylic; this is then followed by adding a layer of aluminium to make the pattern permanent. Then the plastic coating is applied for protection. This is obviously a permanent solution, and the technique is no good for home use.

Reading the CD

The process of reading the CD is a simple one, although it needs to be very precise. When a CD is burned it leaves a pattern of bumps and troughs, which is read as a digital data stream. Each bump is read as a 0, and a trough or flat area is read as a 1 (0s and 1s are the basis of digital data, or binary code). The bumps are read by an optical sensor, or more accurately, the missing bumps are read by the sensor. What actually happens is that a low-powered laser is projected at the spinning CD: if there is no bump, the light reflects back to the sensor and a binary digit of 1 is recorded; if the laser hits a bump, the light is reflected away from the sensor and a 0 is recorded.
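Following the article's simplified model (reflection = 1, bump = 0), turning the stream of reflections back into data is just grouping bits into bytes. Real CDs use a more elaborate encoding (EFM) on top of this, so treat it as an illustration only.

```python
# One byte's worth of sensor readings: True = light reflected back (1),
# False = light scattered by a bump (0).
reflections = [False, True, False, False, True, False, False, False]

bits = "".join("1" if r else "0" for r in reflections)
print(bits)          # -> '01001000'
print(int(bits, 2))  # -> 72, the ASCII code for 'H'
```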

The sensor works in conjunction with the motor to work out how fast the CD is spinning and therefore how many times a second it has to send the laser beam to the CD to accurately work out the digital pattern on the CD's surface.

How does the home CD burner differ?

It's obviously impractical to get such a high-powered laser as they use on the production line into the home PC, so another solution had to be found in order to get CD writing technology into the home. The idea they came up with was to change what the CD was made of. The technology of the laser reflecting off the disc was still there, however this time the aluminium layer was totally flat, instead of covering the bumps in the track as in the production model. What we have on the common CD-R is a layer of dye. This dye is translucent, so the laser beam will go straight through it, reflect off the aluminium surface and return to the optical sensor. It's the same all the way round the disc; this is how you receive your blank CD-Rs.

When writing to a CD-R, a higher-powered laser is used than the standard reading laser. This laser is set at a particular frequency, and when it comes into contact with the dye it turns it opaque (a state where the light can't pass through it). To the optical sensor this is the same as the laser beam bouncing off the bump of a production CD, as it simply does not get the signal back. The physical shape of the material on the CD-R does not change, as you can see from the above diagram, but the effect is still permanent, which brings us onto the next point: re-writable CDs, now known as CD-RWs.

So how does a CD-RW work?

If we now move onto CD-RWs, we take a big step forward in CD storage technology. The CD-RW is a lot more complex than the standard CD-R (some say a little less reliable, although I personally have not had problems with CD-RWs). The problem obviously comes from the fact that the data on the disc cannot be permanent. This means neither production-style burning nor the original home-style burning can be used on a CD-RW. Let's have a quick look at the way a CD-RW is put together.

The CD-RW uses phase-shift technology, based on a specially created phase-shift compound. The idea is that the laser can heat the compound to melting point, at which point it turns opaque and the read laser is not able to bounce back a signal. It then settles into an amorphous state (no crystalline form). It will cool this way and remain opaque until it is reheated. In order to make the compound translucent again it needs to be heated once more, this time to a lower temperature. At lower temperatures the phase-shift compound cools in a crystalline form and allows the laser beam to pass through and bounce off the aluminium beneath. To get the amorphous state the laser heats the compound to around 600°C, whereas to get the crystalline state it only requires heating to around 200°C.

When you write data to a CD-RW it stays permanent until it is written over. However, unlike your hard disk, it uses an erase function first. To erase a section of the disc, the write laser simply heats the whole area back to its crystalline form and then writes whatever data you need onto the now-blank piece of disc.

Jargon

Acrylic

A man-made plastic material that is noted for its transparency and resistance to chemical change

Amorphous

A material or mineral that has a random form of atoms (non-Crystalline)

Binary Code

A digital coding system that only uses 1's and 0's to represent data.

Binary Digit

Part of the Binary Code. A Binary Digit is either a 0 or a 1

CD-R

A blank CD ready for burning your own data or music at home

CD-RW

A blank CD just like a CD-R, however it has the ability to be erased and written to a number of times

Crystalline

Solid material composed of regularly repeating atoms, ions, or molecules that form defined patterns or lattice structures.

Dielectric

An insulating material

Optical sensor

A sensor that is used to detect the presence of light

Phase Shift Compound

A compound (in this case silver, antimony, tellurium and indium) which can change form (phase) when heated to certain temperatures

Translucent

A material that is not quite transparent but still clear enough to allow light to pass through it

PCI Express

The future of PCs and PC components is always changing. What we are looking at now is a replacement for both the PCI and the AGP bus in one go. Remember the old ISA bus? Chances are you do, but you no longer have a single ISA component or slot on your motherboard. Components required a better bus than ISA, and so PCI was brought in. Then, when PCI slots could not provide the performance required for top-of-the-range graphics cards, the AGP bus was created. It is this AGP bus which has been upgraded time after time to keep up with the demands of today's graphics cards. Once again the technology jumps forward, and now we have PCI Express.

PCI Express doesn't really have any resemblance to the original PCI bus. Firstly, the original PCI bus was a parallel data system, whereas PCI Express is a serial system. Although a serial system sends data one item behind another rather than all together, it can be clocked at a much higher speed. The PCI Express subsystem consists of several PCI Express "lanes": one lane has a transfer rate of 250MB/s in one direction and 500MB/s in full duplex (both directions).

Instead of standard PCI slots you will now have x1 PCI Express slots. Each of these slots has access to an entire lane of the PCI Express system. No longer will devices have to share the bandwidth available on the PCI bus. Devices on the original PCI bus shared the available bandwidth, which was limited to 132MB/s; split between all the PCI devices in your system, this could put a stranglehold on your PCI bandwidth. The new system prevents this from ever being an issue.
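Because bandwidth simply scales with lane count, the figures in this article can be reproduced with a one-line calculation:

```python
# PCI Express bandwidth scales with the number of lanes:
# 250MB/s per lane per direction, 500MB/s full duplex.
LANE_MB_S = 250

def pcie_bandwidth_mb_s(lanes: int, full_duplex: bool = False) -> int:
    return lanes * LANE_MB_S * (2 if full_duplex else 1)

print(pcie_bandwidth_mb_s(1))    # x1 slot  ->  250 MB/s one way
print(pcie_bandwidth_mb_s(16))   # x16 slot -> 4000 MB/s (4GB/s),
                                 # versus 132MB/s shared on classic PCI
```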

The Death of the AGP Bus?

The situation for graphics cards, however, is slightly different: the AGP bus has come to an end. It seems that AGP 8x is the last of its genre. To replace it we have a special PCI Express slot, simply called a x16 PCI Express slot. This slot, used exclusively for graphics cards, has 16 PCI Express lanes bundled into one port. The graphics card plugged into this port has the ability to utilize all 16 of these lanes for itself. This allows a maximum transfer rate of 4GB/s in a single direction. The current AGP bus can handle a maximum transfer rate of 2GB/s.

The main debate about PCI Express at the moment is between Nvidia and ATI. ATI are implementing their cards with a true native PCI Express connection. On the other hand, Nvidia are implementing their current technology with an AGP bridge, to convert PCI Express to AGP signals. This bridge chip is called the High Speed Interconnect (HSI) chip. It's a fully reversible chip which allows AGP GPUs to be run on PCI Express interfaces, and could be utilized to allow the use of PCI Express GPUs on an AGP bus.

As this chip is on the graphics card itself there is no wasted bandwidth on the motherboard; you still get the full bandwidth from the memory to the graphics card. To carry this bandwidth all the way through, Nvidia has cranked the AGP speed on the card itself up to AGP 16x, so there is no bandwidth loss. ATI claim the loss comes from latencies introduced by the bridge chip during translation. Now for a reality check: the step up from AGP 4x to 8x did very little for the graphics industry in its current state. 3D games today were not fully utilizing the available bandwidth of AGP 4x, and the step up to AGP 8x only allowed for improvement in extreme circumstances anyway. This is not to say that this will always be the case, but my stance is that by the time you need the full 16 lanes of PCI Express, Nvidia will be using native PCI Express cards in a whole new generation of graphics technology.

Physical Attributes of PCI Express

The size of the PCI Express slot depends on the number of lanes it uses. Because of the nature of the data transfer, the more lanes being used, the more pins have to be connected to the motherboard. This determines the size of the slot required. It also means that you can't insert a x16 PCI Express card into a x1 PCI Express slot (a smaller card will, however, physically fit a larger slot). Other sizes are available, such as x8 and x12 PCI Express slots; however, it is unlikely that these sizes will turn up in mainstream computers. They will be reserved for specialist servers.

Above is a picture of a x16 PCI Express slot (top) and a x1 PCI Express slot (bottom)

A x1 PCI Express slot can provide a maximum of 25W of power; you will find that this is more than enough for a single PCI Express device. A x16 PCI Express slot has support for 75W (bear in mind that these days high-powered graphics cards often draw extra power directly from the PSU).

 

BIOS Beep Codes

Annoying, isn't it? You have built your computer, you switch it on, and then nothing happens except a few beeps from the PC speaker. Frustration sets in as you try to figure out what is wrong. If you didn't already know, the computer has already told you the problem. It can't speak, of course, but it can direct you to the problem. It's all in the beeps. The BIOS recognises when a problem occurs and sends a certain number of beeps through the speaker. These beeps then tell you the location of the problem.

Unfortunately, not all BIOSes use the same codes as each other. Two of the main BIOS manufacturers, AMI and Award (now Phoenix), have different codes for their errors.

AMI BIOS

| # of Beeps | Error | Description |
|---|---|---|
| 1 | Refresh Failure | The memory refresh circuitry is faulty |
| 2 | Parity Error | Parity error in the base (1st 64K) of memory |
| 3 | 64K Base Memory Error | Memory error in the base memory (1st 64K) |
| 4 | Timer Not Operational | Timer 1 is not functioning (also caused by an error in base memory) |
| 5 | Processor Error | CPU error |
| 6 | 8042 Gate A20 Failure | Unable to switch to protected mode |
| 7 | Processor Exception Interrupt Error | The CPU on the CPU card generated an interrupt error |
| 8 | Display Memory Read/Write Error | Video adapter is missing, incorrectly seated or has faulty memory |
| 9 | ROM Checksum Error | The ROM checksum does not match that of the BIOS |
| 10 | CMOS Shutdown Register Read/Write Error | The shutdown register for CMOS RAM has failed |
| 11 | Cache Memory Bad | The cache memory test has failed. Cache memory will be disabled. *** DO NOT enable it *** |

With the first 3 beep codes, it's well worth re-seating the memory just to make sure that it's in correctly. 8 beeps is probably the most common in my experience, and can be caused by a badly seated graphics card. If you have re-seated it, then check with another graphics card in the board.

Always check for loose components before sending the board back as this is the main cause of errors on the POST.
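For reference, the AMI codes above map naturally onto a simple lookup table. Here is a minimal Python sketch (the table and function names are my own, not from any BIOS toolkit):

```python
# The AMI BIOS beep codes from the table above, as a lookup table.
AMI_BEEP_CODES = {
    1: "Refresh Failure - memory refresh circuitry is faulty",
    2: "Parity Error - parity error in the base (first 64K) of memory",
    3: "64K Base Memory Error - error in the base memory (first 64K)",
    4: "Timer Not Operational - timer 1 is not functioning",
    5: "Processor Error - CPU error",
    6: "8042 Gate A20 Failure - unable to switch to protected mode",
    7: "Processor Exception Interrupt Error",
    8: "Display Memory Read/Write Error - video adapter missing, badly seated or faulty",
    9: "ROM Checksum Error - ROM checksum does not match the BIOS",
    10: "CMOS Shutdown Register Read/Write Error",
    11: "Cache Memory Bad - cache test failed and cache disabled",
}

def diagnose(beeps: int) -> str:
    """Return the likely fault for a given number of AMI POST beeps."""
    return AMI_BEEP_CODES.get(beeps, "Unknown beep count - check the board manual")

print(diagnose(8))  # the common badly-seated-graphics-card case
```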

Award BIOS

Award states that they now only use one beep code from their BIOS: one long beep followed by two short beeps, indicating a graphics card problem. Any other beeps should be treated as a RAM problem first, and then the board sent in to be inspected.

The reason the Award BIOS only uses a beep code for display problems is that it tries to display the error on screen whenever possible. If the BIOS cannot initialise the display adapter, it makes the beep code for a display error, which must be corrected before any other errors can be determined. Memory test failures, hard disk failures and so on will all be displayed on screen.

However, people do report meanings for a few other Award beep patterns, and the ones I believe can be trusted are heard while the machine is in operation. Consult the table below.

Beep Code                  Error
1 long, 2 short            Video card error: either re-seat or replace the video card
Repeating beeps            Memory error: memory is either damaged or badly seated
Repeating high/low beeps   Damaged or overheating CPU
High frequency beeps       Overheating CPU

In the case of an overheating CPU, shut down the computer immediately and let the CPU cool. Then check that the CPU heatsink and fan are working properly and that the airflow in the case is adequate. If the problem persists, consider purchasing a more powerful fan for your CPU.

IBM BIOS

The IBM BIOS uses short and long beeps in the same way as the Award BIOS. Unlike Award, however, it still has a full set of codes to work from.

Beep Code                Error
1 short beep             Normal POST; system booted OK
2 short beeps            POST error; code shown on display
No beep                  Power supply or motherboard error
Continuous beep          Power supply or motherboard error
Repeating short beeps    Power supply or motherboard error
1 short, 1 long beep     System board error
1 long, 2 short beeps    Display adapter error (MDA/CGA)
1 long, 3 short beeps    Display adapter error (EGA/VGA)
3 long beeps             3270 keyboard card error

An Introduction to RAID

RAID stands for Redundant Array of Independent Disks. The idea of RAID technology is to use an array of hard disks for either better performance or better protection against disk failure. RAID can use two or more disks at once to increase read and write speed, it can use two or more disks to store the same data so that a disk failure does not mean you lose your data, or it can be a mixture of both. A RAID array will appear to the operating system as a single logical disk; RAID does not provide extra storage space.

The common RAID functions mentioned above come in three different levels, called RAID 0, RAID 1 and RAID 0+1. This is the terminology you will see when buying a motherboard that supports RAID. The descriptive names of these levels are Striping, Mirroring and Striping + Mirroring.

 

RAID 0 Striping

The diagram to the left shows the basics of the RAID 0, or striping, feature. The idea of RAID 0 is to increase performance. When storing information using striping, the data is split block by block between the two hard disks: block one is sent to disk one, block two to disk two, block three to disk one, block four to disk two, and so on. This is much faster than a single disk because when reading the data back, the two disks work at the same time to retrieve the same file, virtually doubling the speed of retrieval and so virtually halving the time taken. As mentioned, this is a performance setup: should any one of the disks fail, the whole array becomes corrupt, as most files are split between the disks and so are rendered useless. If you don't keep important data on your computer, or you take regular backups of what you do need, then a RAID 0 setup will greatly increase your computer's disk performance. To get the best out of this system it is wise to use two disks of the same make and model. If that is not possible, two of the same size and RPM would be useful but not essential. If two disks of different sizes are used, the size of the logical drive is dictated by the smaller disk. See drive capacities under RAID at the end of this article.
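The block-by-block split described above is simply round-robin distribution. A toy Python sketch (purely illustrative; real RAID striping happens at the controller or driver level, not in application code):

```python
def stripe(blocks, num_disks=2):
    """Distribute data blocks round-robin across the disks, RAID 0 style."""
    disks = [[] for _ in range(num_disks)]
    for i, block in enumerate(blocks):
        # block 1 -> disk 1, block 2 -> disk 2, block 3 -> disk 1, ...
        disks[i % num_disks].append(block)
    return disks

print(stripe(["block1", "block2", "block3", "block4"]))
# disk one holds blocks 1 and 3, disk two holds blocks 2 and 4
```

Reading the file back means pulling from both lists at once, which is where the speed gain comes from.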


RAID 1 Mirroring

RAID 1, or mirroring, gives added security for your data at the cost of storage space. As with striping, this setup uses two hard disk drives to produce a single logical drive. In this instance, however, the total storage space is only the size of one of the disks (the smaller one). This is because with RAID 1 any data written to one hard disk is written to the second in exactly the same way. If you save a file to your machine, it is saved on both disks at the same time. This does affect system performance: with two disks needing to be written to, and the data being identical, there is no performance gain, unlike the striping method. However, there are always advantages. Having the same data on both disks has obvious plus points for data integrity and security against disk failure: if either disk should fail, the other takes over as the solitary disk, providing and storing data as it did before the failure. Again, see the drive capacity section at the end of this article to learn about data redundancy and why the logical drive sizes are what they are with each of the three RAID setups.
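To make the mirroring behaviour concrete, here is a toy model (again illustrative only, not how a real controller is implemented): every write goes to both disks, and a read can be served by whichever disk survives.

```python
class MirroredPair:
    """Toy RAID 1: two dicts stand in for the two physical disks."""

    def __init__(self):
        self.disks = [dict(), dict()]

    def write(self, name, data):
        for disk in self.disks:        # the same data is written to both disks
            disk[name] = data

    def read(self, name):
        for disk in self.disks:        # either disk can serve the read
            if name in disk:
                return disk[name]
        raise KeyError(name)

pair = MirroredPair()
pair.write("report.doc", b"important data")
pair.disks[0].clear()                  # simulate disk one failing completely
print(pair.read("report.doc"))         # the surviving disk still has the file
```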

RAID 0+1 Striping and Mirroring

RAID 0+1, or Striping + Mirroring, is, as you would imagine, a combination of the above two setups, taking the advantages of both. You get the increased performance of splitting the data across multiple drives, but each of the striped drives also has a mirror, giving backup and security against failure. The obvious drawback here is the cost involved: the minimum number of hard disk drives used in this configuration is four. This puts it out of reach of most home users, as not only do you need to buy four hard disks but the PSU has to cope with them as well.

Disk Capacities Using the RAID Function

These RAID functions give you varying overall capacities. To illustrate this we will take the example of a user with only 80GB hard disks, and look at each of the RAID levels mentioned to see what drive capacity you would get out of it.

RAID 0 Striping
With RAID 0 and two 80GB hard disks you would get the full 160GB of storage space. Although the data is split between the two disks, there is no data redundancy (duplicate data), so the full storage space can be used.

RAID 1 Mirroring
When using two 80GB hard disks with the RAID 1 function you would only receive 80GB of storage space. Because the two drives contain the same data, the logical drive appears as a single 80GB drive.

RAID 0+1 Striping and Mirroring
In this example we would need to use four 80GB drives. RAID 0+1 is a combination of the two setups above, and so the storage works out as a combination of the two as well. The logical drive will appear as a single 160GB drive: the two striped drives make up the logical drive's space, while, as above, the mirrored drives are invisible to the user.
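The three capacity results above follow one simple rule: you lose half the raw space whenever mirroring is involved. A small sketch of the arithmetic (my own helper, assuming equal-size disks as in the examples):

```python
def usable_gb(level, disk_gb, num_disks):
    """Usable capacity for the three RAID levels discussed, equal-size disks assumed."""
    if level == "0":            # striping: no redundancy, all space usable
        return disk_gb * num_disks
    if level in ("1", "0+1"):   # mirroring halves the usable space
        return disk_gb * num_disks // 2
    raise ValueError("unknown RAID level: " + level)

print(usable_gb("0", 80, 2))    # 160, as in the RAID 0 example
print(usable_gb("1", 80, 2))    # 80, as in the RAID 1 example
print(usable_gb("0+1", 80, 4))  # 160, as in the RAID 0+1 example
```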

Serial ATA

Serial ATA is a new way of connecting your storage devices. You will be used to the way we connect IDE devices to our motherboards: a wide 40/80-wire ribbon cable plugged into both the motherboard and the storage device. These cables are big and bulky and often get in the way of other things inside the case. They are also based on parallel data transmission, and sending data in parallel causes interference between the signals. Serial connectors only have a send and a receive line, which causes far fewer problems in that respect.

The serial cable is small and thin as opposed to the bulky IDE cables. This helps with the airflow around the case and so with system cooling. The cables can also be longer than the 40cm limit of the IDE cables, which makes it a lot easier to connect those awkward hard drives at the top of a tower case.

Let's have a look in more detail at the Serial ATA specification and the impact it will have on future storage devices.

Serial Over Parallel

Why choose serial over the current parallel data transfer method? There are two main reasons. Firstly, sending data in parallel requires many signals to be sent at the same time, which causes electrical interference and can lead to problems; serial transfer means you only need a send and a receive signal.

The second reason is that sending data in parallel requires a lot more wires. An IDE cable can be a real pain when trying to keep system cooling running smoothly, and it gets in the way of the other wires running through the system.

The other data transfer standards we use today are serial too: USB, Ethernet and FireWire are all based on serial data transfer.

Data Transfer Rate

The current specification for Serial ATA allows for a maximum data transfer rate of 150MB/s, the equivalent of an ATA-150 rating under the current hard disk naming scheme. 300MB/s and 600MB/s versions are being worked on for the near future.

The Cable

The picture below shows the difference in size between the older IDE cable on the left and the new Serial ATA cable.

As you can see, the Serial ATA cable is a great deal smaller and so is less intrusive in the case. Also, while the IDE cable is limited to 40cm in length, the Serial ATA cable does not have this restriction, meaning that you can now connect those awkward devices located at the top of a large tower case.

Hot Plugging

Hot plugging is something that is brand new for these devices; in fact, I never even thought it would be possible. However, it's here: hot plugging allows you to connect and disconnect Serial ATA devices without powering down the system. Operating systems released in the last few years will pick up the new device as soon as it's plugged into the system.


How to Build a PC

If you have a basic knowledge of PCs and have noticed exactly how much cheaper it is to build your own, you may want to read this guide for a few pointers on building your PC and selecting the correct equipment. As I mentioned, it really is a lot cheaper to build your own PC, and what's more, you can build it to exactly the specification that you want. Often you walk into a retailer and they only have set machines available to buy; you may mainly like a machine but wish you could change just a couple of components. Building your own solves all of that.

There is always a downside though (there has to be one, doesn't there?). This rather heavy downside is that you will not be covered by a whole-system warranty. That's not to say that if your components don't work there is nothing you can do about it: if a component is faulty you can of course send it back. The problem is that if something fails or burns out because of the way you put the machine together, you are not covered. Remember this before you decide to build an expensive machine. We recommend that you have at least some basic experience of fitting upgrade components yourself before you go all the way and build a machine.

Seeing as you have carried on reading, I presume you are prepared to go all the way with this. OK, here we go then. First things first: you need to know exactly what is required to build a complete machine. Below is a checklist showing all the vital pieces of a PC and some optional ones for your personal taste.

Essential selection

·      CPU 

·      Heatsink and fan

·      RAM (memory)

·      Motherboard

·      Graphics card

·      Monitor

·      Hard Drive

·      Floppy Drive

·      Keyboard

·      Mouse

·      Case

·      IDE cables (normally supplied with the motherboard)

·      Power cable (normally with case)

More than useful

·      Sound card

·      Speakers

·      Modem

·      CD/DVD-Drive

 

Optional

·      DVD decoder

·      CD-RW Drive

·      DVD-RW Drive

·      Printer

·      Scanner

·      Digital camera

·      Network card

·      Joystick (pad)

·      Web Camera

 

You will require the items in the essential selection in order to produce a workable PC; without any one of these components your PC will be non-functional in today's world. You may say that you can get along without a mouse, as you can use Windows with the keyboard and you may only want to work and type. However, we believe a mouse is essential to use the PC as it was intended. The same goes for the floppy drive: not having one doesn't stop the PC from working, but it is still classed as essential because of the usefulness of a boot disk.

The items in the more-than-useful category are classed as "should haves". These pieces of equipment are not needed by certain machines (e.g. workstations), but going without them will limit what your PC is able to do. If you are building a home machine you will probably want to include all of these components, as you are likely to require them at some point.

The optional section is exactly that: optional. These components and peripherals can be added to your machine depending on what you are building the machine to do. For example, if you specifically want to watch films on your computer then you would fit it with a DVD drive, possibly instead of a CD drive. A network card would be fitted if you already have a computer at home you want to link to, or perhaps want to use the new machine as a workstation.

It's always wise to wait until you have all the pieces you need before you start to build your PC; this way you will not forget anything while waiting around for other pieces to arrive.
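Since it's wise to have everything in hand before you start, the essential selection above can be treated as a literal checklist. A quick hypothetical sketch for ticking parts off (the set and function are my own invention, not from any tool):

```python
# The "essential selection" checklist from above.
ESSENTIALS = {
    "CPU", "Heatsink and fan", "RAM", "Motherboard", "Graphics card",
    "Monitor", "Hard drive", "Floppy drive", "Keyboard", "Mouse",
    "Case", "IDE cables", "Power cable",
}

def still_missing(parts_on_hand):
    """Return the essential parts you have not yet gathered, alphabetically."""
    return sorted(ESSENTIALS - set(parts_on_hand))

print(still_missing(["CPU", "RAM", "Motherboard", "Case"]))
```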

If at all possible, have a second working machine to hand. At some stage something won't work (this situation is more than common), and the best way to diagnose it is to place the component that you think is faulty into the other machine. If it's faulty in both machines, the chances are it's a dud and you need to send it back. However, if it works in the second machine, you instantly know that there is something wrong with the way you have set up your machine, or that the component is incompatible with your other hardware.

Choosing your platform

The first step in building your PC is deciding which platform you want to base it around. This will probably depend on your budget. Having a budget to work to is very important, as otherwise you may well buy anything you please and end up unable to afford it. It's best to sit down and work out how much you are able to spend on your project, then select components up to your budget, changing things until the list fits your available money.

You should now be in a position to choose your platform. If you find you have a large budget and can afford all the latest technology, you are in a good position, as you do not need to compromise on anything in your machine; the platform you decide on will simply be the one that suits you the most.

Today's PCs are normally built around one of four CPUs from the top two manufacturers. AMD and Intel have commanded the CPU market for years now and constantly bring out new chips to beat the opposition, so if you are buying a PC today you will be choosing between their current ranges.

If your budget is not so large, now is the time to decide what the machine will be used for the most, and put your money into those areas. For example, if you play a lot of music but are not a gamer, you can spend more on your sound card and will not need such a powerful CPU or graphics card.

It is important to know which platform you require before proceeding, because everything else will have to be built around it. By platform I mean your CPU: this is the main brain of your PC, and the motherboard and other components have to be chosen around it, as their type depends on which CPU you pick. As most stores do, I treat the CPU as defining the machine itself, with the rest as the components of the machine.

Once you have chosen which platform is right for you, you can begin to look at which components you wish to buy; there are many online retailers that will offer you great deals on PC components. Once you have all of your components chosen, run down the checklist again to make sure you have everything, and be sure to confirm that all of your components are compatible with each other. This usually comes down to the motherboard: every component goes through the motherboard, so please don't order until you know your components are compatible. Again I stress, it's best not to start until you have all the equipment, so you can keep your mind on track. Once you have everything you need in terms of components, you will need a few different tools handy. Personally I always have the following to hand when building a PC:

·      Ratchet screwdriver (bits for more than one screw type will probably be necessary)

·      Anti-Static wrist band

·      Pliers

·      One of those fancy things that pick up small screws if you drop them (I'd appreciate the name of this please =) ).

·      Lots of Coffee (for installing windows and Drivers)

The anti-static wrist band is a very good idea, because the slightest bit of static electricity could ruin a CPU or some RAM in an instant. You may also be required to pull steel blanking plates out of the case, and this is where it is safer to use pliers than your hands. You can remove the plates with your hands, but the edges of the case are normally sharp, and if an injury can be avoided it's always worth it.


Motherboard CPU and RAM

Now you have chosen which platform you are going to base your PC around, we have to make sure all the components are compatible with each other. The main compatibility issue is between the motherboard and CPU. Motherboards are designed with one type of CPU in mind: if you are using an Athlon XP, you must have a motherboard which supports that CPU type. Different CPUs have different numbers of pins and are different sizes, so they are physically as well as technologically different. Once you have established that you have the right type of motherboard in mind, we can move on.

We will now move on to the building of the PC. Your first job is to check a few things in the motherboard manual; these are important items and should be relatively easy to find. You need to find out how to set up your motherboard to take your specific CPU. You will either need to change a few jumpers on the motherboard in order to set the voltage, clock multiplier and FSB (Front Side Bus) speed, or your motherboard could be jumperless, in which case you will set these in the BIOS after you have successfully booted up your computer. The final option is that your motherboard is fitted with DIP switches; these are rather like jumpers in that you will have to set each switch on or off in the combination that is correct for your CPU.

Generally speaking you should be able to tell by looking at your board: a board that uses jumper settings will have lots of these little removable connectors clearly visible on the board. Jumperless motherboards may only have a few, which you should probably not have to touch. DIP switches are usually located at the edges of the motherboard. Your manual will tell you in any case.

It is important, if you do not have a BIOS-controlled CPU setup, to set the jumpers/switches correctly before you mount the motherboard into the case; you will find it quite difficult to change them once other components are in the machine.

Once you have set the correct settings for your CPU, you can fit the CPU to the motherboard. Be sure that the CPU is oriented correctly so as not to damage it. If you are using a slot CPU cartridge it is wise to attach the fan to the cartridge before you slot it into the motherboard; socket CPUs will normally have to be seated before you can attach the fan.

Nobody can stress enough just how important cooling is to a computer, and especially to the CPU. Without adequate cooling your CPU could fry in seconds; an Athlon Thunderbird with an incorrectly applied heatsink has been known to lock a system and fry in two seconds. There is no room for error: you have to be absolutely sure that your heatsink/fan combination is fitted correctly. It may sound as if I am trying to put you off, but I'm not; as long as everything is remembered you will be fine.

Most people recommend a thermal paste on the bottom of your heatsink to provide better heat transfer; the more heat that can be pumped away from your CPU the better. Computer components produce a lot of heat, and if there is one thing these components don't like, it is heat. A decent heatsink and fan will, however, keep the CPU at an acceptable temperature. If you want to know more, we have articles for you right here at PantherProducts: Cooling your CPU covers the different ways to cool your CPU, and Why cool your CPU explains the fundamental reasons behind the need to keep it cool. There are options for extra fans on your CPU, larger fans, larger heatsinks, case fans, extractor fans and blowers, all of which can be used to control the case temperature. Keeping your system cool promotes stability in your entire system. Graphics cards are the next most likely item to fail because of heat. Try to keep components as far apart as possible so certain areas don't get too hot; if that's unavoidable, then again make sure of adequate cooling.

The next thing to do is fit the RAM. You will see where the RAM goes; if you are not sure, consult your motherboard manual. RAM should just slot into place, but make sure it is level and correctly seated, as wrongly seated RAM is a major cause of first-time boot failures. There will be a clip at either side of the RAM slot; if these clips snap into place (not literally snap, of course =) ) it is a good indication that the RAM is correctly seated.

 * It is important to note that mixing memory types in a motherboard is a bad idea, even if slots for two types are available. The extra slots are options for different memory types, not for extra RAM. Different memory speeds and timings will cause major system slowdown even in the best cases; system instability will also be a problem.

Graphics card / The First Boot

We will now fit the motherboard into the case, mounting it on the proper plastic board risers with the screws provided with your case. Keeping the motherboard away from the bottom of the case is important, as contact can short the board and cause hard resets, which is far from helpful. The plastic risers should lift the board enough to keep it safe. Raising the board away from the case also helps when fitting expansion cards, as a board seated too low will often not allow expansion cards to go all the way into their slots.

Once the motherboard is correctly in place, screw the board down with the screws provided with the motherboard. Now we want to insert the graphics card. If your card is an AGP card then you need to get it into the AGP port; this is the only slot it will fit into, and it is usually the darker, brown-coloured slot at the end of the row, closest to the middle of the board. If you have a PCI card then any of the white slots will do. Place the card into the slot and apply a small downward force; this is required to make sure the card is fully seated, as a slightly loose graphics card will not function.

After you have fitted the card and secured it in place, we can begin setting up the computer for our first boot. The motherboard requires power just like everything else in your system, so plug that in first. The PSU (Power Supply Unit) provides a connector for your motherboard; you will see where this goes. Make sure you get the connector the right way round; there will be a clip on one side to help you. You will also need to fit the front-panel LEDs and switches at this point, because one of them connects the power switch at the front to the motherboard and so to the power supply; without this you will not be able to boot. Your motherboard booklet will have the information about the LEDs and exactly which one goes where.

You can then connect your monitor to the back of the graphics card, and give the case and the monitor power by attaching their power leads to the mains socket. After a final check that everything is fine (especially that the CPU fan is connected properly and has power from either the PSU or the fan header on the motherboard), you can switch on the power and your system will boot into the BIOS POST screen. Following this there should be a series of errors along the lines of missing keyboards, hard drives and floppy drives. This is to be expected, as you have not connected them yet. If you get to this stage then everything is fine so far. If not, check all the previous steps and, if necessary, try a few of the components in other machines to check whether they are working correctly. If you get a blank screen, your monitor may not be connected properly; if you get a series of beeps it is probably a badly seated graphics card. You can use our article on BIOS beep codes to help you decipher what the exact problem is.

When everything is working up to this point, disconnect the power from the case. This is very important when using ATX cases, as power continues to flow into the system while the power lead is connected. This is because the power commands on ATX cases and motherboards are software controlled and so need some current in the machine; it allows for functions such as Wake on LAN and waking at a set time, and it also allows software applications to shut down the computer once they have completed their task.

We can now start to add a few more components to the system. I would start with the IDE and floppy cables, before anything else, because these are small, fiddly connectors, and once other components are added to the system it may be difficult to see and fit them. Both the IDE and the floppy cables have a red line down one side; this is to help you plug the cable in the correct way. The red line should go to pin one of the motherboard connector. Check your motherboard manual for the orientation of your IDE and floppy connectors.

If you have ATA-66/100/133 or above cables, you will have to make sure that you connect them in the correct order. They are colour coded to make sure you get them in the correct positions:

·      The Blue connector should be connected to the system board

·      The Black connector should be connected to the master device

·      The Grey Connector should be connected to the slave device    

If you have ATA-33 drives and cables there is no problem, as these can be inserted in any order and into any device, remembering that any device supporting ATA-66 or above will only run at ATA-33 speed when using an ATA-33 cable.


Fitting your drives

After fitting the cables to your motherboard we need to fit the drives they connect to. As a precaution, just rest the drives where you would like them and make sure that the cables reach and that nothing impedes access to the back of the drives. It may sound silly, but some cases have a few problems regarding access to the drives.

It is up to you whether you fit the cable to the drive before or after you fit the drive in place; it depends which you think will be easier for you. Personally I tend to fit the drive first, as long as I have checked the cables will reach. When plugging the IDE cable into the back of the drive the same rule with the red line applies: red line to pin one of the drive's IDE port. There may be a little piece of plastic on the connector that prevents you from plugging it in the wrong way, but this is not always the case. As a rule of thumb, you will probably find that the red line goes to the end where the power cable is connected (which should also have a red wire in it: red to red).

* When screwing in the drives it's normally best to use the screws that come with the drive, as then you can be sure of not damaging the drive's insides. Longer screws can sometimes damage working parts when screwed in too far.

Now fit the floppy drive in the same way and plug the floppy cable into the back, just as you did with the IDE cable. Then fit the power cables to all the drives you have just installed. These should only fit one way, so there should not be a problem installing them.

Now is a good time to boot your computer again to see if everything is still working OK. Plug the power leads back into the computer and turn it on. The hard disk should start to spin up; if so, you can switch the machine off again, as it seems everything is OK.

   Tip

If your hard disk starts to make a whirring noise but nothing is happening, the noise is probably the fan alone and your disk is not spinning up. This is a classic sign of an IDE cable being plugged in the wrong way round.

If it all seems fine, we need to do some BIOS work to configure the hard disk and CD drive etc, so unplug your power lead again and fit the keyboard to the appropriate socket. The chances are you will be using a PS/2 keyboard. Some of you may use a USB one, in which case you may have a hard time setting up, as USB is often only recognised once the OS is running; you may need to at least borrow a PS/2 keyboard, although USB-to-PS/2 converters now ship as standard with most notable manufacturers. You may be using the older AT-style keyboard on an AT socket 7 machine, in which case it will only fit the one round connector. With PS/2 it's a little more complicated, because there will probably be two ports, one for the mouse and one for the keyboard; they should be labelled as to which is which.

To the right is a picture of two PS/2 ports (left) and two USB ports (right). In my experience the top, green PS/2 port is for the mouse and the purple bottom port is for the keyboard; just check with the motherboard manual for the correct port. Plugging the keyboard into the incorrect port will throw up unnecessary errors when booting. The USB ports connect devices that can be exchanged even while the machine is running. You can get USB keyboards, but they are no good for setting up a machine unless you have a PS/2 adapter for them. Once booted up, press the appropriate key to get into the BIOS when prompted, usually Delete, Escape or F1.

Configuring CMOS

Upon entering the BIOS screen, the first option should be "Standard CMOS Setup"; this is what you want. It should already be highlighted, so press Return/Enter and you will be taken to the CMOS screen. The CMOS is where you set up the basic data for your computer to start correctly. Start at the top with the date and time. They may be correct already, but if not, set them using the arrow keys to move and Pg Up and Pg Dn to modify the highlighted item.

Below that will be four rows that read:

             TYPE   SIZE   CYLS   HEAD   PRECOMP   LANDZ   SECTOR   MODE
Pri Master   Auto   0      0      0      0         0       0        Auto
Pri Slave    Auto   0      0      0      0         0       0        Auto
Sec Master   Auto   0      0      0      0         0       0        Auto
Sec Slave    Auto   0      0      0      0         0       0        Auto

The first column shows each of the four available IDE devices on the two IDE channels, primary and secondary. The device you selected as primary master should be your hard disk, or your fastest hard disk if you have more than one. If you are using hard disks with jumpers on the back, be sure they are set correctly (master/slave); most disks come ready-set to master.

You can set the BIOS as shown above to automatically detect the hard disks each time you boot up, so it will detect all the settings for you. However, if you're not planning on changing your hard disks then it's probably a good idea to set this option to "User": it will scan the current disk in that position and fill in the columns for you. If you then change the hard disk on that IDE channel you will have to set the CMOS up again, or you will experience errors.

With CD drives and DVD drives I always leave mine on Auto, although you can set this to CD-ROM and then select your own mode, normally mode 4. Again, make sure your CD drive's master/slave jumper is set accordingly. It will either be connected as the slave of the primary channel, on the same cable as the hard disk, or have its own cable, in which case it should be set as the master of the secondary channel. Be aware that if you have two devices on the same channel, one has to be set as master and one as slave. It is always a good idea to make the fastest component on each channel the master, as both components are limited to the maximum speed of the master. For example, if you have two hard disks, one ATA-100 and one ATA-66, the ATA-100 drive should be the master as it can transfer data faster. Having the ATA-66 drive as master would mean the ATA-100 drive could only transfer data at 66 MB/s.
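The master-sets-the-ceiling rule above can be sketched in a few lines (a toy model of the behaviour the article describes, not how a real IDE controller negotiates modes; speeds in MB/s):

```python
# Toy model: every device on a shared IDE channel is capped at the
# master's maximum transfer rate.
def effective_speed(master_speed, device_speed):
    return min(master_speed, device_speed)

# ATA-66 master paired with an ATA-100 slave: the faster drive is held to 66.
print(effective_speed(66, 100))   # → 66
# ATA-100 master with an ATA-66 slave: each drive runs at its own limit.
print(effective_speed(100, 66))   # → 66
print(effective_speed(100, 100))  # → 100
```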

Below this is the floppy drive configuration. Just set this to the type of floppy drive you have, probably 1.44M, 3.5 in. If you have another floppy drive, set it as floppy drive B; if not, leave that option as "Not Installed".

There may be a virus protection option on this screen. If so, it's best to leave it disabled until your computer has been set up, as it sometimes causes errors when installing Windows and you don't want to waste any more time at that point. Once Windows has been installed it should be OK to turn this feature back on. If you are worried about viruses, though, I recommend you get software virus protection from a reputable company, which can scan the boot sector as well as everywhere else.

Expansion Slots

Exit the CMOS screen now, then select the "Exit and Save" option to store the data you just entered. Then you can switch off the machine again, remembering to remove the power lead. There are now a few more things to be placed in the machine before we can finally close the case. The expansion cards should now be fitted. This should be one of the easier tasks of the lot: the cards should simply slot into place, and you only have to decide which slot each goes into, either an ISA, PCI or AGP.

Above - a PCI slot, used for most internal peripherals and components; you will have more of these slots than any of the others. This slot is used for sound cards, modems, DVD decoders, TV cards, some graphics cards, network cards and so on.

 

The AGP (Accelerated Graphics Port) is used solely for graphics cards. It provides a faster bus speed than the PCI slot, which is why it's so useful for graphics cards, which demand more and more every year. The older ISA slots are slowly fading out now but are still used by components such as modems and network cards; it's wise to choose PCI if buying anything new, because of the risk of ISA disappearing completely.

Just put all your expansion cards in the correct slots and screw them in tightly, checking that they are all seated correctly. If you have a sound card you should have an audio cable coming from it. This cable sends CD audio from your CD-ROM directly to your sound card; there will be a socket on the back of your CD drive where it fits.

That should be everything now, but don't screw the case on just yet. Fit everything together again, including the keyboard and the mouse, connect the power lead and boot up the computer. This time check that the LEDs are working correctly as well: the green light should be constantly on and the red light should flicker as and when there is hard disk activity. If all is well, reset the computer to check that the reset button is working OK, then insert your boot disk (we will assume you are using MS Windows from now on - if this is not the case, consult your OS manual for instructions on installing). The boot disk contains the program we need to set up your hard disk ready for receiving information: FDISK.

FDISK is a hard disk partitioning utility that sets up partitions on your hard disk and also creates the boot sector, logical drives and so on. Run FDISK by typing fdisk at the command prompt (A:\). You should enable large hard disk support and then create a primary DOS partition. If you want more than one partition, set this to the percentage of the drive you want the primary partition to be; if you want two equal partitions, set this to 50% and then create a logical drive with the remaining 50%. If you want the drive to have one partition, set this to 100%. FDISK will then ask you to restart the computer. Once the computer has rebooted you will need to format the disk: to do this, type format c: at the command prompt. You will see a warning that all data will be lost; this is OK, as your hard disk doesn't contain any data at the moment. Proceed with the format and enter the details it asks for once finished.
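The percentage arithmetic is straightforward; as a rough illustration (the 20 GB drive size here is a made-up example, not anything FDISK assumes):

```python
# Splitting a drive into partitions by percentage, as FDISK's prompts do.
def partition_sizes(disk_mb, percentages):
    assert sum(percentages) <= 100, "partitions cannot exceed the disk"
    return [disk_mb * p // 100 for p in percentages]

print(partition_sizes(20480, [100]))     # → [20480]  one partition, whole drive
print(partition_sizes(20480, [50, 50]))  # → [10240, 10240]  two equal halves
```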

You are now ready to install Windows!

Installing Windows

I'm not going to spend too much time explaining how to install Windows as it's pretty much self-explanatory. You simply need to boot up the computer with the Windows boot disk in the floppy drive, let the CD drivers load, then switch to the CD drive's letter by typing X: at the command prompt, where X is the letter of your CD drive. Drive letters depend on how many devices you have in your machine and how many partitions your hard disks have.

Remember that when you load the windows boot disk a RAM drive is also added so all your drives except for drive C are moved down a letter. Once you have the CD drive at the command prompt type setup and wait for the computer to run a few checks before continuing the windows setup.
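The letter shuffle can be sketched as follows (a simplified model assuming a single hard disk partition fixed at C:, so lettering continues from D:):

```python
# The boot disk's RAM drive takes the next free letter, pushing every
# later device (like the CD drive) down one.
def letters_with_ram_drive(devices):
    order = ["RAM drive"] + devices
    return {dev: chr(ord("D") + i) for i, dev in enumerate(order)}

print(letters_with_ram_drive(["CD drive"]))
# → {'RAM drive': 'D', 'CD drive': 'E'}
```

So a CD drive that is normally D: appears as E: while the boot disk is loaded, which is why you type E: (or similar) rather than D: to reach the Windows CD.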

At this point windows will explain everything to you so there is no need to go in depth. The only thing I would recommend is that if this is your first install, use the typical setup option because you can add and remove items later on.

While setup is adding files to your computer it is a good idea to have all the drivers for your devices to hand. Windows will install a few drivers for you, but it's still better to have the drivers from the manufacturer, as you know they have been thoroughly tested. Drivers you may require are:

·      Sound card

·      Graphics card

·      Modem

·      Motherboard

·      Mouse and keyboard (if they have extra features)

·      DVD decoder

·      Network card

Items such as your monitor, hard disks and CD drives will be assigned drivers by Windows and probably need not be touched. If your monitor is displayed as an unknown monitor, it's best to try the Plug and Play driver by selecting it from the standard monitor drivers after the installation of Windows.

When Windows is ready to reboot the computer it will inform you; after a few more settings it will start asking you for some of the driver disks. Simply install all the drivers needed and shut down the computer. We will now alter just a couple of BIOS settings and then your PC is ready for your own customisation.

BIOS Settings

Not much of the BIOS should be altered by unqualified people, but a few items need to be changed for customisation purposes. Most of the options we will be changing are in the Standard BIOS Features setup, so when booting up the computer press the appropriate key to get into the BIOS; if your BIOS doesn't have a soft menu for CPU setup then Standard BIOS Features should be the second option on the list.

The first item we will change is the boot sequence. Here you can change the order in which the computer searches for boot devices. Most people will have the floppy drive (A) as the first boot device and the hard disk (C) as the second, then any other bootable devices. Because not everyone's computer is the same, there are many different ways to set this up: some BIOSes let you set four boot devices in order, others only two or three.

           Tip

It is not a good idea to set the hard disk that Windows is installed on as the first boot device, because if Windows has a problem the PC will keep trying to boot from it. The best idea is to set the floppy drive as boot device 1 and the hard disk as boot device 2.

Set the boot order that you like. If your machine can boot from CD-ROM and you have a bootable CD, you may set the CD drive before the hard disk; if you can boot off a SCSI device then set that up as well. Anything that can be booted from should be included, just in case you ever need it. You may also come across an option that says "try other boot devices"; you may as well set this to yes so you don't have to worry about it later. Remember that this option will not help you if you have placed the hard disk before your other boot devices, as the PC will still boot from the hard disk first, even if Windows is corrupt.
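The fallback behaviour described above can be sketched as follows (a toy model: the BIOS tries each configured device in order and uses the first one that actually has something bootable in it):

```python
# The BIOS walks the configured boot order and stops at the first
# device that responds with bootable media.
def first_boot_device(boot_order, bootable):
    for device in boot_order:
        if device in bootable:
            return device
    return None  # nothing to boot from

# Floppy first, hard disk second: with no disk in the floppy drive,
# the machine falls through to the hard disk.
print(first_boot_device(["floppy", "hard disk"], {"hard disk"}))
# With a boot floppy inserted, the floppy wins.
print(first_boot_device(["floppy", "hard disk"], {"floppy", "hard disk"}))
```

This is also why putting the hard disk first defeats the purpose: it always responds, so the later devices are never tried.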

Next on the list is the Quick Boot option. This basically asks whether or not you want the computer to run a full test on each boot. If you would rather have a quick boot then set this to yes, if you want a full system test then set this to no.

Now look down the list for anything that refers directly to your computer such as PS/2 support, if you have PS/2 devices make sure this is turned on, the same with USB devices. 

You may have to alter a few things in the Chipset Features section, but be careful, as some settings in there will cause your computer to function incorrectly if set wrongly. Read your motherboard documentation to see which settings are safe to change.

How do Optical Mice Work?

We have all seen the fantastic progression in mouse technology that is the introduction of optical technology, doing away with the mouse ball. You may, however, be wondering how exactly the optical mouse works. There are variations in the technologies from different manufacturers, but the principles are all the same.

The "Eye"

The main component of the optical mouse is the optical "eye". Microsoft was the first to come up with this technology and named it the IntelliEye. What the IntelliEye does is scan the surface under the mouse. The IntelliEye itself is a single LED (Light Emitting Diode) which bounces light off the surface. It also has a very tiny camera which takes pictures of the surface. The original mouse by Microsoft scanned the surface 1,500 times a second; they have progressed since then.

The DSP (Digital Signal Processor)

The digital signal processor receives the images taken by the camera and analyses them for differences. It can pick up very fine differences between the pictures, and from these it can determine how far the mouse has moved across your desk and at what speed. Coordinates are then sent to the computer, which moves the cursor on the screen. The DSP can detect patterns and analyse them at very high speed: the original IntelliMouse Explorer from Microsoft had a DSP running at 18 MIPS (million instructions per second). This kind of speed is needed because if the DSP did not react as quickly as you move, the cursor would move badly and jerkily on the screen.
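The kind of comparison the DSP performs can be sketched with simple block matching: take two successive surface images and find the shift that best aligns them. This is a rough illustration only (pure Python, with tiny made-up 4x4 "images"; a real sensor uses much larger frames and far faster hardware):

```python
# Find the (dx, dy) shift that minimises the mean squared difference
# between two frames - a brute-force sketch of optical motion estimation.
def estimate_motion(prev, curr, max_shift=2):
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = n = 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        err += (prev[y][x] - curr[sy][sx]) ** 2
                        n += 1
            if n and err / n < best_err:
                best_err, best = err / n, (dx, dy)
    return best

# A textured patch photographed twice; the second frame is the first
# shifted one pixel to the right, so the mouse moved one unit.
frame1 = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
frame2 = [[0, 1, 2, 3], [0, 5, 6, 7], [0, 9, 10, 11], [0, 13, 14, 15]]
print(estimate_motion(frame1, frame2))  # → (1, 0)
```

Multiplying the recovered shift by the frame rate gives speed, which is exactly the information sent to the computer as cursor movement.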

Problematic Surfaces

The camera incorporated in these mice does have certain limitations, however, which are always being worked on. The first is the type of surface you use the mouse on. Surfaces that can cause problems are glass, mirrors and some 3-D mouse pads. The reason the mouse has problems with mirrors is that, because a camera is used, the image sent to the DSP is always a reflection and so is rendered useless.

Glass is another problematic surface, but for a different reason. Whereas a mirror reflects the image, glass is a near-perfect transparent material and so doesn't have enough imperfections or patterns for the DSP to pick up on. Obviously with glass there would be an image below it to analyse, but if your table were made of glass then the surface below it would be far too distant to analyse. If you are using a glass table you will simply need a mouse mat.

Dual Sensors

With higher and higher expectations of the technology, dual sensors have arrived. Working together on the same surface, two sensors can be even more accurate when it comes to analysing patterns and movement. The sensors are positioned at an angle to each other to give two completely different views of the surface you are working on. This technology has been used effectively to allow faster movements across the desktop with your mouse, as well as increased precision for very slow and precise movements such as those used in drawing or graphic design programs.

Sensor Size

The size of the sensor, or to put it more accurately the size of the area that is scanned, can make a difference to the accuracy of the tracked movements. As the Logitech website puts it: if you are looking at an image through a window, the bigger the window, the more detail you can extract from the image. It's the same with the optical mouse sensor. If your mouse scans twice the area of another mouse, the larger image will produce greater accuracy of movement, as it has more patterns to pick up on and track.

Floppy Disks

A soft magnetic disk. It is called floppy because it flops if you wave it (at least, the 5¼-inch variety does). Unlike most hard disks, floppy disks (often called floppies or diskettes) are portable, because you can remove them from a disk drive. Disk drives for floppy disks are called floppy drives. Floppy disks are slower to access than hard disks and have less storage capacity, but they are much less expensive. And most importantly, they are portable.

Floppies come in three basic sizes:

·  8-inch: The first floppy disk design, invented by IBM in the late 1960s and used in the early 1970s as first a read-only format and then as a read-write format. The typical desktop/laptop computer does not use the 8-inch floppy disk.

·  5¼-inch: The common size for PCs made before 1987, and the successor to the 8-inch floppy disk. This type of floppy is generally capable of storing between 100K and 1.2MB (megabytes) of data. The most common sizes are 360K and 1.2MB.

·  3½-inch: Floppy is something of a misnomer for these disks, as they are encased in a rigid envelope. Despite their small size, microfloppies have a larger storage capacity than their cousins -- from 400K to 1.4MB of data. The most common sizes for PCs are 720K (double-density) and 1.44MB (high-density). Macintoshes support disks of 400K, 800K, and 1.44MB.

How Computer Monitors Work

A computer display is a marvelous thing. An unassuming dark gray surface can suddenly transform into an artist's canvas, an engineer's gauges, a writer's page or your very own window to both the real world and a huge range of artificial worlds!

Because we use them daily, many of us have a lot of questions about our displays and may not even realize it. What does "aspect ratio" mean? What is dot pitch? How much power does a display use? What is the difference between CRT and LCD? What does "refresh rate" mean?

By the end of the article, you will be able to understand your current display and also make better decisions when purchasing your next one.

The Basics
Often referred to as a monitor when packaged in a separate case, the display is the most-used output device on a computer. The display provides instant feedback by showing you text and graphic images as you work or play. Most desktop displays use a cathode ray tube (CRT), while portable computing devices such as laptops incorporate liquid crystal display (LCD), light-emitting diode (LED), gas plasma or other image projection technology. Because of their slimmer design and lower energy consumption, monitors using LCD technologies are beginning to replace the venerable CRT on many desktops.

When purchasing a display, you have a number of decisions to make. These decisions affect how well your display will perform for you, how much it will cost and how much information you will be able to view with it. Your decisions include:

  • Display technology - Currently, the choices are mainly between CRT and LCD technologies.
  • Cable technology - VGA and DVI are the two most common.
  • Viewable area (usually measured diagonally)
  • Aspect ratio and orientation (landscape or portrait)
  • Maximum resolution
  • Dot pitch
  • Refresh rate
  • Color depth
  • Amount of power consumption

In the following sections we will talk about each of these areas so that you can completely understand how your monitor works!

Display Technology Background
Displays have come a long way since the blinking green monitors in text-based computer systems of the 1970s. Just look at the advances made by IBM over the course of a decade:

  • In 1981, IBM introduced the Color Graphics Adapter (CGA), which was capable of rendering four colors, and had a maximum resolution of 320 pixels horizontally by 200 pixels vertically.
  • IBM introduced the Enhanced Graphics Adapter (EGA) display in 1984. EGA allowed up to 16 different colors and increased the resolution to 640x350 pixels, improving the appearance of the display and making it easier to read text.
  • In 1987, IBM introduced the Video Graphics Array (VGA) display system. Most computers today support the VGA standard and many VGA monitors are still in use.
  • IBM introduced the Extended Graphics Array (XGA) display in 1990, offering 800x600 pixel resolution in true color (16.8 million colors) and 1,024x768 resolution in 65,536 colors.

Multi-scanning Monitors

If you have been around computers for more than a decade, then you probably remember when NEC announced the MultiSync monitor. Up to that point, most monitors only understood one frequency, which meant that the monitor operated at a single fixed resolution and refresh rate. You had to match your monitor with a graphics adapter that provided that exact signal or it wouldn't work.

The introduction of NEC MultiSync technology started a trend towards multi-scanning monitors. This technology allows a monitor to understand any frequency sent to it within a certain bandwidth. The benefit of a multi-scanning monitor is that you can change resolutions and refresh rates without having to purchase and install a new graphics adapter or monitor each time. Because of the obvious advantage of this approach, nearly every monitor you buy today is a multi-scanning monitor.

Most displays sold today support the Ultra Extended Graphics Array (UXGA) standard. In the next section, you'll learn about UXGA.

Display Technology: UXGA
UXGA can support a palette of up to 16.8 million colors and resolutions of up to 1600x1200 pixels, depending on the video memory of the graphics card in your computer. The maximum resolution normally depends on the number of colors displayed. For example, your card might require that you choose between 16.8 million colors at 800x600, or 65,536 colors at 1600x1200.

A typical UXGA adapter takes the digital data sent by application programs, stores it in video random access memory (VRAM) or some equivalent, and uses a digital-to-analog converter (DAC) to convert it to analog data for the display scanning mechanism. In the following section, we'll discuss what happens once the analog data is ready for transmission.

Display Technology: VGA
Once the display information is in analog form, it is sent to the monitor through a VGA cable. See the diagram below:

 

 1: Red out        6: Red return (ground)    11: Monitor ID 0 in
 2: Green out      7: Green return (ground)  12: Monitor ID 1 in or data from display
 3: Blue out       8: Blue return (ground)   13: Horizontal Sync out
 4: Unused         9:                        14: Vertical Sync out
 5: Ground        10: Sync return (ground)   15: Monitor ID 3 in or data clock

 

You can see that a VGA connector like this has three separate lines for the red, green and blue color signals, and two lines for horizontal and vertical sync signals. In a normal television, all of these signals are combined into a single composite video signal. The separation of the signals is one reason why a computer monitor can have so many more pixels than a TV set.

Since today's VGA adapters do not fully support the use of digital monitors, a new standard, Digital Video Interface (DVI) has been designed for this purpose.

Display Technology: DVI
Because VGA technology requires that the signal be converted from digital to analog for transmission to the monitor, a certain amount of degradation occurs. DVI keeps data in digital form from the computer to the monitor, virtually eliminating signal loss.

The DVI specification is based on Silicon Image's Transition Minimized Differential Signaling (TMDS) and provides a high-speed digital interface. TMDS takes the signal from the graphics adapter, determines the resolution and refresh rate that the monitor is using and spreads the signal out over the available bandwidth to optimize the data transfer from computer to monitor. DVI is technology-independent. Essentially, this means that DVI is going to perform properly with any display and graphics card that is DVI compliant. If you buy a DVI monitor, make sure that you have a video adapter card that can connect to it.

Viewable Area
Two measures describe the size of your display: the aspect ratio and the screen size. Most computer displays, like most televisions, have an aspect ratio of 4:3 right now. This means that the ratio of the width of the display screen to the height is 4 to 3. The other aspect ratio in common use is 16:9. Used in cinematic film, 16:9 was not adopted when the television was first developed, but has always been common in the manufacture of alternative display technologies such as LCD. With widescreen DVD movies steadily increasing in popularity, most TV manufacturers now offer 16:9 displays.

The display includes a projection surface, commonly referred to as the screen. Screen sizes are normally measured in inches from one corner to the corner diagonally across from it. This diagonal measuring system actually came about because the early television manufacturers wanted to make the screen size of their TVs sound more impressive. Because the listed size is measured from the inside beveled edges of the display casing, make sure you ask what the viewable screen size is. This will usually be somewhat less than the stated screen size.

Popular screen sizes are 15, 17, 19 and 21 inches. Notebook screen sizes are usually somewhat smaller, typically ranging from 12 to 15 inches. Obviously, the size of the display will directly affect resolution. The same pixel resolution will be sharper on a smaller monitor and fuzzier on a larger monitor because the same number of pixels is being spread out over a larger number of inches. An image on a 21-inch monitor with a 640x480 resolution will not appear nearly as sharp as it would on a 15-inch display at 640x480.
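The sharpness claim above is easy to quantify: pixel density (pixels per inch, PPI) is the diagonal resolution divided by the diagonal screen size. A quick check for 640x480 on a 15-inch versus a 21-inch screen (using nominal, not viewable, sizes):

```python
import math

# PPI = length of the pixel diagonal / length of the screen diagonal.
def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(640, 480, 15), 1))  # → 53.3
print(round(ppi(640, 480, 21), 1))  # → 38.1
```

The same 640x480 image is spread over roughly 40 percent fewer pixels per inch on the 21-inch screen, which is why it looks fuzzier.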

Maximum Resolution and Dot Pitch
Resolution refers to the number of individual dots of color, known as pixels, contained on a display. Resolution is typically expressed by identifying the number of pixels on the horizontal axis (rows) and the number on the vertical axis (columns), such as 640x480. The monitor's viewable area (discussed in the previous section), refresh rate and dot pitch all directly affect the maximum resolution a monitor can display.

Dot Pitch
Briefly, the dot pitch is the measure of how much space there is between a display's pixels. When considering dot pitch, remember that smaller is better. Packing the pixels closer together is fundamental to achieving higher resolutions.

A display normally can support resolutions that match the physical dot (pixel) size as well as several lesser resolutions. For example, a display with a physical grid of 1280 rows by 1024 columns can obviously support a maximum resolution of 1280x1024 pixels. It usually also supports lower resolutions such as 1024x768, 800x600, and 640x480.


Refresh Rate
In monitors based on CRT technology, the refresh rate is the number of times that the image on the display is drawn each second. If your CRT monitor has a refresh rate of 72 Hertz (Hz), then it cycles through all the pixels from top to bottom 72 times a second. Refresh rates are very important because they control flicker, and you want the refresh rate as high as possible. Too few cycles per second and you will notice a flickering, which can lead to headaches and eye strain.

Televisions have a lower refresh rate than most computer monitors. To help adjust for the lower rate, they use a method called interlacing. This means that the electron gun in the television's CRT will scan through all the odd rows from top to bottom, then start again with the even rows. The phosphors hold the light long enough that your eyes are tricked into thinking that all the lines are being drawn together.
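The interlaced scan order can be sketched in a couple of lines (numbering the rows from 1, as a minimal illustration of the odd-field/even-field split):

```python
# Draw all the odd-numbered rows top to bottom, then all the even rows.
def interlaced_order(rows):
    odd_field = list(range(1, rows + 1, 2))
    even_field = list(range(2, rows + 1, 2))
    return odd_field + even_field

print(interlaced_order(8))  # → [1, 3, 5, 7, 2, 4, 6, 8]
```

Each half of the list is one "field"; two fields together make one full frame, which is how a television draws a complete picture at half the line rate.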

Because your monitor's refresh rate depends on the number of rows it has to scan, it limits the maximum possible resolution. A lot of monitors support multiple refresh rates, usually dependent on the resolution you have chosen. Keep in mind that there is a tradeoff between flicker and resolution, and then pick what works best for you.

Color Depth
The combination of the display modes supported by your graphics adapter and the color capability of your monitor determine how many colors can be displayed. For example, a display that can operate in SuperVGA (SVGA) mode can display up to 16,777,216 (usually rounded to 16.8 million) colors because it can process a 24-bit-long description of a pixel. The number of bits used to describe a pixel is known as its bit depth.

With a 24-bit bit depth, 8 bits are dedicated to each of the three additive primary colors -- red, green and blue. This bit depth is also called true color because it can produce the 10,000,000 colors discernible to the human eye, while a 16-bit display is only capable of producing 65,536 colors. Displays jumped from 16-bit color to 24-bit color because working in 8-bit increments makes things a whole lot easier for developers and programmers.

Simply put, color bit depth refers to the number of bits used to describe the color of a single pixel. The bit depth determines the number of colors that can be displayed at one time. Take a look at the following chart to see the number of colors different bit depths can produce:

Bit-Depth   Number of Colors
    1       2 (monochrome)
    2       4 (CGA)
    4       16 (EGA)
    8       256 (VGA)
   16       65,536 (High Color, XGA)
   24       16,777,216 (True Color, SVGA)
   32       16,777,216 (True Color + Alpha Channel)
You will notice that the last entry in the chart is for 32 bits. This is a special graphics mode used by digital video, animation and video games to achieve certain effects. Essentially, 24 bits are used for color and the other 8 bits are used as a separate layer for representing levels of translucency in an object or image.
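The chart is just powers of two: n bits can encode 2 to the power n distinct values (and the 32-bit mode still uses only 24 of its bits for color, which is why its color count matches 24-bit). A quick check:

```python
# n bits of color encode 2**n distinct colors.
for bits in (1, 2, 4, 8, 16, 24):
    print(f"{bits:>2}-bit color: {2 ** bits:,} colors")
```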

Nearly every monitor sold today can handle 24-bit color using a standard VGA connector, as discussed previously.

Power Consumption
Power consumption varies greatly with different technologies. CRTs are somewhat power-hungry, at about 110 watts for a typical display, especially when compared to LCDs, which average between 30 and 40 watts.

In a typical home computer setup with a CRT-based display, the monitor accounts for over 80 percent of the electricity used! Because most users don't interact with the computer much of the time it is on, the U.S. government initiated the Energy Star program in 1992. Energy Star-compliant equipment monitors user activity and suspends non-critical processes, such as maintaining a visual display, until you move the mouse or tap the keyboard. According to the EPA, if you use a computer system that is Energy Star compliant, it could save you approximately $400 a year on your electric bill! Similarly, because of the difference in power usage, an LCD monitor might cost more upfront but end up saving you money in the long run.

Monitor Trends: Flat Panels
CRT technology is still the most prevalent system in desktop displays. Because standard CRT technology requires a certain distance between the beam projection device and the screen, monitors employing this type of display technology tend to be very bulky. Other technologies make it possible to have much thinner displays, commonly known as flat-panel displays.


Photo courtesy Sony
Sony flat-panel display

Liquid crystal display (LCD) technology works by blocking light rather than creating it, while light-emitting diode (LED) and gas plasma work by lighting up display screen positions based on the voltages at different grid intersections. LCDs require far less energy than LED and gas plasma technologies and are currently the primary technology for notebook and other mobile computers. As flat-panel displays continue to grow in screen size and improve in resolution and affordability, they will gradually replace CRT-based displays.


How Computer Keyboards Work

The part of the computer that we come into most contact with is probably the piece that we think about the least. But the keyboard is an amazing piece of technology. For instance, did you know that the keyboard on a typical computer system is actually a computer itself?


Your basic Windows keyboard

At its essence, a keyboard is a series of switches connected to a microprocessor that monitors the state of each switch and initiates a specific response to a change in that state. In this article you will learn more about this switching action, about the different types of keyboards, how they connect and talk to your computer, and what the components of a keyboard are.

Types of Keyboards
Keyboards have changed very little in layout since their introduction. In fact, the most common change has simply been the natural evolution of adding more keys that provide additional functionality.

The most common keyboards are:

  • 101-key Enhanced keyboard
  • 104-key Windows keyboard
  • 82-key Apple standard keyboard
  • 108-key Apple Extended keyboard

Portable computers such as laptops quite often have custom keyboards that have slightly different key arrangements than a standard keyboard. Also, many system manufacturers add specialty buttons to the standard layout. A typical keyboard has four basic types of keys:

  • Typing keys
  • Numeric keypad
  • Function keys
  • Control keys

The typing keys are the section of the keyboard that contain the letter keys, generally laid out in the same style that was common for typewriters. This layout, known as QWERTY for the first six letters in the layout, was originally designed to slow down fast typists by making the arrangement of the keys somewhat awkward! The reason that typewriter manufacturers did this was because the mechanical arms that imprinted each character on the paper could jam together if the keys were pressed too rapidly. Because it has been long established as a standard, and people have become accustomed to the QWERTY configuration, manufacturers developed keyboards for computers using the same layout, even though jamming is no longer an issue. Critics of the QWERTY layout have adopted another layout, Dvorak, that places the most commonly used letters in the most convenient arrangement.


An Apple Extended keyboard.

The numeric keypad is a part of the natural evolution mentioned previously. As the use of computers in business environments increased, so did the need for speedy data entry. Since a large part of the data was numbers, a set of 17 keys was added to the keyboard. These keys are laid out in the same configuration used by most adding machines and calculators, to facilitate the transition to computer for clerks accustomed to these other machines.

In 1986, IBM extended the basic keyboard with the addition of function and control keys. The function keys, arranged in a line across the top of the keyboard, could be assigned specific commands by the current application or the operating system. Control keys provided cursor and screen control. Four keys arranged in an inverted T formation between the typing keys and numeric keypad allow the user to move the cursor on the display in small increments. The control keys allow the user to make large jumps in most applications. Common control keys include:

  • Home
  • End
  • Insert
  • Delete
  • Page Up
  • Page Down
  • Control (Ctrl)
  • Alternate (Alt)
  • Escape (Esc)

The Windows keyboard adds some extra control keys: two Windows or Start keys, and an Application key. The Apple keyboards are specific to Apple Mac systems.

Inside the Keyboard
The processor in a keyboard has to understand several things that are important to the utility of the keyboard, such as:

  • Position of the key in the key matrix.
  • The amount of bounce and how to filter it.
  • The speed at which to transmit the typematics.


The microprocessor and controller circuitry of a keyboard.

The key matrix is the grid of circuits underneath the keys. In all keyboards except for capacitive ones, each circuit is broken at the point below a specific key. Pressing the key bridges the gap in the circuit, allowing a tiny amount of current to flow through. The processor monitors the key matrix for signs of continuity at any point on the grid. When it finds a circuit that is closed, it compares the location of that circuit on the key matrix to the character map in its ROM. The character map is basically a comparison chart for the processor that tells it what the key at x,y coordinates in the key matrix represents. If more than one key is pressed at the same time, the processor checks to see if that combination of keys has a designation in the character map. For example, pressing the a key by itself would result in a small letter "a" being sent to the computer. If you press and hold down the Shift key while pressing the a key, the processor compares that combination with the character map and produces a capital letter "A."


A look at the key matrix.

The character map in the keyboard can be superseded by a different character map provided by the computer. This is done quite often in languages whose characters do not have English equivalents. Also, there are utilities for changing the character map from the traditional QWERTY to DVORAK or another custom version.

Keyboards rely on switches that cause a change in the current flowing through the circuits in the keyboard. When the key presses the keyswitch against the circuit, there is usually a small amount of vibration between the surfaces, known as bounce. The processor in a keyboard recognizes that this very rapid switching on and off is not caused by you pressing the key repeatedly. Therefore, it filters all of the tiny fluctuations out of the signal and treats it as a single keypress.

If you continue to hold down a key, the processor determines that you wish to send that character repeatedly to the computer. This is known as typematics. In this process, the rate at which the character repeats can normally be set in software, typically ranging from 30 characters per second (cps) to as few as two cps.

Keyboard Technologies
Keyboards use a variety of switch technologies. It is interesting to note that we generally like to have some audible and tactile response to our typing on a keyboard. We want to hear the keys "click" as we type, and we want the keys to feel firm and spring back quickly as we press them. Let's take a look at these different technologies:

  • Rubber dome mechanical
  • Capacitive non-mechanical
  • Metal contact mechanical
  • Membrane mechanical
  • Foam element mechanical


This keyboard uses rubber dome switches.

Probably the most popular switch technology in use today is rubber dome. In these keyboards, each key sits over a small, flexible rubber dome with a hard carbon center. When the key is pressed, a plunger on the bottom of the key pushes down against the dome. This causes the carbon center to push down also, until it presses against a hard flat surface beneath the key matrix. As long as the key is held, the carbon center completes the circuit for that portion of the matrix. When the key is released, the rubber dome springs back to its original shape, forcing the key back up to its at-rest position.

Rubber dome switch keyboards are inexpensive, have pretty good tactile response and are fairly resistant to spills and corrosion because of the rubber layer covering the key matrix. Membrane switches are very similar in operation to rubber dome keyboards. A membrane keyboard does not have separate keys though. Instead, it has a single rubber sheet with bulges for each key. You have seen membrane switches on many devices designed for heavy industrial use or extreme conditions. Because they offer almost no tactile response and can be somewhat difficult to manipulate, these keyboards are seldom found on normal computer systems.

Capacitive switches are considered to be non-mechanical because they do not simply complete a circuit like the other keyboard technologies. Instead, current is constantly flowing through all parts of the key matrix. Each key is spring-loaded, and has a tiny plate attached to the bottom of the plunger. When a key is pressed, this plate is brought very close to another plate just below it. As the two plates are brought closer together, it affects the amount of current flowing through the matrix at that point. The processor detects the change and interprets it as a keypress for that location. Capacitive switch keyboards are expensive, but do not suffer from corrosion and have a longer life than any other keyboard. Also, they do not have problems with bounce since the two surfaces never come into actual contact.

Metal contact and foam element keyboards are not as common as they used to be. Metal contact switches simply have a spring-loaded key with a strip of metal on the bottom of the plunger. When the key is pressed, the metal strip connects the two parts of the circuit. The foam element switch is basically the same design but with a small piece of spongy foam between the bottom of the plunger and the metal strip, providing for a better tactile response. Both technologies have good tactile response, make satisfyingly audible "clicks" and are inexpensive to produce. The problem is that the contacts tend to wear out or corrode faster than on keyboards that use other technologies. Also, there is no barrier that prevents dust or liquids from coming in direct contact with the circuitry of the key matrix.

From the Keyboard to the Computer
As you type, the processor in the keyboard is analyzing the key matrix and determining what characters to send to the computer. It maintains these characters in a buffer of memory that is usually about 16 bytes large. It then sends the data in a stream to the computer via some type of connection.

A PS/2 type keyboard connector.

The most common keyboard connectors are:

  • 5-pin DIN (Deutsche Industrie Norm) connector
  • 6-pin IBM PS/2 mini-DIN connector
  • 4-pin USB (Universal Serial Bus) connector
  • internal connector (for laptops)

Normal DIN connectors are rarely used anymore. Most computers use the mini-DIN PS/2 connector, but an increasing number of new systems are dropping the PS/2 connectors in favor of USB. No matter which type of connector is used, two principal elements are sent through the connecting cable. The first is power for the keyboard. Keyboards require a small amount of power, typically about 5 volts, in order to function. The cable also carries the data from the keyboard to the computer.

The other end of the cable connects to a port that is monitored by the computer's keyboard controller. This is an integrated circuit (IC) whose job is to process all of the data that comes from the keyboard and forward it to the operating system. When the operating system is notified that there is data from the keyboard, a number of things can happen:

  • It checks to see if the keyboard data is a system level command. A good example of this is Ctrl-Alt-Delete on a Windows computer, which initiates a reboot.
  • The operating system then passes the keyboard data on to the current application.
  • The current application understands the keyboard data as an application-level command. An example of this would be Alt - f, which opens the File menu in a Windows application.
  • The current application is able to accept keyboard data as content for the application (anything from typing a document to entering a URL to performing a calculation), or
  • The current application does not accept keyboard data and therefore ignores the information.

Once the keyboard data is identified as either system-specific or application-specific, it is processed accordingly. The really amazing thing is how quickly all of this happens. As I type this article, there is no perceptible time lapse between my fingers pressing the keys and the characters appearing on my monitor. When you think about everything the computer is doing to make each single character appear, it is simply incredible!  

How Computer Mice Work


Mice come in all shapes and sizes. This is an older two-button mouse.

Mice first broke onto the public stage with the introduction of the Apple Macintosh in 1984, and since then they have helped to completely redefine the way we use computers.

Every day of your computing life, you reach out for your mouse whenever you want to move your cursor or activate something. Your mouse senses your motion and your clicks and sends them to the computer so it can respond appropriately.

In this edition of HowStuffWorks, we'll take the cover off of this important part of the human-machine interface and see exactly what makes it tick!

Evolution
It is amazing how simple and effective a mouse is, and it is also amazing how long it took mice to become a part of everyday life. Given that people naturally point at things -- usually before they speak -- it is surprising that it took so long for a good pointing device to develop. Although originally conceived in the 1960s, it took quite some time for mice to become mainstream.

In the beginning there was no need to point because computers used crude interfaces like teletype machines or punch cards for data entry. The early text terminals did nothing more than emulate a teletype (using the screen to replace paper), so it was many years (well into the 1960s and early 1970s) before arrow keys were found on most terminals. Full screen editors were the first things to take real advantage of the cursor keys, and they offered humans the first crude way to point.

Light pens were used on a variety of machines as a pointing device for many years, and graphics tablets, joy sticks and various other devices were also popular in the 1970s. None of these really took off as the pointing device of choice, however.

When the mouse hit the scene attached to the Mac, it was an immediate success. There is something about it that is completely natural. Compared to a graphics tablet, mice are extremely inexpensive and they take up very little desk space. In the PC world, mice took longer to gain ground, mainly because of a lack of support in the operating system. Once Windows 3.1 made Graphical User Interfaces (GUIs) a standard, the mouse became the PC-human interface of choice very quickly.

Inside a Mouse
The main goal of any mouse is to translate the motion of your hand into signals that the computer can use. Almost all mice today do the translation using five components:


The guts of a mouse

  1. A ball inside the mouse touches the desktop and rolls when the mouse moves.


The underside of the mouse's logic board: The exposed portion of the ball touches the desktop.

  2. Two rollers inside the mouse touch the ball. One of the rollers is oriented so that it detects motion in the X direction, and the other is oriented 90 degrees to the first roller so it detects motion in the Y direction. When the ball rotates, one or both of these rollers rotate as well. The following image shows the two white rollers on this mouse:


The rollers that touch the ball and detect X and Y motion

  3. The rollers each connect to a shaft, and the shaft spins a disk with holes in it. When a roller rolls, its shaft and disk spin. The following image shows the disk:


A typical optical encoding disk: This disk has 36 holes around its outer edge.

  4. On either side of the disk there is an infrared LED and an infrared sensor. The holes in the disk break the beam of light coming from the LED so that the infrared sensor sees pulses of light. The rate of the pulsing is directly related to the speed of the mouse and the distance it travels.


A close-up of one of the optical encoders that track mouse motion: There is an infrared LED (clear) on one side of the disk and an infrared sensor (red) on the other.

  5. An on-board processor chip reads the pulses from the infrared sensors and turns them into binary data that the computer can understand. The chip sends the binary data to the computer through the mouse's cord.


The logic section of a mouse is dominated by an encoder chip, a small processor that reads the pulses coming from the infrared sensors and turns them into bytes sent to the computer. You can also see the two buttons that detect clicks (on either side of the wire connector).

In this optomechanical arrangement, the disk moves mechanically, and an optical system counts pulses of light. On this mouse, the ball is 21 mm in diameter. The roller is 7 mm in diameter. The encoding disk has 36 holes. So if the mouse moves 25.4 mm (1 inch), the encoder chip detects 41 pulses of light.

You might have noticed that each encoder disk has two infrared LEDs and two infrared sensors, one on each side of the disk (so there are four LED/sensor pairs inside a mouse). This arrangement allows the processor to detect the disk's direction of rotation. There is a piece of plastic with a small, precisely located hole that sits between the encoder disk and each infrared sensor. It is visible in this photo:


A close-up of one of the optical encoders that track mouse motion: Note the piece of plastic between the infrared sensor (red) and the encoding disk.

This piece of plastic provides a window through which the infrared sensor can "see." The window on one side of the disk is located slightly higher than it is on the other -- one-half the height of one of the holes in the encoder disk, to be exact. That difference causes the two infrared sensors to see pulses of light at slightly different times. There are times when one of the sensors will see a pulse of light when the other does not, and vice versa. Which sensor sees its pulse first tells the processor which way the disk is turning.

The Optical Mouse
With advances in mouse technology, it appears that the venerable wheeled mouse is in danger of extinction. The now-preferred device for pointing and clicking is the optical mouse.


This Microsoft Intellimouse uses optical technology.

Developed by Agilent Technologies and introduced to the world in late 1999, the optical mouse actually uses a tiny camera to take 1,500 pictures every second.

Able to work on almost any surface, the mouse has a small, red light-emitting diode (LED) that bounces light off that surface onto a complementary metal-oxide semiconductor (CMOS) sensor. The CMOS sensor sends each image to a digital signal processor (DSP) for analysis. The DSP, operating at 18 MIPS (million instructions per second), is able to detect patterns in the images and see how those patterns have moved since the previous image. Based on the change in patterns over a sequence of images, the DSP determines how far the mouse has moved and sends the corresponding coordinates to the computer. The computer moves the cursor on the screen based on the coordinates received from the mouse. This happens hundreds of times each second, making the cursor appear to move very smoothly.


In this photo, you can see the LED on the bottom of the mouse.

Optical mice have several benefits over wheeled mice:

  • No moving parts means less wear and a lower chance of failure.
  • There's no way for dirt to get inside the mouse and interfere with the tracking sensors.
  • Increased tracking resolution means smoother response.
  • They don't require a special surface, such as a mouse pad.


Apple has transformed its optical mouse into a modern work of art.

Although LED-based optical mice are fairly recent, another type of optical mouse has been around for over a decade. The original optical-mouse technology bounced a focused beam of light off a highly-reflective mouse pad onto a sensor. The mouse pad had a grid of dark lines. Each time the mouse was moved, the beam of light was interrupted by the grid. Whenever the light was interrupted, the sensor sent a signal to the computer and the cursor moved a corresponding amount.

This kind of optical mouse was difficult to use, requiring that you hold it at precisely the right angle to ensure that the light beam and sensor aligned. Also, damage to or loss of the mouse pad rendered the mouse useless until a replacement pad was purchased. Today's LED-based optical mice are far more user-friendly and reliable.

Data Interface
Most mice in use today use the standard PS/2 type connector, as shown here:


A typical PS/2 connector: Assume that pin 1 is located just to the left of the black alignment pin, and the others are numbered clockwise from there.

These pins have the following functions (refer to the above photo for pin numbering):

  1. Unused
  2. +5 volts (to power the chip and LEDs)
  3. Unused
  4. Clock
  5. Ground
  6. Data

Whenever the mouse moves or the user clicks a button, the mouse sends 3 bytes of data to the computer. The first byte's 8 bits contain:

  1. Left button state (0 = off, 1 = on)
  2. Right button state (0 = off, 1 = on)
  3. 0
  4. 1
  5. X direction (positive or negative)
  6. Y direction
  7. X overflow (the mouse moved more than 255 pulses in 1/40th of a second)
  8. Y overflow

The next 2 bytes contain the X and Y movement values, respectively. These 2 bytes contain the number of pulses that have been detected in the X and Y direction since the last packet was sent.

The data is sent from the mouse to the computer serially on the data line, with the clock line pulsing to tell the computer where each bit starts and stops. Eleven bits are sent for each byte (1 start bit, 8 data bits, 1 parity bit and 1 stop bit). The PS/2 mouse sends on the order of 1,200 bits per second. That allows it to report mouse position to the computer at a maximum rate of about 40 reports per second. If you are moving the mouse very rapidly, the mouse may travel an inch or more in one-fortieth of a second. This is why there is a byte allocated for X and Y motion in the data protocol.

How Microprocessors Work


Photo courtesy International Business Machines Corporation. Unauthorized use not permitted.
CMOS 7S "Copper chip" on a stack of pennies

The computer you are using to read this page uses a microprocessor to do its work. The microprocessor is the heart of any normal computer, whether it is a desktop machine, a server or a laptop. The microprocessor you are using might be a Pentium, a K6, a PowerPC, a Sparc or any of the many other brands and types of microprocessors, but they all do approximately the same thing in approximately the same way.

If you have ever wondered what the microprocessor in your computer is doing, or if you have ever wondered about the differences between types of microprocessors, then read on. In this article, you will learn how fairly simple digital logic techniques allow a computer to do its job, whether it's playing a game or spell checking a document!

Microprocessor History


Intel 4004 chip

A microprocessor -- also known as a CPU or central processing unit -- is a complete computation engine that is fabricated on a single chip. The first microprocessor was the Intel 4004, introduced in 1971. The 4004 was not very powerful -- all it could do was add and subtract, and it could only do that 4 bits at a time. But it was amazing that everything was on one chip. Prior to the 4004, engineers built computers either from collections of chips or from discrete components (transistors wired one at a time). The 4004 powered one of the first portable electronic calculators.


Intel 8080

The first microprocessor to make it into a home computer was the Intel 8080, a complete 8-bit computer on one chip, introduced in 1974. The first microprocessor to make a real splash in the market was the Intel 8088, introduced in 1979 and incorporated into the IBM PC (which first appeared around 1982). If you are familiar with the PC market and its history, you know that the PC market moved from the 8088 to the 80286 to the 80386 to the 80486 to the Pentium to the Pentium II to the Pentium III to the Pentium 4. All of these microprocessors are made by Intel and all of them are improvements on the basic design of the 8088. The Pentium 4 can execute any piece of code that ran on the original 8088, but it does it about 5,000 times faster!

Microprocessor Progression: Intel
The following table helps you to understand the differences between the different processors that Intel has introduced over the years.

Name         Date   Transistors   Microns   Clock speed   Data width            MIPS
8080         1974   6,000         6         2 MHz         8 bits                0.64
8088         1979   29,000        3         5 MHz         16 bits, 8-bit bus    0.33
80286        1982   134,000       1.5       6 MHz         16 bits               1
80386        1985   275,000       1.5       16 MHz        32 bits               5
80486        1989   1,200,000     1         25 MHz        32 bits               20
Pentium      1993   3,100,000     0.8       60 MHz        32 bits, 64-bit bus   100
Pentium II   1997   7,500,000     0.35      233 MHz       32 bits, 64-bit bus   ~300
Pentium III  1999   9,500,000     0.25      450 MHz       32 bits, 64-bit bus   ~510
Pentium 4    2000   42,000,000    0.18      1.5 GHz       32 bits, 64-bit bus   ~1,700

Compiled from The Intel Microprocessor Quick Reference Guide and TSCP Benchmark Scores

Information about this table:

What's a Chip?

A chip is also called an integrated circuit. Generally it is a small, thin piece of silicon onto which the transistors making up the microprocessor have been etched. A chip might be as large as an inch on a side and can contain tens of millions of transistors. Simpler processors might consist of a few thousand transistors etched onto a chip just a few millimeters square.

  • The date is the year that the processor was first introduced. Many processors are re-introduced at higher clock speeds for many years after the original release date.
  • Transistors is the number of transistors on the chip. You can see that the number of transistors on a single chip has risen steadily over the years.
  • Microns is the width, in microns, of the smallest wire on the chip. For comparison, a human hair is 100 microns thick. As the feature size on the chip goes down, the number of transistors rises.
  • Clock speed is the maximum rate that the chip can be clocked at. Clock speed will make more sense in the next section.
  • Data Width is the width of the ALU. An 8-bit ALU can add/subtract/multiply/etc. two 8-bit numbers, while a 32-bit ALU can manipulate 32-bit numbers. An 8-bit ALU would have to execute four instructions to add two 32-bit numbers, while a 32-bit ALU can do it in one instruction. In many cases, the external data bus is the same width as the ALU, but not always. The 8088 had a 16-bit ALU and an 8-bit bus, while the modern Pentiums fetch data 64 bits at a time for their 32-bit ALUs.
  • MIPS stands for "millions of instructions per second" and is a rough measure of the performance of a CPU. Modern CPUs can do so many different things that MIPS ratings lose a lot of their meaning, but you can get a general sense of the relative power of the CPUs from this column.

From this table you can see that, in general, there is a relationship between clock speed and MIPS. The maximum clock speed is a function of the manufacturing process and delays within the chip. There is also a relationship between the number of transistors and MIPS. For example, the 8088 clocked at 5 MHz but only executed at 0.33 MIPS (about one instruction per 15 clock cycles). Modern processors can often execute at a rate of two instructions per clock cycle. That improvement is directly related to the number of transistors on the chip and will make more sense in the next section.

Inside a Microprocessor


Photo courtesy Intel Corporation
Intel Pentium 4 processor

To understand how a microprocessor works, it is helpful to look inside and learn about the logic used to create one. In the process you can also learn about assembly language -- the native language of a microprocessor -- and many of the things that engineers can do to boost the speed of a processor.

A microprocessor executes a collection of machine instructions that tell the processor what to do. Based on the instructions, a microprocessor does three basic things:

  • Using its ALU (Arithmetic/Logic Unit), a microprocessor can perform mathematical operations like addition, subtraction, multiplication and division. Modern microprocessors contain complete floating point processors that can perform extremely sophisticated operations on large floating point numbers.
  • A microprocessor can move data from one memory location to another.
  • A microprocessor can make decisions and jump to a new set of instructions based on those decisions.

There may be very sophisticated things that a microprocessor does, but those are its three basic activities. The following diagram shows an extremely simple microprocessor capable of doing those three things:

This is about as simple as a microprocessor gets. This microprocessor has:

  • An address bus (that may be 8, 16 or 32 bits wide) that sends an address to memory
  • A data bus (that may be 8, 16 or 32 bits wide) that can send data to memory or receive data from memory
  • An RD (read) and WR (write) line to tell the memory whether it wants to set or get the addressed location
  • A clock line that lets a clock pulse sequence the processor
  • A reset line that resets the program counter to zero (or whatever) and restarts execution

Let's assume that both the address and data buses are 8 bits wide in this example.

Here are the components of this simple microprocessor:

  • Registers A, B and C are simply latches made out of flip-flops. (See the section on "edge-triggered latches" in How Boolean Logic Works for details.)
  • The address latch is just like registers A, B and C.
  • The program counter is a latch with the extra ability to increment by 1 when told to do so, and also to reset to zero when told to do so.
  • The ALU could be as simple as an 8-bit adder (see the section on adders in How Boolean Logic Works for details), or it might be able to add, subtract, multiply and divide 8-bit values. Let's assume the latter here.
  • The test register is a special latch that can hold values from comparisons performed in the ALU. An ALU can normally compare two numbers and determine if they are equal, if one is greater than the other, etc. The test register can also normally hold a carry bit from the last stage of the adder. It stores these values in flip-flops and then the instruction decoder can use the values to make decisions.
  • There are six boxes marked "3-State" in the diagram. These are tri-state buffers. A tri-state buffer can pass a 1, a 0 or it can essentially disconnect its output (imagine a switch that totally disconnects the output line from the wire that the output is heading toward). A tri-state buffer allows multiple outputs to connect to a wire, but only one of them to actually drive a 1 or a 0 onto the line.
  • The instruction register and instruction decoder are responsible for controlling all of the other components.

Helpful Articles

If you are new to digital logic, you may find the following articles helpful in understanding this section:

·  How Bytes and Bits Work

·  How Boolean Logic Works

·  How Electronic Gates Work

Although they are not shown in this diagram, there would be control lines from the instruction decoder that would:

  • Tell the A register to latch the value currently on the data bus
  • Tell the B register to latch the value currently on the data bus
  • Tell the C register to latch the value currently on the data bus
  • Tell the program counter register to latch the value currently on the data bus
  • Tell the address register to latch the value currently on the data bus
  • Tell the instruction register to latch the value currently on the data bus
  • Tell the program counter to increment
  • Tell the program counter to reset to zero
  • Activate any of the six tri-state buffers (six separate lines)
  • Tell the ALU what operation to perform
  • Tell the test register to latch the ALU's test bits
  • Activate the RD line
  • Activate the WR line

Coming into the instruction decoder are the bits from the test register and the clock line, as well as the bits from the instruction register.

RAM and ROM
The previous section talked about the address and data buses, as well as the RD and WR lines. These buses and lines connect either to RAM or ROM -- generally both. In our sample microprocessor, we have an address bus 8 bits wide and a data bus 8 bits wide. That means that the microprocessor can address 2⁸ = 256 bytes of memory, and it can read or write 8 bits of the memory at a time. Let's assume that this simple microprocessor has 128 bytes of ROM starting at address 0 and 128 bytes of RAM starting at address 128.


ROM chip

ROM stands for read-only memory. A ROM chip is programmed with a permanent collection of pre-set bytes. The address bus tells the ROM chip which byte to get and place on the data bus. When the RD line changes state, the ROM chip presents the selected byte onto the data bus.


RAM chip

RAM stands for random-access memory. RAM contains bytes of information, and the microprocessor can read or write to those bytes depending on whether the RD or WR line is signaled. One problem with today's RAM chips is that they forget everything once the power goes off. That is why the computer needs ROM.

By the way, nearly all computers contain some amount of ROM (it is possible to create a simple computer that contains no RAM -- many microcontrollers do this by placing a handful of RAM bytes on the processor chip itself -- but generally impossible to create one that contains no ROM). On a PC, the ROM is called the BIOS (Basic Input/Output System). When the microprocessor starts, it begins executing instructions it finds in the BIOS. The BIOS instructions do things like test the hardware in the machine, and then it goes to the hard disk to fetch the boot sector (see How Hard Disks Work for details). This boot sector is another small program, and the BIOS stores it in RAM after reading it off the disk. The microprocessor then begins executing the boot sector's instructions from RAM. The boot sector program will tell the microprocessor to fetch something else from the hard disk into RAM, which the microprocessor then executes, and so on. This is how the microprocessor loads and executes the entire operating system.

Microprocessor Instructions
Even the incredibly simple microprocessor shown in the previous example will have a fairly large set of instructions that it can perform. The collection of instructions is implemented as bit patterns, each one of which has a different meaning when loaded into the instruction register. Humans are not particularly good at remembering bit patterns, so a set of short words are defined to represent the different bit patterns. This collection of words is called the assembly language of the processor. An assembler can translate the words into their bit patterns very easily, and then the output of the assembler is placed in memory for the microprocessor to execute.

Here's the set of assembly language instructions that the designer might create for the simple microprocessor in our example:

  • LOADA mem - Load register A from memory address
  • LOADB mem - Load register B from memory address
  • CONB con - Load a constant value into register B
  • SAVEB mem - Save register B to memory address
  • SAVEC mem - Save register C to memory address
  • ADD - Add A and B and store the result in C
  • SUB - Subtract A and B and store the result in C
  • MUL - Multiply A and B and store the result in C
  • DIV - Divide A and B and store the result in C
  • COM - Compare A and B and store the result in test
  • JUMP addr - Jump to an address
  • JEQ addr - Jump, if equal, to address
  • JNEQ addr - Jump, if not equal, to address
  • JG addr - Jump, if greater than, to address
  • JGE addr - Jump, if greater than or equal, to address
  • JL addr - Jump, if less than, to address
  • JLE addr - Jump, if less than or equal, to address
  • STOP - Stop execution
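The assembler's job described above -- turning short words into bit patterns -- can be sketched very simply. This toy assembler uses the opcode numbers assigned later in the article; the function name and line format are assumptions for illustration:

```python
# A minimal sketch of an assembler for the example instruction set:
# each mnemonic maps to its opcode, and any operand byte is emitted
# verbatim after it.

OPCODES = {
    "LOADA": 1, "LOADB": 2, "CONB": 3, "SAVEB": 4, "SAVEC": 5,
    "ADD": 6, "SUB": 7, "MUL": 8, "DIV": 9, "COM": 10,
    "JUMP": 11, "JEQ": 12, "JNEQ": 13, "JG": 14, "JGE": 15,
    "JL": 16, "JLE": 17, "STOP": 18,
}

def assemble(lines):
    """Translate 'MNEMONIC [operand]' lines into a list of bytes."""
    output = []
    for line in lines:
        parts = line.split()
        output.append(OPCODES[parts[0]])
        if len(parts) > 1:              # instruction takes an operand
            output.append(int(parts[1]))
    return output

print(assemble(["CONB 1", "SAVEB 128", "STOP"]))  # [3, 1, 4, 128, 18]
```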

If you have read How C Programming Works, then you know that this simple piece of C code will calculate the factorial of 5 (where the factorial of 5 = 5! = 5 * 4 * 3 * 2 * 1 = 120):

a=1;
f=1;
while (a <= 5)
{
    f = f * a;
    a = a + 1;
}

At the end of the program's execution, the variable f contains the factorial of 5.

Microprocessor Instructions: Assembly Language
A C compiler translates this C code into assembly language. Assuming that RAM starts at address 128 in this processor, and ROM (which contains the assembly language program) starts at address 0, then for our simple microprocessor the assembly language might look like this:

// Assume a is at address 128
// Assume f is at address 129
0   CONB 1      // a=1;
1   SAVEB 128
2   CONB 1      // f=1;
3   SAVEB 129
4   LOADA 128   // if a > 5 then jump to 17
5   CONB 5
6   COM
7   JG 17
8   LOADA 129   // f=f*a;
9   LOADB 128
10  MUL
11  SAVEC 129
12  LOADA 128   // a=a+1;
13  CONB 1
14  ADD
15  SAVEC 128
16  JUMP 4       // loop back to if
17  STOP

Microprocessor Instructions: ROM
So now the question is, "How do all of these instructions look in ROM?" Each of these assembly language instructions must be represented by a binary number. For the sake of simplicity, let's assume each assembly language instruction is given a unique number, like this:

  • LOADA - 1
  • LOADB - 2
  • CONB - 3
  • SAVEB - 4
  • SAVEC - 5
  • ADD - 6
  • SUB - 7
  • MUL - 8
  • DIV - 9
  • COM - 10
  • JUMP - 11
  • JEQ - 12
  • JNEQ - 13
  • JG - 14
  • JGE - 15
  • JL - 16
  • JLE - 17
  • STOP - 18

The numbers are known as opcodes. In ROM, our little program would look like this:

// Assume a is at address 128
// Assume f is at address 129
Addr opcode/value
0    3             // CONB 1
1    1
2    4             // SAVEB 128
3    128
4    3             // CONB 1
5    1
6    4             // SAVEB 129
7    129
8    1             // LOADA 128
9    128
10   3             // CONB 5
11   5
12   10            // COM
13   14            // JG 17
14   31
15   1             // LOADA 129
16   129
17   2             // LOADB 128
18   128
19   8             // MUL
20   5             // SAVEC 129
21   129
22   1             // LOADA 128
23   128
24   3             // CONB 1
25   1
26   6             // ADD
27   5             // SAVEC 128
28   128
29   11            // JUMP 4
30   8
31   18            // STOP

You can see that seven lines of C code became 18 lines of assembly language (addresses 0 through 17), and that became 32 bytes in ROM (addresses 0 through 31).
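One way to check the listing is to run it. The interpreter below is not part of the article -- it is a sketch that executes the ROM image above, implementing only the opcodes this program uses, and verifies that address 129 ("f") ends up holding 5! = 120:

```python
# Sketch interpreter for the 32-byte ROM image shown above.

rom = [3, 1, 4, 128, 3, 1, 4, 129, 1, 128, 3, 5, 10, 14, 31,
       1, 129, 2, 128, 8, 5, 129, 1, 128, 3, 1, 6, 5, 128, 11, 8, 18]

mem = [0] * 256          # RAM; addresses 128 and 129 hold a and f
a = b = c = 0            # registers A, B and C
test = 0                 # comparison result: negative, zero or positive
pc = 0                   # program counter

while True:
    op = rom[pc]
    if op == 1:    a = mem[rom[pc + 1]]; pc += 2             # LOADA mem
    elif op == 2:  b = mem[rom[pc + 1]]; pc += 2             # LOADB mem
    elif op == 3:  b = rom[pc + 1]; pc += 2                  # CONB con
    elif op == 4:  mem[rom[pc + 1]] = b; pc += 2             # SAVEB mem
    elif op == 5:  mem[rom[pc + 1]] = c; pc += 2             # SAVEC mem
    elif op == 6:  c = a + b; pc += 1                        # ADD
    elif op == 8:  c = a * b; pc += 1                        # MUL
    elif op == 10: test = (a > b) - (a < b); pc += 1         # COM
    elif op == 11: pc = rom[pc + 1]                          # JUMP addr
    elif op == 14: pc = rom[pc + 1] if test > 0 else pc + 2  # JG addr
    elif op == 18: break                                     # STOP

print(mem[129])  # 120
```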

Microprocessor Instructions: Decoding
The instruction decoder needs to turn each of the opcodes into a set of signals that drive the different components inside the microprocessor. Let's take the ADD instruction as an example and look at what it needs to do:

  1. During the first clock cycle, we need to actually load the instruction. Therefore the instruction decoder needs to:
    • activate the tri-state buffer for the program counter
    • activate the RD line
    • activate the data-in tri-state buffer
    • latch the instruction into the instruction register
  2. During the second clock cycle, the ADD instruction is decoded. It needs to do very little:
    • set the operation of the ALU to addition
    • latch the output of the ALU into the C register
  3. During the third clock cycle, the program counter is incremented (in theory this could be overlapped into the second clock cycle).

Every instruction can be broken down as a set of sequenced operations like these that manipulate the components of the microprocessor in the proper order. Some instructions, like this ADD instruction, might take two or three clock cycles. Others might take five or six clock cycles.
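The three-cycle breakdown of ADD can be written down as a small data structure. The control-line names below are illustrative inventions, not signals from any real datasheet:

```python
# A sketch of the ADD instruction as a micro-sequence: each clock
# cycle asserts a set of control lines inside the microprocessor.

ADD_MICROCODE = [
    # cycle 1: fetch -- put the PC on the address bus and latch
    # the instruction into the instruction register
    {"pc_to_addr_bus", "rd", "data_in_buffer", "latch_ir"},
    # cycle 2: execute -- the ALU performs the addition
    {"alu_op_add", "latch_c"},
    # cycle 3: advance to the next instruction
    {"increment_pc"},
]

print(len(ADD_MICROCODE))  # 3 clock cycles
```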

Microprocessor Performance
The number of transistors available has a huge effect on the performance of a processor. As seen earlier, a typical instruction in a processor like an 8088 took 15 clock cycles to execute. Because of the design of the multiplier, it took approximately 80 cycles just to do one 16-bit multiplication on the 8088. With more transistors, much more powerful multipliers capable of single-cycle speeds become possible.

More transistors also allow for a technology called pipelining. In a pipelined architecture, instruction execution overlaps. So even though it might take five clock cycles to execute each instruction, there can be five instructions in various stages of execution simultaneously. That way it looks like one instruction completes every clock cycle.
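A back-of-envelope calculation shows why pipelining matters. Assuming a 5-stage pipeline that issues one instruction per cycle once full (a simplification that ignores stalls and branches):

```python
# With no pipeline, N instructions at 5 cycles each take 5 * N cycles.
# With a 5-stage pipeline, the pipe fills once (5 cycles) and then one
# instruction completes every cycle: 5 + (N - 1) cycles total.

def cycles_unpipelined(n, stages=5):
    return n * stages

def cycles_pipelined(n, stages=5):
    return stages + (n - 1)

n = 1000
print(cycles_unpipelined(n), cycles_pipelined(n))  # 5000 1004
```

For long instruction streams the pipelined machine approaches one instruction per clock, which is exactly the "looks like one instruction completes every clock cycle" effect described above.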

Many modern processors have multiple instruction decoders, each with its own pipeline. This allows for multiple instruction streams, which means that more than one instruction can complete during each clock cycle. This technique can be quite complex to implement, so it takes lots of transistors.

Microprocessor Trends
The trend in processor design has primarily been toward full 32-bit ALUs with fast floating point processors built in and pipelined execution with multiple instruction streams. The newest thing in processor design is 64-bit ALUs, and people are expected to have these processors in their home PCs in the next decade. There has also been a tendency toward special instructions (like the MMX instructions) that make certain operations particularly efficient, and the addition of hardware virtual memory support and L1 caching on the processor chip. All of these trends push up the transistor count, leading to the multi-million transistor powerhouses available today. These processors can execute about one billion instructions per second!

64-bit Processors
Sixty-four-bit processors have been with us since 1992, and in the 21st century they have started to become mainstream. Both Intel and AMD have introduced 64-bit chips, and the Mac G5 sports a 64-bit processor. Sixty-four-bit processors have 64-bit ALUs, 64-bit registers, 64-bit buses and so on.


Photo courtesy AMD

One reason why the world needs 64-bit processors is because of their enlarged address spaces. Thirty-two-bit chips are often constrained to a maximum of 2 GB or 4 GB of RAM access. That sounds like a lot, given that most home computers currently use only 256 MB to 512 MB of RAM. However, a 4-GB limit can be a severe problem for server machines and machines running large databases. And even home machines will start bumping up against the 2 GB or 4 GB limit pretty soon if current trends continue. A 64-bit chip has none of these constraints, because a 64-bit address space is essentially infinite for the foreseeable future -- 2^64 bytes of RAM is something on the order of 18 billion gigabytes.
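The arithmetic behind those limits is quick to verify:

```python
# Address-space sizes for 32-bit and 64-bit addresses.

addr_32 = 2 ** 32                  # bytes reachable with 32 bits
addr_64 = 2 ** 64                  # bytes reachable with 64 bits

print(addr_32 // 2 ** 30)          # 4 -- the familiar 4 GB ceiling
print(addr_64 // 10 ** 9)          # 18446744073 -- about 18 billion GB
```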

With a 64-bit address bus and wide, high-speed data buses on the motherboard, 64-bit machines also offer faster I/O (input/output) speeds to things like hard disk drives and video cards. These features can greatly increase system performance.

Servers can definitely benefit from 64 bits, but what about normal users? Beyond the RAM solution, it is not clear that a 64-bit chip offers "normal users" any real, tangible benefits at the moment. They can process data (very complex data featuring lots of real numbers) faster. People doing video editing and people doing photographic editing on very large images benefit from this kind of computing power. High-end games will also benefit, once they are re-coded to take advantage of 64-bit features. But the average user who is reading e-mail, browsing the Web and editing Word documents is not really using the processor in that way. In addition, operating systems like Windows XP have not yet been upgraded to handle 64-bit CPUs. Because of the lack of tangible benefits, it will be 2010 or so before we see 64-bit machines on every desktop.

How IDE Controllers Work

No matter what you do with your computer, storage is an important part of your system. In fact, most personal computers have one or more storage devices, such as hard drives, CD-ROM drives and floppy disk drives.


The hard drive and circuit board combination
that typifies IDE devices

Usually, these devices connect to the computer through an Integrated Drive Electronics (IDE) interface. Essentially, an IDE interface is a standard way for a storage device to connect to a computer. IDE is actually not the true technical name for the interface standard. The original name, AT Attachment (ATA), signified that the interface was initially developed for the IBM AT computer. In this article, you will learn about the evolution of IDE/ATA, what the pinouts are and exactly what "slave" and "master" mean in IDE.

IDE Evolution
IDE was created as a way to standardize the use of hard drives in computers. The basic concept behind IDE is that the hard drive and the controller should be combined. The controller is a small circuit board with chips that provide guidance as to exactly how the hard drive stores and accesses data. Most controllers also include some memory that acts as a buffer to enhance hard drive performance.

Before IDE, controllers and hard drives were separate and often proprietary. In other words, a controller from one manufacturer might not work with a hard drive from another manufacturer. The distance between the controller and the hard drive could result in poor signal quality and affect performance. Obviously, this caused much frustration for computer users.


The birth of the IDE interface led to combining a controller like this one with a hard drive.

IBM introduced the AT computer in 1984 with a couple of key innovations.

  • The slots in the computer for adding cards used a new version of the Industry Standard Architecture (ISA) bus. The new bus was capable of transmitting information 16 bits at a time, compared to 8 bits on the original ISA bus.
  • IBM also offered a hard drive for the AT that used a new combined drive/controller. A ribbon cable from the drive/controller combination ran to an ISA card to connect to the computer, giving birth to the AT Attachment (ATA) interface.

In 1986, Compaq introduced IDE drives in their Deskpro 386. This drive/controller combination was based on the ATA standard developed by IBM. Before long, other vendors began offering IDE drives. IDE became the term that covered the entire range of integrated drive/controller devices. Since almost all IDE drives are ATA-based, the two terms are used interchangeably.

Controllers, Drives, Host Adapters
Most motherboards come with an IDE interface. This interface is often referred to as an IDE controller, which is incorrect. The interface is actually a host adapter, meaning that it provides a way to connect a complete device to the computer (host). The actual controller is on a circuit board attached to the hard drive. That's the reason it's called Integrated Drive Electronics in the first place!


A close-up of the primary and secondary IDE interfaces
on a motherboard

While the IDE interface was originally developed for connecting hard drives, it has evolved into the universal interface for connecting internal floppy drives, CD-ROM drives and even some tape backup drives. Although it is very popular for internal drives, IDE is rarely used for attaching an external device.

There are several variations of ATA, each one adding to the previous standard and maintaining backward compatibility.

The standards include:

  • ATA-1 - The original specification that Compaq included in the Deskpro 386. It instituted the use of a master/slave configuration. ATA-1 was based on a subset of the standard ISA 96-pin connector that uses either 40 or 44 pin connectors and cables. In the 44-pin version, the extra four pins are used to supply power to a drive that doesn't have a separate power connector. Additionally, ATA-1 provides signal timing for direct memory access (DMA) and programmed input/output (PIO) functions. DMA means that the drive sends information directly to memory, while PIO means that the computer's central processing unit (CPU) manages the information transfer. ATA-1 is more commonly known as IDE.
  • ATA-2 - DMA was fully implemented beginning with the ATA-2 version. Standard DMA transfer rates increased from 4.16 megabytes per second (MBps) in ATA-1 to as much as 16.67 MBps. ATA-2 also provides power management, PCMCIA card support and removable device support. ATA-2 is often called EIDE (Enhanced IDE), Fast ATA or Fast ATA-2. The total hard drive size supported increased to 137.4 gigabytes. ATA-2 provided standard translation methods for Cylinder Head Sector (CHS) addressing for hard drives up to 8.4 gigabytes in size. CHS is how the system determines where the data is located on a hard drive. The big discrepancy between the total supported drive size and the CHS limit comes from the bit sizes used by the basic input/output system (BIOS) for CHS: each part of the address has a fixed length. Look at this chart:

    Address part    Bit size    Maximum value
    Cylinder        10-bit      1,024
    Head            8-bit       256
    Sector          6-bit       63*

  * You will note that the number of sectors is 63 instead of 64. This is because sector numbering starts at 1, not 0. Each sector holds 512 bytes. If you multiply 1,024 x 256 x 63 x 512, you will get 8,455,716,864 bytes, or approximately 8.4 gigabytes. Newer BIOS versions increased the bit sizes used for CHS, providing support for the full 137.4 gigabytes.
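The CHS multiplication can be checked directly:

```python
# The 8.4-GB CHS limit, spelled out from the field sizes above.

cylinders = 1024           # 10-bit field
heads = 256                # 8-bit field
sectors = 63               # 6-bit field; numbering starts at 1
bytes_per_sector = 512

total = cylinders * heads * sectors * bytes_per_sector
print(total)               # 8455716864, roughly 8.4 gigabytes
```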

  • ATA-3 - With the addition of Self-Monitoring Analysis and Reporting Technology (SMART), IDE drives were made more reliable. ATA-3 also adds password protection to access drives, providing a valuable security feature.
  • ATA-4 - Probably the two biggest additions to the standard in this version are Ultra DMA support and the integration of the AT Attachment Program Interface (ATAPI) standard. ATAPI provides a common interface for CD-ROM drives, tape backup drives and other removable storage devices. Before ATA-4, ATAPI was a completely separate standard. With the inclusion of ATAPI, ATA-4 immediately improved the removable media support of ATA. Ultra DMA increased the DMA transfer rate from ATA-2's 16.67 MBps to 33.33 MBps. In addition to the existing cable that uses 40 pins and 40 conductors (wires), this version introduces a cable that has 80 conductors. The other 40 conductors are ground wires interspersed between the standard 40 conductors to improve signal quality. ATA-4 is also known as Ultra DMA, Ultra ATA and Ultra ATA/33.
  • ATA-5 - The major update in ATA-5 is auto detection of which cable is used: the 40-conductor or 80-conductor version. Ultra DMA is increased to 66.67 MB/sec with the use of the 80-conductor cable. ATA-5 is also called Ultra ATA/66.

Cable Key
IDE devices use a ribbon cable to connect to each other. Ribbon cables have all of the wires laid flat next to each other instead of bunched or wrapped together in a bundle. IDE ribbon cables have either 40 or 80 wires. There is a connector at each end of the cable and another one about two-thirds of the distance from the motherboard connector. This cable cannot exceed 18 inches (46 cm) in total length (12 inches from first to second connector, and 6 inches from second to third) to maintain signal integrity. The three connectors are typically different colors and attach to specific items:

  • The blue connector attaches to the motherboard.
  • The black connector attaches to the primary (master) drive.
  • The grey connector attaches to the secondary (slave) drive.

Along one side of the cable is a stripe. This stripe tells you that the wire on that side is attached to Pin 1 of each connector. Wire 20 is not connected to anything. In fact, there is no pin at that position. This position is used to ensure that the cable is attached to the drive in the correct position. Another way that manufacturers make sure the cable is not reversed is by using a cable key. The cable key is a small, plastic square on top of the connector on the ribbon cable that fits into a notch on the connector of the device. This allows the cable to attach in only one position.


The connector on an IDE cable

 

Pin   Description                  Pin   Description
1     Reset                        23    -IOW
2     Ground                       24    Ground
3     Data Bit 7                   25    -IOR
4     Data Bit 8                   26    Ground
5     Data Bit 6                   27    I/O Channel Ready
6     Data Bit 9                   28    SPSYNC: Cable Select
7     Data Bit 5                   29    -DACK 3
8     Data Bit 10                  30    Ground
9     Data Bit 4                   31    IRQ 14
10    Data Bit 11                  32    -IOCS 16
11    Data Bit 3                   33    Address Bit 1
12    Data Bit 12                  34    -PDIAG
13    Data Bit 2                   35    Address Bit 0
14    Data Bit 13                  36    Address Bit 2
15    Data Bit 1                   37    -CS1FX
16    Data Bit 14                  38    -CS3FX
17    Data Bit 0                   39    -DA/SP
18    Data Bit 15                  40    Ground
19    Ground                       41    +5 Volts (Logic) (Optional)
20    Cable Key (pin missing)      42    +5 Volts (Motor) (Optional)
21    DRQ 3                        43    Ground (Optional)
22    Ground                       44    -Type (Optional)

Note that the last four pins are only used by devices that require power through the ribbon cable. Typically, such devices are hard drives that are too small (for example, 2.5 inches) to need a separate power supply.

Masters and Slaves
A single IDE interface can support two devices. Most motherboards come with dual IDE interfaces (primary and secondary) for up to four IDE devices. Because the controller is integrated with the drive, there is no overall controller to decide which device is currently communicating with the computer. This is not a problem as long as each device is on a separate interface, but adding support for a second drive on the same cable took some ingenuity.

To allow for two drives on the same cable, IDE uses a special configuration called master and slave. This configuration allows one drive's controller to tell the other drive when it can transfer data to or from the computer. What happens is the slave drive makes a request to the master drive, which checks to see if it is currently communicating with the computer. If the master drive is idle, it tells the slave drive to go ahead. If the master drive is communicating with the computer, it tells the slave drive to wait and then informs it when it can go ahead.

The computer determines if there is a second (slave) drive attached through the use of Pin 39 on the connector. Pin 39 carries a special signal, called Drive Active/Slave Present (DASP), that checks to see if a slave drive is present.

Although it will work in either position, it is recommended that the master drive be attached to the connector at the very end of the IDE ribbon cable. A jumper on the back of the drive, next to the IDE connector, must then be set in the correct position to identify the drive as the master. The slave drive must have either the master jumper removed or a special slave jumper set, depending on the drive, and it is attached to the connector near the middle of the cable. Each drive's controller board reads the jumper setting to determine whether it is a slave or a master, and behaves accordingly. Every drive is capable of being either slave or master as it ships from the manufacturer. If only one drive is installed, it should always be the master drive.

Many drives feature an option called Cable Select (CS). With the correct type of IDE ribbon cable, these drives can be auto configured as master or slave. CS works like this: A jumper on each drive is set to the CS option. The cable itself is just like a normal IDE cable except for one difference -- Pin 28 only connects to the master drive connector. When your computer is powered up, the IDE interface sends a signal along the wire for Pin 28. Only the drive attached to the master connector receives the signal. That drive then configures itself as the master drive. Since the other drive received no signal, it defaults to slave mode.
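The Cable Select decision can be modeled in a couple of lines. This is a hypothetical sketch of the logic described above, not real drive firmware:

```python
# Cable Select at power-up: only the drive on the connector where
# Pin 28 is wired through sees the interface's signal; that drive
# becomes the master, and the other defaults to slave.

def drive_role(pin28_connected):
    """Return the role a CS-jumpered drive assumes at power-up."""
    return "master" if pin28_connected else "slave"

print(drive_role(True), drive_role(False))  # master slave
```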

How Floppy Disk Drives Work

If you have spent any time at all working with a computer, then chances are good that you have used a floppy disk at some point. The floppy disk drive (FDD) was the primary means of adding data to a computer until the CD-ROM drive became popular. In fact, FDDs have been a key component of most personal computers for more than 20 years.

Basically, a floppy disk drive reads and writes data to a small, circular piece of metal-coated plastic similar to audio cassette tape. In this article, you will learn more about what is inside a floppy disk drive and how it works. You will also find out some cool facts about FDDs.

History of the Floppy Disk Drive
The floppy disk drive (FDD) was invented at IBM by Alan Shugart in 1967. The first floppy drives used an 8-inch disk (later called a "diskette" as it got smaller), which evolved into the 5.25-inch disk that was used on the first IBM Personal Computer in August 1981. The 5.25-inch disk held 360 kilobytes compared to the 1.44 megabyte capacity of today's 3.5-inch diskette.

The 5.25-inch disks were dubbed "floppy" because the diskette packaging was a very flexible plastic envelope, unlike the rigid case used to hold today's 3.5-inch diskettes.

By the mid-1980s, the improved designs of the read/write heads, along with improvements in the magnetic recording media, led to the less-flexible, 3.5-inch, 1.44-megabyte (MB) capacity FDD in use today. For a few years, computers had both FDD sizes (3.5-inch and 5.25-inch). But by the mid-1990s, the 5.25-inch version had fallen out of popularity, partly because the diskette's recording surface could easily become contaminated by fingerprints through the open access area.

Parts of a Floppy Disk Drive

Floppy Disk Drive Terminology

  • Floppy disk - Also called diskette. The common size is 3.5 inches.
  • Floppy disk drive - The electromechanical device that reads and writes floppy disks.
  • Track - Concentric ring of data on a side of a disk.
  • Sector - A subset of a track, similar to a wedge or a slice of pie.

The Disk
A floppy disk is a lot like a cassette tape:

  • Both use a thin plastic base material coated with iron oxide. This oxide is a ferromagnetic material, meaning that if you expose it to a magnetic field it is permanently magnetized by the field.
  • Both can record information instantly.
  • Both can be erased and reused many times.
  • Both are very inexpensive and easy to use.

If you have ever used an audio cassette, you know that it has one big disadvantage -- it is a sequential device. The tape has a beginning and an end, and to move the tape to another song later in the sequence of songs on the tape you have to use the fast forward and rewind buttons to find the start of the song, since the tape heads are stationary. For a long audio cassette tape it can take a minute or two to rewind the whole tape, making it hard to find a song in the middle of the tape.

A floppy disk, like a cassette tape, is made from a thin piece of plastic coated with a magnetic material on both sides. However, it is shaped like a disk rather than a long thin ribbon. The tracks are arranged in concentric rings so that the software can jump from "file 1" to "file 19" without having to fast forward through files 2-18. The diskette spins like a record and the heads move to the correct track, providing what is known as direct access storage.


In the illustration above, you can see how the disk is divided into tracks (brown) and sectors (yellow).
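The track-and-sector geometry determines the disk's capacity. Using the usual numbers for a 3.5-inch high-density diskette (80 tracks per side, 18 sectors per track, 512 bytes per sector, 2 sides -- standard figures, though the article does not list them itself):

```python
# Capacity of a 1.44 MB diskette from its geometry.

tracks_per_side = 80
sectors_per_track = 18
sides = 2
bytes_per_sector = 512

capacity = tracks_per_side * sectors_per_track * sides * bytes_per_sector
print(capacity, capacity // 1024)   # 1474560 bytes, i.e. 1440 KB
```

The "1.44 MB" label comes from reading those 1,440 KB as 1.44 x 1,000 KB, a quirk of marketing rather than of the hardware.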

The Drive
The major parts of a FDD include:

  • Read/Write Heads: Located on both sides of a diskette, they move together on the same assembly. The heads are not directly opposite each other in an effort to prevent interaction between write operations on each of the two media surfaces. The same head is used for reading and writing, while a second, wider head is used for erasing a track just prior to it being written. This allows the data to be written on a wider "clean slate," without interfering with the analog data on an adjacent track.
  • Drive Motor: A very small spindle motor engages the metal hub at the center of the diskette, spinning it at either 300 or 360 rotations per minute (RPM).
  • Stepper Motor: This motor makes a precise number of stepped revolutions to move the read/write head assembly to the proper track position. The read/write head assembly is fastened to the stepper motor shaft.
  • Mechanical Frame: A system of levers that opens the little protective window on the diskette to allow the read/write heads to touch the dual-sided diskette media. An external button allows the diskette to be ejected, at which point the spring-loaded protective window on the diskette closes.
  • Circuit Board: Contains all of the electronics to handle the data read from or written to the diskette. It also controls the stepper-motor control circuits used to move the read/write heads to each track, as well as the movement of the read/write heads toward the diskette surface.

The read/write heads do not touch the diskette media when the heads are traveling between tracks. Electronic optics check for the presence of an opening in the lower corner of a 3.5-inch diskette (or a notch in the side of a 5.25-inch diskette) to see if the user wants to prevent data from being written on it.


Click on the picture to see a brief video of a diskette being inserted. Look for the silver, sliding door opening up and the read/write heads being lowered to the diskette surface.

 


Read/write heads for each side of the diskette

Writing Data on a Floppy Disk
The following is an overview of how a floppy disk drive writes data to a floppy disk. Reading data is very similar. Here's what happens:

  1. The computer program passes an instruction to the computer hardware to write a data file on a floppy disk, which is very similar to a single platter in a hard disk drive except that it is spinning much slower, with far less capacity and slower access time.
  2. The computer hardware and the floppy-disk-drive controller start the motor in the diskette drive to spin the floppy disk. The disk has many concentric tracks on each side. Each track is divided into smaller segments called sectors, like slices of a pie.
  3. A second motor, called a stepper motor, rotates a worm-gear shaft (a miniature version of the worm gear in a bench-top vise) in minute increments that match the spacing between tracks. The time it takes to get to the correct track is called "access time." This stepping action (partial revolutions) of the stepper motor moves the read/write heads like the jaws of a bench-top vise. The floppy-disk-drive electronics know how many steps the motor has to turn to move the read/write heads to the correct track.
  4. The read/write heads stop at the track. The read head checks the prewritten address on the formatted diskette to be sure it is using the correct side of the diskette and is at the proper track. This operation is very similar to the way a record player automatically goes to a certain groove on a vinyl record.
  5. Before the data from the program is written to the diskette, an erase coil (on the same read/write head assembly) is energized to "clear" a wide, "clean slate" sector prior to writing the sector data with the write head. The erased sector is wider than the written sector -- this way, no signals from sectors in adjacent tracks will interfere with the sector in the track being written.
  6. The energized write head puts data on the diskette by magnetizing minute, iron, bar-magnet particles embedded in the diskette surface, very similar to the technology used in the mag stripe on the back of a credit card. The magnetized particles have their north and south poles oriented in such a way that their pattern may be detected and read on a subsequent read operation.
  7. The diskette stops spinning. The floppy disk drive waits for the next command.

On a typical floppy disk drive, the small indicator light stays on during all of the above operations.

Floppy Disk Drive Facts
Here are some interesting things to note about FDDs:

  • Two floppy disks do not get corrupted if they are stored together, due to the low level of magnetism in each one.
  • In your PC, there is a twist in the FDD data-ribbon cable -- this twist tells the computer whether the drive is an A-drive or a B-drive.
  • Like many household appliances, there are really no serviceable parts in today's FDDs. This is because the cost of a new drive is considerably less than the hourly rate typically charged to disassemble and repair a drive.
  • If you wish to redisplay the data on a diskette drive after changing a diskette, you can simply tap the F5 key (in most Windows applications).
  • In the corner of every 3.5-inch diskette, there is a small slider. If you uncover the hole by moving the slider, you have protected the data on the diskette from being written over or erased.
  • Floppy disks, while rarely used to distribute software (as in the past), are still used in these applications:
    • in some Sony digital cameras
    • for software recovery after a system crash or a virus attack
    • when data from one computer is needed on a second computer and the two computers are not networked
    • in bootable diskettes used for updating the BIOS on a personal computer
    • in high-density form, used in the popular Zip drive

 

 

 

Parallel Port Basics
Parallel ports were originally developed by IBM as a way to connect a printer to your PC. When IBM was in the process of designing the PC, the company wanted the computer to work with printers offered by Centronics, a top printer manufacturer at the time. IBM decided not to use the same port interface on the computer that Centronics used on the printer.

Instead, IBM engineers coupled a 25-pin connector, DB-25, with a 36-pin Centronics connector to create a special cable to connect the printer to the computer. Other printer manufacturers ended up adopting the Centronics interface, making this strange hybrid cable an unlikely de facto standard.

When a PC sends data to a printer or other device using a parallel port, it sends 8 bits of data (1 byte) at a time. These 8 bits are transmitted parallel to each other, as opposed to the same eight bits being transmitted serially (all in a single row) through a serial port. The standard parallel port is capable of sending 50 to 100 kilobytes of data per second.
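The byte-to-wires mapping can be sketched directly. The function name and the convention that pin 2 carries bit 0 are assumptions for illustration (pin numbering conventions vary by source):

```python
# One byte spread across the eight data pins (2 through 9) of a
# parallel port: each bit rides its own wire, so all eight bits
# arrive at the printer simultaneously.

def byte_to_pins(value):
    """Return {pin: bit} for data pins 2-9, with pin 2 carrying bit 0."""
    return {pin: (value >> (pin - 2)) & 1 for pin in range(2, 10)}

pins = byte_to_pins(ord("A"))           # 'A' = 65 = 0b01000001
print([pins[p] for p in range(2, 10)])  # [1, 0, 0, 0, 0, 0, 1, 0]
```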

Let's take a closer look at what each pin does when used with a printer:

  • Pin 1 carries the strobe signal. It maintains a level of between 2.8 and 5 volts, but drops below 0.5 volts whenever the computer sends a byte of data. This drop in voltage tells the printer that data is being sent.
  • Pins 2 through 9 are used to carry data. To indicate that a bit has a value of 1, a charge of 5 volts is sent through the correct pin. No charge on a pin indicates a value of 0. This is a simple but highly effective way to transmit digital information over an analog cable in real-time.
  • Pin 10 sends the acknowledge signal from the printer to the computer. Like Pin 1, it maintains a charge and drops the voltage below 0.5 volts to let the computer know that the data was received.
  • If the printer is busy, it will charge Pin 11. Then, it will drop the voltage below 0.5 volts to let the computer know it is ready to receive more data.
  • The printer lets the computer know if it is out of paper by sending a charge on Pin 12.
  • As long as the computer is receiving a charge on Pin 13, it knows that the device is online.

  • The computer sends an auto feed signal to the printer through Pin 14 using a 5-volt charge.
  • If the printer has any problems, it drops the voltage to less than 0.5 volts on Pin 15 to let the computer know that there is an error.
  • Whenever a new print job is ready, the computer drops the charge on Pin 16 to initialize the printer.
  • Pin 17 is used by the computer to remotely take the printer offline. This is accomplished by sending a charge to the printer and maintaining it as long as you want the printer offline.
  • Pins 18-25 are grounds and are used as a reference signal for the low (below 0.5 volts) charge.

Notice how the first 25 pins on the Centronics end match up with the pins of the first connector. With each byte the parallel port sends out, a handshaking signal is also sent so that the printer can latch the byte.
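The byte-on-eight-pins scheme described above can be modeled in a short Python sketch. The function name is invented for illustration; the bit ordering (pin 2 carrying the least significant bit) follows the usual convention for the data pins:

```python
def byte_to_data_pins(value):
    """Map one byte onto data pins 2 through 9. Pin 2 carries the
    least significant bit; True means the pin carries a 5-volt
    charge (bit value 1), False means no charge (bit value 0)."""
    if not 0 <= value <= 255:
        raise ValueError("a parallel port carries one byte at a time")
    return {pin: bool((value >> bit) & 1)
            for bit, pin in enumerate(range(2, 10))}

# 'A' is 0x41 = 0100 0001, so only pins 2 and 8 are charged.
charged = [pin for pin, high in byte_to_data_pins(ord("A")).items() if high]
```

Sending a full byte, then, is just setting these eight pin states and pulsing the strobe on Pin 1 so the printer knows to latch them.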

SPP/EPP/ECP
The original specification for parallel ports was unidirectional, meaning that data only traveled in one direction for each pin. With the introduction of the PS/2 in 1987, IBM offered a new bidirectional parallel port design. This mode is commonly known as Standard Parallel Port (SPP) and has completely replaced the original design. Bidirectional communication allows each device to receive data as well as transmit it. Many devices use the eight pins (2 through 9) originally designated for data. Using the same eight pins limits communication to half-duplex, meaning that information can only travel in one direction at a time. But pins 18 through 25, originally just used as grounds, can be used as data pins also. This allows for full-duplex (both directions at the same time) communication.

Enhanced Parallel Port (EPP) was created by Intel, Xircom and Zenith in 1991. EPP allows for much more data, 500 kilobytes to 2 megabytes, to be transferred each second. It was targeted specifically for non-printer devices that would attach to the parallel port, particularly storage devices that needed the highest possible transfer rate.

Close on the heels of the introduction of EPP, Microsoft and Hewlett Packard jointly announced a specification called Extended Capabilities Port (ECP) in 1992. While EPP was geared toward other devices, ECP was designed to provide improved speed and functionality for printers.

In 1994, the IEEE 1284 standard was released. It included the two specifications for parallel port devices, EPP and ECP. In order for them to work, both the operating system and the device must support the required specification. This is seldom a problem today since most computers support SPP, ECP and EPP and will detect which mode needs to be used, depending on the attached device. If you need to manually select a mode, you can do so through the BIOS on most computers.

How Serial Ports Work

Considered to be one of the most basic external connections to a computer, the serial port has been an integral part of most computers for more than 20 years. Although many of the newer systems have done away with the serial port completely in favor of USB connections, most modems still use the serial port, as do some printers, PDAs and digital cameras. Few computers have more than two serial ports.


Two serial ports on the back of a PC

Essentially, serial ports provide a standard connector and protocol to let you attach devices, such as modems, to your computer. In this edition of How Stuff Works, you will learn about the difference between a parallel port and a serial port, what each pin does and what flow control is.

UART Needed
All computer operating systems in use today support serial ports, because serial ports have been around for decades. Parallel ports are a more recent invention and are much faster than serial ports. USB ports are only a few years old, and will likely replace both serial and parallel ports completely over the next several years.

The name "serial" comes from the fact that a serial port "serializes" data. That is, it takes a byte of data and transmits the 8 bits in the byte one at a time. The advantage is that a serial port needs only one wire to transmit the 8 bits (while a parallel port needs 8). The disadvantage is that it takes 8 times longer to transmit the data than it would if there were 8 wires. Serial ports lower cable costs and make cables smaller.

Before each byte of data, a serial port sends a start bit, which is a single bit with a value of 0. After each byte of data, it sends a stop bit to signal that the byte is complete. It may also send a parity bit.
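This framing -- a start bit, eight data bits, an optional parity bit and a stop bit -- can be sketched in a few lines of Python. The function name is illustrative; the least-significant-bit-first order is how standard UARTs put data on the wire:

```python
def frame_byte(value, parity="even"):
    """Frame one byte for serial transmission: a start bit (0),
    eight data bits sent least-significant-bit first, an optional
    even-parity bit, and a stop bit (1)."""
    data = [(value >> i) & 1 for i in range(8)]  # LSB first
    bits = [0] + data                            # start bit, then data
    if parity == "even":
        bits.append(sum(data) % 2)               # make the count of 1s even
    bits.append(1)                               # stop bit
    return bits

# 'A' (0x41) framed with even parity takes 11 bits on the wire
# for 8 bits of data -- the framing overhead the text describes.
```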

Serial ports, also called communication (COM) ports, are bi-directional. Bi-directional communication allows each device to receive data as well as transmit it. Serial devices use different pins to receive and transmit data -- using the same pins would limit communication to half-duplex, meaning that information could only travel in one direction at a time. Using different pins allows for full-duplex communication, in which information can travel in both directions at once.


This 40-pin Dual Inline Package (DIP) chip is a variation of the National Semiconductor NS16550D UART chip.

Serial ports rely on a special controller chip, the Universal Asynchronous Receiver/Transmitter (UART), to function properly. The UART chip takes the parallel output of the computer's system bus and transforms it into serial form for transmission through the serial port. In order to function faster, most UART chips have a built-in buffer of anywhere from 16 to 64 kilobytes. This buffer allows the chip to cache data coming in from the system bus while it is processing data going out to the serial port. While most standard serial ports have a maximum transfer rate of 115 Kbps (kilobits per second), high speed serial ports, such as Enhanced Serial Port (ESP) and Super Enhanced Serial Port (Super ESP), can reach data transfer rates of 460 Kbps.

The Serial Connection
The external connector for a serial port can be either 9 pins or 25 pins. Originally, the primary use of a serial port was to connect a modem to your computer. The pin assignments reflect that. Let's take a closer look at what happens at each pin when a modem is connected.


Close-up of 9-pin and 25-pin serial connectors

9-pin connector:

  1. Carrier Detect - Determines if the modem is connected to a working phone line.
  2. Receive Data - Computer receives information sent from the modem.
  3. Transmit Data - Computer sends information to the modem.
  4. Data Terminal Ready - Computer tells the modem that it is ready to talk.
  5. Signal Ground - Pin is grounded.
  6. Data Set Ready - Modem tells the computer that it is ready to talk.
  7. Request To Send - Computer asks the modem if it can send information.
  8. Clear To Send - Modem tells the computer that it can send information.
  9. Ring Indicator - Modem tells the computer that it has detected a ring on the phone line (an incoming call).

25-pin connector:

  1. Not Used
  2. Transmit Data - Computer sends information to the modem.
  3. Receive Data - Computer receives information sent from the modem.
  4. Request To Send - Computer asks the modem if it can send information.
  5. Clear To Send - Modem tells the computer that it can send information.
  6. Data Set Ready - Modem tells the computer that it is ready to talk.
  7. Signal Ground - Pin is grounded.
  8. Received Line Signal Detector - Determines if the modem is connected to a working phone line.
  9. Not Used: Transmit Current Loop Return (+)
  10. Not Used
  11. Not Used: Transmit Current Loop Data (-)
  12. Not Used
  13. Not Used
  14. Not Used
  15. Not Used
  16. Not Used
  17. Not Used
  18. Not Used: Receive Current Loop Data (+)
  19. Not Used
  20. Data Terminal Ready - Computer tells the modem that it is ready to talk.
  21. Not Used
  22. Ring Indicator - Modem tells the computer that it has detected a ring on the phone line (an incoming call).
  23. Not Used
  24. Not Used
  25. Not Used: Receive Current Loop Return (-)

Voltage sent over the pins can be in one of two states, On or Off. On (binary value "1") means that the pin is transmitting a signal between -3 and -25 volts, while Off (binary value "0") means that it is transmitting a signal between +3 and +25 volts.
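This inverted polarity can be captured in a tiny sketch; the ±12-volt figures are just representative values inside the legal ranges, not anything mandated:

```python
# Representative voltages; any level in the legal range would do.
MARK = -12    # logical 1, "On":  between -3 and -25 volts
SPACE = +12   # logical 0, "Off": between +3 and +25 volts

def to_line_voltages(bits):
    """Translate a sequence of logical bits into the inverted
    voltage levels a serial-port driver would place on the wire."""
    return [MARK if bit else SPACE for bit in bits]
```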

Going With The Flow
An important aspect of serial communications is the concept of flow control. This is the ability of one device to tell another device to stop sending data for a while. The commands Request to Send (RTS), Clear To Send (CTS), Data Terminal Ready (DTR) and Data Set Ready (DSR) are used to enable flow control.


A dual serial port card

Let's look at an example of how flow control works: You have a modem that communicates at 56 Kbps. The serial connection between your computer and your modem transmits at 115 Kbps, which is over twice as fast. This means that the modem is getting more data coming from the computer than it can transmit over the phone line. Even if the modem has a 128K buffer to store data in, it will still quickly run out of buffer space and be unable to function properly with all that data streaming in.

With flow control, the modem can stop the flow of data from the computer before it overruns the modem's buffer. The computer is constantly sending a signal on the Request to Send pin, and checking for a signal on the Clear to Send pin. If there is no Clear to Send response, the computer stops sending data, waiting for the Clear to Send before it resumes. This allows the modem to keep the flow of data running smoothly.
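A minimal simulation of this handshake looks like the following. The Modem class and its 8-byte buffer are made up for the example; real modem buffers are far larger:

```python
class Modem:
    """Toy model of a modem whose Clear To Send (CTS) line goes
    low when its receive buffer is full. The buffer size is
    invented for the example, not taken from any real modem."""
    def __init__(self, buffer_size=8):
        self.buffer = []
        self.buffer_size = buffer_size

    @property
    def clear_to_send(self):
        # CTS stays asserted only while there is room in the buffer.
        return len(self.buffer) < self.buffer_size

    def receive(self, byte):
        if not self.clear_to_send:
            raise OverflowError("computer ignored CTS and overran the buffer")
        self.buffer.append(byte)

    def transmit_one(self):
        # Modem pushes one buffered byte out over the phone line.
        if self.buffer:
            self.buffer.pop(0)

def send(modem, data):
    """The computer checks CTS before every byte and pauses when it
    is low; here 'waiting' is modeled by letting the modem transmit
    a byte, which frees buffer space and re-asserts CTS."""
    for byte in data:
        while not modem.clear_to_send:
            modem.transmit_one()
        modem.receive(byte)

modem = Modem()
send(modem, b"flow control keeps the modem's buffer from overflowing")
# The buffer never exceeds 8 bytes, no matter how much data is sent.
```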

How USB Ports Work

Just about any computer that you buy today comes with one or more Universal Serial Bus connectors on the back. These USB connectors let you attach everything from mice to printers to your computer quickly and easily. The operating system supports USB as well, so the installation of the device drivers is quick and easy, too. Compared to other ways of connecting devices to your computer (including parallel ports, serial ports and special cards that you install inside the computer's case), USB devices are incredibly simple!

In this article, we will look at USB ports from both a user and a technical standpoint. You will learn why the USB system is so flexible and how it is able to support so many devices so easily -- it's truly an amazing system!

What is USB?
Anyone who has been around computers for more than two or three years knows the problem that the Universal Serial Bus is trying to solve -- in the past, connecting devices to computers has been a real headache!

  • Printers connected to parallel printer ports, and most computers only came with one. Things like Zip drives, which need a high-speed connection into the computer, would use the parallel port as well, often with limited success and not much speed.
  • Modems used the serial port, but so did some printers and a variety of odd things like Palm Pilots and digital cameras. Most computers have at most two serial ports, and they are very slow in most cases.
  • Devices that needed faster connections came with their own cards, which had to fit in a card slot inside the computer's case. Unfortunately, the number of card slots is limited and you needed a Ph.D. to install the software for some of the cards.

The goal of USB is to end all of these headaches. The Universal Serial Bus gives you a single, standardized, easy-to-use way to connect up to 127 devices to a computer.

Just about every peripheral made now comes in a USB version -- printers, scanners, mice, keyboards, digital cameras, Webcams, modems, speakers, storage devices and network connections, to name a few.

USB Connections
Connecting a USB device to a computer is simple -- you find a USB socket on the back of your machine and plug the device's USB connector into it.


The rectangular socket is a typical USB socket on the back of a PC.


A typical USB connector, called an "A" connection

If it is a new device, the operating system auto-detects it and asks for the driver disk. If the device has already been installed, the computer activates it and starts talking to it. USB devices can be connected and disconnected at any time.

Many USB devices come with their own built-in cable, and the cable has an "A" connection on it. If not, then the device has a socket on it that accepts a USB "B" connector.


A typical "B" connection

The USB standard uses "A" and "B" connectors to avoid confusion:

  • "A" connectors head "upstream" toward the computer.
  • "B" connectors head "downstream" and connect to individual devices.

Because the upstream and downstream ends use different connectors, it is impossible to get confused: if you plug any USB cable's "B" connector into a device, you know that it will work. Similarly, you can plug any "A" connector into any "A" socket and know that it will work.

Running Out of Ports?
Most computers that you buy today come with one or two USB sockets. With so many USB devices on the market, you can run out of sockets very quickly. For example, on the computer that I am typing on right now, I have a USB printer, a USB scanner, a USB Webcam and a USB network connection. My computer has only one USB connector on it, so the obvious question is, "How do you hook up all the devices?"

The easy solution to the problem is to buy an inexpensive USB hub. The USB standard supports up to 127 devices, and USB hubs are a part of the standard.


A typical USB four-port hub accepts 4 "A" connections.

A hub typically has four new ports, but may have many more. You plug the hub into your computer, and then plug your devices (or other hubs) into the hub. By chaining hubs together, you can build up dozens of available USB ports on a single computer.

Hubs can be powered or unpowered. As you will see on the next page, the USB standard allows for devices to draw their power from their USB connection. Obviously, a high-power device like a printer or scanner will have its own power supply, but low-power devices like mice and digital cameras get their power from the bus in order to simplify them. The power (up to 500 milliamps at 5 volts) comes from the computer. If you have lots of self-powered devices (like printers and scanners), then your hub does not need to be powered -- none of the devices connecting to the hub needs additional power, so the computer can handle it. If you have lots of unpowered devices like mice and cameras, you probably need a powered hub. The hub has its own transformer and it supplies power to the bus so that the devices do not overload the computer's supply.

USB Features
The Universal Serial Bus has the following features:

  • The computer acts as the host.
  • Up to 127 devices can connect to the host, either directly or by way of USB hubs.
  • Individual USB cables can run as long as 5 meters; with hubs, devices can be up to 30 meters (six cables' worth) away from the host.
  • With USB 2.0, the bus has a maximum data rate of 480 megabits per second.
  • A USB cable has two wires for power (+5 volts and ground) and a twisted pair of wires to carry the data.
  • On the power wires, the computer can supply up to 500 milliamps of power at 5 volts.
  • Low-power devices (such as mice) can draw their power directly from the bus. High-power devices (such as printers) have their own power supplies and draw minimal power from the bus. Hubs can have their own power supplies to provide power to devices connected to the hub.
  • USB devices are hot-swappable, meaning you can plug them into the bus and unplug them any time.
  • Many USB devices can be put to sleep by the host computer when the computer enters a power-saving mode.
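The 500-milliamp budget in the list above can be checked with a simple sketch; the per-device current figures in the comment are rough, illustrative numbers, not specified values:

```python
BUS_LIMIT_MA = 500   # the port supplies up to 500 mA at 5 volts

def can_bus_power(device_draws_ma):
    """Return True if all the bus-powered devices together stay
    within the host port's current budget."""
    return sum(device_draws_ma) <= BUS_LIMIT_MA

# Illustrative draws: a mouse (~100 mA) plus a webcam (~250 mA)
# fit on one port; adding a 400 mA device would not -- that is
# when you reach for a powered hub with its own transformer.
```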

The devices connected to a USB port rely on the USB cable to carry power and data.


Inside a USB cable: There are two wires for power -- +5 volts (red) and ground (brown) -- and a twisted pair (yellow and blue) of wires to carry the data. The cable is also shielded.

The USB Process
When the host powers up, it queries all of the devices connected to the bus and assigns each one an address. This process is called enumeration -- devices are also enumerated when they connect to the bus. The host also finds out from each device what type of data transfer it wishes to perform:

  • Interrupt - A device like a mouse or a keyboard, which will be sending very little data, would choose the interrupt mode.
  • Bulk - A device like a printer, which receives data in one big packet, uses the bulk transfer mode. A block of data is sent to the printer (in 64-byte chunks) and verified to make sure it is correct.
  • Isochronous - A streaming device (such as speakers) uses the isochronous mode. Data streams between the device and the host in real-time, and there is no error correction.

The host can also send commands or query parameters with control packets.

As devices are enumerated, the host is keeping track of the total bandwidth that all of the isochronous and interrupt devices are requesting. They can consume up to 90 percent of the 480 Mbps of bandwidth that is available. After 90 percent is used up, the host denies access to any other isochronous or interrupt devices. Control packets and packets for bulk transfers use any bandwidth left over (at least 10 percent).

The Universal Serial Bus divides the available bandwidth into frames, and the host controls the frames. Frames contain 1,500 bytes, and a new frame starts every millisecond. During a frame, isochronous and interrupt devices get a slot so they are guaranteed the bandwidth they need. Bulk and control transfers use whatever space is left. The technical links at the end of the article contain lots of detail if you would like to learn more.
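The reservation scheme can be sketched as follows, using the 1,500-byte frame and 90 percent cap from the text; the device names and request sizes are invented for the example:

```python
FRAME_BYTES = 1500    # one frame, started every millisecond
RESERVED_CAP = 0.90   # isochronous/interrupt traffic may claim 90%

def allocate_frame(requests):
    """Admit isochronous/interrupt requests in order until the
    90 percent reservation cap is hit; whatever is left in the
    frame goes to bulk and control transfers."""
    granted, reserved = [], 0
    for device, nbytes in requests:
        if reserved + nbytes <= FRAME_BYTES * RESERVED_CAP:
            reserved += nbytes
            granted.append(device)
    return granted, FRAME_BYTES - reserved

# Invented request sizes: speakers (isochronous) and a mouse
# (interrupt) fit; a webcam's request would exceed the cap.
granted, leftover = allocate_frame(
    [("speakers", 900), ("mouse", 8), ("webcam", 600)])
```

Here the webcam is refused because admitting it would push reserved traffic past 90 percent of the frame; the remaining bytes stay available for bulk and control packets.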

USB 2.0
The standard for USB version 2.0 was released in April 2000 and serves as an upgrade for USB 1.1.

USB 2.0 (High-speed USB) provides additional bandwidth for multimedia and storage applications and has a data transmission speed 40 times faster than USB 1.1. To allow a smooth transition for both consumers and manufacturers, USB 2.0 has full forward and backward compatibility with original USB devices and works with cables and connectors made for original USB, too.

Supporting three speed modes (1.5, 12 and 480 megabits per second), USB 2.0 supports low-bandwidth devices such as keyboards and mice, as well as high-bandwidth ones like high-resolution Webcams, scanners, printers and high-capacity storage systems. The deployment of USB 2.0 has allowed PC industry leaders to forge ahead with the development of next-generation PC peripherals to complement existing high-performance PCs. The transmission speed of USB 2.0 also facilitates the development of next-generation PCs and applications. In addition to improving functionality and encouraging innovation, USB 2.0 increases the productivity of user applications and allows the user to run multiple PC applications at once or several high-performance peripherals simultaneously.

 
How Laser Printers Work


Hewlett Packard LaserJet 4050T

The term inkjet printer is very descriptive of the process at work -- these printers put an image on paper using tiny jets of ink. The term laser printer, on the other hand, is a bit more mysterious -- how can a laser beam, a highly focused beam of light, write letters and draw pictures on paper?

In this article, we'll unravel the mystery behind the laser printer, tracing a page's path from the characters on your computer screen to printed letters on paper. As it turns out, the laser printing process is based on some very basic scientific principles applied in an exceptionally innovative way.

The Basics: Static Electricity
The primary principle at work in a laser printer is static electricity, the same energy that makes clothes in the dryer stick together or a lightning bolt travel from a thundercloud to the ground. Static electricity is simply an electrical charge built up on an insulated object, such as a balloon or your body. Since opposite charges attract, objects with opposite static electricity fields cling together.


The path of a piece of paper through a laser printer

A laser printer uses this phenomenon as a sort of "temporary glue." The core component of this system is the photoreceptor, typically a revolving drum or cylinder. This drum assembly is made out of highly photoconductive material that is discharged by light photons.


The basic components of a laser printer

The Basics: Drum
Initially, the drum is given a total positive charge by the charge corona wire, a wire with an electrical current running through it. (Some printers use a charged roller instead of a corona wire, but the principle is the same.) As the drum revolves, the printer shines a tiny laser beam across the surface to discharge certain points. In this way, the laser "draws" the letters and images to be printed as a pattern of electrical charges -- an electrostatic image. The system can also work with the charges reversed -- that is, a positive electrostatic image on a negative background.


The laser "writes" on a photoconductive revolving drum.

After the pattern is set, the printer coats the drum with positively charged toner -- a fine, black powder. Since it has a positive charge, the toner clings to the negative discharged areas of the drum, but not to the positively charged "background." This is something like writing on a soda can with glue and then rolling it over some flour: The flour only sticks to the glue-coated part of the can, so you end up with a message written in powder.

With the powder pattern affixed, the drum rolls over a sheet of paper, which is moving along a belt below. Before the paper rolls under the drum, it is given a negative charge by the transfer corona wire (charged roller). This charge is stronger than the negative charge of the electrostatic image, so the paper can pull the toner powder away. Since it is moving at the same speed as the drum, the paper picks up the image pattern exactly. To keep the paper from clinging to the drum, it is discharged by the detac corona wire immediately after picking up the toner.



The Basics: Fuser
Finally, the printer passes the paper through the fuser, a pair of heated rollers. As the paper passes through these rollers, the loose toner powder melts, fusing with the fibers in the paper. The fuser rolls the paper to the output tray, and you have your finished page. The fuser also heats up the paper itself, of course, which is why pages are always hot when they come out of a laser printer or photocopier.

So what keeps the paper from burning up? Mainly, speed -- the paper passes through the rollers so quickly that it doesn't get very hot.

After depositing toner on the paper, the drum surface passes the discharge lamp. This bright light exposes the entire photoreceptor surface, erasing the electrical image. The drum surface then passes the charge corona wire, which reapplies the positive charge.



Conceptually, this is all there is to it. Of course, actually bringing everything together is a lot more complex. In the following sections, we'll examine the different components in greater detail to see how they produce text and images so quickly and precisely.

The Controller: The Conversation
Before a laser printer can do anything else, it needs to receive the page data and figure out how it's going to put everything on the paper. This is the job of the printer controller.

The printer controller is the laser printer's main onboard computer. It talks to the host computer (for example, your PC) through a communications port, such as a parallel port or USB port. At the start of the printing job, the laser printer establishes with the host computer how they will exchange data. The controller may have to start and stop the host computer periodically to process the information it has received.


A typical laser printer has a few different types of communications ports.

In an office, a laser printer will probably be connected to several separate host computers, so multiple users can print documents from their machine. The controller handles each one separately, but may be carrying on many "conversations" concurrently. This ability to handle several jobs at once is one of the reasons why laser printers are so popular.

The Controller: The Language
For the printer controller and the host computer to communicate, they need to speak the same page description language. In earlier printers, the computer sent a special sort of text file and a simple code giving the printer some basic formatting information. Since these early printers had only a few fonts, this was a very straightforward process.

These days, you might have hundreds of different fonts to choose from, and you wouldn't think twice about printing a complex graphic. To handle all of this diverse information, the printer needs to speak a more advanced language.

The primary printer languages these days are Hewlett Packard's Printer Command Language (PCL) and Adobe's Postscript. Both of these languages describe the page in vector form -- that is, as mathematical values of geometric shapes, rather than as a series of dots (a bitmap image). The printer itself takes the vector images and converts them into a bitmap page. With this system, the printer can receive elaborate, complex pages, featuring any sort of font or image. Also, since the printer creates the bitmap image itself, it can use its maximum printer resolution.

Some printers use a graphical device interface (GDI) format instead of a standard page description language like PCL. In this system, the host computer creates the dot array itself, so the controller doesn't have to process anything -- it just sends the dot instructions on to the laser.

But in most laser printers, the controller must organize all of the data it receives from the host computer. This includes all of the commands that tell the printer what to do -- what paper to use, how to format the page, how to handle the font, etc. For the controller to work with this data, it has to get it in the right order.

The Controller: Setting up the Page
Once the data is structured, the controller begins putting the page together. It sets the text margins, arranges the words and places any graphics. When the page is arranged, the raster image processor (RIP) takes the page data, either as a whole or piece by piece, and breaks it down into an array of tiny dots. As we'll see in the next section, the printer needs the page in this form so the laser can write it out on the photoreceptor drum.
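The RIP's job -- turning geometry into an array of dots -- can be illustrated with a toy rasterizer for a single rectangle. A real RIP handles fonts, curves and halftoning; this sketch, with an invented function name, only shows the dots-from-geometry idea:

```python
def rasterize_rect(width, height, rect):
    """Rasterize one vector shape (an axis-aligned rectangle given
    as x, y, w, h in dot units) into the dot array the laser scans
    out: 1 where toner should go, 0 for empty space."""
    x0, y0, w, h = rect
    return [[1 if x0 <= x < x0 + w and y0 <= y < y0 + h else 0
             for x in range(width)]
            for y in range(height)]

# A 6x4-dot "page" with a 3x2 rectangle starting at dot (1, 1).
# Each inner list is one horizontal scan line sent to the laser.
page = rasterize_rect(6, 4, (1, 1, 3, 2))
```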

In most laser printers, the controller saves all print-job data in its own memory. This lets the controller put different printing jobs into a queue so it can work through them one at a time. It also saves time when printing multiple copies of a document, since the host computer only has to send the data once.

The Laser Assembly
Since it actually draws the page, the printer's laser system -- or laser scanning assembly -- must be incredibly precise. The traditional laser scanning assembly includes:

  • A laser
  • A movable mirror
  • A lens

The laser receives the page data -- the tiny dots that make up the text and images -- one horizontal line at a time. As the beam moves across the drum, the laser emits a pulse of light for every dot to be printed, and no pulse for every dot of empty space.

The laser doesn't actually move the beam itself. It bounces the beam off a movable mirror instead. As the mirror moves, it shines the beam through a series of lenses. This system compensates for the image distortion caused by the varying distance between the mirror and points along the drum.

Writing the Page
The laser assembly moves in only one plane, horizontally. After each horizontal scan, the printer moves the photoreceptor drum up a notch so the laser assembly can draw the next line. A small print-engine computer synchronizes all of this perfectly, even at dizzying speeds.

Some laser printers use a strip of light emitting diodes (LEDs) to write the page image, instead of a single laser. Each dot position has its own dedicated light, which means the printer has one set print resolution. These systems cost less to manufacture than true laser assemblies, but they produce inferior results. Typically, you'll only find them in less expensive printers.

Photocopiers

Laser printers work the same basic way as photocopiers, with a few significant differences. The most obvious difference is the source of the image: A photocopier scans an image by reflecting a bright light off of it, while a laser printer receives the image in digital form.

Another major difference is how the electrostatic image is created. When a photocopier bounces light off a piece of paper, the light reflects back onto the photoreceptor from the white areas but is absorbed by the dark areas. In this process, the "background" is discharged, while the electrostatic image retains a positive charge. This method is called "write-white."

In most laser printers, the process is reversed: The laser discharges the lines of the electrostatic image and leaves the background positively charged. In a printer, this "write-black" system is easier to implement than a "write-white" system, and it generally produces better results.

Toner Basics
One of the most distinctive things about a laser printer (or photocopier) is the toner. It's such a strange concept for the paper to grab the "ink" rather than the printer applying it. And it's even stranger that the "ink" isn't really ink at all.

So what is toner? The short answer is: It's an electrically-charged powder with two main ingredients: pigment and plastic.

The role of the pigment is fairly obvious -- it provides the coloring (black, in a monochrome printer) that fills in the text and images. This pigment is blended into plastic particles, so the toner will melt when it passes through the heat of the fuser. This quality gives toner a number of advantages over liquid ink. Chiefly, it firmly binds to the fibers in almost any type of paper, which means the text won't smudge or bleed easily.


Photo courtesy Xerox
A developer bead coated with small toner particles

Applying Toner
So how does the printer apply this toner to the electrostatic image on the drum? The powder is stored in the toner hopper, a small container built into a removable casing. The printer gathers the toner from the hopper with the developer unit. The "developer" is actually a collection of small, negatively charged magnetic beads. These beads are attached to a rotating metal roller, which moves them through the toner in the toner hopper.

Because they are negatively charged, the developer beads collect the positive toner particles as they pass through. The roller then brushes the beads past the drum assembly. The electrostatic image has a stronger negative charge than the developer beads, so the drum pulls the toner particles away.


In a lot of printers, the toner hopper, developer and drum assembly are combined in one replaceable cartridge.

The drum then moves over the paper, which has an even stronger charge and so grabs the toner. After collecting the toner, the paper is immediately discharged by the detac corona wire. At this point, the only thing keeping the toner on the page is gravity -- if you were to blow on the page, you would completely lose the image. The page must pass through the fuser to affix the toner. The fuser rollers are heated by internal quartz tube lamps, so the plastic in the toner melts as it passes through.

But what keeps the toner from collecting on the fuser rolls, rather than sticking to the page? To keep this from happening, the fuser rolls must be coated with Teflon, the same non-stick material that keeps your breakfast from sticking to the bottom of the frying pan.

Color Printers
Initially, most commercial laser printers were limited to monochrome printing (black writing on white paper). But now, there are lots of color laser printers on the market.

Essentially, color printers work the same way as monochrome printers, except they go through the entire printing process four times -- one pass each for cyan (blue), magenta (red), yellow and black. By combining these four colors of toner in varying proportions, you can generate the full spectrum of color.
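If you like to think in numbers, the four-toner mixing can be sketched in a few lines of code. This is a simplified, uncalibrated conversion (real printers use tuned color profiles), but it shows the idea: black is pulled out first, and the leftover color is split among cyan, magenta and yellow.

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK conversion (all outputs in the range 0.0-1.0).

    Black (K) is extracted first, then the leftover color is split
    into cyan, magenta and yellow proportions.
    """
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    k = 1.0 - max(r, g, b)            # how much plain black toner to use
    if k == 1.0:                      # pure black needs no CMY at all
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))   # pure red -> full magenta + full yellow
```

Pure red comes out as full magenta plus full yellow with no cyan or black, which matches what a color printer actually lays down for red.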


Inside a color laser printer

There are several different ways of doing this. Some models have four toner and developer units on a rotating wheel. The printer lays down the electrostatic image for one color and puts that toner unit into position. It then applies this color to the paper and goes through the process again for the next color. Some printers add all four colors to a plate before placing the image on paper.

Some more expensive printers actually have a complete printer unit -- a laser assembly, a drum and a toner system -- for each color. The paper simply moves past the different drum heads, collecting all the colors in a sort of assembly line.

Advantages of a Laser
So why get a laser printer rather than a cheaper inkjet printer? The main advantages of laser printers are speed, precision and economy. A laser can move very quickly, so it can "write" with much greater speed than an ink jet. And because the laser beam has an unvarying diameter, it can draw more precisely, without spilling any excess ink.

Laser printers tend to be more expensive than inkjet printers, but it doesn't cost as much to keep them running -- toner powder is cheap and lasts a long time, while you can use up expensive ink cartridges very quickly. This is why offices typically use a laser printer as their "work horse," their machine for printing long text documents. In most models, this mechanical efficiency is complemented by advanced processing efficiency. A typical laser-printer controller can serve everybody in a small office.

When they were first introduced, laser printers were too expensive to use as a personal printer. Since that time, however, laser printers have gotten much more affordable. Now you can pick up a basic model for just a little bit more than a nice inkjet printer.

As technology advances, laser-printer prices should continue to drop, while performance improves. We'll also see a number of innovative design variations, and possibly brand-new applications of electrostatic printing. Many inventors believe we've only scratched the surface of what we can do with simple static electricity!


How Inkjet Printers Work

by Jeff Tyson


No matter where you are reading this article from, you most likely have a printer nearby. And there's a very good chance that it is an inkjet printer. Since their introduction in the latter half of the 1980s, inkjet printers have grown in popularity and performance while dropping significantly in price.


An inexpensive color inkjet printer made by Hewlett Packard

An inkjet printer is any printer that places extremely small droplets of ink onto paper to create an image. If you ever look at a piece of paper that has come out of an inkjet printer, you know that:

  • The dots are extremely small (usually between 50 and 60 microns in diameter), smaller than the diameter of a human hair (70 microns)!
  • The dots are positioned very precisely, with resolutions of up to 1440x720 dots per inch (dpi).
  • The dots can have different colors combined together to create photo-quality images.
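Those figures fit together neatly: the dot pitch (center-to-center spacing) at a given resolution follows straight from the definition of dots per inch. A quick back-of-the-envelope check:

```python
MICRONS_PER_INCH = 25400

for dpi in (720, 1440):
    pitch = MICRONS_PER_INCH / dpi    # center-to-center dot spacing
    print(f"{dpi} dpi -> {pitch:.1f} micron pitch")
```

At 1440 dpi the pitch is about 17.6 microns, so a 50- to 60-micron dot overlaps its neighbors heavily, which is how solid tones get built up.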

In this edition of HowStuffWorks, you will learn about the various parts of an inkjet printer and how these parts work together to create an image. You will also learn about the ink cartridges and the special paper some inkjet printers use.

First, let's take a quick look at the various printer technologies.

Impact vs. Non-impact
There are several major printer technologies available. These technologies can be broken down into two main categories with several types in each:

  • Impact - These printers have a mechanism that touches the paper in order to create an image. There are two main impact technologies:
    • Dot matrix printers use a series of small pins to strike a ribbon coated with ink, causing the ink to transfer to the paper at the point of impact.
    • Character printers are basically computerized typewriters. They have a ball or series of bars with actual characters (letters and numbers) embossed on the surface. The appropriate character is struck against the ink ribbon, transferring the character's image to the paper. Character printers are fast and sharp for basic text, but very limited for other use.
  • Non-impact - These printers do not touch the paper when creating an image. Inkjet printers are part of this group, which includes:
    • Inkjet printers, which are described in this article, use a series of nozzles to spray drops of ink directly on the paper.
    • Laser printers, covered in-depth in How Laser Printers Work, use dry ink (toner), static electricity, and heat to place and bond the ink onto the paper.


A Hewlett Packard LaserJet 4050T

    • Solid ink printers contain sticks of wax-like ink that are melted and applied to the paper. The ink then hardens in place.
    • Dye-sublimation printers have a long roll of transparent film that resembles sheets of red-, blue-, yellow- and gray-colored cellophane stuck together end to end. Embedded in this film are solid dyes corresponding to the four basic colors used in printing: cyan, magenta, yellow and black (CMYK). The print head uses a heating element that varies in temperature, depending on the amount of a particular color that needs to be applied. The dyes vaporize and permeate the glossy surface of the paper before they return to solid form. The printer does a complete pass over the paper for each of the basic colors, gradually building the image.
    • Thermal wax printers are something of a hybrid of dye-sublimation and solid ink technologies. They use a ribbon with alternating CMYK color bands. The ribbon passes in front of a print head that has a series of tiny heated pins. The pins cause the wax to melt and adhere to the paper, where it hardens in place.
    • Thermal autochrome printers have the color in the paper instead of in the printer. There are three layers (cyan, magenta and yellow) in the paper, and each layer is activated by the application of a specific amount of heat. The print head has a heating element that can vary in temperature. The print head passes over the paper three times, providing the appropriate temperature for each color layer as needed.

Out of all of these incredible technologies, inkjet printers are by far the most popular. In fact, the only technology that comes close today is laser printers.

So, let's take a closer look at what's inside an inkjet printer.

Inside an Inkjet Printer
Parts of a typical inkjet printer include:

  • Print head assembly
    • Print head - The core of an inkjet printer, the print head contains a series of nozzles that are used to spray drops of ink.


The print head assembly

    • Ink cartridges - Depending on the manufacturer and model of the printer, ink cartridges come in various combinations, such as separate black and color cartridges, color and black in a single cartridge or even a cartridge for each ink color. The cartridges of some inkjet printers include the print head itself.
    • Print head stepper motor - A stepper motor moves the print head assembly (print head and ink cartridges) back and forth across the paper. Some printers have another stepper motor to park the print head assembly when the printer is not in use. Parking means that the print head assembly is restricted from accidentally moving, like a parking brake on a car.


Stepper motors like this one control the movement of most parts of an inkjet printer.

    • Belt - A belt is used to attach the print head assembly to the stepper motor.
    • Stabilizer bar - The print head assembly uses a stabilizer bar to ensure that movement is precise and controlled.


Here you can see the stabilizer bar and belt.

  • Paper feed assembly
    • Paper tray/feeder - Most inkjet printers have a tray that you load the paper into. Some printers replace the standard tray with a feeder, which typically snaps open at an angle on the back of the printer, allowing you to place paper in it. Feeders generally do not hold as much paper as a traditional paper tray.
    • Rollers - A set of rollers pull the paper in from the tray or feeder and advance the paper when the print head assembly is ready for another pass.


The rollers move the paper through the printer.

    • Paper feed stepper motor - This stepper motor powers the rollers to move the paper in the exact increment needed to ensure a continuous image is printed.
  • Power supply - While earlier printers often had an external transformer, most printers sold today use a standard power supply that is incorporated into the printer itself.
  • Control circuitry - A small but sophisticated amount of circuitry is built into the printer to control all the mechanical aspects of operation, as well as decode the information sent to the printer from the computer.


The mechanical operation of the printer is controlled by a small circuit board containing a microprocessor and memory.

  • Interface port(s) - The parallel port is still used by many printers, but most newer printers use the USB port. A few printers connect using a serial port or small computer system interface (SCSI) port.


While USB is taking over, many printers still use a parallel port.

Heat vs. Vibration
Different types of inkjet printers form their droplets of ink in different ways. There are two main inkjet technologies currently used by printer manufacturers:


View of the nozzles on a thermal bubble inkjet print head

  • Thermal bubble - Used by manufacturers such as Canon and Hewlett Packard, this method is commonly referred to as bubble jet. In a thermal inkjet printer, tiny resistors create heat, and this heat vaporizes ink to create a bubble. As the bubble expands, some of the ink is pushed out of a nozzle onto the paper. When the bubble "pops" (collapses), a vacuum is created. This pulls more ink into the print head from the cartridge. A typical bubble jet print head has 300 or 600 tiny nozzles, and all of them can fire a droplet simultaneously.



  • Piezoelectric - Patented by Epson, this technology uses piezo crystals. A crystal is located at the back of the ink reservoir of each nozzle. The crystal receives a tiny electric charge that causes it to vibrate. When the crystal vibrates inward, it forces a tiny amount of ink out of the nozzle. When it vibrates out, it pulls some more ink into the reservoir to replace the ink sprayed out.



Let's walk through the printing process to see just what happens.

Click "OK" to Print
When you click on a button to print, there is a sequence of events that take place:

  1. The software application you are using sends the data to be printed to the printer driver.
  2. The driver translates the data into a format that the printer can understand and checks to see that the printer is online and available to print.
  3. The data is sent by the driver from the computer to the printer via the connection interface (parallel, USB, etc.).
  4. The printer receives the data from the computer. It stores a certain amount of data in a buffer. The buffer can range from 512 KB random access memory (RAM) to 16 MB RAM, depending on the model. Buffers are useful because they allow the computer to finish with the printing process quickly, instead of having to wait for the actual page to print. A large buffer can hold a complex document or several basic documents.
  5. If the printer has been idle for a period of time, it will normally go through a short clean cycle to make sure that the print head(s) are clean. Once the clean cycle is complete, the printer is ready to begin printing.
  6. The control circuitry activates the paper feed stepper motor. This engages the rollers, which feed a sheet of paper from the paper tray/feeder into the printer. A small trigger mechanism in the tray/feeder is depressed when there is paper in the tray or feeder. If the trigger is not depressed, the printer lights up the "Out of Paper" LED and sends an alert to the computer.
  7. Once the paper is fed into the printer and positioned at the start of the page, the print head stepper motor uses the belt to move the print head assembly across the page. The motor pauses for the merest fraction of a second each time that the print head sprays dots of ink on the page and then moves a tiny bit before stopping again. This stepping happens so fast that it seems like a continuous motion.
  8. Multiple dots are made at each stop. The print head sprays the CMYK colors in precise amounts to make any color imaginable.
  9. At the end of each complete pass, the paper feed stepper motor advances the paper a fraction of an inch. Depending on the inkjet model, the print head is reset to the beginning side of the page, or, in most cases, simply reverses direction and begins to move back across the page as it prints.
  10. This process continues until the page is printed. The time it takes to print a page can vary widely from printer to printer. It will also vary based on the complexity of the page and size of any images on the page. For example, a printer may be able to print 16 pages per minute (PPM) of black text but take a couple of minutes to print one, full-color, page-sized image.
  11. Once the printing is complete, the print head is parked. The paper feed stepper motor spins the rollers to finish pushing the completed page into the output tray. Most printers today use inks that are very fast-drying, so that you can immediately pick up the sheet without smudging it.
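The numbered steps above can be condensed into a rough simulation. Everything here is illustrative -- the step names and pass count are made up for the sketch, not taken from any real printer firmware:

```python
def print_page(passes, clean_cycle_needed=True, paper_loaded=True):
    """Walk one page through the simplified inkjet sequence."""
    log = []
    if clean_cycle_needed:            # step 5: clean the print head(s)
        log.append("clean cycle")
    if not paper_loaded:              # step 6: the tray trigger check
        log.append("Out of Paper")
        return log
    log.append("feed paper")          # step 6: rollers pull a sheet in
    for n in range(passes):           # steps 7-9: pass, advance, repeat
        direction = "left-to-right" if n % 2 == 0 else "right-to-left"
        log.append(f"pass {n + 1}: spray dots {direction}")
        log.append("advance paper a fraction of an inch")
    log.append("park print head; eject page")   # step 11
    return log

for step in print_page(passes=3):
    print(step)
```

The alternating direction mirrors step 9: most print heads simply reverse and print on the way back rather than resetting to one side of the page.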

In the next section, you will learn a little more about the ink cartridges and the paper used.

Paper and Ink
Inkjet printers are fairly inexpensive. They cost less than a typical black-and-white laser printer, and much less than a color laser printer. In fact, quite a few of the manufacturers sell some of their printers at a loss. Quite often, you can find the printer on sale for less than you would pay for a set of the ink cartridges!


This printer sells for
less than $100.

Why would they do this? Because they count on the supplies you purchase to provide their profit. This is very similar to the way the video game business works. The hardware is sold at or below cost. Once you buy a particular brand of hardware, then you must buy the other products that work with that hardware. In other words, you can't buy a printer from Manufacturer A and ink cartridges from Manufacturer B. They will not work together.


A typical color ink cartridge:
This cartridge has cyan, magenta and yellow inks in separate reservoirs.

Another way that they have reduced costs is by incorporating much of the actual print head into the cartridge itself. The manufacturers believe that since the print head is the part of the printer that is most likely to wear out, replacing it every time you replace the cartridge increases the life of the printer.

The paper you use on an inkjet printer greatly determines the quality of the image. Standard copier paper works, but doesn't provide as crisp and bright an image as paper made for an inkjet printer. There are two main factors that affect image quality:

  • Brightness
  • Absorption

The brightness of a paper is normally determined by how rough the surface of the paper is. A coarse or rough paper will scatter light in several directions, whereas a smooth paper will reflect more of the light back in the same direction. This makes the paper appear brighter, which in turn makes any image on the paper appear brighter. You can see this yourself by comparing a photo in a newspaper with a photo in a magazine. The smooth paper of the magazine page reflects light back to your eye much better than the rough texture of the newspaper. Any paper that is listed as being bright is generally a smoother-than-normal paper.

The other key factor in image quality is absorption. When the ink is sprayed onto the paper, it should stay in a tight, symmetrical dot. The ink should not be absorbed too much into the paper. If that happens, the dot will begin to feather. This means that it will spread out in an irregular fashion to cover a slightly larger area than the printer expects it to. The result is a page that looks somewhat fuzzy, particularly at the edges of objects and text.


Imagine that the dot on the left is on coated paper and the dot on the right is on low-grade copier paper. Notice how much larger and more irregular the right dot is compared to the left one.

As stated, feathering is caused by the paper absorbing the ink. To combat this, high-quality inkjet paper is coated with a waxy film that keeps the ink on the surface of the paper. Coated paper normally yields a dramatically better print than other paper. The low absorption of coated paper is key to the high resolution capabilities of many of today's inkjet printers. For example, a typical Epson inkjet printer can print at a resolution of up to 720x720 dpi on standard paper. With coated paper, the resolution increases to 1440x720 dpi. The reason is that the printer can actually shift the paper slightly and add a second row of dots for every normal row, knowing that the image will not feather and cause the dots to blur together.
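The coated-paper trick in that Epson example is simple arithmetic: interleaving a second row of dots, offset by half the normal row spacing, halves the pitch and doubles the effective resolution.

```python
base_dpi = 720
row_pitch = 25400 / base_dpi          # microns between normal rows
interleaved_pitch = row_pitch / 2     # second row offset by half a pitch
effective_dpi = 25400 / interleaved_pitch
print(round(effective_dpi))           # 1440
```

This only works because coated paper keeps the dots from feathering; on absorbent paper the interleaved rows would blur together.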

Inkjet printers are capable of printing on a variety of media. Commercial inkjet printers sometimes spray directly on an item like the label on a beer bottle. For consumer use, there are a number of specialty papers, ranging from adhesive-backed labels or stickers to business cards and brochures. You can even get iron-on transfers that allow you to create an image and put it on a T-shirt! One thing is certain: inkjet printers provide an easy and affordable way to unleash your creativity.

 

How SCSI Works

by Jeff Tyson


Most home and small-office PCs use an IDE hard drive and have a PCI bus for adding components to the computer. But a lot of computers, particularly high-end workstations and older Apple Macintoshes, use the Small Computer System Interface (SCSI) bus to connect components such as hard drives, CD-ROM drives, scanners and tape drives.


SCSI devices usually connect to a controller card like this one.

Basically, SCSI (pronounced "scuzzy") is a fast communications bus that allows you to connect multiple devices to your computer. In this edition of HowStuffWorks, you'll learn about the structure of SCSI and the various specifications and types, as well as SCSI IDs and termination.

SCSI Basics
SCSI is based on an older, proprietary bus interface called Shugart Associates System Interface (SASI). SASI was originally developed in 1981 by Shugart Associates in conjunction with NCR Corporation. In 1986, a modified version of SASI that provided a beefier, open system was ratified by the American National Standards Institute (ANSI) as SCSI.

There are several benefits of SCSI:

  • It's fast -- up to 160 megabytes per second (MBps).
  • It's reliable.
  • It allows you to put multiple devices on one bus.
  • It works on most computer systems.

There are also some potential problems when using SCSI:

  • It must be configured for a specific computer.
  • It has limited system BIOS support.
  • Its variations (speeds, connectors) can be bewildering.
  • There is no common software interface.


Some computers have a built-in SCSI controller, but most require an SCSI host-adapter card.

People are often confused by the different types of SCSI. You'll hear terms such as "Ultra," "Fast" and "Wide" used a lot, and sometimes in combinations. In the next section, you'll find out about the SCSI variations.

SCSI Types
There are really only three basic specifications of SCSI:

  • SCSI-1: The original specification developed in 1986
  • SCSI-2: An update that became an official standard in 1994. A key component of SCSI-2 was the inclusion of the Common Command Set (CCS) -- the 18 commands considered an absolute necessity for support of any SCSI device. It also gave you the option to double the clock speed from 5 MHz (million cycles per second) to 10 MHz (Fast SCSI), double the bus width from 8 bits to 16 bits and increase the number of devices to 15 (Wide SCSI), or do both (Fast/Wide SCSI). Finally, SCSI-2 added command queuing, which means that an SCSI-2 device can store a series of commands from the host computer and determine which ones should be given priority.
  • SCSI-3: Quickly on the heels of SCSI-2 came SCSI-3, debuting in 1995. The interesting thing about SCSI-3 is that a series of smaller standards have been built within its overall scope. Because of this continually evolving series, SCSI-3 is not considered to be a completely approved standard. Instead, some of the specifications developed within it have been officially adopted. These standards are based on variations of the SCSI Parallel Interface (SPI), which is the way that SCSI devices communicate with each other. Most SCSI-3 specifications begin with the term "Ultra" (Ultra for SPI variations, Ultra2 for SPI-2 variations and Ultra3 for SPI-3 variations). The Fast and Wide designations work just like their SCSI-2 counterparts, with the Fast designation meaning that the clock speed is double that of the base version, and the Wide designation meaning that the bus width is double that of the base.

The chart below shows a comparison of the many SCSI variations:

Name                Specification   # of Devices   Bus Width   Bus Speed   MBps
Asynchronous SCSI   SCSI-1          8              8 bits      5 MHz       4 MBps
Synchronous SCSI    SCSI-1          8              8 bits      5 MHz       5 MBps
Wide SCSI           SCSI-2          16             16 bits     5 MHz       10 MBps
Fast SCSI           SCSI-2          8              8 bits      10 MHz      10 MBps
Fast/Wide SCSI      SCSI-2          16             16 bits     10 MHz      20 MBps
Ultra SCSI          SCSI-3 SPI      8              8 bits      20 MHz      20 MBps
Ultra/Wide SCSI     SCSI-3 SPI      8              16 bits     20 MHz      40 MBps
Ultra2 SCSI         SCSI-3 SPI-2    8              8 bits      40 MHz      40 MBps
Ultra2/Wide SCSI    SCSI-3 SPI-2    16             16 bits     40 MHz      80 MBps
Ultra3 SCSI         SCSI-3 SPI-3    16             16 bits     40 MHz      160 MBps

You will notice that the third column shows the number of devices that can be connected on the SCSI bus. In the next section, you'll learn more about SCSI devices and their IDs.
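For the synchronous variations, the MBps column follows directly from the other columns: throughput is bus speed times bus width in bytes. (Ultra3 gets its extra doubling by clocking data on both edges of the 40-MHz clock.) A quick sketch:

```python
def bandwidth_mbps(bus_speed_mhz, bus_width_bits, transfers_per_cycle=1):
    """Peak throughput: clock rate x bytes per transfer x transfers per cycle."""
    return bus_speed_mhz * (bus_width_bits // 8) * transfers_per_cycle

print(bandwidth_mbps(10, 8))       # Fast SCSI        -> 10 MBps
print(bandwidth_mbps(10, 16))      # Fast/Wide SCSI   -> 20 MBps
print(bandwidth_mbps(40, 16))      # Ultra2/Wide SCSI -> 80 MBps
print(bandwidth_mbps(40, 16, 2))   # Ultra3 SCSI      -> 160 MBps
```

These are peak figures; real-world transfer rates depend on the devices, the cabling and the bus overhead.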

Identifiers
There are three components in any SCSI system:

  • Controller
  • Device
  • Cable

The controller is the heart of SCSI. It serves as the interface between all of the other devices on the SCSI bus and the computer. Also called a host adapter, the controller can be a card that you plug into an available slot or it can be built right into the motherboard.

On the controller is the SCSI BIOS. This is a small ROM or Flash memory chip that contains the software needed to access and control the devices on the SCSI bus.

Usually, each device on the SCSI bus has a built-in SCSI adapter that allows it to interface and communicate with the SCSI bus. For example, an SCSI hard drive will have a small circuit board that combines a controller for the drive mechanism and an adapter for the SCSI bus. Devices with an adapter built in are called embedded SCSI devices.

Each SCSI device must have a unique identifier (ID). As you saw in the previous section, an SCSI bus can support eight or 16 devices, depending on the specification. For an eight-device bus, the IDs range from 0 to 7, and for a 16-device bus they range from 0 to 15. One of the IDs, typically the highest one, is used by the SCSI controller, leaving room for seven or 15 other devices.
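The ID bookkeeping can be sketched as a small function. This is an illustrative simplification -- the real SCAM protocol negotiates IDs over the bus rather than just picking the lowest free number:

```python
def auto_assign_id(used_ids, total_ids=8):
    """Pick the lowest free SCSI ID on an 8- or 16-device bus.

    The controller conventionally holds the highest ID (7 or 15),
    so that slot is never handed out.
    """
    controller_id = total_ids - 1
    for candidate in range(total_ids):
        if candidate != controller_id and candidate not in used_ids:
            return candidate
    raise RuntimeError("no free SCSI IDs left on this bus")

used = {7, 0, 1}              # controller at ID 7, two drives at 0 and 1
print(auto_assign_id(used))   # 2
```

Whatever the assignment method, the invariant is the same: no two devices on one bus may share an ID.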

With most SCSI devices, there is a hardware setting to configure the device ID. Some devices allow you to set the ID through software, while most Plug and Play SCSI cards will auto-select an ID based on what's available. This auto-selection is called SCSI Configured Automatically (SCAM). It is very important that each device on an SCSI bus have a unique ID, or you will have problems.


Internal SCSI devices connect to a 50-pin ribbon cable.

All of the variations in the SCSI specifications have added another wrinkle: There are at least seven different SCSI connectors, some of which may not be compatible with a particular version of SCSI. The connectors are:

  • DB-25 (SCSI-1)
  • 50-pin internal ribbon (SCSI-1, SCSI-2, SCSI-3)
  • 50-pin Alternative 2 Centronics (SCSI-1)
  • 50-pin Alternative 1 high density (SCSI-2)
  • 68-pin B-cable high density (SCSI-2)
  • 68-pin Alternative 3 (SCSI-3)
  • 80-pin Alternative 4 (SCSI-2, SCSI-3)


DB-25 SCSI connector

 


68-pin Alternative 3 SCSI connector

 


50-pin Centronics SCSI connector

No matter which version of SCSI you are using, or what type of connector it has, one thing is consistent -- the SCSI bus has to be terminated.

Termination
Termination simply means that each end of the SCSI bus is closed, using a resistor circuit. If the bus were left open, electrical signals sent down the bus could reflect back and interfere with communication between SCSI devices and the SCSI controller. Only two terminators are used, one for each end of the SCSI bus. If there is only one series of devices (internal or external), then the SCSI controller is one point of termination and the last device in the series is the other one. If there are both internal and external devices, then the last device on each series must be terminated.

Types of SCSI termination can be grouped into two main categories: passive and active. Passive termination is typically used for SCSI systems that run at the standard bus clock speed and have a short distance, less than 3 feet (1 m), between the devices and the SCSI controller. Active termination is used for Fast SCSI systems or systems with devices that are more than 3 ft (1 m) from the SCSI controller.


Some SCSI terminators are built into the SCSI device, while others may require an external terminator like this one.

Another factor in the type of termination is the bus type itself. SCSI employs three distinct types of bus signaling. Signaling is the way that the electrical impulses are sent across the wires.

  • Single-ended (SE) - The most common form of signaling for PCs, single-ended signaling means that the controller generates the signal and pushes it out to all devices on the bus over a single data line. Each device acts as a ground. Consequently, the signal quickly begins to degrade, which limits SE SCSI to a maximum of about 10 ft (3 m).
  • High-voltage differential (HVD) - The preferred method of bus signaling for servers, HVD uses a tandem approach to signaling, with a data high line and a data low line. Each device on the SCSI bus has a signal transceiver. When the controller communicates with the device, devices along the bus receive the signal and retransmit it until it reaches the target device. This allows for much greater distances between the controller and the device, up to 80 ft (25 m).
  • Low-voltage differential (LVD) - A variation on the HVD signaling method, LVD works in much the same way. The big difference is that the transceivers are smaller and built into the SCSI adapter of each device. This makes LVD SCSI devices more affordable and allows LVD to use less electricity to communicate. The downside to LVD is that the maximum distance is half of HVD -- 40 ft (12 m).


An active terminator

Both HVD and LVD normally use passive terminators, even though the distance between devices and the controller can be much greater than 3 ft (1 m). This is because the transceivers ensure that the signal is strong from one end of the bus to the other.

SCSI "Network"
SCSI devices inside the computer (internal) attach to the SCSI controller via a ribbon cable. The ribbon cable has a single connector at each end and may have one or more connectors along its length. Each internal SCSI device has a single SCSI connector.


Internal SCSI devices connect to a ribbon cable.

SCSI devices outside the computer (external) attach to the SCSI controller using a thick, round cable.


External SCSI devices connect using thick, round cables.

You have already read about the different connectors used on these external cables. The cable itself typically consists of three layers:

  • Inner layer - This is the most protected layer. It contains the actual data being sent.
  • Media layer - The middle layer contains the wires that send control commands to the device.
  • Outer layer - This layer includes the wires that carry parity information, which ensures that the data is correct.

External devices connect to the SCSI bus in a daisy chain, which refers to the method of connecting each device to the next one in line. External SCSI devices typically have two SCSI connectors -- one is used to connect to the previous device in the chain, and the other is used to connect to the next device in the chain.

A good way to think of SCSI is as a tiny local area network (LAN). The SCSI controller is like the network router, and each SCSI device is like a computer on the network. The SCSI adapter built into each device is comparable to the Ethernet card in a computer. Without the adapter, the device can't communicate with the rest of the network. And just as the router in a LAN is used to connect the network to the outside world, the SCSI controller connects the SCSI network to the rest of the computer.

RAID
For general consumer use, SCSI has not achieved the same mass appeal as IDE. The expectation regarding SCSI was that the ability to add a large number of devices would outweigh the complexity of the interface. But that was before alternative technologies like Universal Serial Bus (USB) and FireWire (IEEE 1394) came into play.

In fact, the only mainstream desktop computer standardized on SCSI was the Apple Macintosh, and that was because of a design mistake. The original Mac was a closed system, which means that there were no expansion slots or other means to easily add extra components. As the Mac grew in popularity, users began to clamor for some way to upgrade their system. Apple decided to add a built-in SCSI controller with an external SCSI port as a way to enable expansion of the system. Until recently, virtually every Mac has contained onboard SCSI. But with the rise of USB and FireWire, Apple has finally removed SCSI as a standard feature on most of its systems.

Where you commonly see SCSI is on servers and workstation computers. The main reason for this is RAID. Redundant array of independent disks (RAID) uses a series of hard drives to increase performance, provide fault tolerance or both. The hard drives are connected together and treated as a single logical entity. Basically, this means that the computer sees the series of drives as one big drive, which can be formatted and partitioned just like a normal drive.

Performance is enhanced because of striping, which means that more than one hard drive can be writing or reading information at the same time. The SCSI RAID controller determines which drive gets which chunk of data and sends the appropriate data to the appropriate drive. While that drive is writing the data, the controller sends another chunk of data to the next drive or reads a chunk of data from another drive. Simultaneous data transfers allow for faster performance.
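Striping is essentially round-robin distribution of data chunks. A minimal sketch (the chunk size and drive count here are arbitrary):

```python
def stripe(data, num_drives, chunk_size):
    """Deal fixed-size chunks of data round-robin across the drives."""
    drives = [[] for _ in range(num_drives)]
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for n, chunk in enumerate(chunks):
        drives[n % num_drives].append(chunk)
    return drives

print(stripe(b"ABCDEFGHIJKL", num_drives=3, chunk_size=2))
# [[b'AB', b'GH'], [b'CD', b'IJ'], [b'EF', b'KL']]
```

Because consecutive chunks land on different drives, several drives can be reading or writing their share of a file at the same time.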

Fault tolerance, the ability to maintain data integrity in the event of a crash or failure, is achieved in a couple of ways. The first is called mirroring. Basically, mirroring makes an exact duplicate of the data stored on one hard drive to a second hard drive. A RAID controller can be set to automatically send two hard drives the exact same data. To avoid potential complications, both drives should be exactly the same size. Mirroring can be an expensive type of fault tolerance since it requires that you have twice as much storage space as you have data.

The more popular method of fault tolerance is parity. Parity requires a minimum of three hard drives but will work with more. Data is written sequentially to each drive in the series except the last one; that drive stores a parity value computed from the corresponding data on the other drives (in practice a bitwise XOR, which behaves like a sum without carries). If any single drive fails, its contents can be rebuilt from the parity and the data on the surviving drives.
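In practice the stored value is usually a bitwise XOR rather than an arithmetic sum, because XOR-ing the parity with the surviving blocks regenerates the missing one exactly. A toy sketch (real RAID controllers do this per disk block, not per Python bytes object):

```python
def parity_block(blocks):
    """XOR corresponding bytes of all blocks; blocks must be equal length."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

def reconstruct_lost_block(surviving_blocks, parity):
    """XOR-ing the parity with every surviving block yields the missing block."""
    return parity_block(list(surviving_blocks) + [parity])
```

This is why parity is cheaper than mirroring: one extra drive protects the whole set, instead of one extra drive per drive.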


Illustration of the basic principle of fault tolerance using parity

Digital video is another prime example of the right time to use SCSI. Because of the demanding storage and speed requirements of full-motion, uncompressed video, most video workstations use a SCSI RAID with extremely fast SCSI hard drives.

As you can see, SCSI is probably going to be around for some time. Whether it's right for you depends on your needs and applications. Be sure to check out the links on the next page to learn more about SCSI.


How Scanners Work

by Jeff Tyson

Scanners have become an important part of the home office over the last few years. Scanner technology is everywhere and used in many ways:

  • Flatbed scanners, also called desktop scanners, are the most versatile and commonly used scanners. In fact, this article will focus on the technology as it relates to flatbed scanners.
  • Sheet-fed scanners are similar to flatbed scanners except the document is moved and the scan head is immobile. A sheet-fed scanner looks a lot like a small portable printer.
  • Handheld scanners use the same basic technology as a flatbed scanner, but rely on the user to move them instead of a motorized belt. This type of scanner typically does not provide good image quality. However, it can be useful for quickly capturing text.
  • Drum scanners are used by the publishing industry to capture incredibly detailed images. They use a technology called a photomultiplier tube (PMT). In a drum scanner, the document to be scanned is mounted on a glass cylinder. At the center of the cylinder is a sensor that splits light bounced from the document into three beams. Each beam is sent through a color filter into a photomultiplier tube, where the light is changed into an electrical signal.


Microtek's Scanmaker flatbed scanner

The basic principle of a scanner is to analyze an image and process it in some way. Image and text capture (optical character recognition or OCR) allow you to save information to a file on your computer. You can then alter or enhance the image, print it out or use it on your Web page.

In this article, we'll be focusing on flatbed scanners, but the basic principles apply to most other scanner technologies. You will learn about the different types of scanners, how the scanning mechanism works and what TWAIN means. You will also learn about resolution, interpolation and bit depth.

On the next page, you will learn about the various parts of a flatbed scanner.

Anatomy of a Scanner
Parts of a typical flatbed scanner include:

  • Charge-coupled device (CCD) array
  • Mirrors
  • Scan head
  • Glass plate
  • Lamp
  • Lens
  • Cover
  • Filters
  • Stepper motor
  • Stabilizer bar
  • Belt
  • Power supply
  • Interface port(s)
  • Control circuitry


Close-up of the CCD array

The core component of the scanner is the CCD array. The charge-coupled device (CCD) is the most common technology for image capture in scanners. A CCD is a collection of tiny light-sensitive diodes, called photosites, which convert photons (light) into electrons (electrical charge). In a nutshell, each photosite is sensitive to light -- the brighter the light that hits a single photosite, the greater the electrical charge that will accumulate at that site.
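The photosite behavior -- more light in, more charge out, up to a limit -- can be caricatured in a couple of lines. This is a purely illustrative model; the 0.0-1.0 intensity scale, the exposure factor, and the 8-bit clipping are assumptions for the sketch, not scanner specifications:

```python
def read_photosites(intensities, exposure=1.0, full_well=255):
    """Charge grows with light intensity and exposure time, clipping at
    the photosite's capacity (its "full well")."""
    return [min(full_well, int(i * exposure * full_well)) for i in intensities]
```

Reading a row of photosites this way gives one row of pixel values; the scan head steps down the page to build up the full image row by row.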


Photons hitting a photosite and creating electrons

The image of the document that you scan reaches the CCD array through a series of mirrors, filters and lenses. The exact configuration of these components will depend on the model of scanner, but the basics are pretty much the same.

On the next page, you will see just how all the pieces of the scanner work together.

The Scanning Process
Here are the steps that a scanner goes through when it scans a document:

  • The document is placed on the glass plate and the cover is closed. The inside of the cover in most scanners is flat white, although a few are black. The cover provides a uniform background that the scanner software can use as a reference point for determining the size of the document being scanned. Most flatbed scanners allow the cover to be removed for scanning a bulky object, such as a page in a thick book.


In the image above, you can see the fluorescent lamp on top of the scan head.