-+-+-+-+-+-+-+-+-+-+-+-

Type                             Bandwidth (MBps)

RS-232                           0.0024
Parallel Port (unidirectional)   0.005   - .015
Serial Port (16550 UART)         0.0144
Parallel Port (bidirectional)    0.01    - .0375
Serial Port (16650/16750 UART)   0.02875 - .0575
T1                               0.193
MIL-STD-1553                     1.0
USB 1.1                          1.5
Parallel Port (EPP)              2
ISA (8-bit)                      2.39
PCMCIA                           2.5
IDE                              4
SCSI                             5
T3                               5.404
ISA (16-bit)                     8
PC Card (PCMCIA ISA)             8
SCSI wide or SCSI-2              10
OC-3                             19.375
SCSI-2 wide or SCSI-3            20
EISA                             33
SCSI-3 wide or SCSI ultra-2      40
IEEE 1394a                       50
USB 2.0                          60
OC-12                            77
SCSI ultra-2 wide                80
IEEE 1394b                       100
Gigabit Ethernet                 125
ATA 133                          133
PCI (32-bit)                     133
Cardbus (PCMCIA PCI)             133
Serial ATA                       150
Fast Page RAM                    200
PCI Express                      250
AGP                              266
Serial ATA II                    300
OC-48                            312
EDO RAM                          320
PCI 2.2 (64-bit, 66MHz)          528
PC66 SDRAM                       528
4Gbps Fibre Channel              532
AGP 2x                           533
Ultra 640 SCSI                   640
PC100 SDRAM                      800
AGP 4x                           1066
PCI-X (64-bit, 133MHz)           1066
PC133 SDRAM                      1100
OC-192                           1200
10-Gigabit Ethernet              1250
10Gbps Fibre Channel/iSCSI       1250
RapidIO                          1250
Hypertransport 1.0               1600
AGP 8x                           2133
PC3200/DDR400 SDRAM              3200
Infiniband 12x                   3750
PCI Express 16x                  4000
PC4200/DDR533 SDRAM              4200
OC-768                           4976
Hypertransport 2.0 (32-bit)      5000
RAMBUS XDR RAM                   6400

Measurement             ISA        EISA        Micro Channel  PCI
Speed                   8 MHz      8.3 MHz     10 MHz         33 MHz
Bus width               16-bit     32-bit      32-bit         32-bit
Theoretical bandwidth   16 MBps    33 MBps     40 MBps        133 MBps
Achievable bandwidth    6-8 MBps   10-25 MBps  20-35 MBps     50-80 MBps
Table from: http://mars.cc.edu/csc420/team1/files/busses/Bus_Paper_Final.doc

-+-+-+-+-+-+-+-+-+-+-+-

More bus info from PCTechGuide:

INTERCONNECT STANDARDS

ISA bus

When it appeared on the first PC the 8-bit ISA bus ran at a modest 4.77MHz - the same speed as the processor. It was improved over the years, eventually becoming the Industry Standard Architecture (ISA) bus in 1984 with the advent of the IBM PC/AT using the Intel 80286 processor and 16-bit data bus. At this stage it kept up with the speed of the system bus, first at 6MHz and later at 8MHz.

The ISA bus specifies a 16-bit connection driven by an 8MHz clock, which seems primitive compared with the speed of today's processors. It has a theoretical data transfer rate of up to 16 MBps. Functionally, this rate halves to 8 MBps since one bus cycle is required for addressing and a further bus cycle for the 16 bits of data. In the real world it is capable of more like 5 MBps - still sufficient for many peripherals - and the huge number of ISA expansion cards ensured its continued presence into the late 1990s.
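The arithmetic behind these figures is a simple clock-times-width calculation; a quick sketch in Python, using the two-cycle overhead model described above:

    # ISA: 16-bit (2-byte) bus driven by an 8MHz clock
    clock_mhz = 8
    width_bytes = 2
    theoretical = clock_mhz * width_bytes   # 16 MBps
    # One cycle for the address, one for the data: usable rate is halved
    functional = theoretical / 2            # 8.0 MBps
    print(theoretical, functional)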

As processors became faster and gained wider data paths, the basic ISA design wasn't able to change to keep pace. As recently as the late 1990s most ISA cards remained as 8-bit technology. The few types with 16-bit data paths - hard disk controllers, graphics adapters and some network adapters - are constrained by the low throughput levels of the ISA bus, and these processes can be better handled by expansion cards in faster bus slots. ISA's death-knell was sounded in the PC99 System Design Guide, co-written by the omnipotent Intel and Microsoft. This categorically required the removal of ISA slots, making its survival into the next millennium highly unlikely.

Indeed, there are areas where a higher transfer rate than ISA could support was essential. High resolution graphic displays need massive amounts of data, particularly to display animation or full-motion video. Modern hard disks and network interfaces are certainly capable of higher rates.

The first attempt to establish a new standard was the Micro Channel Architecture (MCA), introduced by IBM. This was closely followed by Extended ISA (EISA), developed by a consortium made up of IBM's major competitors. Although these systems operate at modest clock rates - 10MHz and 8.33MHz respectively - both are 32-bit and capable of transfer rates well over 20 MBps. As its name suggests, an EISA slot can also take a conventional ISA card. However, MCA is not compatible with ISA at all.

Neither system flourished, largely because they were too expensive to merit support on all but the most powerful file servers.

PCI bus

Intel's original work on the PCI standard was published as revision 1.0 and handed over to a separate organisation, the PCI SIG (Special Interest Group). The SIG produced the PCI Local Bus Revision 2.0 specification in May 1993: it took in the engineering requests from members, and gave a complete component and expansion connector definition, something which could be used to produce production-ready systems based on 5 volt technology. Beyond the need for performance, PCI sought to make expansion easier to implement by offering plug and play (PnP) hardware - a system that enables the PC to adjust automatically to new cards as they are plugged in, obviating the need to check jumper settings and interrupt levels. Windows 95, launched in the summer of 1995, provided operating system software support for plug and play, and all current motherboards incorporate BIOSes designed specifically to work with the PnP capabilities it provides. By 1994 PCI was established as the dominant Local Bus standard.

While the VL-Bus was essentially an extension of the bus, or path, the CPU uses to access main memory, PCI is a separate bus isolated from the CPU, but having access to main memory. As such, PCI is more robust and higher performance than VL-Bus and, unlike the latter which was designed to run at system bus speeds, the PCI bus links to the system bus through special "bridge" circuitry and runs at a fixed speed, regardless of the processor clock. PCI is limited to five connectors, although each can be replaced by two devices built into the motherboard. It is also possible for a processor to support more than one bridge chip. It is more tightly specified than VL-Bus and offers a number of additional features. In particular, it can support cards running from both 5-volt and 3.3-volt supplies using different "key slots" to prevent the wrong card being put in the wrong slot.

In its original implementation PCI ran at 33MHz. This was raised to 66MHz by the later PCI 2.1 specification, effectively doubling the theoretical throughput to 266 MBps - 33 times faster than the ISA bus. It can be configured both as a 32-bit and a 64-bit bus, and both 32-bit and 64-bit cards can be used in either. 64-bit implementations running at 66MHz - still rare by mid-1999 - increase bandwidth to a theoretical 528 MBps. PCI is also much smarter than its ISA predecessor, allowing interrupt requests (IRQs) to be shared. This is useful because well-featured, high-end systems can quickly run out of IRQs. Also, PCI bus mastering reduces latency and results in improved system speeds.
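The same clock-times-width arithmetic yields the PCI figures (the quoted 133/266 MBps come from the exact 33.33/66.66MHz clocks; the sketch below uses the nominal values):

    # PCI bandwidth = clock (MHz) x bus width (bytes)
    for clock_mhz, width_bytes in [(33, 4), (66, 4), (66, 8)]:
        mbps = clock_mhz * width_bytes
        print(f"{width_bytes * 8}-bit @ {clock_mhz}MHz: {mbps} MBps")
    # 32-bit @ 33MHz: 132 MBps (133 with the exact 33.33MHz clock)
    # 32-bit @ 66MHz: 264 MBps (266 with the exact clock)
    # 64-bit @ 66MHz: 528 MBps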

Since mid-1995 the main performance-critical components of the PC have communicated with each other across the PCI bus. The most common PCI devices are the disk and graphics controllers, which are either mounted directly onto the motherboard or on expansion cards in PCI slots.

AGP

As fast and wide as the PCI bus was, there was one task that threatened to consume all its bandwidth: displaying graphics. Early in the era of the ISA bus, monitors were driven by simple Monochrome Display Adapter (MDA) and Colour Graphics Adapter (CGA) cards. A CGA graphics display could show four colours (two bits of data) at 320 by 200 pixels screen resolution at 60Hz, which required 128,000 bits of data per screen, or just over 937 KBps. An XGA image at a 16-bit colour depth requires 1.5MB of data for every image, and at a vertical refresh rate of 75Hz, this amount of data is required 75 times each second. Thanks to modern graphics adapters, not all of this data has to be transferred across the expansion bus, but 3D imaging technology created new problems.
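These display figures can be verified directly - bytes per frame multiplied by refresh rate gives the raw bandwidth a naive frame-by-frame transfer would need:

    def display_bandwidth(x, y, bits_per_pixel, refresh_hz):
        """Raw bytes/second needed to repaint the whole screen each refresh."""
        bytes_per_frame = x * y * bits_per_pixel // 8
        return bytes_per_frame * refresh_hz

    print(display_bandwidth(320, 200, 2, 60) / 1024)      # CGA: 937.5 KBps
    print(display_bandwidth(1024, 768, 16, 75) / 2**20)   # XGA: 112.5 MBps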

3D graphics have made it possible to model both fantastic and realistic worlds on-screen in enormous detail. Texture mapping and object hiding require huge amounts of data, and the graphics adapter needs to have fast access to this data to avoid the frame rate dropping and action appearing jerky. It was beginning to look as though the PCI peak bandwidth of 133 MBps was not up to the job.

Intel's solution was to develop the Accelerated Graphics Port (AGP) as a separate connector that operates off the processor bus. The AGP chipset acts as the intermediary between the processor and Level 2 cache contained in the Pentium II's Single Edge Contact Cartridge, the system memory, the graphics card and the PCI bus. This is called Quad Port acceleration.

AGP operates at the speed of the processor bus, now known as the frontside bus. At a clock rate of 66MHz this is double the PCI clock speed and means that the peak base throughput is 266 MBps.

For graphics cards specifically designed to support it, AGP allows data to be sent on both the rising and falling edges of the clock, doubling the effective clock rate to 133MHz and peak transfer to 533 MBps. This is known as 2x. To improve the length of time that AGP can maintain this peak transfer, the bus supports pipelining, which is another improvement over PCI. A pipelining 2x graphics card will be able to sustain throughput at 80% of the peak. AGP also supports queuing of up to 32 commands via a process called Sideband Addressing (SBA), the commands being sent while data is being received. This allows the bus to sustain peak performance for 95% of the time, according to Intel.

AGP's four-fold bandwidth improvement and graphics-only nature ensures that large transfers of 3D graphics data don't slow up the action on screen; nor will graphics data transfers be interrupted by other PCI devices. Being primarily intended to boost 3D performance, AGP also provides other improvements that are specifically aimed at this function.

With its increased access speed to system memory over the PCI bus, AGP can use system memory as if it's actually on the graphics card. This is called Direct Memory Execute (DIME). A device called a Graphics Address Remapping Table (GART) handles the RAM addresses so that they can be distributed in small chunks throughout system memory rather than hijacking one large section, and presents them to a DIME-enabled graphics card as if they're part of on-board memory. The main use for DIME is to allow much larger textures to be used because the graphics card can have a much larger memory space in which to load the bitmaps used.

AGP was initially only available in Pentium II systems based on Intel's 440LX chipset. However, despite no Intel support (and therefore thanks to the efforts of other chipset manufacturers such as VIA), it had also found its way onto motherboards designed for Pentium-class processors by early 1998.

Intel's release of version 2.0 of the AGP specification, combined with the AGP Pro extensions to this specification, mark an attempt to have AGP taken seriously in the 3D graphics workstation market. AGP 2.0 defines a new 4x-transfer mode that allows four data transfers per clock cycle on the 66MHz AGP interface. This delivers a maximum theoretical bandwidth between the AGP device and system memory of 1.0 GBps. The new 4x mode has a much higher potential throughput than 100MHz SDRAM (800 MBps), so the full benefit wasn't seen until the implementation of 133MHz SDRAM and Direct Rambus DRAM (DRDRAM) in the second half of 1999. AGP 2.0 was supported by chipsets launched early in 1999 to provide support for Intel's Katmai processor.

AGP Pro is a physical specification aimed at satisfying the needs of high-end graphics card manufacturers, who are currently limited by the maximum electrical power that can be drawn by an AGP card (about 25W). AGP Pro caters for cards that draw up to 100W, and will use a slightly longer AGP slot that will also take current AGP cards.

PCI-X

PCI-X v1.0, a high performance addendum to the PCI Local Bus specification co-developed by IBM, Hewlett-Packard, and Compaq - normally competitors in the PC server market - was unanimously approved by the Peripheral Component Interconnect Special Interest Group (PCI SIG) in the autumn of 1999. Fully backward compatible with standard PCI, PCI-X was seen as an immediate solution to the increased I/O requirements for high-bandwidth enterprise applications such as Gigabit Ethernet, Fibre Channel, Ultra3 SCSI and high-performance graphics.

PCI-X not only increases the speed of the PCI bus but also the number of high-speed slots. With the current design, PCI slots run at 33MHz and one slot can run at 66 MHz. PCI-X doubles the current performance of standard PCI, supporting one 64-bit slot at 133MHz, for an aggregate throughput of 1 GBps. The new specification also features an enhanced protocol to increase the efficiency of data transfer and to simplify electrical timing requirements, an important factor at higher clock frequencies.

For all its performance gains, PCI-X was positioned as an interim technology while the same three vendors develop a more long-term I/O bus architecture, referred to as Future I/O. While of potential use throughout the entire computer industry, the initial application of PCI-X was expected to be in server and workstation products, embedded systems and data communication environments.

The symbolism of a cartel of manufacturers making architectural changes to the PC server without consulting Intel is seen as being a significant development. At the heart of the dispute is who gets control over future server I/O technology. The PCI-X faction - already wary of Intel's growing dominance in the hardware business - hoped to wrest some control by developing and defining the next generation of I/O standards, which they hope Intel will eventually support. Whether this would succeed - or merely generate a standards war - was a moot point since the immediate effect was merely to provoke Intel into leading another group of vendors in the development of rival I/O technology, which they referred to as "Next Generation I/O" (NGIO).

In 2002 PCI-X 2.0 emerged, initially doubling and ultimately promising to quadruple the speed of PCI-X. Its longevity contributed to the path to PCI's eventual successor being a bumpy one.

STORAGE BUS ARCHITECTURES

IDE

One of the earliest and most significant standards introduced into PC hardware was IDE (Integrated Drive Electronics), a standard which controls the flow of data between the processor and the hard disk. The IDE concept was initially proposed by Western Digital and Compaq in 1986 to overcome the performance limitations of earlier subsystem standards like ST506 and ESDI. The term IDE itself is not an actual hardware standard, but the proposals were incorporated into an industry-agreed interface specification known as ATA (AT Attachment). The parallel ATA standard evolved from the original IBM Advanced Technology (AT) interface and defines a command and register set for the interface, creating a universal standard for communication between the drive unit and the PC.

One of the major innovations introduced by IDE was the integration of the disk controller functions onto the disk drive itself. The separation of the controller logic from the interface made it possible for drive manufacturers to enhance the performance of their drives independently - there were no performance-boosting features incorporated into the ATA interface itself. IDE drives connect straight to the system bus with no need for a separate controller on the bus, thereby reducing overall cost.

The mass acceptance of the IDE standard hinged on its ability to serve the needs of the market in terms of two important criteria: cost and compatibility. Over the years, these two factors have been more significant to mainstream PC users than high performance and as a result IDE rapidly became established as a mass market standard.

Since the implementation of the ATA standard, the PC has changed dramatically. The IDE specification was designed to support two internal hard disks, each with a maximum capacity of 528MB, and in 1986 this upper limitation seemed to be beyond all imaginable requirements for PC users. But within ten years, faster processors and new local bus technology (VLB and PCI) were introduced, and this combined with increasingly demanding software made the IDE interface into a performance bottleneck.

EIDE

In 1993 Western Digital brought EIDE (Enhanced IDE) onto the market. EIDE is a standard designed to overcome the constraints of ATA while at the same time maintaining backward compatibility. EIDE supports faster data transfer rates - with Fast ATA capable of burst rates up to 16.6 MBps - and higher disk capacities, up to 137GB since mid-1998, when the previous 8.4GB limit was raised.
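These capacity ceilings fall straight out of the interface's addressing fields. A sketch of the arithmetic (the 8.4GB derivation shown - 1024 cylinders x 255 heads x 63 sectors - is the widely cited combined BIOS/ATA CHS limit):

    SECTOR = 512  # bytes per sector
    # Original ATA CHS limit: 1024 cylinders x 16 heads x 63 sectors
    print(1024 * 16 * 63 * SECTOR / 1e6)    # ~528 MB
    # BIOS-extended CHS limit: 1024 x 255 x 63
    print(1024 * 255 * 63 * SECTOR / 1e9)   # ~8.4 GB
    # 28-bit LBA limit
    print(2**28 * SECTOR / 1e9)             # ~137 GB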

The four possible devices on an EIDE system are handled by two channels. Each channel supports two devices in a master/slave configuration. The primary port is generally connected to a local bus (for example, PCI), and this is set to the same address and IRQ setting as it was on the standard IDE system. This ensures backward compatibility with IDE systems and prevents conflicts which would otherwise crop up with operating system software, or other software which communicates with an IDE device. The old IDE system must be set up to cope with the enhancements in EIDE (higher performance and increased hard disk capacity) and this is enabled by additional software.

When the host needs data to be either read or written, the operating system first determines where the data is located on the hard drive - the head number, cylinder, and sector identification. The operating system then passes the command and address information to the disk controller, which positions the read/write heads over the right track. As the disk rotates, the appropriate head reads the address of each sector on the track. When the desired sector appears under the read/write head, the necessary data is read into the cache buffer, usually in 4K blocks. Finally, the hard drive interface chip sends the data to the host.
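The head/cylinder/sector co-ordinates described above map onto a single linear sector number via a standard formula; a minimal sketch, with illustrative drive geometry (16 heads, 63 sectors per track):

    def chs_to_lba(cylinder, head, sector, heads_per_cyl=16, sectors_per_track=63):
        """Convert CHS co-ordinates to a logical block address (sectors count from 1)."""
        return (cylinder * heads_per_cyl + head) * sectors_per_track + (sector - 1)

    print(chs_to_lba(0, 0, 1))   # first sector on the disk -> LBA 0
    print(chs_to_lba(1, 0, 1))   # first sector of cylinder 1 -> LBA 1008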

The ability to support non-disk peripherals such as CD-ROM drives and tape drives was made possible by the ATAPI (AT Attachment Packet Interface) specification, defined by Western Digital. The ATAPI extension of the ATA protocol defines a single command set and single register set allowing other devices to share the ATA bus with traditional ATA HDDs. It includes several commands which are specific to CD-ROM devices, including the Read CD command group as well as a CD speed-select command.

In addition to ATAPI, EIDE supports transfer standards developed by the ATA Committee. The Programmed Input/Output (PIO) modes are a range of protocols for a drive and IDE controller to exchange data at different rates which define specifications for the CPU's involvement in data transfer between the hard drive and memory. Many drives also support Direct Memory Access (DMA) operation as an alternative protocol to PIO modes. This is where the drive takes over the bus (bus mastering) and transfers data directly to system memory. This is better for multitasking PCs as the CPU can do other things while data transfer occurs, although it's only in systems using the Triton HX/VX or later chipsets that the CPU can use the memory or ISA buses while the PCI bus is in use. An OS device driver is needed for DMA, and a system's BIOS must also support these specifications to take advantage of them.

The hard drive industry subsequently adopted a number of approaches to enhance performance further. The first was to enlarge drive capacity. This was accomplished by making the tracks on the disk closer together (track density) and the data written on each track more dense (linear density). By making more data available during each rotation internal data transfer rates were effectively increased. There then followed a number of vendor-specific measures to improve data transfer rates further, such as producing higher-rpm drives, or modifying the cache buffer algorithms. The ultimate step was to modify the ATA/IDE protocol itself.

The original ATA specification was for connecting drives to the ISA bus and host transfers were limited to 2-3 MBps. The newer ATA-2 or Fast ATA interface connects to a local bus instead, and the higher bandwidths available on local bus architectures meant massively improved data throughput. Since systems and drive vendors are allowed to label their products as EIDE even when supporting only a subset of its specifications, several vendors use the term Fast ATA (AT Attachment) for their EIDE hard drives that support PIO Mode 3 and Multiword Mode 1 DMA, and Fast ATA-2 for drives that support PIO Mode 4 and Multiword Mode 2 DMA.

Ultra ATA

In the second half of 1997 EIDE's 16.6 MBps limit was doubled to 33 MBps by the new Ultra ATA (also referred to as ATA-33 or Ultra DMA mode 2 protocol). As well as increasing the data transfer rate, Ultra ATA also improved data integrity by using a data transfer error detection code called Cyclical Redundancy Check (CRC).

The original ATA interface is based on transistor-transistor logic (TTL) bus interface technology, which is in turn based on the old industry standard architecture (ISA) bus protocol. This protocol uses an asynchronous data transfer method. Both data and command signals are sent along a signal pulse called a strobe, but the data and command signals are not interconnected. Only one type of signal (data or command) can be sent at a time, meaning a data request must be completed before a command or other type of signal can be sent along the same strobe.

Starting with ATA-2 the more efficient synchronous method of data transfer is used. In synchronous mode, the drive controls the strobe and synchronises the data and command signals with the rising edge of each pulse. Synchronous data transfers interpret the rising edge of the strobe as a signal separator. Each pulse of the strobe can carry a data or command signal, allowing data and commands to be interspersed along the strobe. To get improved performance in this environment, it is logical to increase the strobe rate. A faster strobe means faster data transfer, but as the strobe rate increases, the system becomes increasingly sensitive to electro-magnetic interference (EMI, also known as signal interference or noise) which can cause data corruption and transfer errors. ATA-2 includes PIO mode 4 or DMA Mode 2 which, with the advent of the Intel Triton chipset in 1995, allowed support for a higher data transfer rate of 16.6 MBps.

ATA-3 added the Self-Monitoring Analysis and Reporting Technology (SMART) feature, which resulted in more reliable hard drives.

ATA-4 includes Ultra ATA which, in an effort to avoid EMI, makes the most of existing strobe rates by using both the rising and falling edges of the strobe as signal separators. Thus twice as much data is transferred at the same strobe rate in the same time period. While ATA-2 and ATA-3 transfer data at burst rates up to 16.6 MBps, Ultra ATA provides burst transfer rates up to 33.3 MBps. The ATA-4 specification adds Ultra DMA mode 2 (33.3 MBps) to the previous PIO modes 0-4 and traditional DMA modes 0-2. The Cyclical Redundancy Check (CRC) implemented by Ultra DMA was new to ATA. The CRC value is calculated on a per-burst basis by both the host and the HDD controller, and is stored in their respective CRC registers. At the end of each burst, the host sends the contents of its CRC register to the HDD controller, which compares the host's value against its own. If the HDD controller reports an error to the host, the host retries the command that produced the CRC error.
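The per-burst check works like any CRC: host and drive clock the same data through the same generator polynomial and compare results at the end of the burst. A minimal sketch, assuming the CRC-16 polynomial x^16 + x^12 + x^5 + 1 (0x1021) and seed 0x4ABA, which is my reading of the ATA/ATAPI-4 parameters - treat them as assumptions and check the standard:

    def udma_crc16(data: bytes, seed: int = 0x4ABA) -> int:
        """Bitwise CRC-16 over a data burst; polynomial x^16 + x^12 + x^5 + 1."""
        crc = seed
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = (crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1
                crc &= 0xFFFF
        return crc

    # Host and drive each accumulate a CRC over the burst; the host then
    # sends its value and the drive compares it with its own.
    assert udma_crc16(b"burst data") == udma_crc16(b"burst data")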

ATA-4 also provided for the integration of the AT Attachment Program Interface (ATAPI) standard. Up until this time ATAPI - which provides a common interface for CD-ROM drives, tape backup drives and other removable storage drives - had been a separate standard.

ATA-5 includes Ultra ATA/66 which doubles the Ultra ATA burst transfer rate by reducing setup times and increasing the strobe rate. The faster strobe rate increases EMI, which cannot be eliminated by the standard 40-pin cable used by ATA and Ultra ATA. To eliminate this increase in EMI, a new 40-pin, 80-conductor cable was developed. This cable adds 40 additional ground lines between each of the original 40 ground and signal lines. The additional 40 lines help shield the signal from EMI. The new connector remains plug-compatible with existing 40-pin headers and Ultra ATA/66 hard drives are backward-compatible with Ultra ATA/33 and DMA, and with existing EIDE/IDE hard drives, CD-ROM drives and host systems. The ATA-5 specification introduces a new Cyclic Redundancy Check (CRC) error detection code and adds Ultra DMA modes 3 (44.4 MBps) and 4 (66.6 MBps) to the previous PIO modes 0-4, DMA modes 0-2, and Ultra DMA mode 2.

ATA-6 - also referred to as Ultra DMA mode 5 - soon followed. This increased burst data transfer rates to a maximum of 100 MBps by reducing the signal voltage - and associated timing requirements - from 5V to 3.3V.

Ultra ATA/100 had been expected to be the final generation of Parallel ATA interface before the industry completed its transition to Serial ATA. However, in the event, ATA/133 - also known as UltraDMA 133 - was announced in mid-2001, increasing throughput yet again, this time to 133 MBps.

Serial ATA

In recent years, two alternative serial interface technologies - Universal Serial Bus (USB) and IEEE 1394 - have been proposed as possible replacements for the Parallel ATA interface. However, neither interface has been able to offer the combination of low cost and high performance that has been the key to the success of the traditional Parallel ATA interface. In spite of that success, though, the Parallel ATA interface has a long history of design issues. Most of these have been successfully overcome or worked around, but some have persisted, and in 1999 the Serial ATA Working Group - comprising companies including APT Technologies, Dell, IBM, Intel, Maxtor, Quantum, and Seagate Technologies - was formed to begin work on a Serial Advanced Technology Attachment (ATA) storage interface for hard-disk drives and ATA Packet Interface (ATAPI) devices that is expected to replace the current Parallel ATA interface.

Compared with Parallel ATA, Serial ATA will have lower signalling voltages and reduced pin count, will be faster and more robust, and will have a much smaller cable. It will also be completely software compatible with Parallel ATA and provide backward compatibility for legacy Parallel ATA and ATAPI devices. This will be achieved either using chip sets that support Parallel ATA devices in conjunction with discrete components that support Serial ATA devices, or by the use of serial and parallel dongles, which adapt parallel devices to a serial controller or adapt serial devices to a parallel controller.

Serial ATA's primary benefits over Parallel ATA include:

- Reductions in voltage and pin count: Serial ATA's low-voltage requirement (500 mV peak-to-peak) will effectively alleviate the increasingly difficult-to-accommodate 5-volt signalling requirement that hampers the current Parallel ATA interface.
- Smaller, easier-to-route cables and elimination of the cable-length limitation: The Serial ATA architecture replaces the wide Parallel ATA ribbon cable with a thin, flexible cable that can be up to 1 metre in length. The serial cable is smaller and easier to route inside a PC's chassis and eliminates the need for the large and cumbersome 40-pin connectors required by Parallel ATA. The small-diameter cable also helps improve air flow inside the PC system chassis and will facilitate future designs of smaller PC systems.
- Improved data robustness: Serial ATA will offer more thorough error checking and error correcting capabilities than are currently available with Parallel ATA. The end-to-end integrity of transferred commands and data can be guaranteed across the serial bus.

First-generation Serial ATA began to ship in mid-2002 with support for data transfer rates of up to 150 MBps. Subsequent versions of the specification are expected to increase performance to support data transfer rates of 300 MBps and, later, 600 MBps.
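The 150/300/600 MBps figures follow from the serial line rates once 8b/10b encoding overhead is accounted for - every 8 data bits travel as 10 line bits, so 10 line bits carry one data byte:

    def sata_mbps(line_rate_gbit: float) -> float:
        """Effective data rate after 8b/10b encoding (10 line bits per byte)."""
        return line_rate_gbit * 1e9 / 10 / 1e6

    print(sata_mbps(1.5))   # 150.0 MBps - first-generation Serial ATA
    print(sata_mbps(3.0))   # 300.0 MBps
    print(sata_mbps(6.0))   # 600.0 MBps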

SCSI

As with most specifications in the computer world, the original SCSI (pronounced scuzzy) specification was completed (in 1986) after work had already begun on a better version (SCSI-2). It was developed as a result of attempts by Shugart and NCR to develop a new interface for minicomputers. The basis of the interface was, and still is, the set of commands that control data transfer and communication among devices. The commands were the strength of SCSI, because they made the interface intelligent; but they were also its initial weakness, as there wasn't enough of a standard for the command set to be truly useful to device manufacturers. Consequently, in the mid-1980s, the Common Command Set (CCS) extension was developed to standardise SCSI commands.

SCSI, like EIDE, is a bus which controls the flow of data (I/O) between the computer's processor and its peripherals, the most common of which is the hard drive. Unlike EIDE, SCSI requires an interface to connect it to a PC's PCI or ISA bus. This isn't a controller: it's correctly called a "host adapter". The actual controllers are built into each SCSI device, which is "chained" to the SCSI bus via the host adapter.

SCSI's most obvious strength is the number of devices it can control. Whereas IDE interfaces are restricted to two disk drives, and today's EIDE interfaces to four devices, which can include hard disks and CD-ROM drives, a SCSI controller can handle up to eight devices (including the host adapter card, which counts as a device). Furthermore, the device can vary from hard disks and CD-ROM drives, to CD-Rs, optical drives, printers, scanners, media changers, network cards and much more.

Each device on the chain, including the host, must be identified by a unique ID number. One SCSI device must not use the same ID number as another, but they may be numbered non-sequentially. Most SCSI host adapters feature external and internal connectors, with the option for the chain to extend in either or both directions. There's no relationship between the IDs and the physical position on the bus, but both ends must be electrically "terminated" with resistors to prevent signal reflections and guarantee data integrity over long cable lengths. Termination comes in several varieties, from physical jumpers or plugs to software configurations.

Vanilla SCSI supports up to eight devices, using ID numbers 0 to 7. The controlling host adapter traditionally occupies ID 7 and boots the operating system from the device with the lowest ID number. Most SCSI systems set the boot hard drive at ID 0, leaving IDs 1 to 6 free for other non-booting devices. When a SCSI system starts up, all the devices on the bus are listed along with their ID number.

The SCSI host adapter takes up a hardware interrupt request line (IRQ), but the devices attached to the card don't, which significantly increases expandability. In fact, it's possible to add a second SCSI card for seven additional devices. Better still, a "twin-channel" SCSI card takes up only one IRQ and handles up to 15 peripheral devices.

SCSI evolution

SCSI-1, the original 1986 standard, is now obsolete. It used asynchronous transfer, where the host and the device, blind to the other's maximum potential, slowly exchanged 8 bits at a time, offering a bandwidth of 3 MBps. SCSI-1 allowed up to eight devices - the host adapter and up to seven hard disks.

With synchronous transfer, the host and the device together determine the highest rate of transfer they can sustain and stick to it. Work started on SCSI-2 in 1986, the standard finally being approved by the American National Standards Institute (ANSI) in 1994. SCSI-2 featured synchronous transfer, raising the bandwidth to 5 MBps and added specifications for attaching devices other than hard disks, moving it into its role as a multiple-device interface.

SCSI-2 also added two optional speed improvements: doubling the signalling rate to 10MHz (Fast SCSI), and adding a second "P" cable to the SCSI bus, allowing 16-bit or 32-bit data transfers (Wide SCSI). These two options can be used separately or combined in Fast Wide SCSI, capable of a sustained data transfer rate of 20 MBps. Wide SCSI adapters may support up to 16 devices on a single chain, with IDs 0 to 15.

After SCSI-2 things get a little confusing. The SCSI-3 specification, drafted in 1996, splits SCSI into a number of specifications, including:

- the SCSI Parallel Interface (SPI), which defines the specification governing the workings of SCSI cables, and
- the SCSI Interlock Protocol (SIP), which sets out the commands for all SCSI devices.

each document having its own revision level.

Importantly, SCSI-3 eliminates the need for a second cable for Fast SCSI or Wide SCSI and adds support for fibre-optic cable. Another major addition is SCAM (SCSI Configuration Auto-Magically), which addresses one of the common complaints about SCSI - that it was difficult to install and configure. A subset of Plug and Play, SCAM allows for self-configuring SCSI devices that select their own ID number, rather than the manual assignment of IDs in SCSI-1 and 2. It also allows autotermination.

UltraSCSI (also known as Fast-20) is an extension of SCSI-2 that doubles the signalling rate of the SPI specification to 20MHz, at the cost of shortening the length of the SCSI bus to 1.5m. In 1998 SPI-2 doubled the speed again to Fast-40, commonly known as Ultra2 SCSI. By running the bus at 40MHz the 16-bit Wide implementation achieves a theoretical maximum bandwidth of 80 MBps.

The manner in which data is transmitted across a SCSI bus is defined by the method of signalling used. There are three types of SCSI signalling that can be used: High-voltage differential (HVD), Low-voltage differential (LVD) and Single Ended (SE). HVD and SE have been around since the early SCSI standards, the former's popularity being largely because of the longer cable lengths it allows. LVD was introduced with the Ultra2 SCSI implementation and, in many ways, represents a compromise between its two predecessors. Using 3 volt instead of the standard 5 volt logic, it has all the advantages of 5V High Voltage Differential, but without the need for expensive transceivers. As well as being much less susceptible to noise interference, LVD allows cable lengths of up to 12m, even when the full 16 devices are attached.

LVD's lower voltage also confers other advantages. The lower voltage and lower current requirements of LVD SCSI drivers means lower heat dissipation. That in turn means that the differential drivers can be included on the LVD SCSI interface ASIC, resulting in an interface with a smaller parts count, lower parts cost, a requirement for less real estate on the PCB and increased reliability.

Announced in late 1999, SPI-3 doubled the speed again to Fast-80. Commonly known as Ultra160 SCSI, this raised throughput to 160 MBps on a wide bus and offered three main improvements over Ultra2 in terms of the technology:

- cyclic redundancy checking (CRC), which checks all transferred data, adding significantly to data integrity
- domain validation, which intelligently verifies system configuration for improved reliability, and
- double transition clocking, which is the main reason for the improved bandwidth.

2001 saw the announcement of Ultra320 SCSI, which built on the improvements realised by Ultra160 SCSI, adding features such as Packet Protocol and Quick Arbitration Select to further improve SCSI performance to 320 MBps.
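Each generation's throughput follows from its signalling rate, bus width and clocking scheme; a quick sketch of the progression (double transition clocking moves two transfers per clock):

    def scsi_mbps(clock_mhz: int, width_bytes: int, transfers_per_clock: int = 1) -> int:
        """Bandwidth = signalling rate x bus width x transfers per clock."""
        return clock_mhz * width_bytes * transfers_per_clock

    print(scsi_mbps(10, 1))     # Fast SCSI: 10 MBps
    print(scsi_mbps(10, 2))     # Fast Wide SCSI: 20 MBps
    print(scsi_mbps(40, 2))     # Ultra2 SCSI wide (Fast-40): 80 MBps
    print(scsi_mbps(40, 2, 2))  # Ultra160 (double transition clocking): 160 MBps
    print(scsi_mbps(80, 2, 2))  # Ultra320: 320 MBps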

SCSI is entirely backward compatible, with ancient SCSI-1 devices operating on the latest host adapters. Of course, to exploit the potential of faster, more recent SCSI devices, a matching host adapter is required. Similarly, the fastest host won't speed up an old, slow SCSI device.

SCSI has become the accepted standard for server-based mass storage and the Ultra2 LVD implementation is often seen teamed up with Redundant Array of Independent Disks (RAID) arrays to provide both high speed and high availability. However, its dominance of server storage is coming under increasing pressure from the Fibre Channel standard.

Fibre Channel

The committee charged with developing Fibre Channel technology was established within the American National Standards Institute in 1989. Two years later IBM, Hewlett-Packard Co. and Sun Microsystems Inc. joined forces to create the Fibre Channel Systems Initiative (FCSI), with the objectives of ensuring interoperability between products and kick-starting the Fibre Channel market. In 1994 Fibre Channel was accepted as an ANSI standard and a year later the duties of the FCSI were handed over to the larger Fibre Channel Association.

Fibre Channel has revolutionised the way network storage is organised. When first introduced, it operated at speeds no faster than SCSI-3, which meant that its real value in Storage Area Networks (SAN) was the distance benefit, not the speed. Indeed, Fibre Channel's 10,000 metre limit can be extended to 100km using special optic transceivers, giving it a far greater range than SCSI. However, times have changed, and when the 2Gbit/sec version of Fibre Channel was released in 2000, the technology outstripped SCSI both in terms of range and performance.

Fibre Channel can be implemented in the form of a continuous arbitrated loop (FC-AL) that can have hundreds of separate storage devices and host systems attached, with connection via a high-speed switching fabric (much like a network switch) as another option. All this makes it a very flexible and fault-tolerant technology and, by attaching disk arrays and backup devices directly to the loop rather than onto any one server, the technology can be used to construct an independent SAN. That, in turn, allows data to be carried to and from servers and backed up with little or no impact on ordinary network traffic - of real advantage when it comes to data warehousing and other data-intensive client/server applications.

The benefits of SANs are directly related to the increased accessibility and manageability of data offered by the Fibre Channel architecture. Data becomes more accessible when the Fibre Channel fabric scales to encompass hundreds of storage devices and servers. The data is also more available when multiple concurrent transactions can be sent across Fibre Channel's switched architecture. Fibre Channel also overcomes distance limitations when Fibre Channel links span hundreds of kilometres or are sent over a WAN.

Fibre Channel hardware interconnects storage devices with servers to form the Fibre Channel fabric. The fabric consists of the physical layer, interconnect devices and translation devices. The physical layer consists of copper and fibre-optic cables that carry Fibre Channel signals between transceiver pairs. Interconnect devices, such as hubs and switches route Fibre Channel frames at gigabit rates. Translation devices - such as host bus adapters, routers, adapters, gateways and bridges - are the intermediaries between Fibre Channel protocols and upper layer protocols such as SCSI, Ethernet and ATM.

With work on a 10 Gbit/sec specification underway, Fibre Channel is expected to continue to expand into the storage markets, which will make use of its benefits over traditional channel technologies such as SCSI. Its combination of performance and range is important to a number of applications, such as multimedia, medical imaging and scientific visualisation. Because of the distances it can cover and the fact that storage devices can be placed remotely, Fibre Channel has significant advantages in disaster recovery situations.

SSA

Today's huge databases and data intensive applications demand incredible amounts of storage, and transferring massive blocks of information requires technology that is robust, reliable and scaleable. Serial Storage Architecture (SSA) is an IBM-developed interface for connecting storage devices, storage subsystems, servers and workstations in mission-critical PC server applications. However, by the start of 1999 it had failed to win major support, and appeared likely to lose out to the rival Fibre Channel standard.

SSA provides data protection for critical applications by helping to ensure that a single cable failure will not prevent access to data. All the components in a typical SSA subsystem are connected by bi-directional cabling. Data sent from the adapter can travel in either direction around the loop to its destination. SSA detects interruptions in the loop and automatically reconfigures the system to help maintain connection while a link is restored.

Up to 192 hot-swappable hard disk drives can be supported per system. Drives are available in 2.25 and 4.51GB capacities, and particular drives can be designated for use by an array in the event of hardware failure. Up to 32 separate RAID arrays can be supported per adapter, and arrays can be mirrored across servers to provide cost-effective protection for critical applications. Furthermore, arrays can be sited up to 25 metres apart - connected by thin, low-cost copper cables - allowing subsystems to be located in secure, convenient locations, far from the server itself.

With its inherent resiliency and ease of use, SSA is being increasingly deployed in server/RAID environments, where it is capable of providing for up to 80 MBps of data throughput, with sustained data rates as high as 60 MBps in non-RAID mode and 35 MBps in RAID mode.

INPUT/OUTPUT STANDARDS

Nearly two decades on, many peripheral devices are still connected to the same serial ports and parallel ports that were present on the very first commercial PCs, and with the exception of the Plug-and-Play standards created as part of Windows 95, the PC's "I/O technology" has changed very little since its invention in 1981. Whilst they may have been adequate for the throughputs required by the peripherals of the day, by the late 1990s the PC's serial and parallel ports fell short of users' needs in a number of important areas:

- Throughput: Serial ports max out at 115.2 Kbit/s, parallel ports (depending on type) at around 500 Kbit/s, but devices such as digital video cameras require vastly more bandwidth
- Ease of use: Connecting devices to legacy ports can be fiddly and messy, especially daisy-chaining parallel port devices through pass-through ports. And the ports are always inconveniently located at the rear of the PC
- Hardware resources: Each port requires its own interrupt request line (IRQ). A PC has a total of 16 IRQ lines, most of which are already spoken for. Some PCs have as few as five free IRQs before peripherals are installed
- Limited number of ports: Most PCs have a pair of COM ports and one parallel port. More COM ports and parallel ports can be added, but at the cost of precious IRQs.

In recent years the field of input/output technology has become one of the most exciting and dynamic areas of innovation in desktop computing, and two emerging serial data standards promise to revolutionise the way that peripheral devices are connected, taking the concept of Plug-and-Play to new heights.

They also promise to eliminate much of the fuss and bother involved in connecting devices to computers, including all the spare parts and tangled wires that were so common in the PCs of the past. With these new standards, it will be possible for any user to connect a nearly limitless set of devices to the computer in just a few seconds without the requirement of technical knowledge.

USB

Developed jointly by Compaq, Digital, IBM, Intel, Microsoft, NEC and Northern Telecom, the Universal Serial Bus (USB) standard offers a new standardised connector for attaching all the common I/O devices to a single port, simplifying today's multiplicity of ports and connectors. Significant impetus behind the USB standard was created in September of 1995 with the announcement of a broad industry initiative to create an open host controller interface (HCI) standard for USB. Backed by 25 companies, the aim of this initiative was to make it easier for companies - including PC manufacturers, component vendors and peripheral suppliers - to more quickly develop USB-compliant products. Key to this was the definition of a non-proprietary host interface - left undefined by the USB specification itself - which enabled connection to the USB bus. The first USB specification was published a year later, with version 1.1 being released in the autumn of 1998.

Up to 127 devices can be connected, by daisy-chaining or by using a USB hub which itself has a number of USB sockets and plugs into a PC or other device. Seven peripherals can be attached to each USB hub device. This can include a second hub to which up to another seven peripherals can be connected, and so on. Along with the signal USB carries a 5v power supply so small devices, such as hand held scanners or speakers, do not have to have their own power cable.

Devices are plugged directly into a four-pin socket on the PC or hub using a rectangular Type A socket. All cables that are permanently attached to the device have a Type A plug. Devices that use a separate cable have a square Type B socket, and the cable that connects them has a Type A and Type B plug.

USB 1.1 overcame the speed limitations of UART-based serial ports, running at 12 Mbit/s - at the time, on a par with networking technologies such as Ethernet and Token Ring - and provided more than enough bandwidth for the type of peripheral device it was designed to handle. For example, the bandwidth was capable of supporting devices such as external CD-ROM drives and tape units as well as ISDN and PABX interfaces. It was also sufficient to carry digital audio directly to loudspeakers equipped with digital-to-analogue converters, eliminating the need for a soundcard. However, USB wasn't intended to replace networks. To keep costs down its range is limited to 5 metres between devices. A lower communication rate of 1.5 Mbit/s can be set up for lower-bit-rate devices like keyboards and mice, saving bandwidth for the devices that really need it.

USB was designed to be user-friendly and is truly plug-and-play. It eliminates the need to install expansion cards inside the PC and then reconfigure the system. Instead, the bus allows peripherals to be attached, configured, used, and detached while the host and other peripherals are in operation. There's no need to install drivers, figure out which serial or parallel port to choose or worry about IRQ settings, DMA channels and I/O addresses. USB achieves this by managing connected peripherals in a host controller mounted on the PC's motherboard or on a PCI add-in card. The host controller and subsidiary controllers in hubs manage USB peripherals, helping to reduce the load on the PC's CPU time and improving overall system performance. In turn, USB system software installed in the operating system manages the host controller.

Data on the USB flows through a bi-directional pipe regulated by the host controller and by subsidiary hub controllers. An improved version of bus mastering allows portions of the total bus bandwidth to be permanently reserved for specific peripherals, a technique called isochronous data transfer. The USB interface contains two main modules: the Serial Interface Engine (SIE), responsible for the bus protocol, and the Root Hub, used to expand the number of USB ports.
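On a modern system the host controller's view of the bus can be inspected from software. A minimal sketch using the third-party pyusb library (an assumption - it is not part of the USB specification, and needs "pip install pyusb" plus a libusb backend):

    import usb.core  # third-party pyusb package

    # Enumerate every device the host controller has configured and print
    # its bus address and vendor/product IDs.
    for dev in usb.core.find(find_all=True):
        print(f"Bus {dev.bus:03d} Device {dev.address:03d}: "
              f"ID {dev.idVendor:04x}:{dev.idProduct:04x}")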

The USB bus distributes 0.5 amps (500 milliamps) of power through each port. Thus, low-power devices that might normally require a separate AC adapter can be powered through the cable - USB lets the PC automatically sense the power that's required and deliver it to the device. Hubs may derive all power from the USB bus (bus powered), or they may be powered from their own AC adapter. Powered hubs with at least 0.5 amps per port provide the most flexibility for future downstream devices. Port switching hubs isolate all ports from each other so that one shorted device will not bring down the others.

The promise of USB was a PC with a single USB port onto which would be connected one large, powered device - like a monitor or a printer - which would act as a hub, linking up all the other smaller devices such as mouse, keyboard, modem, document scanner, digital camera and so on. Since many USB device drivers did not become available until after its release, this promise was never going to be realised before the availability of Windows 98. However, even post-Windows 98 its take-up was initially disappointing.

There were a number of reasons for this. Some had complained that the USB architecture was too complex and that a consequence of having to support so many different types of peripheral was an unwieldy protocol stack. Others argued that the hub concept merely shifts expense and complexity from the system unit to the keyboard or monitor. However, probably the biggest impediment to USB's acceptance was the IEEE 1394 FireWire standard.

Developed by Apple Computer, Texas Instruments and Sony and backed by Microsoft and SCSI specialist Adaptec, amongst others, IEEE 1394 was another high-speed peripheral bus standard. It was supposed to be complementary to USB, rather than an alternative, since it's possible for the two buses to coexist in a single system, in a manner similar to today's parallel and serial ports. However, the fact that digital cameras were far more likely to sport an IEEE 1394 socket than a USB port gave other peripheral manufacturers pause for thought.

IEEE 1394

Also widely referred to as FireWire, IEEE 1394 was approved by the Institute of Electrical and Electronics Engineers (IEEE) in 1995. Originally conceived by Apple - who currently receives a $1 royalty per port - the standard has since been backed by several leading IT companies, including Microsoft, Philips, National Semiconductor and Texas Instruments, who have joined the 1394 Trade Association.

IEEE 1394 is similar to the first version of USB in many ways, but much faster. Both are hot-swappable serial interfaces, but IEEE 1394 provides high-bandwidth, high-speed data transfers significantly in excess of what USB offers. There are two levels of interface in IEEE 1394, one for the backplane bus within the computer and another for the point-to-point interface between device and computer on the serial cable. A simple bridge connects the two environments. The backplane bus supports data-transfer speeds of 12.5, 25, or 50 Mbit/s, the cable interface speeds of 100, 200 and 400 Mbit/s - roughly four times as fast as a 100BaseT Ethernet connection and far faster than USB's 1.5 Mbit/s or 12 Mbit/s speeds. A 1394b specification aims to adopt a different coding and data-transfer scheme that will scale to 800 Mbit/s, 1.6 Gbit/s and beyond. Its high-speed capability makes IEEE 1394 viable for connecting digital cameras, camcorders, printers, TVs, network cards and mass storage devices to a PC.

IEEE 1394 cable connectors are constructed with the electrical contacts inside the structure of the connector thus preventing any shock to the user or contamination to the contacts by the user's hands. These connectors are derived from the Nintendo GameBoy connector. Field tested by children of all ages, this small and flexible connector is very durable. These connectors are easy to use even when the user must blindly insert them into the back of machines. There are no terminators required, or manual IDs to be set.

IEEE 1394 uses a six-conductor cable (up to 4.5 metres long) which contains two pairs of wires for data transport, and one pair for device power. The design resembles a standard 10BaseT Ethernet cable. Each signal pair is shielded and the entire cable is shielded. Cable power is specified to be from 8Vdc to 40Vdc at up to 1.5 amps and is used to maintain a device's physical layer continuity when the device is powered down or malfunctioning - a unique and very important feature for a serial topology - and provide power for devices connected to the bus. As the standard evolves, new cable designs are expected to allow longer distances without repeaters and with more bandwidth.

At the heart of any IEEE 1394 connection is a physical layer and a link layer semiconductor chip, and IEEE 1394 needs two chips per device. The physical interface (PHY) is a mixed signal device that connects to the other device's PHY. It includes the logic needed to perform arbitration and bus initialisation functions. The Link interface connects the PHY and the device internals. It transmits and receives 1394-formatted data packets and supports asynchronous or isochronous data transfers. Providing both asynchronous and isochronous formats on the same interface allows both non-real-time critical applications, such as printers and scanners, and real-time critical applications, such as video and audio, to operate on the same bus. All PHY chips use the same technology, whereas the Link is device-specific. This approach allows IEEE 1394 to act as a peer-to-peer system as opposed to USB's client-server design. As a consequence, an IEEE 1394 system needs neither a serving host, nor a PC.

Asynchronous transport is the traditional method of transmitting data between computers and peripherals, data being sent in one direction followed by acknowledgement to the requester. Asynchronous data transfers place emphasis on delivery rather than timing. The data transmission is guaranteed, and retries are supported. Isochronous data transfer ensures that data flows at a pre-set rate so that an application can handle it in a timed way. This is especially important for time-critical multimedia data where just-in-time delivery eliminates the need for costly buffering. Isochronous data transfers operate in a broadcast manner, where one or many 1394 devices can "listen" to the data being transmitted. Multiple channels (up to 63) of isochronous data can be transferred simultaneously on the 1394 bus. Since isochronous transfers can only take up a maximum of 80 percent of the 1394 bus bandwidth, there is enough bandwidth left over for additional asynchronous transfers.
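The 80 percent ceiling translates into concrete numbers; for a 400 Mbit/s bus:

    bus_mbit = 400
    iso_max = bus_mbit * 0.80        # up to 320 Mbit/s reservable for isochronous
    async_min = bus_mbit - iso_max   # at least 80 Mbit/s left for asynchronous
    print(iso_max / 8, "MBps isochronous,", async_min / 8, "MBps asynchronous")
    # 40.0 MBps isochronous, 10.0 MBps asynchronous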

IEEE 1394's scaleable architecture and flexible peer-to-peer topology make it ideal for connecting high-speed devices: everything from computers and hard drives, to digital audio and video hardware. Devices can be connected in either a daisy-chain or tree topology. As an example configuration, consider two separate work areas connected with a 1394 bridge. Work area #1 comprises a video camera, PC, and video recorder, all interconnected via IEEE 1394. The PC is also connected to a physically distant printer via a 1394 repeater, which extends the inter-device distance by redriving the 1394 signals. Up to sixteen hops may be made between any two devices on a 1394 bus. A 1394 splitter is used between the bridge and the printer to provide another port to attach a 1394 bus bridge. Splitters provide more topology flexibility for users.

Work area #2 contains only a PC and printer on a 1394 bus segment, plus a connection to the bus bridge. The 1394 bus bridge isolates data traffic within each work area. IEEE 1394 bus bridges allow selected data to be passed from one bus segment to another. Therefore PC #2 can request image data from the video recorder in work area #1. Since the 1394 cable is powered, the PHY signalling interface is always powered, and video data is transported even if PC #1 is powered off.

Each IEEE 1394 bus segment may have up to 63 devices attached to it. Currently each device may be up to 4.5 metres apart; longer distances are possible with and without repeater hardware. Improvements to the current cabling are being specified to allow longer distance cables. Over 1000 bus segments may be connected by bridges thus providing a large growth potential. An additional feature is the ability of transactions at different speeds to occur on a single device medium. For example, some devices can communicate at 100 Mbit/s while others communicate at 200 Mbit/s and 400 Mbit/s. IEEE 1394 devices may be hot-plugged - added to or removed from the bus - even with the bus in full operation. Upon altering the bus configuration, topology changes are automatically recognised. This "plug and play" feature eliminates the need for address switches or other user intervention to reconfigure the bus.

As a transaction-based packet technology, 1394 can be organised as if it were memory space interconnected between devices, or as if devices resided in slots on the main backplane. Device addressing is 64 bits wide, partitioned as 10 bits for network IDs, 6 bits for node IDs and 48 bits for memory addresses. The result is the capability to address 1023 networks of 63 nodes, each with 281 TB of memory. Memory-based addressing, rather than channel addressing, views resources as registers or memory that can be accessed with processor-to-memory transactions. Fundamentally, all this means easy networking - for example, a digital camera can easily send pictures directly to a digital printer without a computer in the middle - and with IEEE 1394 it is easy to see how the PC could lose its position of dominance in the interconnectivity environment and be relegated to being no more than a very intelligent peer.
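The partitioning is easy to verify with a little bit-twiddling. The following Python sketch packs and unpacks a 64-bit address according to the field widths given above; the helper functions are purely illustrative, not part of any real 1394 stack.

    BUS_BITS, NODE_BITS, OFFSET_BITS = 10, 6, 48   # 64 bits in total

    def pack_address(bus_id, node_id, offset):
        """Assemble a 64-bit 1394 address from its three fields."""
        assert bus_id < 2**BUS_BITS and node_id < 2**NODE_BITS and offset < 2**OFFSET_BITS
        return (bus_id << (NODE_BITS + OFFSET_BITS)) | (node_id << OFFSET_BITS) | offset

    def unpack_address(addr):
        """Split a 64-bit 1394 address back into (bus_id, node_id, offset)."""
        offset  = addr & (2**OFFSET_BITS - 1)
        node_id = (addr >> OFFSET_BITS) & (2**NODE_BITS - 1)
        bus_id  = addr >> (NODE_BITS + OFFSET_BITS)
        return bus_id, node_id, offset

    addr = pack_address(bus_id=1, node_id=5, offset=0x1000)
    assert unpack_address(addr) == (1, 5, 0x1000)

    # The 48-bit offset is where the 281 TB per node figure comes from:
    print(f"{2**48 / 10**12:.0f} TB addressable per node")   # -> 281 TB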

The need for two pieces of silicon instead of one will make IEEE 1394 peripherals more expensive than, say, SCSI, IDE or USB devices. Consequently it is inappropriate for low-speed peripherals. However, its applicability to higher-end applications, such as digital video editing, is obvious, and it's clear that the standard is destined to become a mainstream consumer electronics interface - used for connecting handy-cams and VCRs, set-top boxes and televisions. To date, however, its implementation has been largely confined to digital camcorders, where it is known as i.LINK.

In 1997, Compaq, Intel and Microsoft proposed an industry standard called Device Bay. By combining the fast interface of IEEE 1394 with the USB interface, Device Bay offered a bay slot into which peripherals such as hard disks or DVD-ROM players could be slid. The following year, however, proved somewhat troubled for IEEE 1394, with Apple's announcement of what many believed to be exorbitant royalty claims for use of the technology deterring many semiconductor companies who had hitherto embraced the standard. Notwithstanding these issues - and largely because of its support for isochronous data transfer - by the start of the new millennium FireWire had established itself as the favoured technology in the area of video capture.

Indeed, its use as a hard disk interface offers a number of advantages over SCSI. Whilst its maximum data transfer speed of 400 Mbit/s (equivalent to 50 MBps) isn't as fast as the Ultra160 SCSI standard, FireWire beats SCSI hands down when it comes to ease of installation. Where SCSI devices require a pre-assigned ID and both ends of the bus to be terminated, IEEE 1394 assigns addresses dynamically, on the fly, and does not require terminators. Like USB, FireWire devices are also hot-swappable, without the need to power down the PC during installation. Combined with the absence of traditional stumbling blocks such as IRQ or DMA assignment, these characteristics make IEEE 1394 perfect for trouble-free plug and play installations.

Despite all this, and the prospect of a number of motherboard manufacturers producing boards with built-in IEEE 1394 controllers in the second half of 2000, FireWire's future success was far from assured - the announcement of the proposed USB 2.0 specification at the Intel Developer Forum (IDF) of February 1999 serving to complicate the picture significantly.

USB 2.0

While USB was originally designed to replace legacy serial and parallel connections, and notwithstanding claims that the two were complementary technologies, there can be little doubt that the USB 2.0 specification was designed to compete with FireWire. Compaq, Hewlett-Packard, Intel, Lucent, Microsoft, NEC and Philips jointly led the development, with the aim of dramatically extending performance to the levels necessary to support future classes of high-performance peripherals.

At the time of the February 1999 Intel Developer Forum (IDF) the projected performance hike was of the order of 10 to 20 times existing USB 1.1 capabilities. However, by the end of the year the results of engineering studies and test silicon indicated that this estimate was overly conservative, and by the time USB 2.0 was released in the spring of 2000, its specified performance was a staggering 40 times that of its predecessor.

USB 2.0 in fact defines three levels of performance, with "Hi-Speed USB" referring to just the 480 Mbit/s portion of the specification and the term "USB" being used to refer to the 12 Mbit/s and 1.5 Mbit/s speeds. At 480 Mbit/s, any danger that USB would be marginalised by the rival IEEE 1394 bus appears to have been banished forever. Indeed, proponents of USB continue to maintain that the two standards address differing requirements, the aim of USB 2.0 being to provide support for the full range of PC peripherals - current and future - while IEEE 1394 specifically targets connection to audio-visual consumer electronic devices such as digital camcorders, digital VCRs, DVD players and digital televisions.
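The three tiers, and how the headline 40x figure falls out of them, can be summarised in a few lines of Python. The tier names follow the specification's usage; the helper function is purely illustrative.

    # The three signalling rates defined by USB 2.0, in Mbit/s.
    USB2_TIERS = {
        "low-speed":  1.5,   # mice, keyboards, game pads
        "full-speed": 12,    # the original USB 1.1 rate
        "hi-speed":   480,   # the new "Hi-Speed USB" tier
    }

    def speedup_over_usb11(tier):
        """How much faster a given tier is than USB 1.1's 12 Mbit/s rate."""
        return USB2_TIERS[tier] / USB2_TIERS["full-speed"]

    print(f"Hi-Speed is {speedup_over_usb11('hi-speed'):.0f}x USB 1.1")   # -> 40x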

While USB 1.1's data rate of 12 Mbit/s was sufficient for many PC peripherals, especially input devices, the higher bandwidth of USB 2.0 is a major boost for external peripherals such as CD/DVD burners, scanners and hard drives, as well as for higher-functionality peripherals of the future, such as high-resolution video conferencing cameras. As well as broadening the range of peripherals that may be attached to a PC, USB 2.0's increased bandwidth will also effectively increase the number of devices that can be handled concurrently, up to its architectural limit.
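For a rough sense of what that bandwidth means in practice, the sketch below compares the nominal transfer time for a 650 MB CD image at the two signalling rates. Real-world throughput is well below the signalling rate, so these figures are best-case illustrations only.

    IMAGE_MB = 650   # a standard CD-R image, in megabytes

    def nominal_seconds(rate_mbit_per_s):
        """Best-case transfer time, ignoring protocol overhead entirely."""
        return (IMAGE_MB * 8) / rate_mbit_per_s

    print(f"USB 1.1 (12 Mbit/s) : {nominal_seconds(12) / 60:.1f} minutes")   # ~7.2 minutes
    print(f"USB 2.0 (480 Mbit/s): {nominal_seconds(480):.1f} seconds")       # ~10.8 seconds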

USB 2.0 is fully backwards compatible - something that could prove a key benefit in the battle with IEEE 1394 to be the consumer interface of the future, given USB's already wide installed base. Existing USB peripherals will operate with no change in a USB 2.0 system. Devices such as mice, keyboards and game pads will not require the additional performance that USB 2.0 offers and will operate as USB 1.1 devices. Conversely, a Hi-Speed USB 2.0 peripheral plugged into a USB 1.1 system will perform at USB 1.1 speeds.
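In effect, a link always runs at the fastest rate both ends support, as the short sketch below illustrates. The helper is hypothetical and not part of any real USB stack.

    RATES = {"usb1.1": 12, "usb2.0": 480}   # Mbit/s

    def link_rate(host, device):
        """The link falls back to the slower end's rate."""
        return min(RATES[host], RATES[device])

    print(link_rate("usb1.1", "usb2.0"))   # 12  - Hi-Speed peripheral, old host
    print(link_rate("usb2.0", "usb1.1"))   # 12  - old peripheral, new host
    print(link_rate("usb2.0", "usb2.0"))   # 480 - both Hi-Speed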

While Windows XP did not support USB 2.0 at the time of its release in 2001 - Microsoft citing the absence of production-quality host controllers and USB 2.0 devices as the reason - support had been made available to OEMs and system builders by early the following year, and more widely via Windows Update and Windows XP SP1 later in 2002.