Introduction |
The three-dimensional, life-like images that awe the public, and which improve at a furious rate in game titles such as Half-Life and Quake or Pixar marvels such as Toy Story, are all made possible by mathematically intense operations and engineering ingenuity (trickery, in some cases). 2D graphics consider only the x and y axes, which are simple calculations for your processor. For the most part these are pre-rendered sprites (not rendered in real time) that present the same image when viewed from different angles. You have been exposed to these bland depictions through popular titles such as Age of Empires and the venerable Civilization series. 3D graphics, however, introduces a new variable: the z axis. With it comes depth, which creates the need to constantly recalculate an object's vertices as the object or the viewer moves. Of course, simply connecting the vertices would produce a bland, skeletal wireframe, which implies the need to apply textures. Depth also introduces the possibility of lighting and shading, itself a processor-intense operation. Three dimensions have long been possible on high-end workstations and supercomputers, but it wasn't until about early '97 that they came to the desktop, thanks to higher processor speeds and the entry of video accelerators from companies such as 3dfx (sadly no longer with us, having sold its assets to NVIDIA) that made those wireframes visually appealing. This section explains the process and the related terminology that bring a seemingly lackluster group of pixels to life.
Pipeline |
A three-dimensional object starts off as a conglomeration of vertices mapped into polygons (for the most part, triangles), giving us a wireframe. Geometry transformation, the intense number crunching performed whenever a 3D object moves on screen, is the basic math of 3D graphics: each object requires that the vertices of its polygons be continually recalculated. The coordinates of those vertices are single-precision floating-point values handled by the host CPU. In layman's terms, this means a great deal of processing muscle is required. Why? For the most part, integers (1, 2, 3) suffice for sprites and 2D objects; floating-point numbers (1.2345), however, are required for the accuracy 3D demands. Which is easier to calculate: 1+2 or 1.23+4.56? The vertex and triangle data is then fed to the graphics board, which applies visual effects such as texture mapping and anti-aliasing, bringing the object to life.
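The per-vertex recalculation described above can be sketched in a few lines. The minimal example below (function names are my own, purely illustrative) rotates every vertex of a triangle about the z axis, the kind of floating-point transform the CPU repeats for every polygon, every frame:

```python
import math

def rotate_z(vertex, angle_rad):
    """Rotate a 3D vertex (x, y, z) about the z axis by angle_rad radians."""
    x, y, z = vertex
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    # Standard 2D rotation applied to x and y; z (depth) is unchanged.
    return (x * c - y * s, x * s + y * c, z)

def transform_wireframe(vertices, angle_rad):
    """Recalculate every vertex of a wireframe for its new orientation."""
    return [rotate_z(v, angle_rad) for v in vertices]

# A single triangle -- the basic polygon of most 3D pipelines.
triangle = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
rotated = transform_wireframe(triangle, math.pi / 2)
```

Note that every result is a fractional value, which is exactly why this work lands on the floating-point unit rather than the integer pipeline.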
Visual Effects |
After the CPU relinquishes the vertex and triangle data, along with lighting information, the 3D hardware accelerator consumes it; this is where much of the awe originates.
Texture Mapping
In texture mapping, a 2D bitmapped image stored in memory is applied to a 3D object to make its surface realistic (a wood grain applied to a door, for example). Higher-resolution images naturally supply better refinement. Today's games support textures up to 512x512 (Quake II uses 64x64 and Unreal 256x256), trading processing resources for the resulting appeal.
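At its simplest, texture mapping is a lookup: a (u, v) coordinate on the polygon's surface picks a texel out of the stored bitmap. A minimal sketch (the 2x2 "wood" bitmap is a made-up stand-in):

```python
def sample_texture(texture, u, v):
    """Nearest-neighbor lookup: map (u, v) in [0, 1) to a texel."""
    height = len(texture)
    width = len(texture[0])
    tx = min(int(u * width), width - 1)
    ty = min(int(v * height), height - 1)
    return texture[ty][tx]

# A 2x2 "wood grain" stand-in: each entry is an (r, g, b) color.
wood = [[(120, 80, 40), (140, 95, 50)],
        [(110, 70, 35), (130, 90, 45)]]
```

A larger bitmap simply gives this lookup more texels to choose from, which is why higher-resolution textures look more refined.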
Multi-texturing
Multi-texturing is the ability to layer more than one texture on a polygon in order to simulate spectacular visual effects. It’s imperative that these multiple textures be rendered in a single processing cycle. To do so, an accelerator needs multiple processing pipelines, one to digest each texture layer. Voodoo2 and its successors can render multiple textures in a single rendering cycle. Other video cards either cannot multitexture at all (the i740, Riva 128, and Permedia 2, for example) or take a performance hit from the repeated rendering cycles (Matrox’s MGA-G200 and S3's Savage3D).
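One common way two texture layers are combined is modulation, a per-channel multiply (a base texture darkened by a light map, for instance). A sketch, assuming 8-bit RGB tuples:

```python
def modulate(base, detail):
    """Combine two texture layers by modulation (per-channel multiply):
    each channel of the base texel is scaled by the detail texel,
    where 255 means 'leave unchanged' and 0 means 'black out'."""
    return tuple((b * d) // 255 for b, d in zip(base, detail))
```

Hardware with two pipelines performs this combine for both layers in one pass; a single-pipeline card must render the polygon twice to get the same result.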
Volumetric Lighting
Now, instead of textures merely being pasted onto the surface of a polygonal object, this feature lets them permeate the object.
Anti-aliasing
Anti-aliasing is a filtering technique that smooths textures to prevent the "staircase" effect, which becomes more noticeable as an object approaches the viewer, and reduces speckling when you zoom out. Edge anti-aliasing smooths the edges of triangles by calculating lines between vertices. Per-pixel anti-aliasing (also known as sort-independent or scene anti-aliasing) artificially expands the resolution of a scene in frame-buffer memory, performs an averaging operation, and then resamples to the correct resolution.
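The expand-average-resample step of scene anti-aliasing boils down to rendering at, say, double resolution and averaging each 2x2 block back down to one pixel. A minimal sketch on a grayscale image:

```python
def downsample_2x(image):
    """Average each 2x2 block of a supersampled grayscale image down to
    one pixel -- the 'render large, then resample' approach of scene
    anti-aliasing. Assumes even width and height."""
    out = []
    for y in range(0, len(image), 2):
        row = []
        for x in range(0, len(image[0]), 2):
            block = (image[y][x] + image[y][x + 1] +
                     image[y + 1][x] + image[y + 1][x + 1])
            row.append(block // 4)
        out.append(row)
    return out
```

A hard black/white edge in the supersampled image comes out as an intermediate gray, which is exactly what hides the staircase.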
Anisotropic Filtering
Conventional texture-filtering techniques don’t compensate for anisotropy: the elongation of a screen pixel when it’s mapped into texture space. Anisotropic filtering cuts down on the distortion that results when textures are applied to tilted surfaces; in other words, it improves the look of textures viewed at an angle.
Mip Mapping
Mip-mapping is a type of anti-aliasing and is performed either per pixel or per polygon. A mip-mapped texture is stored as a chain of progressively smaller bitmapped versions of itself, and the renderer adjusts the level of detail visible on each textured surface depending on that surface’s distance from the viewer.
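The chain of smaller bitmaps and the distance-based level selection can be sketched as follows (the base-distance parameter is my own simplification; real hardware derives the level from how much texture area each screen pixel covers):

```python
import math

def build_mip_chain(size):
    """Return the side lengths of a square texture's mip levels,
    halving until 1x1 (e.g. 8 -> [8, 4, 2, 1])."""
    levels = [size]
    while size > 1:
        size //= 2
        levels.append(size)
    return levels

def select_mip_level(distance, base_distance=1.0):
    """Pick a level of detail: each doubling of the viewing distance
    steps one level down the chain (log2 of the distance ratio)."""
    return max(0, int(math.log2(max(distance / base_distance, 1.0))))
```

Nearby surfaces get the full-size texture; a surface four times farther away drops two levels to a quarter-size version, which both saves work and reduces speckling.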
Bilinear/Trilinear Filtering
Bilinear and trilinear filtering are also types of anti-aliasing. Bilinear filtering achieves a less pixelated look by blending the colors of neighboring texels. Trilinear filtering combines bilinear filtering with mip-mapping, blending between mip levels as well.
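The blend bilinear filtering performs is a weighted average of the four texels nearest the sample point. A sketch on a grayscale texture (trilinear would run this twice, once per adjacent mip level, and blend the two results):

```python
def bilinear_sample(texture, u, v):
    """Blend the four texels nearest to (u, v) in [0, 1], weighted by
    distance -- this smooths the blocky nearest-neighbor look."""
    height, width = len(texture), len(texture[0])
    x = u * (width - 1)
    y = v * (height - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, width - 1), min(y0 + 1, height - 1)
    fx, fy = x - x0, y - y0          # fractional position inside the cell
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```

Sampling halfway between texels returns the average of its neighbors instead of snapping to one of them, which is the whole trick.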
Bump-mapping
Bump-mapping is a technique for bringing textured surfaces into relief by varying the way objects reflect light: colors and shadows change depending on the intensity and direction of the light hitting the surface. When environmental light sources strike the surface's grid points, reflection and shadowing effects appear according to the map pattern.
Raytracing
This is a method of following light rays and reflections to create ultra-real 3D objects.
Alpha-blending/ Fogging mode
Alpha-blending makes it possible to assign a pixel a "translucency" value that renders it opaque, translucent, or invisible when mapped onto polygons. Applications include engine glows, explosions, weapon discharge, water, and glass. For alpha-blended transparency effects, a smooth transition from the texture's colors to full transparency is a must. There are two common approaches to alpha-blending and fogging: per-pixel and per-triangle (polygon) rendering, with the former achieving the more desirable results at the expense of system resources. A related term is specular highlighting, which further heightens realism.
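The translucency value works as a blend weight between the incoming (source) pixel and whatever is already in the frame buffer (destination). A minimal per-pixel sketch, assuming alpha in [0, 1] and 8-bit RGB tuples:

```python
def alpha_blend(src, dst, alpha):
    """Blend a source pixel over a destination pixel.
    alpha = 1.0 is fully opaque, 0.0 is fully transparent (invisible)."""
    return tuple(round(s * alpha + d * (1.0 - alpha))
                 for s, d in zip(src, dst))
```

An engine glow, for example, is just bright source pixels blended over the scene with a partial alpha, so the background still shows through.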
Gouraud Shading
Gouraud shading is the process of shading 3D polygons to create realistic environments. It interpolates light levels smoothly across each polygon, allowing curved surfaces and contours to appear rounded instead of faceted.
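The smooth shading comes from simple linear interpolation: lighting is computed only at the vertices, and the intensities in between are blended. A sketch of the interpolation along one triangle edge (the same blend is then repeated across each scanline):

```python
def gouraud_edge(i0, i1, steps):
    """Linearly interpolate light intensity between two vertex
    intensities -- Gouraud shading does this along triangle edges,
    then across each scanline between the edges."""
    return [i0 + (i1 - i0) * t / steps for t in range(steps + 1)]
```

Because no pixel jumps abruptly from one vertex's intensity to the next, adjacent polygons shade into one another and the faceted look disappears.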
Dithering
Dithering improves apparent color resolution and smooths light shading by amalgamating dots of available primary colors to produce other colors; in printing, those are the printer’s primaries (cyan, magenta, yellow, and black, aka CMYK). It is achieved with either fine or coarse dither dots, the former producing much higher quality at the expense of system resources. In contrast, error diffusion uses variable-sized dots, creating smoother images than either dithering option. The advantage of dithering (particularly coarse dithering) is that photocopies of printed output look much cleaner; copies of error-diffused images tend to appear smudged.
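The fixed-dot-pattern variety can be illustrated with ordered (Bayer) dithering, one classic way to lay down a repeating threshold pattern; this sketch reduces a grayscale image to pure black and white:

```python
BAYER_2X2 = [[0, 2],
             [3, 1]]  # repeating threshold pattern, values in 0..3

def ordered_dither(image):
    """Reduce a grayscale image (0-255) to black/white using a 2x2
    Bayer threshold matrix: the fine, repeating pattern of dots
    simulates intermediate shades the output device lacks."""
    out = []
    for y, row in enumerate(image):
        out_row = []
        for x, value in enumerate(row):
            # Each pixel compares against a different threshold in the tile.
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) * (255 / 4)
            out_row.append(255 if value > threshold else 0)
        out.append(out_row)
    return out
```

A flat 50% gray comes out as an alternating checkerboard: half the dots on, half off, which the eye averages back to gray.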
Visual Defects |
As the saying goes, nothing is perfect. Some of the following rendering defects can be rectified by a simple driver update, while others are forever embedded in the silicon. Texture seams display as cracks and black lines where different textures collide. An artifact is a graphical flaw caused by the shortcomings of a compression technology, often manifested as blotchiness in what should be a solid color. Banding is extraneous lines in a printed page or displayed image; on a monitor, it occurs when the color depth of the video signal isn’t rich enough to display a continuous color gradient. When an image’s color depth is lowered, it is said to be dithered down; any lost color data may then be visible to the naked eye as dotted patterns or extraneous artifacts. Pixelation is the effect of individual picture elements becoming visible, generally the result of undersized texture maps being magnified when the view is set close to a polygon object.
Miscellaneous Notes |
Fill rate - the number of pixels a card can pump on-screen per second, measured in millions of pixels per second.
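The arithmetic behind the number is straightforward: resolution times frame rate (the overdraw factor is my own addition, accounting for pixels drawn more than once per frame):

```python
def required_fill_rate(width, height, fps, overdraw=1.0):
    """Pixels per second needed to sustain a frame rate at a given
    resolution; overdraw > 1 models pixels drawn more than once."""
    return width * height * fps * overdraw

# 800x600 at 60 frames/s needs 28.8 million pixels per second.
mpixels = required_fill_rate(800, 600, 60) / 1e6
```

Comparing a card's quoted megapixel rating against this figure gives a rough ceiling on the resolution and frame rate it can sustain.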
Nothing indicates rendering speed better than frame rates. Unless you’re willing to get a faster 3D accelerator (frame rates, in general, are affected most by the accelerator itself), the only way to tweak your performance for faster frame rates is to overclock. The other most significant factors are CPU speed, system-bus speed, and drivers.
Email me at cimes@compourri.com with your questions, complaints, or if you just want to talk.