GeForce3 Frequently Asked Questions
tnaw_xtennis 2001.03.23


1. Why did nVidia decrease the memory clock of GeForce3 from 526MHz to 460MHz?

The SDRAM used on the GeForce3 is rated at 3.8ns, which translates to a maximum of 263MHz DDR (526MHz). However, the GeForce3 memory is only clocked at 460MHz. According to nVidia, that's because they get the best overall balance of performance and yield on GeForce3 boards by decreasing the memory clock. The good news is that there's built-in headroom for consumers to overclock the memory.
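The clock figures above follow directly from the memory's cycle time. A quick sanity check of the arithmetic, using only the numbers quoted in this FAQ:

```python
# Convert an SDRAM cycle-time rating (ns) into its base clock and DDR data rate.
cycle_time_ns = 3.8                      # rating of the SDRAM on GeForce3 boards

base_clock_mhz = 1000.0 / cycle_time_ns  # 1 / cycle time, expressed in MHz
ddr_rate_mhz = 2 * base_clock_mhz        # DDR transfers data on both clock edges

print(f"base clock: {base_clock_mhz:.0f} MHz")   # ~263 MHz
print(f"DDR rate:   {ddr_rate_mhz:.0f} MHz")     # ~526 MHz

# nVidia ships the boards clocked at 460 MHz instead, so the rated headroom is:
headroom_mhz = ddr_rate_mhz - 460
print(f"headroom:   {headroom_mhz:.0f} MHz")     # ~66 MHz left for overclocking
```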


2. Which GeForce3 manufacturer should I choose?

All GeForce3 boards follow the nVidia reference design, except the ASUS V8200 Deluxe. The boards therefore differ mainly in a few bundled features, summarized in Table 1.

Table 1. Feature differences among GeForce3 cards

  Card                       DVI interface   TV-out     Others
  ASUS V8200 Deluxe          Yes             Yes        Video input, video editing, 3D glasses
  ASUS V8200 Pure            No              No         /
  ELSA GLADIAC 920           No              Optional   /
  GIGABYTE GA-GF3000DF       Yes             Optional   /
  Hercules 3D Prophet III    Yes             Yes        /
  Leadtek WinFast GeForce3   Yes             Yes        /
  MSI MS-StarForce 822       No              Yes        /
  VisionTek GeForce3         No              Yes        /

3. How good is the image quality of the GeForce3's Quincunx AA?
nVNews has the most in-depth report to date on Quincunx AA quality. And here are some comments from GeForce3 reviewers:


4. Is the GeForce3 more powerful than a Pentium4 1.5GHz in geometry processing?
Historically, geometry processing in 3D graphics -- i.e. transform and lighting (T&L) -- took place on the CPU. These calculations were recently handed over to modern graphics processing units (GPUs), beginning with nVidia's GeForce 256 and continuing with its subsequent processors. nVidia has claimed ever since the GeForce 256 that its GPUs can process more T&L than the fastest CPUs of the day, but Kyle's findings on the GeForce 256 (Ref 1, 2) raised doubts about whether that GPU could keep pace with a 1GHz CPU. As for the GeForce3, current T&L benchmarking results (Ref 3) show that its GPU runs T&L faster than a Pentium4 1.5GHz even when the CPU is overclocked to 1.8GHz.


5. What benefits will GeForce3 bring to graphics applications that support and take advantage of T&L?

  1. Shifting the T&L computational load from the CPU to the GPU reduces the bandwidth required of AGP, the link between the CPU and the graphics processor. This frees up bandwidth for rendering geometry-rich scenes.
  2. The GeForce3 can process many times the geometry data of even the fastest CPUs. Processing more geometry data dramatically improves visual quality.
  3. Offloading the T&L burden frees more of the CPU for other CPU-intensive tasks such as game logic, physics, and AI.


6. Is the hidden surface removal (HSR) mechanism of the GeForce3 -- Z-Occlusion Culling and Occlusion Query -- effective in today's 3D applications?
No; we have to wait for new applications, or new versions of today's applications. These two technologies effectively amplify the GeForce3's memory bandwidth by using the frame buffer more efficiently, and by skipping frame-buffer accesses altogether for pixels that would not be visible. In practice, applications that support Z-Occlusion Culling and Occlusion Query can typically expect a 50%~100% improvement.
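The idea behind culling occluded pixels can be illustrated with a toy depth-test loop. This is only a conceptual sketch of why skipping hidden pixels saves bandwidth, not the GeForce3's actual hardware logic; all names are invented for illustration:

```python
# Toy rasterizer step: skip all frame-buffer work for pixels that are
# already hidden behind a nearer surface (smaller depth = nearer).
def draw_pixels(pixels, z_buffer, frame_buffer):
    """pixels: list of (index, depth, color) tuples for one surface."""
    writes_avoided = 0
    for i, depth, color in pixels:
        if depth >= z_buffer[i]:      # occluded: nothing visible would change,
            writes_avoided += 1       # so no texture fetch, no frame-buffer write
            continue
        z_buffer[i] = depth
        frame_buffer[i] = color
    return writes_avoided

# Two overlapping surfaces on a 4-pixel scanline; the far one is drawn second.
z = [1.0] * 4
fb = [None] * 4
draw_pixels([(0, 0.3, "near"), (1, 0.3, "near")], z, fb)
saved = draw_pixels([(0, 0.8, "far"), (1, 0.8, "far"), (2, 0.8, "far")], z, fb)
print(saved)  # 2: the two pixels hidden behind the near surface cost no memory traffic
```

The saving grows with scene depth complexity, which is why the benefit depends so heavily on how the application submits its geometry.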


7. Out of curiosity, is the GeForce3 faster than the Voodoo5 6000, a $600 video card that will never reach the retail channel?
Yes, and we can show it with some simple calculations. Based on the multi-sampling antialiasing mechanism and the per-chip workload of VSA100-based cards (Table 2),

Table 2. Effective frame output and video processor workload of VSA100-based cards

  Effective Frame Output   Voodoo5 4500         Voodoo5 5500          Voodoo5 6000
                           (1 x VSA100)         (2 x VSA100)          (4 x VSA100)
  1 x (no FSAA)            1 x Frame / 1 chip   1 x Frame / 2 chips   1 x Frame / 4 chips
  1 x (2x FSAA)            2 x Frame / 1 chip   2 x Frame / 2 chips   2 x Frame / 4 chips
  1 x (4x FSAA)            /                    4 x Frame / 2 chips   4 x Frame / 4 chips
  1 x (8x FSAA)            /                    /                     8 x Frame / 4 chips

we can derive the AA performance of the Voodoo5 6000 from the Voodoo5 5500's benchmark results:
Voodoo5 6000, 2x FSAA = Voodoo 5 5500, no FSAA
Voodoo5 6000, 4x FSAA = Voodoo 5 5500, 2x FSAA
Voodoo5 6000, 8x FSAA = Voodoo 5 5500, 4x FSAA

Because coordinating four video chips on the Voodoo5 6000 introduces its own overhead, the frame rates predicted by the formulas above can only be higher (not lower) than what actual benchmarking would show. Here are our GeForce3 vs. Voodoo5 6000 results.
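The per-chip accounting in Table 2 is easy to turn into an estimator. In this sketch only the mode mapping comes from the FAQ; the Voodoo5 5500 frame rates are hypothetical placeholders, not measured results:

```python
# Each doubling of the FSAA level doubles the number of frames the VSA100
# chips must render, so the 4-chip Voodoo5 6000 at 2N-sample FSAA does the
# same per-chip work as the 2-chip Voodoo5 5500 at N-sample FSAA.
MODE_MAP = {            # Voodoo5 6000 mode -> equivalent Voodoo5 5500 mode
    "2x FSAA": "no FSAA",
    "4x FSAA": "2x FSAA",
    "8x FSAA": "4x FSAA",
}

def estimate_v6000_fps(v5500_fps):
    """Upper-bound Voodoo5 6000 fps derived from Voodoo5 5500 benchmarks."""
    return {m6000: v5500_fps[m5500] for m6000, m5500 in MODE_MAP.items()}

# Hypothetical 5500 numbers, purely to show the mechanics:
v5500 = {"no FSAA": 90.0, "2x FSAA": 55.0, "4x FSAA": 30.0}
print(estimate_v6000_fps(v5500))
# {'2x FSAA': 90.0, '4x FSAA': 55.0, '8x FSAA': 30.0}
```

As noted above, multi-chip overhead means the real card would come in at or below these estimates.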

 

And here is the GeForce3 FAQ we posted previously:

1-1. Is GeForce3 really a new generation of video card?
1-2. What is the most compelling reason for consumers to buy a GeForce3 today?
1-3. What kind of performance results can I expect from GeForce3 in today's games?
1-4. Should I upgrade my CPU?
1-5. How much of a performance boost does the GeForce3 provide over the GeForce2 Ultra?
1-6. Will GeForce3 provide better antialiasing (AA) image quality than the GeForce2 Ultra?
1-7. How can the GeForce3 provide better performance than the GeForce2? It has the same video memory speed and lower video core speed!
1-8. How does the Lightspeed Memory Architecture boost the performance of the GeForce3?
1-9. How does Multisampling boost the AA performance of the GeForce3?

2-1. Why does it cost much more to make a GeForce3 board than a GeForce2 Ultra?
2-2. Who should buy a GeForce3 the moment it is available?
2-3. What is the maximum amount of frame buffer the GeForce3 supports?
2-4. How much power does the GeForce3 consume?
2-5. How many pins are on the GeForce3 chip?
2-6. Will there be a Quadro3, i.e. a professional version of the GeForce3?
2-7. What is the estimated target selling price for a GeForce3 board?
2-8. At present, can we preview the never-before-seen visual effects of games 6+ months away that take advantage of the GeForce3?


More of tnaw_xtennis’s Analyses of Computer Hardware

Language edited by: wumpus, http://www.gamebasement.com