Introduction
Whenever a newbie comes to a forum and asks a question like this:
"I’m having some strange problems with my 3D graphics program. When my objects move into the distance, they seem to flicker. Sometimes the object that is supposed to appear behind pops up in front."

Experienced programmers recognize the issue immediately and know how to deal with it. A typical answer goes like this:
"The problem you are experiencing is called z-fighting. It happens because, in a perspective view, you get more depth precision near the near plane and the precision decreases as you reach the far plane. You have to push your near plane out as far as possible and pull your far plane in as close as possible."

That’s the typical answer to that question, and it is usually also suggested to use a near plane value of 1.0 instead of anything smaller.

I have seen many games (games that have been at the top of the charts) that exhibit the same issue. Many professional modeling packages use a trick to deal with this problem: a dynamic near and far plane.

Some people have come to the OpenGL forums suggesting that it should have support for w-buffers. And why not, since Direct3D has support for them.

Guess what? OpenGL doesn’t need a w-buffer and never has, ever since it was first created. Neither did its parent, IrisGL, and neither does any other 3D graphics API for that matter.

Some Background Information
Let’s talk about the perspective matrix and a few 3D graphics APIs.

In OpenGL, the perspective matrix is calculated like this and is set up using glFrustum or gluPerspective (a GLU function):

[2*near/(right-left) 0 (right+left)/(right-left) 0]
[0 2*near/(top-bottom) (top+bottom)/(top-bottom) 0]
[0 0 (-far-near)/(far-near) -2*far*near/(far-near)]
[0 0 -1 0]
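
For reference, here is a minimal C sketch (my own; the function name glhFrustumf is made up for illustration) that fills the same matrix into a column-major float array, the layout OpenGL expects, and loads it:

#include <GL/gl.h>

/* Illustrative helper: builds the same matrix glFrustum produces.
   OpenGL uses column-major storage, i.e. element (row, col) goes to m[col*4 + row]. */
void glhFrustumf(float left, float right, float bottom, float top,
                 float zNear, float zFar)
{
    float m[16] = {0.0f};

    m[0]  = 2.0f * zNear / (right - left);
    m[5]  = 2.0f * zNear / (top - bottom);
    m[8]  = (right + left) / (right - left);
    m[9]  = (top + bottom) / (top - bottom);
    m[10] = -(zFar + zNear) / (zFar - zNear);
    m[11] = -1.0f;
    m[14] = -2.0f * zFar * zNear / (zFar - zNear);

    glLoadMatrixf(m);   /* assumes glMatrixMode(GL_PROJECTION) was called beforehand */
}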

In Direct3D (version 8), there isn’t a function similar to OpenGL’s glFrustum, but you can use the D3DX utility library.

The functions available are
1. D3DXMatrixPerspectiveLH(D3DXMATRIX *pOut, FLOAT w, FLOAT h, FLOAT zn, FLOAT zf);
2. D3DXMatrixPerspectiveRH(D3DXMATRIX *pOut, FLOAT w, FLOAT h, FLOAT zn, FLOAT zf);
3. D3DXMatrixPerspectiveFovLH(D3DXMATRIX *pOut, FLOAT fovy, FLOAT Aspect, FLOAT zn, FLOAT zf);
4. D3DXMatrixPerspectiveFovRH(D3DXMATRIX *pOut, FLOAT fovy, FLOAT Aspect, FLOAT zn, FLOAT zf);
5. D3DXMatrixPerspectiveOffCenterLH(D3DXMATRIX *pOut, FLOAT l, FLOAT r, FLOAT b, FLOAT t, FLOAT zn, FLOAT zf);
6. D3DXMatrixPerspectiveOffCenterRH(D3DXMATRIX *pOut, FLOAT l, FLOAT r, FLOAT b, FLOAT t, FLOAT zn, FLOAT zf);

I will focus on the very last one because it is the one that resembles glFrustum.

[2*zn/(r-l) 0 (l+r)/(r-l) 0]
[0 2*zn/(t-b) (t+b)/(t-b) 0]
[0 0 zf/(zn-zf) zn*zf/(zn-zf)]
[0 0 -1 0]
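
Written out as a C sketch (not the actual D3DX source, only an illustration of the row-major layout Direct3D uses), filling the same matrix by hand would look like this:

#include <string.h>
#include <d3d8.h>

/* Illustrative equivalent of D3DXMatrixPerspectiveOffCenterRH.
   D3DMATRIX stores element (row, col) in the member _rowcol, row-major. */
void BuildPerspectiveOffCenterRH(D3DMATRIX *pOut, float l, float r, float b,
                                 float t, float zn, float zf)
{
    memset(pOut, 0, sizeof(*pOut));
    pOut->_11 = 2.0f * zn / (r - l);
    pOut->_22 = 2.0f * zn / (t - b);
    pOut->_31 = (l + r) / (r - l);
    pOut->_32 = (t + b) / (t - b);
    pOut->_33 = zf / (zn - zf);
    pOut->_34 = -1.0f;
    pOut->_43 = zn * zf / (zn - zf);
}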


There are plenty of other 3D APIs, each with its own method, but I don’t have time to go into them. Examples are HP’s PHIGS, HP’s PHIGS+, HP’s PEX, HP’s Starbase, Discreet’s Heidi, Pixar’s RenderMan, Criterion’s Renderware, BRender (an old software renderer from the 80486 days), GKS (not used anymore), Hoops, Reality Lab (bought by MS and turned into Direct3D), MultiGen’s GameGen, Apple’s QuickDraw3D, Apple’s Rave, Sun’s Java3D, Fahrenheit (MS’s and SGI’s, but it never came into existence), OpenGL++ (a collaboration between SGI, Intel, IBM and other ARB members), Cosmo3D, IrisGL, MESA (or MESA3D or MESA GL) and probably a whole lot of others.

Now let’s get back to the two giants known as OpenGL and Direct3D.

Let’s assume we make this call for GL:
glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0, 200.0);

and in D3D it would be:
D3DXMATRIX ProjectionMatrix;
D3DXMatrixPerspectiveOffCenterRH(&ProjectionMatrix, -1.0, 1.0, -1.0, 1.0, 1.0, 200.0);

If we plot the x values against the projected and w-divided x values, the result is the same for both APIs (we are dividing by w = 1 for this graph):

plot((2.0*1.0/(1.0-(-1.0))*x)/(1.0), x=-10..10); (Using Maple V)

If we plot the z values against the projected and w-divided z values for GL, we get:

plot((((-200-1)/(200-1))*z+(-2.0*200*1.0/(200-1)))/(-z), z=-1..-10); (Using Maple V)

and the same plot for D3D:

plot((200/(1-200)*z+1*200/(1-200))/(-z), z=-1..-10); (Using Maple V)

The difference is that GL maps the depth values to the range -1.0 to 1.0, which glDepthRange then remaps to the window range, by default 0.0 (near) to 1.0 (far). In D3D, the values are mapped to 0.0 to 1.0 and that’s it. But essentially, the same problem appears in both APIs: z-fighting as you move towards the far clip plane in a perspective view.
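
To put numbers on those curves, here is a small, self-contained C sketch (mine, not part of the demo) that evaluates both mappings for the frustum used above (near = 1, far = 200):

#include <stdio.h>

int main(void)
{
    const float zn = 1.0f, zf = 200.0f;
    const float eyeZ[] = { -1.0f, -2.0f, -10.0f, -50.0f, -100.0f, -200.0f };
    int i;

    for (i = 0; i < 6; ++i) {
        float z = eyeZ[i];
        float w = -z;                                   /* clip-space w for both APIs */
        /* OpenGL: NDC z in [-1, 1], then the default glDepthRange maps it to [0, 1]. */
        float glNdc = (-(zf + zn) / (zf - zn) * z - 2.0f * zf * zn / (zf - zn)) / w;
        float glWin = 0.5f * glNdc + 0.5f;
        /* Direct3D: the projected and divided z is already in [0, 1]. */
        float d3d   = (zf / (zn - zf) * z + zn * zf / (zn - zf)) / w;
        printf("eye z = %8.1f   GL depth = %.6f   D3D depth = %.6f\n", z, glWin, d3d);
    }
    return 0;
}

Already at half the viewing distance (eye z = -100) the stored depth is about 0.995 in both cases, which is exactly why so little precision is left for the second half of the scene.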

The Solution
When vertices are transformed by the projection matrix, what is computed are the clip-space coordinates, and what comes right after this operation is the division by w (the perspective division). This is the source of the problem.

In the case of an orthographic projection, the clip-space w is always 1, so the transformation is linear. In the case of a perspective projection, the clip-space w is -z (the eye-space depth), which makes the transformation non-linear.

One way to remedy the situation is to transform the x and y values as usual, but compute the z value with the equation an orthographic projection would use. This gives us perspective x and y values paired with an orthographic z value.
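
Written out on the CPU, the idea looks roughly like this (a sketch with names of my own; the matrices are the column-major arrays built earlier). The orthographic z is pre-multiplied by the perspective w so that the later division by w cancels out and leaves a linear depth:

/* Illustrative hybrid projection: perspective x, y and w, orthographic z.
   eye[4] is the eye-space position (after the modelview matrix);
   persp[16] and ortho[16] are column-major projection matrices. */
void hybridProject(const float eye[4], const float persp[16],
                   const float ortho[16], float clip[4])
{
    /* Row i of a column-major matrix is m[i], m[i+4], m[i+8], m[i+12]. */
    clip[0] = persp[0]*eye[0] + persp[4]*eye[1] + persp[8]*eye[2]  + persp[12]*eye[3];
    clip[1] = persp[1]*eye[0] + persp[5]*eye[1] + persp[9]*eye[2]  + persp[13]*eye[3];
    clip[3] = persp[3]*eye[0] + persp[7]*eye[1] + persp[11]*eye[2] + persp[15]*eye[3];

    /* Orthographic third row applied to the eye-space position, pre-scaled by w. */
    clip[2] = (ortho[2]*eye[0] + ortho[6]*eye[1] + ortho[10]*eye[2] + ortho[14]*eye[3])
              * clip[3];
}

The vertex program below takes a slightly different route: it performs the division by w itself and writes out a position whose w is already 1.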

To do this on the GPU, a vertex program can be used (GL_ARB_vertex_program):

!!ARBvp1.0

#This vp is used in zprecision demo for solving the z-precision problem
#in a perspective projection.
#
#AUTHOR: Vrej M.
#DATE: Monday, June 23, 2003

PARAM m[4]={state.matrix.modelview};
PARAM mvp[4]={state.matrix.mvp};

ATTRIB iPos=vertex.position;
ATTRIB iColor=vertex.color;
ATTRIB iTex0=vertex.texcoord[0];
ATTRIB NumberNine=vertex.texcoord[1];

OUTPUT oPos=result.position;
OUTPUT oColor=result.color;
OUTPUT oTex0=result.texcoord[0];

TEMP r0, r1, temp;


#Transform the vertex to clip coordinates.
DP4 r0.x, iPos, mvp[0];
DP4 r0.y, iPos, mvp[1];
DP4 r0.z, iPos, mvp[2];
DP4 r0.w, iPos, mvp[3];

#Divide by w here, so the output position ends up with w = 1.
RCP temp.w, r0.w;
MUL r0, r0, temp.w;

#Transform the vertex to eye coordinates with the modelview matrix.
DP4 r1.x, iPos, m[0];
DP4 r1.y, iPos, m[1];
DP4 r1.z, iPos, m[2];
DP4 r1.w, iPos, m[3];

#Replace z: NumberNine (texcoord set 1) is expected to hold the orthographic
#z-row coefficients, so this dot product yields a linear depth.
DP4 r0.z, r1, NumberNine;
MOV oPos, r0;


MOV oColor, iColor;

#We must take care of perspective effect on texturing
MUL oTex0, iTex0, temp.w;

END
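
The program expects texture coordinate set 1 ("NumberNine") to carry the four coefficients that turn an eye-space position into an orthographic-style depth. The demo’s own setup code isn’t shown here; a guess at how it could be supplied, using the current-texcoord state (glMultiTexCoord4f is core since OpenGL 1.3; on Windows it has to be fetched as an extension function):

#include <GL/gl.h>

/* Guess at the setup: load the third row of an OpenGL orthographic matrix
   (built from the same near/far planes as the perspective frustum) into
   texcoord set 1, where the vertex program reads it as NumberNine. */
void setOrthoZRow(float zNear, float zFar)
{
    glMultiTexCoord4f(GL_TEXTURE1,
                      0.0f,                               /* x coefficient */
                      0.0f,                               /* y coefficient */
                      -2.0f / (zFar - zNear),             /* z coefficient */
                      -(zFar + zNear) / (zFar - zNear));  /* translation   */
}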

The result looks good (I should say perfect), but I think that in certain circumstances it could lead to problems. For example, it might produce artifacts if you are using shadow mapping for your shadows.

I found a second solution while experimenting. This one doesn’t use vertex programming, although it may not work on every graphics card out there. It involves entering certain values into the projection matrix and then uploading it with glLoadMatrix (or glMultMatrix if you prefer).

The orthographic matrix in OpenGL looks like this:

[2/(right-left) 0 0 (-right-left)/(right-left)]
[0 2/(top-bottom) 0 (-top-bottom)/(top-bottom)]
[0 0 -2/(far-near) (-far-near)/(far-near)]
[0 0 0 1]

What I did was replace the third row of the perspective matrix with the third row of this matrix. The functions glhMergedPerspectivef and glhMergedPerspectived do just this and then call glMultMatrix. I was surprised to find that this worked on my system; I’m not sure why, since the division by w still occurs, or is supposed to occur. From the screenshots I took of the z-buffer while testing, the values looked quite different, so I suspect the division by w was not happening on my Nvidia card.

Well, try it out yourself and see what happens; the download is at the bottom. You will need the glh library to compile and run it, or you can just write your own glhMergedPerspectivef and glhMergedPerspectived along the lines sketched below.
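
I don’t know the exact signature the glh library uses, so this is only a rough sketch of what a glhMergedPerspectivef could look like, assuming gluPerspective-style parameters: a normal perspective matrix whose third row has been replaced by the orthographic one, multiplied onto the current matrix with glMultMatrixf.

#include <math.h>
#include <GL/gl.h>

/* Sketch only; the real glh function may take different parameters.
   Builds a gluPerspective-style matrix, but with the third row taken from
   glOrtho (for the same zNear/zFar), then multiplies it onto the current matrix. */
void glhMergedPerspectivef(float fovyDegrees, float aspect, float zNear, float zFar)
{
    float m[16] = {0.0f};
    float f = 1.0f / (float)tan(fovyDegrees * 3.14159265f / 360.0f);  /* cot(fovy/2) */

    m[0]  = f / aspect;                        /* perspective x row */
    m[5]  = f;                                 /* perspective y row */
    m[10] = -2.0f / (zFar - zNear);            /* orthographic z row */
    m[14] = -(zFar + zNear) / (zFar - zNear);
    m[11] = -1.0f;                             /* w = -z, as in a normal perspective matrix */

    glMultMatrixf(m);
}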

Further Readings
http://www.sgi.com/software/opengl/advanced97/notes/node18.html#SECTION00063000000000000000

and of course, The Red Book.

Download
This is a GLUT-based application demonstrating what has been discussed here:
zprecision.zip




This page is http://www.oocities.org/vmelkon/zprecision.html
This page is http://ee.1asphost.com/vmelkon/zprecision.html
Copyright (C) 2001-2003 Vrej M. All Rights Reserved.