SNIPPETS
Here are links to Dark Basic Pro code snippets I wrote, along with brief descriptions.

ANAGLYPH

Makes real-time anaglyph images. Basically it takes two pictures, one per eye, then plasters them onto rectangular objects (plains) that are viewed by another camera and rendered to the screen. The objects are coloured using vertex colours, and one is light ghosted.
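
For the curious, here's a minimal sketch of how such a setup can be wired together in DBP. It's not lifted from the actual snippet; all the object, camera and image numbers, sizes and positions are arbitrary choices for illustration.

sync on : sync rate 60

rem something to look at
make object cube 3, 2.0

rem one scene camera per eye, each rendering to an image
make camera 1 : set camera to image 1, 1, 512, 512
make camera 2 : set camera to image 2, 2, 512, 512
position camera 1, -0.5, 0, -10 : point camera 1, 0, 0, 0
position camera 2,  0.5, 0, -10 : point camera 2, 0, 0, 0

rem two overlapping plains textured with the eye images, parked far from
rem the scene so the eye cameras don't see them
make object plain 1, 10, 10 : texture object 1, 1
make object plain 2, 10, 10 : texture object 2, 2
position object 1, 0, 10000, 10
position object 2, 0, 10000, 10

rem tint the plains via vertex colours: red for one eye, cyan for the other
lock vertexdata for limb 1, 0
for v = 0 to get vertexdata vertex count() - 1 : set vertexdata diffuse v, rgb(255,0,0) : next v
unlock vertexdata
lock vertexdata for limb 2, 0
for v = 0 to get vertexdata vertex count() - 1 : set vertexdata diffuse v, rgb(0,255,255) : next v
unlock vertexdata

rem lighting off so the vertex colours are used as-is, then light-ghost the
rem front plain so the two tinted images blend
set object light 1, 0 : set object light 2, 0
ghost object on 2

rem camera 0 (the default screen camera) views the stacked plains
position camera 0, 0, 10000, 0

do
    sync
loop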

The system works quite well! Each eye does see a little of the "wrong" image. Whether this is because of imperfect glasses, the long tail of the spectra produced by the screen phosphors, or the spread of the electron stream aimed at them, I do not know. Basically there seems to be a bit of a black art to making images where the ghostly crossover isn't distracting. Also, you need a decent amount of both red and cyan in the texture of a surface or boundary, or features will only be seen by one eye. Plus people need the right kind of 3d spectacles. All this adds up to a pain in the bum, so I left it at that.

Since I wrote this, dedicated anaglyph commands have been added to DBP, so my version is probably redundant. However, it's still quite an interesting curiosity and lets you see how the system works.

HYPERSPHERICAL UNIVERSE

How to explain? Best just click on the thread linked above. This is related to the spherical camera, in that it uses the same kind of angle-preserving (circle-preserving) mapping. It's pretty sweet, but causes quite a few problems. If objects or light ranges are large relative to the universe, aberrations become noticeable. For large objects, vertexdata manipulation could be used. For long-range lights, you'd either have to work out vertex lighting manually and rewrite the vertex diffuse values (a sketch of which is below), or perhaps do this with a shader.
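
To illustrate the manual vertex lighting idea, here's a rough sketch using the vertexdata commands. The plain linear falloff is just an assumption for illustration; in the hyperspherical universe you'd swap in the appropriate distance measure. It also assumes the light position is given in the object's local space.

function light_object_vertices(obj as integer, lx as float, ly as float, lz as float, lightrange as float)
    rem lighting off, so the prelit vertex diffuse colours are used as-is
    set object light obj, 0
    lock vertexdata for limb obj, 0
    for v = 0 to get vertexdata vertex count() - 1
        vx# = get vertexdata position x(v)
        vy# = get vertexdata position y(v)
        vz# = get vertexdata position z(v)
        rem plain Euclidean distance; replace with the hyperspherical one
        d# = sqrt((vx#-lx)^2 + (vy#-ly)^2 + (vz#-lz)^2)
        rem simple linear falloff, clamped to black at full range (an assumption)
        b# = 255.0*(1.0 - d#/lightrange)
        if b# < 0.0 then b# = 0.0
        set vertexdata diffuse v, rgb(int(b#), int(b#), int(b#))
    next v
    unlock vertexdata
endfunction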

Another problem is that with large objects, you can project them onto the hypersphere (like projecting a 2d image onto a globe), but for very large objects going all the way around, you'd need a bespoke object editor.

Another annoying thing is that, unless you limit the draw distance (by using fog, say), things on the opposite side of the hypersphere fill the sky, which is really weird. And then you have the problem that fog is based on z-depth (which is a constant source of irritation for me!), and it's just not worth bothering with any more.
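
For reference, limiting the draw distance with fog only takes a few lines in DBP; the distance value here is an arbitrary placeholder.

fog on
fog distance 5000
fog color rgb(0, 0, 0)
rem note: the fade is based on z-depth, not true distance from the camera,
rem which is the irritation mentioned above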

It's still cool though!

EXPLODING OBJECTS USING VERTEXDATA COMMANDS

Click the link! Vertexdata commands are awesome. Manually moving vertices to move the polygons of a single object around is hugely faster than getting DBP to move around loads of separate objects or limbs. DBP sucks when you use lots of objects, so a workaround like this can get you great gains in framerate. Check it out. Here's a dedicated page. A sketch of the basic idea follows.
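
Here's a rough sketch of the idea (not the actual snippet): each frame, every triangle of one object gets pushed outward along the line from the object's local origin through the triangle's centre. It assumes a non-indexed triangle list (three vertices per face) and an object centred on its origin; both are assumptions for illustration.

function explode_step(obj as integer, speed as float)
    lock vertexdata for limb obj, 0
    vcount = get vertexdata vertex count()
    for v = 0 to vcount - 3 step 3
        rem centre of this triangle
        cx# = 0.0 : cy# = 0.0 : cz# = 0.0
        for i = v to v + 2
            cx# = cx# + get vertexdata position x(i)/3.0
            cy# = cy# + get vertexdata position y(i)/3.0
            cz# = cz# + get vertexdata position z(i)/3.0
        next i
        rem push all three vertices away from the object's local origin
        d# = sqrt(cx#*cx# + cy#*cy# + cz#*cz#)
        if d# > 0.0
            for i = v to v + 2
                x# = get vertexdata position x(i) + speed*cx#/d#
                y# = get vertexdata position y(i) + speed*cy#/d#
                z# = get vertexdata position z(i) + speed*cz#/d#
                set vertexdata position i, x#, y#, z#
            next i
        endif
    next v
    unlock vertexdata
endfunction

Call it once per frame after the explosion is triggered, e.g. explode_step(1, 0.1), and all the polygons fly apart while DBP still only sees one object.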

SPHERICAL AND CYLINDRICAL CAMERAS

A while ago I made a cylindrical camera by taking a large number of strip camera images and projecting these onto strips viewed by another camera, which renders to the screen. This uses tens of cameras, as the sketch below suggests.
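
Something along these lines; the strip count, sizes and numbering are arbitrary placeholders, the FOV/aspect handling is simplified (in practice you'd tune it), and the strip plains are parked far away so the scene cameras don't see them.

strips = 36
fov# = 360.0/strips
for n = 1 to strips
    rem one narrow camera per strip, fanned around the y axis
    make camera n
    set camera fov n, fov#
    rotate camera n, 0, (n-1)*fov#, 0
    set camera to image n, n, 64, 512
    rem one thin plain per strip, laid side by side and textured with
    rem that strip's image
    make object plain 100 + n, 1.0, 8.0
    texture object 100 + n, n
    position object 100 + n, n - strips/2.0, 10000, 20
next n
rem camera 0 (the screen camera) views the row of strip plains
position camera 0, 0, 10000, 0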

A much better solution is to take a few images and project these onto curved objects. Using this method I created spherical and cylindrical cameras, using as few as two cameras for narrow fields of view, and rendering fields of view wider than the standard perspective camera can achieve by using additional cameras. Further discussion at the link above. This I think I may well use for something; a sketch of the curved-screen part is below.
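
The heart of the trick is the curved screen. Here's a rough sketch of bending a plain into a cylindrical arc with the vertexdata commands, so that a camera image textured onto it is viewed as part of a cylindrical panorama. It assumes obj is a plain subdivided into many segments (a stock DBP plain has only two triangles, so you'd build the mesh yourself); the arc angle (in degrees, since DBP trig works in degrees) and radius are placeholders.

function bend_plain_to_arc(obj as integer, arcangle as float, radius as float)
    lock vertexdata for limb obj, 0
    rem find the plain's half-width so x can be mapped to an angle
    halfw# = 0.0
    for v = 0 to get vertexdata vertex count() - 1
        x# = abs(get vertexdata position x(v))
        if x# > halfw# then halfw# = x#
    next v
    rem wrap each column of vertices around the arc
    for v = 0 to get vertexdata vertex count() - 1
        a# = (get vertexdata position x(v)/halfw#)*arcangle/2.0
        y# = get vertexdata position y(v)
        set vertexdata position v, radius*sin(a#), y#, radius*cos(a#)
    next v
    unlock vertexdata
endfunction

Two or more such arcs, each textured by a camera with the matching field of view, then tile together into the full cylindrical view.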