Sunday, February 14, 2010

Wavefront rendering

I've been thinking about path tracing a lot lately, and I came up with a bad idea. Why it's bad is interesting, though, as it demonstrates part of why physically-based rendering is so hard (complexity). I think it also validates path tracing as a good approach.

So, my idea was to take each emitter mesh, and propagate a single wavefront out through the scene. The mesh would be simplified as it propagated (retopologising, perhaps using Delaunay triangulation). Once the wavefront was at a low enough intensity, it would not be reflected. The camera would then either receive all wavefronts on an image plane (which could potentially be stored as a hologram) or just raytrace the lit scene (which seemed similar to radiosity).
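
To make that a bit more concrete, here's a rough sketch of the propagation loop I had in mind, in Python. Everything in it is hypothetical -- the Wavefront class, the scene's intersect and retopologise methods, and the fixed per-step attenuation are all stand-ins for illustration, not any real renderer's API.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Wavefront:
        positions: np.ndarray    # (N, 3) sample points on the front
        directions: np.ndarray   # (N, 3) unit propagation directions
        intensity: np.ndarray    # (N,)   intensity carried by each sample

    def propagate_wavefront(front, scene, step, attenuation, min_intensity):
        """Advance one emitter's wavefront through the scene until it is too dim."""
        while front.positions.shape[0] > 0:
            # Advance every sample along its propagation direction by one step.
            front.positions = front.positions + step * front.directions

            # Stand-in for the real falloff, which would scale with the growth
            # of the local area of the front.
            front.intensity = front.intensity * attenuation

            # Drop samples that have fallen below the cut-off intensity.
            keep = front.intensity > min_intensity
            front = Wavefront(front.positions[keep],
                              front.directions[keep],
                              front.intensity[keep])
            if front.positions.shape[0] == 0:
                break

            # Where samples have crossed a surface, reflect them; this is also
            # where the front would be re-meshed / retopologised.
            hits = scene.intersect(front.positions, front.directions)  # hypothetical
            front.directions[hits.mask] = hits.reflected_directions    # hypothetical
            front = scene.retopologise(front)                          # hypothetical
        return front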

Such a scheme would be able to store polarisation and phase in the wavefront, and could thus calculate interference effects at surfaces (provided previous wavefronts that hit the surface were recorded). Diffraction around barriers would be possible as well.
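
As a toy example of the interference part (ignoring polarisation to keep it short): if every wavefront arrival at a surface point were recorded as an amplitude and a phase, the intensity there would come from summing complex amplitudes rather than summing intensities. The function below is just a sketch of that bookkeeping, nothing more.

    import numpy as np

    def surface_intensity(amplitudes, phases):
        """Intensity at a surface point given all recorded wavefront arrivals."""
        field = np.sum(amplitudes * np.exp(1j * phases))   # coherent (complex) sum
        return np.abs(field) ** 2

    # Two equal arrivals half a wavelength apart cancel; in phase they reinforce.
    print(surface_intensity(np.array([1.0, 1.0]), np.array([0.0, np.pi])))  # ~0
    print(surface_intensity(np.array([1.0, 1.0]), np.array([0.0, 0.0])))    # 4.0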

The problem with this approach is complexity. The area light at the top of a Cornell box is square: its left side sends light to the right, and its right side sends light to the left, so every point in the scene receives light from the whole area of the emitter at once, from many directions. A single mesh, with one direction and one intensity per point, is not able to capture that much information -- my approach above was too simplistic.

The next step, then, is to make each part of the mesh emit spherical waves. We could then propagate them using the Huygens-Kirchhoff principle if we cared about diffraction effects. This gets very complicated, as we would essentially need to store the state of the light field throughout the volume of interest, at a resolution fine enough to observe interference effects (otherwise there's not much point in using secondary waves at all). If we do not use secondary waves during propagation, but merely propagate the spherical waves outward, then this is just a very complicated way of doing photon mapping.
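
For what it's worth, the secondary-wave version would look something like the sum below: every sample on the current front acts as a spherical wavelet, and the field at a receiver point is the sum of e^{ikr}/r contributions. This only captures the flavour of Huygens-Fresnel (the obliquity factor is dropped, and the function and parameter names are made up); the real point is visible in the shape of it -- every front sample contributes to every receiver point, which is where the cost explodes.

    import numpy as np

    def secondary_wave_field(front_points, front_field, area_per_point,
                             receiver, wavelength):
        """Complex field at `receiver` summed from secondary sources on the front."""
        k = 2.0 * np.pi / wavelength
        r = np.linalg.norm(front_points - receiver, axis=1)           # distances to receiver
        wavelets = front_field * np.exp(1j * k * r) / r               # spherical wavelets e^{ikr}/r
        return np.sum(wavelets) * area_per_point / (1j * wavelength)  # discretised Huygens-Fresnel sum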

So I conclude that this method, while interesting, is not practical, except for very specific scenes which may require such modelling.

Diffraction and interference are probably still better done by storing and propagating phase with rays and image pixels -- e.g. having 8 images, each storing the accumulated intensity for photons with a certain phase, and combining them at the end of rendering to show interference effects. Actually that sounds kind of cool; I'll have to try it one day. Diffraction could be done by using fuzzy intersection code and bending rays a random amount toward an edge as they pass it.
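
A quick sketch of the 8-image idea: per pixel, each ray contribution goes into the bin whose phase angle is nearest, and the bins are combined at the end as phasors so that opposite bins can cancel. I'm treating the bin contents as amplitudes here rather than raw intensities, since accumulating intensity alone would lose the cancellation; the function names and the tiny example are just illustrative.

    import numpy as np

    NUM_BINS = 8
    BIN_PHASES = 2.0 * np.pi * np.arange(NUM_BINS) / NUM_BINS

    def accumulate(images, pixel, amplitude, phase):
        """Add one ray's contribution to the phase bin closest to its phase."""
        b = int(np.round(phase / (2.0 * np.pi) * NUM_BINS)) % NUM_BINS
        images[b][pixel] += amplitude

    def combine(images):
        """Combine the binned images into one image, with interference."""
        field = sum(img * np.exp(1j * p) for img, p in zip(images, BIN_PHASES))
        return np.abs(field) ** 2

    # Two equal contributions half a cycle apart cancel in pixel (0, 0).
    images = [np.zeros((4, 4)) for _ in range(NUM_BINS)]
    accumulate(images, (0, 0), 1.0, 0.0)
    accumulate(images, (0, 0), 1.0, np.pi)
    print(combine(images)[0, 0])   # ~0 -- destructive interference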

Edit: Someone's already thought of this, and called it Beam tracing.
