Tuesday, April 15, 2008
The light piercing the fog in the top image is smooth, realistic and computationally lightweight. Why? Because UC San Diego computer scientists have figured out a way to take the super-realistic-but-computationally-taxing "photon mapping" approach and put it on a computational diet. The result? A more efficient way to get extremely lifelike computer-generated images, especially of scenes where light is passing through fog, smoke, dust or other "participating media."
The bottom image was created using the conventional photon mapping approach, given the same computational "budget." You can get the same high-quality image using the conventional approach, but it's going to take you a long time. Why? Because you have to sample the light at many, many points along the ray from the camera to the objects in the scene. If you don't do enough sampling, you'll miss light that is hanging out between the camera and the objects in the scene. And when you start missing a bunch of light, your image starts to get noisy, which is what you're seeing in the bottom image. Since there was not much room for heavy computation, the conventional approach to photon mapping had to make do with an inadequate number of light-collection points along the ray.
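To see why sparse sampling along the ray leads to noise, here's a minimal sketch of the idea in Python. It is not the researchers' algorithm, just a toy Monte Carlo ray-marching estimator: the function `light_at` is a made-up stand-in for "how much light is scattered toward the camera at distance t along the ray" (a narrow bright shaft in the fog), and we estimate the total in-scattered light by averaging random samples along the ray. With only a few samples per ray, repeated estimates swing wildly (that swing is the noise in the bottom image); with many samples they settle down, but at a much higher cost.

```python
import math
import random

def inscattered_radiance(ray_length, light_at, num_samples, rng):
    """Monte Carlo estimate of the light scattered toward the camera
    along a ray, by averaging `light_at(t)` at random points (toy
    ray marching, not the paper's method)."""
    total = 0.0
    for _ in range(num_samples):
        t = rng.uniform(0.0, ray_length)  # random point along the ray
        total += light_at(t)
    # Average sample value times ray length estimates the integral
    # of in-scattered light over the ray.
    return total / num_samples * ray_length

# Hypothetical fog with one bright shaft of light near t = 5.
def light_at(t):
    return math.exp(-(t - 5.0) ** 2)  # narrow Gaussian bright spot

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(0)
# Cheap: 8 samples per ray. Estimates vary a lot run to run -> noise.
coarse = [inscattered_radiance(10.0, light_at, 8, rng) for _ in range(100)]
# Expensive: 512 samples per ray. Estimates cluster near the true
# integral (about 1.77 here), but each ray costs 64x more lookups.
fine = [inscattered_radiance(10.0, light_at, 512, rng) for _ in range(100)]

print(variance(coarse) > variance(fine))  # sparse sampling is noisier
```

Under this toy model, the variance of the cheap estimator is roughly 64 times that of the expensive one, which is exactly the quality-versus-cost trade-off the post describes: at a fixed budget, the conventional approach is stuck with the noisy end of that trade.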
Read the full press release here.
Check out the paper here.