Volume-Aware Extinction Mapping

Abstract

Simulating how light interacts with participating media produces visually compelling effects such as translucence and volumetric shadowing. However, the complex inner structure of participating media requires vast amounts of memory to store and costly computations to render. The cost of offline lighting estimation within volumes is usually reduced by caching strategies such as Deep Shadow Maps [Lokovic and Veach 2000], in which lists of volume samples are built to represent light attenuation; these lists are then sorted, filtered, and compressed prior to rendering. Real-time methods such as [Jansen and Bavoil 2010; Delalandre et al. 2011] avoid list management altogether by projecting light attenuation into a Fourier basis. While effective, these methods also encode the empty space between participating media in the Fourier representation, which leads to ringing artifacts in complex environments. Volume-Aware Extinction Maps instead project and integrate spatially unordered sets of volume samples into a volume-aware functional space, enabling high-quality interactive rendering of production scenes.
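To make the "projection into a Fourier basis" that the abstract contrasts against concrete, the following sketch accumulates unordered extinction samples into a truncated Fourier series and reconstructs transmittance analytically, in the spirit of Fourier Opacity Mapping [Jansen and Bavoil 2010] rather than the Volume-Aware Extinction Maps technique itself. The coefficient count, all names, and the two-slab test scene are illustrative assumptions, not the talk's implementation.

```cpp
// Sketch of Fourier-basis extinction accumulation: volume samples along a light
// ray are splatted into a small set of cosine/sine coefficients in any order,
// and transmittance is reconstructed by integrating the truncated series.
#include <cmath>
#include <cstdio>

constexpr int   kNumCoeffs = 8;            // number of harmonics (illustrative choice)
constexpr float kPi        = 3.14159265358979f;

struct ExtinctionMap {
    float a[kNumCoeffs + 1] = {};          // cosine coefficients, a[0] is the DC term
    float b[kNumCoeffs + 1] = {};          // sine coefficients (b[0] unused)

    // Splat one sample: 'tau' is the sample's optical thickness contribution
    // (extinction coefficient times its extent along the ray) at normalized
    // light-space depth d in [0,1]. Accumulation is a commutative sum, so no
    // per-pixel list needs to be built, sorted, or compressed.
    void addSample(float tau, float d) {
        a[0] += 2.0f * tau;
        for (int k = 1; k <= kNumCoeffs; ++k) {
            a[k] += 2.0f * tau * std::cos(2.0f * kPi * k * d);
            b[k] += 2.0f * tau * std::sin(2.0f * kPi * k * d);
        }
    }

    // Transmittance at normalized depth z: integrate the truncated Fourier
    // series of the extinction function from 0 to z, then exponentiate.
    float transmittance(float z) const {
        float opticalDepth = 0.5f * a[0] * z;
        for (int k = 1; k <= kNumCoeffs; ++k) {
            const float w = 2.0f * kPi * k;
            opticalDepth += (a[k] / w) * std::sin(w * z)
                          + (b[k] / w) * (1.0f - std::cos(w * z));
        }
        return std::exp(-opticalDepth);
    }
};

int main() {
    // Two thin slabs of media separated by empty space: the gap is encoded in
    // the same truncated basis, which is the source of the ringing artifacts
    // the abstract mentions.
    ExtinctionMap map;
    for (float d = 0.10f; d < 0.20f; d += 0.01f) map.addSample(0.05f, d);
    for (float d = 0.70f; d < 0.80f; d += 0.01f) map.addSample(0.05f, d);

    for (float z = 0.0f; z <= 1.0f; z += 0.25f)
        std::printf("T(%.2f) = %.3f\n", z, map.transmittance(z));
    return 0;
}
```

Because the splats are order-independent sums, this avoids the list management of Deep Shadow Maps; the trade-off, visible in the two-slab example above, is that empty space must be represented by the same truncated series as the media itself.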

Publication
In Proceedings of SIGGRAPH 2012 Talks, Los Angeles. "Silence! Eliminate the Noise" talk session.