Supervised by
Dr. rer. nat. Christoph Peters
Download
GitHub Repository
Abstract
Shadow computation is important for real-time rendering; without shadows, many scenes would look rather boring
and unrealistic. Computing realistic shadows is not easy, especially for real-time applications.
The technique we want to improve is therefore the real-time shadow technique shadow mapping,
more precisely cascaded shadow mapping for parallel global light sources, i.e. directional lights.
Cascaded shadow mapping improves the effective shadow-map resolution by splitting the view frustum into
depth ranges, each assigned its own shadow map that encloses that range. These depth ranges are called cascades.
To improve visual quality at the cost of a manageable overhead, we introduce a technique to compute
the split positions of cascaded shadow mapping with four cascades.
Non-adaptive split computation leads to poor results for many scenes.
We want to improve on this by taking the depth distribution seen by the camera into account,
so our technique falls into the category of sample distribution shadow maps.
The depth distribution consists of the values in the camera's depth buffer and
provides spatial knowledge about the geometry of the scene.
Adaptive split computation methods, such as finding the minimum and maximum scene depth and redefining the
near and far plane accordingly, are better than non-adaptive methods but still suffer from problems
such as being slow or generating empty cascades.
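For reference, the following is a minimal sketch of such a min/max-based adaptive baseline: it tightens the split range to the observed scene depth and distributes the splits logarithmically in between. The function name and the logarithmic distribution are illustrative assumptions and explicitly not the moment-based method proposed here.

```python
import numpy as np

def baseline_adaptive_splits(view_depths, num_cascades=4):
    """Illustrative baseline: redefine near/far from the observed scene depth
    and place the cascade splits logarithmically in between.
    This is NOT the moment-based technique described in this thesis."""
    z_min = float(view_depths.min())
    z_max = float(view_depths.max())
    t = np.arange(1, num_cascades) / num_cascades
    # logarithmic interpolation between the tightened near and far plane
    return z_min * (z_max / z_min) ** t
```

With four cascades this yields three split depths, so the cascades cover [z_min, s1], [s1, s2], [s2, s3] and [s3, z_max].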
To improve on these problems, we use a construct from stochastics called moments. The n-th power moment is the
sum over all samples of a given distribution, each taken to the power of n, divided by the number of samples.
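Written as a formula, with z_1, ..., z_N denoting the N depth samples, the n-th power moment is

    m_n = \frac{1}{N} \sum_{i=1}^{N} z_i^{n}.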
We use eight power moments to estimate the given depth distribution,
which results in a proper partition of the view frustum. To compute the moments, we propose a reduce shader.
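The following is a minimal CPU-side sketch in Python of what such a reduce pass accumulates, assuming the (possibly warped) depths of all valid pixels are available as an array; the function name is illustrative and the GPU version would accumulate per tile.

```python
import numpy as np

def power_moments(depths, num_moments=8):
    """Compute the first num_moments power moments of a depth distribution.
    On the GPU this accumulation is done by a reduce shader over tiles;
    here it is a plain NumPy sketch."""
    moments = np.empty(num_moments)
    acc = np.ones_like(depths)
    for n in range(num_moments):
        acc = acc * depths          # acc now holds depths ** (n + 1)
        moments[n] = acc.mean()     # m_{n+1} = (1/N) * sum(z_i ** (n + 1))
    return moments
```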
The main goal of this technique is to find depth intervals, or clusters, in which all values lie close together.
Placing a split between two of these intervals is a good idea because, if we also use bounding box fitting
to minimize the size of the generated cascades,
we neither end up with excessively inflated cascades nor with empty ones.
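As an aside, the bounding box fitting mentioned above can be sketched as follows: the corners of a cascade's frustum slice are transformed into light space and enclosed in an axis-aligned box. The NDC depth convention, the matrix layout, and all names are assumptions for illustration, not necessarily the thesis implementation.

```python
import numpy as np

def cascade_light_space_bounds(inv_view_proj, light_view, split_near, split_far):
    """Fit a light-space axis-aligned bounding box around one cascade.
    split_near/split_far are the cascade bounds as NDC depths in [-1, 1]
    (OpenGL convention); inv_view_proj is the inverse camera view-projection."""
    corners = []
    for x in (-1.0, 1.0):
        for y in (-1.0, 1.0):
            for z in (split_near, split_far):
                p = inv_view_proj @ np.array([x, y, z, 1.0])
                corners.append(p[:3] / p[3])        # perspective divide
    corners = np.array(corners)
    # transform the eight corners into light space and take the min/max
    ones = np.ones((8, 1))
    light_corners = (light_view @ np.hstack([corners, ones]).T).T[:, :3]
    return light_corners.min(axis=0), light_corners.max(axis=0)
```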
To find such intervals and the splits between them, we compute a moment-based reconstruction of the cumulative histogram.
This reconstruction acts like a smooth density estimate.
Computing the splits then amounts to finding the minima in the gradient of this reconstruction.
We approximate this gradient using the moments, which results in the reciprocal of a polynomial.
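Symbolically, writing g for the approximated gradient and q for the polynomial obtained from the moments (both names are chosen here only for illustration), this means

    g(z) \approx \frac{1}{q(z)}.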
To find the minima, we therefore only have to find the maxima of this polynomial, which amounts to finding the
roots of the derivative of the polynomial and checking which of them are maxima.
For finding these roots we use Laguerre's method.
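For reference, below is a self-contained Python sketch of Laguerre's method with root deflation; the coefficient ordering, names, and stopping criteria are illustrative, and the thesis implementation may differ in detail.

```python
import numpy as np

def laguerre_root(coeffs, x0=0.0, max_iter=80, tol=1e-12):
    """Find one (possibly complex) root of the polynomial given by `coeffs`
    (highest-degree coefficient first) using Laguerre's method."""
    n = len(coeffs) - 1
    p = np.polynomial.polynomial.Polynomial(coeffs[::-1])  # ascending order
    dp, ddp = p.deriv(), p.deriv(2)
    x = complex(x0)
    for _ in range(max_iter):
        px = p(x)
        if abs(px) < tol:
            break
        g = dp(x) / px
        h = g * g - ddp(x) / px
        s = np.sqrt(complex((n - 1) * (n * h - g * g)))
        denom = g + s if abs(g + s) >= abs(g - s) else g - s
        if denom == 0:
            denom = tol
        x -= n / denom
    return x

def polynomial_roots(coeffs):
    """Find all roots by repeated Laguerre iterations and deflation."""
    coeffs = [complex(c) for c in coeffs]
    roots = []
    while len(coeffs) > 2:
        r = laguerre_root(coeffs)
        roots.append(r)
        # synthetic division: divide the found root out of the polynomial
        deflated = [coeffs[0]]
        for c in coeffs[1:-1]:
            deflated.append(c + r * deflated[-1])
        coeffs = deflated
    if len(coeffs) == 2:
        roots.append(-coeffs[1] / coeffs[0])
    return roots
```

For instance, polynomial_roots([1, 0, -1]) returns approximately -1 and 1, the roots of x^2 - 1.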
Should we not find enough minima to set all three splits, we place the remaining splits inside the cascades generated by
the splits found so far, such that the maximum of the approximated screen areas of the resulting cascades becomes minimal.
We thus try to distribute the screen evenly among the cascades, under the constraint that every minimum we did find is used as a split.
We recommend warping the depths before that computation in order to minimize the impact of the perspective
projection. Pixels that do not see any geometry, for instance, are excluded using a
binary frame buffer texture that stores the validity of each pixel in its color channel.
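A minimal sketch of this preprocessing is given below, assuming a depth buffer in [0, 1] with a standard perspective projection and a validity mask; the linearizing warp and all names are illustrative choices, not necessarily the warp used in the thesis.

```python
import numpy as np

def linearize_depth(d, near, far):
    """Map hyperbolic depth-buffer values d in [0, 1] back to view-space depth.
    One possible warp to counteract the perspective projection."""
    return near * far / (far - d * (far - near))

def valid_warped_depths(depth_buffer, validity, near, far):
    """Keep only pixels marked valid (i.e. pixels that see geometry)
    and warp their depths before the split computation."""
    d = depth_buffer[validity > 0]
    return linearize_depth(d, near, far)
```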
For the shadow computation we use percentage-closer filtering with a 3x3 kernel and
bilinear interpolation.
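To illustrate this filtering step, the following is a small CPU-side sketch of 3x3 percentage-closer filtering in which each tap bilinearly interpolates the four surrounding depth comparisons (as hardware comparison samplers do); the shadow-map conventions, the bias, and all names are assumptions.

```python
import numpy as np

def compare(shadow_map, ix, iy, depth):
    """One depth comparison against a clamped texel: 1.0 if lit, 0.0 if shadowed."""
    h, w = shadow_map.shape
    ix = min(max(ix, 0), w - 1)
    iy = min(max(iy, 0), h - 1)
    return 1.0 if depth <= shadow_map[iy, ix] else 0.0

def sample_pcf_tap(shadow_map, uv, depth):
    """Bilinearly interpolate the four depth comparisons around uv in [0, 1]^2."""
    h, w = shadow_map.shape
    x, y = uv[0] * w - 0.5, uv[1] * h - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    top = (1 - fx) * compare(shadow_map, x0, y0, depth) + fx * compare(shadow_map, x0 + 1, y0, depth)
    bottom = (1 - fx) * compare(shadow_map, x0, y0 + 1, depth) + fx * compare(shadow_map, x0 + 1, y0 + 1, depth)
    return (1 - fy) * top + fy * bottom

def pcf_shadow(shadow_map, uv, receiver_depth, bias=1e-3):
    """3x3 percentage-closer filtering: average nine bilinear comparison taps."""
    h, w = shadow_map.shape
    texel = np.array([1.0 / w, 1.0 / h])
    offsets = [(dx, dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    taps = [sample_pcf_tap(shadow_map, np.asarray(uv) + np.array(o) * texel,
                           receiver_depth - bias) for o in offsets]
    return sum(taps) / len(taps)
```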