Supervised by M.Sc. Addis Dittebrandt
Download
GitHub Repository

Abstract

As modern graphics hardware improves in the field of ray tracing acceleration, it becomes increasingly apparent that ray tracing will be used more frequently in real-time applications. Path tracing thus marks the current direction of research in modern real-time rendering. Even with modern hardware, it is still not feasible to trace more than one path per pixel in real time. This leads to a large amount of variance, which manifests itself as noise. One way to reduce that noise is filtering. Filtering over spatial regions yields the technique known as path space filtering, but it also results in a biased estimator. The image quality is thus determined by a variance-bias trade-off. This thesis introduces two techniques to control this trade-off by using the spatial structure provided by path space filtering to estimate the variance within a spatial region. Based on these variance estimates, the thesis derives different ideas to improve both image quality and frame times. The first technique introduces path survival and interpolation between path tracing and path space filtering by analyzing the variance on the primary surface, hence the name adaptive path space filtering for primary surfaces. It manages to improve the frame times of path space filtering beyond those of path tracing while also generating better image quality in some cases. The second technique analyzes the variance along paths and terminates a path into a spatial cell once the variance exceeds a given threshold. It is thus called adaptive path graphs. While computationally heavy, it generates interesting results regarding the variance-bias trade-off and can handle difficult situations, especially when combined with the first technique.
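The variance-driven interpolation of the first technique can be sketched as follows. This is a minimal CPU-side illustration: the function names, the linear ramp, and the default threshold are assumptions for the sake of the example, not the exact heuristic used in the thesis.

```python
import statistics

def blend_weight(samples, threshold):
    """Map the sample variance of a spatial cell to a blend factor in [0, 1].

    0 -> pure path tracing (variance is low, filtering is unnecessary),
    1 -> pure path space filtering (variance is high, filtering pays off).
    The linear ramp and the threshold are illustrative choices.
    """
    if len(samples) < 2:
        return 1.0  # too few samples to estimate variance; prefer filtering
    var = statistics.variance(samples)
    return min(var / threshold, 1.0)

def adaptive_estimate(path_traced, filtered, samples, threshold=0.25):
    """Interpolate between the unbiased path-traced value and the biased
    but low-variance filtered value, driven by the estimated variance."""
    w = blend_weight(samples, threshold)
    return (1.0 - w) * path_traced + w * filtered
```

A cell whose radiance samples all agree keeps the unbiased path-traced result, while a noisy cell leans on the filtered value.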
Supervised by Dr. rer. nat. Christoph Peters
Download
GitHub Repository

Abstract

Shadow computation is important for real-time rendering: without shadows, many scenes would look flat and unrealistic. Computing realistic shadows is not easy, especially in real-time applications. The technique we want to improve on is shadow mapping, more precisely cascaded shadow mapping for parallel (directional) light sources. Cascaded shadow mapping improves the effective resolution by splitting the view frustum into depth ranges, each assigned its own shadow map that encloses the range. These depth ranges are called cascades. To improve visual quality at the cost of manageable overhead, we introduce a technique to compute the split positions for cascaded shadow mapping with four cascades. Non-adaptive split computation leads to bad results for many scenes. We improve on this by considering the depth distribution of the camera, thus falling into the category of sample distribution shadow maps. The depth distribution consists of the depth buffer values of the camera and provides spatial knowledge about the geometry of the scene. Adaptive split computation methods, such as finding the minimum and maximum scene depth and redefining the near and far planes, are better than non-adaptive methods but still suffer from problems such as being slow or generating empty cascades. To address these problems, we use a construct from stochastics called moments. The n-th power moment is the sum of all samples of a given distribution, each taken to the power of n, divided by the number of samples. We use eight power moments to estimate the given depth distribution, resulting in a proper partition of the view frustum. To compute the moments, we propose a reduce shader. The main goal of this technique is to find depth intervals or clusters in which all values are adjacent.
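The power moments described above can be written down in a few lines. On the GPU they would be accumulated by the proposed reduce shader, but the arithmetic is the same; the CPU-side formulation and the function name are illustrative:

```python
def power_moments(depths, n=8):
    """First n power moments of a depth distribution:
    m_k = (1/N) * sum(z**k) for k = 1..n.

    `depths` holds the (possibly warped) depth buffer values; the thesis
    uses n = 8 moments to summarize the whole distribution.
    """
    count = len(depths)
    return [sum(z ** k for z in depths) / count for k in range(1, n + 1)]
```

Eight scalars per frame are enough to reconstruct a smooth estimate of the depth distribution, which is what makes the moment-based split search cheap.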
Placing a split between two of these intervals is a good idea because, if we also use bounding box fitting to minimize the size of the generated cascades, we end up neither with excessively inflated cascades nor with empty ones. To do so, we calculate the moment-based reconstruction of the cumulative histogram. This reconstruction acts like a smooth density estimation. Calculating the splits amounts to finding the minima of the gradient of the reconstruction. We approximate this gradient using moments, resulting in the reciprocal of a polynomial. To find the minima, we only have to find the maxima of this polynomial, which in turn means finding the roots of its derivative and checking for maxima. For finding these roots we use Laguerre's method. Should we not find enough minima to set three splits, we place additional splits between the cascades generated by the splits found so far, such that the maximum of all approximated screen areas of the new cascades becomes minimal. We thus try to distribute the screen evenly among the cascades, while keeping every minimum we found as a split. We recommend warping the depths before this computation to minimize the impact of the perspective projection. Pixels that see no geometry are excluded with a binary frame buffer texture that stores the validity of each pixel in its color channel. For the shadow computation we use percentage-closer filtering with a 3x3 kernel and bilinear interpolation.
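Laguerre's method, used above to find the roots of the polynomial's derivative, can be sketched as a short single-root routine. This is a plain-Python illustration under the usual textbook formulation; the thesis implementation runs on the GPU and its exact details may differ:

```python
import cmath

def laguerre_root(coeffs, x0=0.0, tol=1e-10, max_iter=100):
    """Find one root of a polynomial with Laguerre's method.

    coeffs[k] is the coefficient of x**k, so the degree is len(coeffs) - 1.
    Works in complex arithmetic; for the real polynomials arising from the
    moment reconstruction, real roots come out with negligible imaginary part.
    """
    n = len(coeffs) - 1
    x = complex(x0)
    for _ in range(max_iter):
        # Evaluate p, p' and p'' at x with one Horner-style pass.
        p = pp = ppp = 0j
        for c in reversed(coeffs):
            ppp = ppp * x + pp
            pp = pp * x + p
            p = p * x + c
        ppp *= 2.0
        if abs(p) < tol:
            return x
        g = pp / p
        h = g * g - ppp / p
        sq = cmath.sqrt((n - 1) * (n * h - g * g))
        # Pick the denominator with the larger magnitude for stability.
        d1, d2 = g + sq, g - sq
        denom = d1 if abs(d1) > abs(d2) else d2
        if denom == 0:
            return x
        step = n / denom
        x -= step
        if abs(step) < tol:
            return x
    return x
```

Laguerre's method converges cubically near simple roots and is robust to poor starting values, which makes it a common choice when only a handful of polynomial roots are needed per frame.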