Abstract

Physically-based rendering algorithms generate photorealistic images of virtual scenes. By simulating light paths in a scene, complex physical effects such as shadows, reflections and volumetric scattering can be reproduced. Over the last decade, physically-based rendering methods have become efficient enough for widespread use. They are used to synthesize realistic imagery for visual effects, animated movies and games, as well as architectural, product and scientific visualization.

We investigate the use of physically-based rendering for inverse problems. For example, given a set of images (e.g., photographs of a real scene), we would like to reconstruct scene geometry, material properties and lighting conditions that, when rendered, reproduce the provided reference images. Such a task can be formalized as minimizing the difference between reference and rendered images over the space of scene parameters. The resulting non-linear objective functions can be minimized using a differentiable renderer and gradient descent. However, the complexity of physically-based light transport algorithms makes it infeasible to compute parameter gradients by naively applying off-the-shelf automatic differentiation (AD) tools. In this thesis, we present several novel algorithms that efficiently and accurately compute gradients of a physically-based renderer.

First, conventional AD cannot scale to the complexity of long light paths. For example, differentiable rendering of participating media requires differentiating light paths with potentially hundreds of interactions. We introduce path replay backpropagation, an unbiased method that enables differentiation of multiple-scattering light transport at a computational and storage complexity similar to that of the original simulation. Leveraging the invertibility of local Jacobians, our method efficiently differentiates even perfectly specular scattering and unbiased volume rendering using delta tracking. Path replay backpropagation is the first unbiased differentiable rendering method that scales to an arbitrary number of differentiated variables and an unbounded number of scattering events.

Second, differentiating a rendering algorithm requires handling parameter-dependent discontinuities due to occlusion, which is essential for reconstructing the geometry of objects. We propose a method that accurately differentiates renderings of surfaces represented by signed distance functions (SDFs). We leverage their spatial structure to construct a reparameterization of the integration domain that accounts for gradients due to occlusion changes and enables image-based reconstruction of objects of arbitrary topology, without requiring strong priors or knowledge of object silhouettes.

Lastly, inverse rendering has led to renewed interest in non-standard scene representations that are amenable to optimization. In the last part of the thesis, we introduce a novel transmittance model for participating media that allows scenes containing opaque surfaces to be represented as a scattering volume. This unified representation can be used to compress complex scenes into a sparse volumetric data structure whose renderings are visually almost identical to the original high-resolution scene. Our new model further benefits inverse rendering, recovering relightable volumetric representations that capture opaque surfaces more faithfully than prior models.
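The inverse-rendering formulation summarized above, minimizing an image difference over scene parameters by gradient descent, can be illustrated with a toy one-parameter sketch. The `render` function below is a hypothetical smooth stand-in for a renderer (not an actual light-transport simulation), and its analytic derivative plays the role that a differentiable renderer would fill:

```python
import math

# Toy "scene": a single parameter theta (e.g., an albedo value).
# A real physically-based renderer integrates light paths; here a
# simple smooth stand-in keeps the optimization loop runnable.
def render(theta):
    return [math.sin(theta), theta ** 2, 2.0 * theta]

def render_jacobian(theta):
    # Derivative of each "pixel" with respect to theta; a differentiable
    # renderer would supply this (e.g., via path replay backpropagation).
    return [math.cos(theta), 2.0 * theta, 2.0]

theta_true = 0.7
reference = render(theta_true)   # stands in for reference photographs

theta = 0.0                      # initial guess for the scene parameter
lr = 0.05
for _ in range(500):
    residual = [r - t for r, t in zip(render(theta), reference)]
    # Gradient of the squared-error objective sum((render - ref)^2):
    grad = sum(2.0 * res * d
               for res, d in zip(residual, render_jacobian(theta)))
    theta -= lr * grad           # gradient-descent update
```

After the loop, `theta` converges to the reference value `0.7`. The difficulty addressed in the thesis is that for a real renderer, the Jacobian involves differentiating a Monte Carlo light-transport simulation, which naive AD cannot do at scale.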
