Realistic Real-Time Rendering of Global Illumination and Hair through Machine Learning Precomputations

Abstract: Over the last decade, machine learning has gained considerable traction in many areas, and with the advent of new GPU models that include dedicated hardware for accelerating neural network inference, real-time applications have also begun to take advantage of these algorithms. In general, machine learning and neural network methods are not designed to run at the speeds required for rendering in high-performance real-time environments, except for specific and typically narrow uses. For example, several methods developed recently can denoise low-quality path-traced images, or upsample images rendered at a lower resolution, in real time. This thesis collects two methods that use machine learning to improve realistic scene rendering in such high-performance environments. Paper I presents a neural network application that compresses surface light fields into a set of unconstrained spherical Gaussians, enabling surfaces with global illumination to be rendered in a real-time environment. Paper II describes a filter based on a small convolutional neural network that denoises hair rendered with stochastic transparency in real time.
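As a point of reference for the representation used in Paper I, a spherical Gaussian lobe is commonly parameterized by an axis \mu, a sharpness \lambda, and an amplitude a, so a surface light field compressed into K such lobes would be evaluated for a direction v roughly as

  L_o(v) \approx \sum_{k=1}^{K} a_k \exp\!\left( \lambda_k \, (v \cdot \mu_k - 1) \right),

where the exact parameterization, and the precise sense of "unconstrained" (for example, lobes not restricted by normalization or positivity constraints), are assumptions stated here for illustration rather than details given in the abstract.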
