Deep probabilistic models for sequential and hierarchical data

Abstract: Consider the problem of writing a computer program that can recognize a pedestrian on the road. Such a program could be used in a car to apply the brakes automatically and avoid an accident. Writing it by hand is immensely difficult, but what if we could instead provide examples and let the program learn from them what characterizes a pedestrian? Machine learning can be described as the process of teaching a model (a computer program) to predict something (the presence of a pedestrian) with the help of data (examples) rather than through explicit programming.

This thesis focuses on a specific machine learning method called deep learning, which is arguably the main driver behind the recent upswing of machine learning in academia as well as in society at large. However, deep learning requires, by human standards, a huge amount of data to perform well, which can be a limiting factor. In this thesis we describe different approaches to reducing the amount of data needed by encoding some of our prior knowledge about the problem into the model. To this end we focus on sequential and hierarchical data, such as speech and written language.

Representing sequential output is in general difficult due to the complexity of the output space. Here, we take a probabilistic approach, focusing on sequential models in combination with a deep learning structure called the variational autoencoder. This is applied to a range of problem settings, from system identification to speech modeling.

The results come in three parts. The first contribution focuses on applications of deep learning to typical system identification problems, the intersection between the two areas, and how they can benefit from each other. The second contribution concerns hierarchical data, for which we propose a multiscale variational autoencoder inspired by image modeling. The final contribution is on the verification of probabilistic models, in particular how to evaluate the validity of a probabilistic output, also known as calibration.
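
For readers unfamiliar with the variational autoencoder mentioned above, the following is a minimal sketch in Python, assuming PyTorch is available. The layer sizes, Gaussian prior, and Bernoulli likelihood here are illustrative assumptions for a toy example, not the models developed in the thesis.

```python
# Minimal variational autoencoder (VAE) sketch.
# Assumptions: PyTorch, toy dimensions, standard Gaussian prior on z.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=200, z_dim=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)  # log-variance of q(z|x)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def neg_elbo(x, x_hat, mu, logvar):
    # Negative evidence lower bound:
    # reconstruction term + KL(q(z|x) || N(0, I))
    recon = nn.functional.binary_cross_entropy(x_hat, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

x = torch.rand(8, 784)  # dummy batch of inputs in [0, 1]
model = VAE()
x_hat, mu, logvar = model(x)
print(neg_elbo(x, x_hat, mu, logvar).item())
```

Training amounts to minimizing the negative ELBO over data batches; the thesis combines this kind of latent-variable structure with sequential models.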
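
Calibration, as used in the final contribution, asks whether predicted probabilities match the frequencies observed in data. As a rough illustration of one common diagnostic, here is a sketch of the expected calibration error for a binary prediction task, assuming NumPy; the equal-width binning and the synthetic data are illustrative assumptions, not the evaluation protocol of the thesis.

```python
# Expected calibration error (ECE) sketch for binary predictions.
# Assumptions: NumPy, equal-width bins, synthetic well-calibrated data.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Weighted average of |empirical accuracy - mean confidence| per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            conf = probs[mask].mean()   # mean predicted probability in bin
            acc = labels[mask].mean()   # observed frequency of the event
            ece += mask.mean() * abs(acc - conf)
    return ece

rng = np.random.default_rng(0)
probs = rng.uniform(size=1000)
# Labels drawn with the predicted probabilities, so calibration holds
# by construction and the ECE should be close to zero.
labels = (rng.uniform(size=1000) < probs).astype(float)
print(expected_calibration_error(probs, labels))
```

A well-calibrated model's predicted probability of an event should, over many predictions, match how often the event actually occurs.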
