Statistical inference with deep latent variable models

Abstract: Finding a suitable way to represent the information in a dataset is one of the fundamental problems in Artificial Intelligence. When labeled data are scarce, unsupervised learning algorithms help to discover useful representations. One application of such models is imputation, where missing values are estimated by learning the underlying correlations in a dataset. This thesis explores two unsupervised techniques: stacked denoising autoencoders and variational autoencoders (VAEs). Using stacked denoising autoencoders, we developed a consistent framework for handling incomplete data with variables of mixed types. This deterministic method improved missing-data estimation compared with several state-of-the-art imputation methods. We then explored variational autoencoders, a probabilistic form of autoencoder that jointly optimizes a neural-network-based inference model and a generative model. Despite the promise of these techniques, a key difficulty is an uninformative latent space. We propose a flexible family of priors for VAEs, Student's t-distributions, to learn a more informative latent representation. By comparing different forms of the covariance matrix for both Gaussian and Student's t-distributions, we conclude that a weakly informative prior such as the Student's t, with a small number of parameters, improves the ability of VAEs to approximate the true posterior. Finally, we used VAEs with both Gaussian and Student's t priors as multiple-imputation methods on two datasets with missing values. Moreover, using the labels provided with these datasets, we trained a supervised network and evaluated the estimates of the missing variables. In both cases, VAEs show improvements over competing methods.
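For reference, the VAE objective the abstract refers to is the standard evidence lower bound (ELBO); a heavier-tailed prior enters through the regularization term. This is a generic sketch of that objective, not necessarily the exact parameterization used in the thesis:

    \mathcal{L}(\theta, \phi; x)
      = \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]}_{\text{reconstruction}}
      - \underbrace{\mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)}_{\text{regularization}},
      \qquad p(z) = \mathrm{St}(z;\, 0,\, \Sigma,\, \nu)

Here the usual standard normal prior p(z) = N(0, I) is replaced by a Student's t-distribution with degrees of freedom \nu and a chosen covariance form \Sigma; this is what the abstract calls a weakly informative prior. Note that, unlike the Gaussian case, the KL term then generally has no closed form and is typically estimated by Monte Carlo sampling.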
