On the Robustness of Statistical Models: Entropy-based Regularisation and Sensitivity of Boolean Deep Neural Networks

Abstract: Models like deep neural networks are known to be sensitive to many kinds of noise. Unfortunately, due to the black-box nature of these models, it is in general not known why this is the case. Here, we analyse and attack these problems from three different perspectives. The first (Paper I) concerns noise in the training labels. We introduce a regularisation scheme that accurately identifies wrongly annotated labels and, in some cases, allows the model to be trained as if the noise were not present. The second perspective (Paper II) studies the use of regularisation to reduce variance in estimation. Due to the bias-variance trade-off, choosing an appropriate regularisation penalty and strength is difficult. We introduce a methodology that reduces the bias induced by a general regularisation penalty, bringing the estimate closer to the true value. In the final perspective (Paper III), we study the sensitivity of deep neural networks to noise in their inputs, and in particular how this behaviour depends on the model architecture. These behaviours are studied within the framework of noise sensitivity and noise stability of Boolean functions.
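To make the framework of Paper III concrete: the noise sensitivity of a Boolean function f at noise rate δ is the probability that f changes value when each input bit is flipped independently with probability δ. The following is a minimal sketch (not taken from the dissertation) that estimates this quantity by Monte Carlo sampling, using 3-bit majority as an illustrative example function.

```python
import random

def noise_sensitivity(f, n, delta, trials=20000):
    """Monte Carlo estimate of NS_delta(f): the probability that f
    changes value when each of the n input bits is flipped
    independently with probability delta."""
    disagreements = 0
    for _ in range(trials):
        x = [random.choice([0, 1]) for _ in range(n)]
        # Flip each bit of x independently with probability delta.
        y = [b ^ (random.random() < delta) for b in x]
        disagreements += f(x) != f(y)
    return disagreements / trials

# Illustrative example: majority vote on 3 bits.
def maj3(x):
    return int(sum(x) >= 2)

random.seed(0)
est = noise_sensitivity(maj3, 3, 0.1)
```

A highly noise-sensitive function (such as parity on many bits) yields estimates near 1/2 even for small δ, whereas stable functions like majority stay close to 0; comparing such curves across architectures is the kind of analysis the Boolean-function framework enables.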
