Deep Learning on the Edge: A Flexible Multi-level Optimization Approach

Abstract: Recent advances in Deep Learning (DL) research have been adopted in a wide variety of applications, including autonomous driving, AI in health care, and smart homes. In parallel, research in high-performance embedded computing has produced advanced hardware platforms that offer enhanced performance and energy efficiency for demanding computations. However, the high computational and memory demands of DL models remain a challenge for embedded computing. Algorithmic optimizations can reduce the computational and memory requirements of DL models, and hardware implementations and architectures can be tuned to meet the requirements of DL applications. This thesis identifies insufficient coordination between hardware implementations and model optimizations as a limit on the efficiency of the resulting implementations. In addition, the implementation methods themselves adapt poorly to changes in the model and in application constraints. The overarching theme of this thesis is to study and propose methods for the efficient and flexible implementation of DL models on embedded platforms. The work in this thesis bridges the gap between the algorithmic optimizations of DL models and the hardware-specific optimizations of embedded platforms, and investigates the features that require support from DL domain-specific architectures. In addition, a method for multi-objective quantization of DL models is proposed that addresses both model error and platform performance metrics. Post-training optimization techniques are employed to facilitate the multi-objective optimization of the models, because they do not require retraining after model optimization. This thesis also reviews the optimization methods that are known to have been applied to improve the implementation efficiency of DL models. It highlights the most fruitful optimizations found in existing, highly efficient implementations and applies them in the proposed methods.
A method for mapping Convolutional Neural Networks (CNNs) onto Epiphany, a manycore architecture, is proposed and evaluated. A post-training method for the quantization and approximation of Recurrent Neural Network (RNN) models is also proposed and evaluated on four RNN models. The proposed quantization method is then used in a hardware-aware multi-objective optimization of RNN models for deployment on the SiLago and Bit Fusion architectures.
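To illustrate the post-training setting described above, the sketch below applies symmetric uniform quantization to a weight tensor and measures the resulting error, which can serve as the model-error objective in a multi-objective search. This is a minimal illustrative example, not the thesis's actual method; the function names and the choice of symmetric per-tensor scaling are assumptions.

```python
import numpy as np

def quantize_tensor(w, num_bits=8):
    """Symmetric uniform post-training quantization of a weight tensor.

    Illustrative sketch only: maps the largest weight magnitude to the
    largest representable integer; no retraining is involved.
    """
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax               # per-tensor scale factor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize_tensor(q, scale):
    """Reconstruct an approximate float tensor from integers and scale."""
    return q.astype(np.float32) * scale

# The reconstruction error below is one candidate objective; a
# hardware-aware search would trade it off against platform metrics
# such as latency or energy per inference.
w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_tensor(w, num_bits=8)
mse = np.mean((w - dequantize_tensor(q, s)) ** 2)
```

In a multi-objective formulation, the bit width `num_bits` would vary per layer, with the error metric evaluated jointly with the target platform's performance model.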

Download the full dissertation (PDF).