Distributed and federated learning of support vector machines and applications

Abstract: Machine Learning (ML) has achieved remarkable success in solving classification, regression, and related problems over the past decade. In particular, the exponential growth of digital data makes using ML inevitable and necessary to exploit the wealth of information hidden inside the data. However, this poses new challenges for traditional serial learning methods concerning scalability, as the learning process becomes slow, if it is feasible at all. Distributed and parallel learning are natural approaches for improving the running time of ML algorithms. Support Vector Machines (SVMs) are successful and popular supervised machine learning models for which parallel and distributed computational approaches exist to tackle the challenges arising from big data. However, efficient parallel and distributed implementation of SVMs is a complex endeavour. To make matters worse, distributed learning of SVMs may raise privacy concerns, particularly for models built from data distributed across multiple nodes where, due to the confidentiality of the data, intellectual property interests, or other reasons, the information cannot easily be shared. Despite a large body of prior work, the problems of learning SVMs on big data are not fully solved. Therefore, in this dissertation we shed light on efficient parallel SVM implementations and on how they can be further improved.

This dissertation is based on a collection of publications. The research presented here addresses problems in parallel and distributed computing of SVMs through four research questions, and its key contribution is to answer these questions through five research articles.

For the first research question, we explore the available parallel approaches for learning SVMs on large-scale problems. We investigate important factors such as the algorithmic approaches, HPC tools, strategies, and heuristics used for effective parallel SVM implementations. All of these helped identify the state-of-the-art parallel SVMs and their pros and cons, and suggest potential avenues for future studies. We conclude that it is the responsibility of the user to make judicious choices to balance the trade-offs.

To address the second research question, we explore the impact of changes in the problem size and of the important SVM parameters that lead to significant performance improvements. It turns out that the problem size affects the optimal choice of the important SVM parameters. In addition, we show the existence of a threshold on the number of cores beyond which the training time does not improve.

The third research question investigates the effect of network topology on the convergence of network-based distributed SVMs. The three key contributions are to 1) show the effect of the expansion property and the connectivity of the underlying communication network on the convergence of the algorithm, 2) present a preferable network topology, and 3) supply an implementation that makes the theoretical advances practically available. The results suggest that graphs with higher node degrees and larger spectral gaps, and thus higher connectivity, exhibit accelerated convergence.

The fourth research question investigates federated learning of SVMs, in which data privacy is protected under limited and private communication between agents. The key contribution is the incorporation of differential privacy into the learning procedure. The results show that the learning process 1) respects data privacy, 2) achieves accuracy comparable to that of the algorithm without privacy preservation, and 3) yields tight empirical privacy guarantees after convergence. Finally, we combine all the contributions of the dissertation and offer recommendations for implementing an efficient privacy-preserving framework for parallel computing of SVMs on large-scale problems.
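To make the topology result concrete, the following minimal Python sketch (illustrative only, not code from the dissertation) compares the spectral gap of two communication topologies on eight nodes. Here the gap is taken as the algebraic connectivity, i.e. the second-smallest eigenvalue of the graph Laplacian; the dissertation may measure connectivity through a related quantity such as the gap of the gossip matrix. The complete graph, with higher node degrees, has a much larger gap than the ring, which is consistent with its faster convergence.

    import numpy as np

    def spectral_gap(adj):
        """Second-smallest eigenvalue of the graph Laplacian L = D - A
        (the algebraic connectivity); larger values mean a
        better-connected graph."""
        laplacian = np.diag(adj.sum(axis=1)) - adj
        return np.sort(np.linalg.eigvalsh(laplacian))[1]

    n = 8  # number of compute nodes

    # Ring topology: each node communicates with only two neighbours.
    ring = np.zeros((n, n))
    for i in range(n):
        ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1

    # Complete topology: every pair of nodes communicates.
    complete = np.ones((n, n)) - np.eye(n)

    print(f"ring:     {spectral_gap(ring):.3f}")      # ~0.586
    print(f"complete: {spectral_gap(complete):.3f}")  # 8.000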

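Similarly, one common way to incorporate differential privacy into SVM training is to clip and perturb per-example subgradients of the hinge loss. The sketch below is a generic illustration under that assumption; the dissertation's federated algorithm, with private communication between agents and its formal privacy accounting, is more involved, and the hyperparameter names (clip, noise_scale) are hypothetical.

    import numpy as np

    def dp_svm_sgd(X, y, epochs=10, lr=0.1, lam=0.01,
                   clip=1.0, noise_scale=1.0, seed=0):
        """Differentially private subgradient descent for a linear SVM.

        Minimizes the L2-regularized hinge loss. Each per-example
        subgradient is clipped to norm `clip` and perturbed with
        Gaussian noise before the update (Gaussian-mechanism style;
        the formal (epsilon, delta) accounting is omitted here).
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(epochs):
            for i in rng.permutation(n):
                margin = y[i] * X[i].dot(w)
                # Subgradient of lam/2 * ||w||^2 + max(0, 1 - y * x.w).
                g = lam * w - (y[i] * X[i] if margin < 1 else 0.0)
                # Clip to bound the per-example sensitivity.
                g = g / max(1.0, np.linalg.norm(g) / clip)
                # Add calibrated Gaussian noise for privacy.
                g = g + rng.normal(0.0, noise_scale * clip, size=d)
                w -= lr * g
        return w

On features X (one row per example) and labels y in {-1, +1}, dp_svm_sgd(X, y) returns a private weight vector; a larger noise_scale strengthens privacy at the cost of accuracy.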