Realizing Next-Generation Data Centers via Software-Defined “Hardware” Infrastructures and Resource Disaggregation: Exploiting your cache
Abstract: The cloud is evolving to meet the additional demands introduced by new technological advancements and the broad movement toward digitalization. Next-generation Data Centers (DCs) and clouds are expected, and need, to become cheaper, more efficient, and capable of offering more predictable services. In line with this, this thesis examines the concept of Software-Defined “Hardware” Infrastructure (SDHI), based on hardware resource disaggregation, as one possible way of realizing next-generation DCs.

The thesis starts with an overview of the functional architecture of a cloud based on SDHI. It then discusses a series of use cases and deployment scenarios enabled by SDHI and explores the role of each functional block of SDHI’s architecture, i.e., cloud infrastructure, cloud platforms, cloud execution environments, and applications. The thesis proposes a framework to evaluate the impact of SDHI on the techno-economic efficiency of DCs, focusing explicitly on application profiling, hardware dimensioning, and Total Cost of Ownership (TCO). It shows that combining resource disaggregation with software-defined capabilities makes DCs less expensive and easier to expand; hence, they can rapidly follow the expected exponential growth in demand. Additionally, the thesis elaborates on the technologies underlying SDHI, its challenges, and its potential future directions.

Achieving and maintaining a high level of memory performance is crucial for realizing SDHI and disaggregated DCs. To this end, a memory management and Input/Output (I/O) data management scheme suitable for SDHI is proposed and its advantages are demonstrated.
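To make the TCO framing concrete: a TCO comparison of this kind combines upfront capital expenditure with operating cost accumulated over the service lifetime. The sketch below is not the thesis’s model; it is a minimal illustration with hypothetical figures (the function name and all numbers are assumptions, not results from the dissertation):

```python
def tco(capex, annual_opex, years):
    """Total Cost of Ownership: upfront capital expenditure (CAPEX)
    plus annual operating expenditure (OPEX) over the lifetime."""
    return capex + annual_opex * years

# Hypothetical figures, for illustration only: disaggregation allows
# components (e.g., memory, NICs) to be scaled and replaced
# individually, which a model like this would capture as lower CAPEX.
traditional_dc   = tco(capex=1_000_000, annual_opex=150_000, years=5)
disaggregated_dc = tco(capex=800_000,   annual_opex=140_000, years=5)
print(traditional_dc, disaggregated_dc)
```

A real evaluation, as the abstract notes, would additionally feed application profiles and hardware dimensioning into both cost terms.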
This work focuses on the management of the Last Level Cache (LLC) in currently available Intel processors, takes advantage of the LLC’s Non-Uniform Cache Architecture (NUCA), and investigates how better utilization of the LLC can provide higher performance, more predictable response times, and improved isolation between threads. Additionally, the thesis scrutinizes the impact of cache management, specifically Direct Cache Access (DCA), on the performance of I/O-intensive applications. The results of an empirical study show that the proposed memory management scheme enables system designers and developers to optimize systems for I/O-intensive applications, and they highlight some potential changes expected for I/O management in future DC systems.
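As one illustration of the kind of LLC management available on recent Intel processors (not the specific scheme proposed in the thesis), Intel’s Cache Allocation Technology (CAT), exposed on Linux through the resctrl filesystem, lets an administrator partition LLC ways between groups of threads to improve isolation. A minimal sketch, assuming an 11-way LLC (the way count is processor-specific, and the helper names are hypothetical):

```python
def ways_mask(n_ways, total_ways=11):
    """Bitmask selecting the n_ways lowest LLC ways.
    Intel CAT requires the mask to be a contiguous run of set bits,
    which (1 << n) - 1 always produces."""
    assert 0 < n_ways <= total_ways
    return (1 << n_ways) - 1

def cat_schemata(cache_id, mask):
    """Format a resctrl 'schemata' line restricting a resource group
    to the LLC ways in `mask` on the given cache (socket) id."""
    return f"L3:{cache_id}={mask:x}"

# Dedicate 4 of 11 ways to a latency-sensitive group on socket 0:
line = cat_schemata(0, ways_mask(4))
print(line)
# On a CAT-capable system this line would be applied (as root) with:
#   mkdir /sys/fs/resctrl/latency_group
#   echo "L3:0=f" > /sys/fs/resctrl/latency_group/schemata
#   echo <pid>   > /sys/fs/resctrl/latency_group/tasks
```

Because the ways assigned to the group are reserved for its threads, a cache-hungry neighbor can no longer evict their working set, which is the isolation effect the abstract refers to.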
The full dissertation is available for download in PDF format.