Efficient Processing and Storage for Massive MIMO Digital Baseband

Abstract: Driven by the increasing demands on data rate from applications, wireless communication standards have for decades evolved at a pace of roughly one generation per ten years. Following this trend, the ambitious plan to replace the current cellular mobile network standard (4G) with the next-generation standard (5G) is going through the standardization phase and is approaching actual deployment. Promised benefits of 5G include higher data rates, lower latency, higher reliability, and more connected users. One of the candidate technologies for enabling these benefits is massive Multiple-Input Multiple-Output (MIMO). In massive MIMO systems, base stations are equipped with a large number of antennas (hundreds or even more) and serve multiple users simultaneously in the same time and frequency resource. This provides higher spectral efficiency and transmit energy efficiency thanks to the large spatial-multiplexing and antenna-array gains. However, massive MIMO requires simultaneous processing of the signals from all antenna chains and real-time computations on large matrices. Compared to today's small-scale MIMO, the corresponding computational complexity can be orders of magnitude higher, which inevitably leads to higher area cost and processing energy consumption. Besides the inherently higher computational complexity, massive MIMO also introduces new design challenges for the data storage system. For example, the number of elements in the Channel State Information (CSI) matrix can grow by hundreds of times. Moreover, to support high-throughput matrix operations, the gap between computational capacity and memory bandwidth must be bridged. Sophisticated baseband processing algorithms demand complicated data access modes and thus call for smart data storage solutions.

This thesis focuses on two important topics in digital baseband processing: energy-efficient computing and the organization of large matrices. System-algorithm-circuit co-optimization is explored to meet the real-time computational requirements. For the first topic, the concept of adaptive energy-quality scalable circuits is studied to trade Quality of Service (QoS) against energy consumption. At the circuit level, a multiplier supporting three wordlengths is designed to provide run-time adjustment of the processing precision. At the system and algorithm level, the concept of algorithm switching is investigated: a resource-scheduling scheme that switches between accurate and approximate algorithms is developed to exploit the dynamics of the wireless channel. As shown in a case study, applying this method to a QR-decomposition processor saves 58% of the energy. For the second topic, data organization, the concept of parallel memories is applied to provide low-latency, high-bandwidth, and highly flexible data access for massive MIMO baseband processing. On top of this, on-chip channel data compression methods are proposed that exploit the inherent sparsity of the massive MIMO channel. As a case study, the presented algorithms save about 75% of the storage requirement for a 128-antenna system with less than 0.8 dB loss in performance. Based on the channel compression concept and the various access patterns supplied by parallel memories, a heterogeneous memory system is designed and implemented (at layout level) in ST 28 nm Fully Depleted Silicon-On-Insulator (FD-SOI) technology. The area cost is 0.47 mm², which is 58% smaller than a memory system with the same capacity but without compression. Together, energy-efficient computing and the organization of large matrices provide a promising methodology for the actual deployment of massive MIMO baseband processors.
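To make the precision-scaling idea concrete, the following Python sketch illustrates how truncating fixed-point operands to a selectable wordlength trades accuracy against arithmetic effort. The 16/12/8-bit modes and the truncation scheme are illustrative assumptions, not the multiplier design presented in the thesis.

# Illustrative sketch of run-time precision scaling: fixed-point operands are
# truncated to a selectable wordlength before multiplication. The 16/12/8-bit
# modes are assumptions for demonstration, not the thesis multiplier design.

def truncate(x: int, wordlength: int, full_width: int = 16) -> int:
    """Keep only the 'wordlength' most significant bits of a signed
    fixed-point value stored in 'full_width' bits."""
    drop = full_width - wordlength
    return (x >> drop) << drop  # clear the discarded least significant bits

def scalable_multiply(a: int, b: int, mode: str) -> int:
    """Multiply two 16-bit fixed-point operands at a selectable precision.
    In hardware, the lower-precision modes would let the multiplier gate off
    unused partial-product rows and thereby save energy."""
    wordlengths = {"high": 16, "medium": 12, "low": 8}  # assumed modes
    wl = wordlengths[mode]
    return truncate(a, wl) * truncate(b, wl)

if __name__ == "__main__":
    a, b = 12345, -6789
    exact = a * b
    for mode in ("high", "medium", "low"):
        approx = scalable_multiply(a, b, mode)
        rel_err = abs(approx - exact) / abs(exact)
        print(f"{mode:>6}: product = {approx:>12}, relative error = {rel_err:.2%}")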
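The algorithm-switching idea can likewise be sketched as a simple scheduler that picks an accurate or an approximate processing path depending on how much the channel has changed. The variation metric, the 0.05 threshold, and the QR-reuse approximation below are assumptions chosen for illustration, not the resource-scheduling scheme evaluated in the thesis.

import numpy as np

# Illustrative scheduler that selects an accurate or an approximate processing
# path based on how much the channel has changed since the previous CSI estimate.
# The variation metric, the threshold, and the QR-reuse approximation are
# placeholders, not the resource-scheduling scheme developed in the thesis.

def channel_variation(h_prev: np.ndarray, h_curr: np.ndarray) -> float:
    """Normalized difference between two consecutive CSI snapshots."""
    return np.linalg.norm(h_curr - h_prev) / np.linalg.norm(h_prev)

def accurate_qr(h: np.ndarray):
    """Full QR decomposition (stand-in for the accurate, high-energy path)."""
    return np.linalg.qr(h)

def approximate_qr(h: np.ndarray, q_prev: np.ndarray, r_prev: np.ndarray):
    """Reuse the previous decomposition while the channel is quasi-static
    (stand-in for the approximate, low-energy path)."""
    return q_prev, r_prev

def schedule(h_prev, h_curr, q_prev, r_prev, threshold: float = 0.05):
    """Run the accurate algorithm only when the channel has moved enough."""
    if channel_variation(h_prev, h_curr) > threshold:
        return accurate_qr(h_curr)
    return approximate_qr(h_curr, q_prev, r_prev)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    shape = (128, 8)  # 128 base-station antennas, 8 single-antenna users
    h_prev = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    q_prev, r_prev = accurate_qr(h_prev)
    # Quasi-static channel: small perturbation, so the approximate path is taken.
    h_curr = h_prev + 0.01 * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))
    q, r = schedule(h_prev, h_curr, q_prev, r_prev)
    print("previous QR reused:", q is q_prev)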
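Finally, the reported storage savings rest on sparsity in the massive MIMO channel. The sketch below shows one common way such sparsity can be exploited: each CSI vector is transformed to the angular (DFT) domain and only the strongest coefficients are stored. The synthetic few-path channel, the 25% keep ratio, and all function names are illustrative assumptions rather than the compression algorithms proposed in the thesis.

import numpy as np

# Sketch of on-chip CSI compression that exploits angular-domain sparsity:
# each per-antenna channel vector is transformed with a DFT and only the
# strongest coefficients are stored. The synthetic few-path channel, the 25%
# keep ratio, and all names are illustrative assumptions.

M = 128            # number of base-station antennas
KEEP = M // 4      # keep 25% of the coefficients -> roughly 75% storage saving

def compress(h: np.ndarray, keep: int = KEEP):
    """Return indices and values of the 'keep' largest angular-domain coefficients."""
    h_ang = np.fft.fft(h) / np.sqrt(len(h))   # unitary DFT to the angular (beam) domain
    idx = np.argsort(np.abs(h_ang))[-keep:]   # strongest beams
    return idx, h_ang[idx]

def decompress(idx: np.ndarray, vals: np.ndarray, m: int = M) -> np.ndarray:
    """Rebuild an approximate CSI vector from the stored coefficients."""
    h_ang = np.zeros(m, dtype=complex)
    h_ang[idx] = vals
    return np.fft.ifft(h_ang) * np.sqrt(m)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic channel with a few dominant propagation paths (hence sparse in angle).
    n = np.arange(M)
    angles = rng.uniform(0, np.pi, size=3)
    gains = (rng.standard_normal(3) + 1j * rng.standard_normal(3)) / np.sqrt(2)
    h = sum(g * np.exp(1j * np.pi * n * np.cos(a)) for g, a in zip(gains, angles))
    idx, vals = compress(h)
    h_hat = decompress(idx, vals)
    nmse = np.linalg.norm(h - h_hat) ** 2 / np.linalg.norm(h) ** 2
    print(f"stored {len(vals)}/{M} coefficients, reconstruction NMSE = {10*np.log10(nmse):.1f} dB")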
