Event-Driven Architectures for Heterogeneous Neuromorphic Computing Systems
Abstract: Mixed-signal neuromorphic processors have brain-like organization and device physics optimized for emulation of spiking neural networks (SNNs), and offer an energy-efficient alternative for implementing artificial intelligence in applications where deep learning based on conventional digital computing is infeasible or unsustainable. However, efficient use of such hardware requires appropriate configuration of its inhomogeneous, analog neurosynaptic circuits, with methods for sparse, spike-timing-based information encoding and processing. Furthermore, as neuromorphic processors are event-driven and asynchronous, with massively parallel dynamic processing and colocated memory, they differ fundamentally from conventional von Neumann computers. There is therefore a need to investigate programming approaches and learning mechanisms for efficient use of neuromorphic processors, as well as the abstractions required for large-scale integration of such devices into the present computational infrastructure of distributed digital systems.
In this thesis, a disynaptic excitatory–inhibitory (E–I) element for resource-efficient generation of synaptic delay dynamics for spike-timing-based computation in neuromorphic hardware is proposed. Chip-in-the-loop experiments with a DYNAP-SE neuromorphic processor and SNN simulations are presented, demonstrating how such E–I elements leverage hardware inhomogeneity for representational variance and feature tuning in time-dependent pattern recognition. Using the E–I elements, spatiotemporal receptive fields with up to five dimensions per hardware neuron were characterized, for instance in a modified Spatiotemporal Correlator (STC) network and in an insect-inspired SNN. The energy dissipation of the proposed E–I element is one order of magnitude lower per lateral connection (0.65 vs. 9.6 nJ per spike) than that of the original delay-based hardware implementation of the STC.
Thus, it is shown how the analog synaptic circuits could be used for efficient implementation of STC network layers, in a way that enables digital synapse-address reprogramming as an observable and reproducible mechanism for feature tuning in SNN layers. This approach may serve as a complement to more accurate but resource-intensive delay-based SNNs, as it offers a digital network-state representation and adaptation concept that can fully benefit from the inhomogeneous neurosynaptic dynamics in the inference stage. Furthermore, a microservice-based conceptual framework for neuromorphic systems integration is proposed. The framework consists of a neuromorphic-system proxy that provides virtualization and communication capabilities required in distributed settings, combined with a declarative programming approach offering engineering-process abstraction. By combining several well-established concepts from different domains of computer science, this work addresses the gap between the state of the art in digitization and neuromorphic computing software development.
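The core idea of the disynaptic E–I element, as summarized above, is that pairing a slow excitatory synapse with a faster inhibitory one yields a net postsynaptic response whose peak is delayed relative to the presynaptic spike, emulating a synaptic delay without a dedicated delay circuit. The following sketch illustrates this principle with generic alpha-function synaptic kernels in plain NumPy; the time constants, weights, and kernel shape are illustrative assumptions, not parameters of the DYNAP-SE circuits or of the thesis experiments.

```python
import numpy as np

def alpha_kernel(t, tau):
    """Alpha-function synaptic kernel, zero for t < 0, unit peak at t = tau."""
    return np.where(t >= 0, (t / tau) * np.exp(1 - t / tau), 0.0)

dt = 1e-4                      # 0.1 ms time step
t = np.arange(0.0, 0.1, dt)    # 100 ms simulation window
spike_time = 0.01              # presynaptic spike at 10 ms

# Disynaptic E-I pair: slow excitation, faster and weaker inhibition.
# These values are illustrative assumptions for the sketch.
w_exc, tau_exc = 1.0, 0.015    # excitatory weight and time constant
w_inh, tau_inh = 0.8, 0.005    # inhibitory weight and time constant

i_exc = w_exc * alpha_kernel(t - spike_time, tau_exc)
i_inh = -w_inh * alpha_kernel(t - spike_time, tau_inh)
i_net = i_exc + i_inh

# The fast inhibition cancels the onset of the slow excitation, so the
# net response peaks later than the purely excitatory response would:
# an effective delay generated by two synapses instead of a delay line.
peak_delay_exc = t[np.argmax(i_exc)] - spike_time
peak_delay_net = t[np.argmax(i_net)] - spike_time
print(f"Excitatory-only peak: {peak_delay_exc * 1e3:.1f} ms after spike")
print(f"E-I net peak:         {peak_delay_net * 1e3:.1f} ms after spike")
```

Varying the relative weights and time constants of the two synapses shifts the position and width of the net response window, which is one way to picture how hardware inhomogeneity across such element pairs could provide the representational variance for feature tuning described above.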