A Mixed-Signal Neuromorphic Accelerator for Energy-Efficient Inference in Event-Based Neural Network Models

Armin Abdollahi, Mehdi Kamal, Massoud Pedram
University of Southern California


Abstract

This paper introduces a mixed-signal neuromorphic accelerator architecture for energy-efficient inference in event-based neural network models. The accelerator is fully compatible with CMOS technology and employs analog computing techniques to replicate synaptic and neuronal operations. Synapses are implemented with a C-2C ladder structure, while operational amplifiers perform the neuronal functions. To improve hardware resource utilization and power efficiency, we propose the concept of a virtual neuron, which allows a single physical neuron engine to emulate a set of model neurons by exploiting the sparsity inherent in event-based neuromorphic systems. We also present a memory-based control technique that manages events within each network layer, improving performance while offering the flexibility to accommodate various layer types. Furthermore, we introduce a novel integer linear programming (ILP)-based mapping approach for efficiently allocating the model onto the proposed accelerator. The architecture is evaluated on two event-based datasets, including the large and complex CIFAR10-DVS, demonstrating its scalability. Despite the complexity of this dataset, the architecture achieves an energy efficiency of 12.1 TOPS/W. Our results show that the design maintains high accuracy with a markedly smaller number of physical neurons, while the chip area, even when implemented in a 90-nm process, is smaller than that of previous implementations. Moreover, the achievable TOPS/W improves as the dataset size increases, underscoring the scalability of our approach.