Unleashing Energy-Efficiency: Neural Architecture Search without Training for Spiking Neural Networks on Loihi Chip

Shiya Liu and Yang Yi
Virginia Tech


Abstract

Spiking neural networks (SNNs) offer energy-efficient computation due to their highly sparse activations and event-driven nature. However, existing SNN designs often adopt suboptimal architectures borrowed from artificial neural networks (ANNs) for processing binary spike sequences. Moreover, improving accuracy often comes at the cost of higher computational complexity, making it difficult to deploy SNNs on resource-constrained devices. Furthermore, SNN architectures tailored for GPUs may not fully exploit the energy-efficiency advantages of SNN models. To address these limitations, we present a novel neural architecture search (NAS) algorithm that merges recent advances in ANNs and enhances SNN architectures specifically for the Loihi chip. The Loihi chip is a neuromorphic computing chip designed to emulate the brain's neural networks; it is particularly well suited to event-driven SNNs, making it an energy-efficient alternative to GPUs. Our algorithm efficiently selects an optimal architecture by leveraging gradients induced at initialization across diverse data samples, eliminating the need for training during the search. We design a search space that aligns with the chip's capabilities, accounting for its support for integer-only inference and its lack of advanced operators such as backward and shortcut connections. Experimental results on two image classification benchmarks demonstrate the superiority of our SNN models, which achieve accuracy comparable to state-of-the-art architectures while significantly reducing both energy consumption per image and model size. Our approach paves the way for energy-efficient SNN designs on the Loihi chip, unlocking the full potential of SNNs for real-world applications.
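To make the training-free search concrete: the abstract does not specify the exact scoring rule, but proxies of this family score an untrained network from the gradients a mini-batch induces at initialization. Below is a minimal sketch, assuming a SNIP-style saliency score (sum of |weight x gradient|); the function names (`gradient_score`, `search`) are hypothetical and stand in for whatever criterion the paper actually uses.

```python
import torch
import torch.nn as nn

def gradient_score(model: nn.Module, inputs: torch.Tensor,
                   targets: torch.Tensor) -> float:
    """Score an untrained architecture from gradients at initialization.

    Illustrative SNIP-style proxy: one forward/backward pass on a single
    mini-batch, then sum |weight * grad| over all parameters. No weight
    update is performed, so no training is required.
    """
    model.zero_grad()
    loss = nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    score = 0.0
    for p in model.parameters():
        if p.grad is not None:
            score += (p * p.grad).abs().sum().item()
    return score

def search(candidates: list[nn.Module], inputs: torch.Tensor,
           targets: torch.Tensor) -> nn.Module:
    """Rank candidate architectures by their initialization-time score
    and return the highest-scoring one."""
    return max(candidates, key=lambda m: gradient_score(m, inputs, targets))
```

Because each candidate is evaluated with a single forward/backward pass rather than a full training run, the search cost scales with the number of candidates, not with training epochs; the candidates themselves would be drawn from a hardware-aware space (feed-forward, integer-friendly operators without shortcut connections) matching the Loihi constraints described above.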