Enabling Lightweight and Efficient Edge Inference for Identifying Radio Modulation

Kang Jun Bai1 and Michelle Jiang2
1Air Force Research Laboratory, 2Carnegie Mellon University


Abstract

Edge inference, particularly for AI-driven mobile applications, faces significant hurdles due to the resource constraints of internet-of-things (IoT) devices. Deploying large, complex deep neural networks (DNNs) on such systems incurs high power consumption, increased latency, and memory pressure, hindering real-time performance and scalability. To address these challenges, this paper presents two approaches for optimizing DNNs for efficient edge computing in IoT, focusing on radio modulation identification. First, a lightweight convolutional neural network (CNN) is tailored to simplify the inference model and reduce the resource demands placed on edge-enabled devices. Specifically, by linearizing the pooling layers and fusing the convolutional and pooling operations into a single matrix suitable for hardware implementation, our lightweight CNN achieves a significant parameter reduction and faster inference. Second, an echo state network (ESN), a form of reservoir computing, is introduced to address the limitations of online learning on dynamic data streams. By using an untrained reservoir layer for high-dimensional input encoding, the ESN reduces computational complexity and energy use, since only its readout layer is trained. The advantages of the ESN over alternative methods are highlighted, paving the way for efficient radio modulation identification at the edge by mitigating data scarcity and avoiding the vanishing-gradient problem.
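The first idea above rests on a simple linear-algebra fact: a convolution can be written as a (Toeplitz) matrix, and a linear pooling operation such as average pooling is also a matrix, so the two can be fused offline into one operator. The following is a minimal 1-D NumPy sketch of that fusion; the signal length, kernel, and pooling window are illustrative choices, not values from the paper.

```python
import numpy as np

n = 8                              # input length (illustrative)
kernel = np.array([1.0, 2.0, 1.0])
k = len(kernel)
m = n - k + 1                      # valid-correlation output length

# Toeplitz matrix implementing the sliding-window correlation
C = np.zeros((m, n))
for i in range(m):
    C[i, i:i + k] = kernel

# Average pooling (window 2, stride 2) is linear, hence also a matrix
p = m // 2
P = np.zeros((p, m))
for i in range(p):
    P[i, 2 * i:2 * i + 2] = 0.5

# Fused operator: one matrix replaces the conv layer + pool layer,
# so inference needs a single matrix-vector product
M = P @ C

x = np.arange(n, dtype=float)
sequential = P @ (C @ x)           # conv then pool, two steps
fused = M @ x                      # one step
assert np.allclose(sequential, fused)
```

The fused matrix `M` has shape `p × n`, smaller than storing `C` and `P` separately, which is the source of the parameter reduction; the same construction extends to 2-D convolutions via im2col-style flattening.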
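The second idea, training only the ESN's readout, can be sketched as follows: a fixed random reservoir encodes the input stream into a high-dimensional state, and a linear readout is fit in closed form by ridge regression. All dimensions, the spectral-radius scaling of 0.9, and the delayed-copy toy task are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out, T = 2, 100, 2, 500   # illustrative sizes

# Untrained reservoir: random input and recurrent weights; the
# recurrent matrix is rescaled to spectral radius < 1 so the
# reservoir satisfies the echo-state property
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(U):
    """Drive the reservoir with an input sequence U of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = np.empty((len(U), n_res))
    for t, u in enumerate(U):
        x = np.tanh(W_in @ u + W @ x)    # high-dimensional encoding
        states[t] = x
    return states

# Toy task: reproduce the input stream delayed by one step
U = rng.standard_normal((T, n_in))
Y = np.roll(U, 1, axis=0)

X = run_reservoir(U)

# Only the readout is trained, via ridge regression in closed form;
# no backpropagation through time, hence no vanishing gradients
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y).T
pred = X @ W_out.T
```

Because the reservoir weights stay fixed, the only learned parameters are the `n_out × n_res` entries of `W_out`, which is what keeps ESN training cheap enough for online adaptation at the edge.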