Model Extraction Attack against On-device Deep Learning with Power Side Channel

Jialin Liu and Han Wang
Temple University


Abstract

The proliferation of on-device deep learning models in resource-constrained environments has enabled significant advances in privacy-preserving machine learning. However, deploying these models also introduces new security challenges, one of which is vulnerability to model extraction attacks. In this paper, we investigate a novel attack that leverages the power side channel to extract deployed on-device deep learning models, posing a substantial threat to on-device deep learning systems. By carefully monitoring power consumption during inference, an adversary can gain insights into the model's internal behavior, potentially compromising the model's intellectual property and sensitive data. Through experiments on a real-world embedded device (Jetson Nano) and various types of deep learning models, we demonstrate that the proposed attack can extract models with high fidelity: the power side channel-assisted model extraction attack achieves attack success rates of up to 96.7% and 87.5% under closed-world and open-world settings, respectively. This research sheds light on the evolving landscape of security threats in the context of on-device deep learning and provides valuable insights into safeguarding these models from potential adversaries.
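For intuition, the following is a minimal sketch of how a power trace might be captured on a Jetson Nano during a single inference; the sysfs path of the on-board INA3221 power monitor and the helper name `capture_power_trace` are illustrative assumptions, not necessarily the exact instrumentation used in the experiments reported below.

```python
import threading
import time

# Assumed sysfs node exposing the Jetson Nano's INA3221 power monitor (mW);
# the exact path varies across L4T releases and is an assumption here.
SENSOR_PATH = "/sys/bus/i2c/drivers/ina3221x/6-0040/iio:device0/in_power0_input"

def capture_power_trace(run_inference, sensor_path=SENSOR_PATH, interval_s=0.001):
    """Sample instantaneous power while one inference runs.

    `run_inference` is any zero-argument callable that performs a single
    forward pass on the victim model; the return value is a list of
    (timestamp, power_mW) pairs covering that pass.
    """
    trace = []
    done = threading.Event()

    def sampler():
        with open(sensor_path) as f:
            while not done.is_set():
                f.seek(0)
                trace.append((time.perf_counter(), int(f.read().strip())))
                time.sleep(interval_s)

    t = threading.Thread(target=sampler)
    t.start()
    run_inference()          # victim forward pass under observation
    done.set()
    t.join()
    return trace

# Example usage (model and sample_input are placeholders):
# trace = capture_power_trace(lambda: model(sample_input))
```

Such traces can then be aligned with inference events and fed to a classifier or matching procedure that infers model characteristics, which is the general pipeline the attack described in this paper builds on.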