GBFA: Gradual Bit-Flip Fault Attack on Graph Neural Network Accelerators

Sanaz Kazemi and Sai Manoj Pudukotai Dinakarrao
George Mason University


Abstract

Graph neural networks (GNNs) offer a plethora of benefits across domains and are increasingly deployed in critical real-time applications. Due to the computations involved, GNNs are often considered resource-hungry and computationally intensive. To meet performance and latency demands, GNN hardware accelerators have been developed recently; however, these accelerators are susceptible to bit-flip fault attacks. In this paper, we investigate the vulnerability of GNN accelerators to fault-injection attacks, in which adversaries attempt to misclassify outputs by modifying trained weights via fault injection in the memory unit of the accelerator. Existing attacks, however, are naïve, require a significant number of bit flips, and are pseudo-random, making them less effective and tedious. This work introduces the Gradual Bit-Flip Fault Attack (GBFA), a novel, network-behavior-aware and layer-aware bit-flip fault injection attack designed to compromise GNN performance by flipping a minimal number of bits in the stored weights of message-passing layers. Our results demonstrate that GBFA can undermine GNN performance, reducing accuracy by up to 65% across datasets, architectures, and targeted layers.