Training Once, Deploy Anywhere: Defect-Robust Memristor Accelerators without Re-Training

Rejina Maharjan and Chao Lu
Southern Illinois University Carbondale


Abstract

Memristor accelerators offer an energy-efficient platform for neuromorphic computation, yet their reliability is compromised by device-level defects such as stuck memristor cells. These defects severely degrade inference accuracy when pre-trained models are mapped onto memristor crossbar arrays. Existing hardware-aware or defect-map-based training approaches often assume known defect locations or require on-chip re-training, limiting their scalability and practicality. This paper proposes a Defect-Robust Training (DRT) framework that enables train-once, deploy-anywhere operation of memristor accelerators without post-fabrication calibration. DRT employs stochastic defect masks that randomly freeze subsets of weights during training epochs, encouraging neural networks to learn stuck-tolerant representations. Evaluations on LeNet-5 (MNIST) and VGG-16 (CIFAR-10) demonstrate that DRT improves defect tolerance and achieves substantially higher accuracy than existing approaches. By unifying defect modeling with hardware constraints, the proposed framework enables scalable deployment of reliable memristor neuromorphic accelerators without knowledge of post-fabrication defects: train once, deploy anywhere.
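
As a rough illustration of the stochastic defect-masking idea sketched in the abstract, the snippet below shows one way random stuck-cell masks could be injected during training. This is a minimal PyTorch-style sketch under assumed settings: the defect_rate, the stuck-low/stuck-high values, and the helper names (apply_stochastic_defect_mask, zero_stuck_gradients) are illustrative choices, not the paper's exact procedure.

import torch
import torch.nn as nn

def apply_stochastic_defect_mask(model: nn.Module, defect_rate: float = 0.05,
                                 stuck_low: float = 0.0, stuck_high: float = 1.0):
    """Randomly freeze a subset of weights to simulated stuck values.

    Returns the per-parameter boolean masks so the corresponding gradients
    can be zeroed, keeping the simulated stuck cells fixed for this epoch.
    (Defect rate and stuck values are illustrative assumptions.)
    """
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.dim() < 2:  # skip biases and normalization parameters
                continue
            mask = torch.rand_like(p) < defect_rate  # sampled defect locations
            stuck_vals = torch.where(torch.rand_like(p) < 0.5,
                                     torch.full_like(p, stuck_low),
                                     torch.full_like(p, stuck_high))
            p.copy_(torch.where(mask, stuck_vals, p))  # overwrite with stuck value
            masks[name] = mask
    return masks

def zero_stuck_gradients(model: nn.Module, masks: dict):
    """Block updates to the weights frozen as stuck cells this epoch."""
    for name, p in model.named_parameters():
        if name in masks and p.grad is not None:
            p.grad[masks[name]] = 0.0

# Illustrative per-epoch usage (resampling the mask each epoch):
#   masks = apply_stochastic_defect_mask(model, defect_rate=0.05)
#   for x, y in loader:
#       optimizer.zero_grad()
#       loss = criterion(model(x), y)
#       loss.backward()
#       zero_stuck_gradients(model, masks)   # keep stuck weights frozen
#       optimizer.step()

Whether the frozen weights are restored to their pre-mask values at the end of each epoch, and which layers are subject to masking, are design choices not specified by the abstract; the sketch leaves the stuck values in place and simply resamples a new mask on the next epoch.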