Hardware Support for Trustworthy Machine Learning: A Survey

Md Shohidul Islam1, Ihsen Alouani2, Khaled N. Khasawneh1
1George Mason University, 2CSIT, Queen's University Belfast, UK


Abstract

Machine Learning (ML) models are used in an increasing number of applications as they continue to deliver state-of-the-art performance across many areas, including computer vision, natural language processing (NLP), robotics, autonomous driving, and healthcare. While rapid progress is occurring in all aspects of ML development and deployment, there is rising concern about the trustworthiness of these models, especially from the security and privacy perspectives. Several attacks that jeopardize ML models' integrity (e.g., adversarial attacks) and confidentiality (e.g., membership inference attacks) have been investigated in the literature. This, in turn, has triggered substantial work to protect ML models and advance their trustworthiness. Defenses generally act on the input data, the objective function, or the network structure to mitigate adversarial effects. However, these proposed defenses require substantial changes to the architecture or the retraining procedure, or incorporate additional input data processing overheads. In addition, these defenses often impose high power and computational demands, which makes them challenging to deploy on embedded systems and edge devices. Towards addressing the need for robust ML at acceptable overheads, recent works have investigated hardware-emanated solutions to enhance ML security and privacy. In this paper, we summarize recent works in the area of hardware support for trustworthy ML. In addition, we provide guidelines for future research in the area by identifying open problems that need to be addressed.