TY - GEN
T1 - Mind the Scaling Factors: Resilience Analysis of Quantized Adversarially Robust CNNs
T2 - 2022 Design, Automation and Test in Europe Conference and Exhibition, DATE 2022
AU - Fasfous, Nael
AU - Frickenstein, Lukas
AU - Neumeier, Michael
AU - Vemparala, Manoj Rohit
AU - Frickenstein, Alexander
AU - Valpreda, Emanuele
AU - Martina, Maurizio
AU - Stechele, Walter
N1 - Publisher Copyright:
© 2022 EDAA.
PY - 2022
Y1 - 2022
AB - As more deep learning algorithms enter safety-critical application domains, the importance of analyzing their resilience against hardware faults cannot be overstated. Most existing works focus on bit-flips in memory, fewer focus on compute errors, and almost none study the effect of hardware faults on adversarially trained convolutional neural networks (CNNs). In this work, we show that adversarially trained CNNs are more susceptible to failure due to hardware errors than vanilla-trained models. We identify large differences in the quantization scaling factors of CNNs that are resilient to hardware faults and those that are not. As adversarially trained CNNs learn robustness against input attack perturbations, their internal weight and activation distributions open a backdoor for injecting large-magnitude hardware faults. We propose a simple weight decay remedy for adversarially trained models to maintain adversarial robustness and hardware resilience in the same CNN. We improve the fault resilience of an adversarially trained ResNet56 by 25% for large-scale bit-flip benchmarks on activation data while slightly improving accuracy and adversarial robustness.
UR - http://www.scopus.com/inward/record.url?scp=85130809341&partnerID=8YFLogxK
DO - 10.23919/DATE54114.2022.9774686
M3 - Conference contribution
AN - SCOPUS:85130809341
T3 - Proceedings of the 2022 Design, Automation and Test in Europe Conference and Exhibition, DATE 2022
SP - 706
EP - 711
BT - Proceedings of the 2022 Design, Automation and Test in Europe Conference and Exhibition, DATE 2022
A2 - Bolchini, Cristiana
A2 - Verbauwhede, Ingrid
A2 - Vatajelu, Ioana
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 14 March 2022 through 23 March 2022
ER -