TY - GEN
T1 - BreakingBED: Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks
T2 - Intelligent Systems Conference, IntelliSys 2021
AU - Vemparala, Manoj Rohit
AU - Frickenstein, Alexander
AU - Fasfous, Nael
AU - Frickenstein, Lukas
AU - Zhao, Qi
AU - Kuhn, Sabine
AU - Ehrhardt, Daniel
AU - Wu, Yuankai
AU - Unger, Christian
AU - Nagaraja, Naveen Shankar
AU - Stechele, Walter
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Deploying convolutional neural networks (CNNs) for embedded applications presents many challenges in balancing resource-efficiency and task-related accuracy. These two aspects have been well-researched in the field of CNN compression. In real-world applications, a third important aspect comes into play, namely the robustness of the CNN. In this paper, we thoroughly study the robustness of uncompressed, distilled, pruned, and binarized neural networks against white-box and black-box adversarial attacks (FGSM, PGD, C&W, DeepFool, LocalSearch, and GenAttack). These new insights facilitate defensive training schemes or reactive filtering methods, where the attack is detected and the input is discarded and/or cleaned. Experimental results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks (BNNs) such as XNOR-Net and ABC-Net, trained on the CIFAR-10 and ImageNet datasets. We present evaluation methods to simplify the comparison between CNNs under different attack schemes using loss/accuracy levels, stress-strain graphs, box-plots, and class activation mapping (CAM). Our analysis reveals susceptible behavior of uncompressed and pruned CNNs against all kinds of attacks. The distilled models exhibit robustness against all white-box attacks, with the exception of C&W. Furthermore, binarized neural networks exhibit resilient behavior compared to their baselines and other compressed variants.
KW - Adversarial attacks
KW - Convolutional neural networks
KW - Model compression
KW - Model robustness
UR - http://www.scopus.com/inward/record.url?scp=85113207727&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-82193-7_10
DO - 10.1007/978-3-030-82193-7_10
M3 - Conference contribution
AN - SCOPUS:85113207727
SN - 9783030821920
T3 - Lecture Notes in Networks and Systems
SP - 148
EP - 167
BT - Intelligent Systems and Applications - Proceedings of the 2021 Intelligent Systems Conference IntelliSys
A2 - Arai, Kohei
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 2 September 2021 through 3 September 2021
ER -