BreakingBED: Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks

Manoj Rohit Vemparala, Alexander Frickenstein, Nael Fasfous, Lukas Frickenstein, Qi Zhao, Sabine Kuhn, Daniel Ehrhardt, Yuankai Wu, Christian Unger, Naveen Shankar Nagaraja, Walter Stechele

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

Deploying convolutional neural networks (CNNs) for embedded applications presents many challenges in balancing resource efficiency and task-related accuracy. These two aspects have been well-researched in the field of CNN compression. In real-world applications, a third important aspect comes into play, namely the robustness of the CNN. In this paper, we thoroughly study the robustness of uncompressed, distilled, pruned and binarized neural networks against white-box and black-box adversarial attacks (FGSM, PGD, C&W, DeepFool, LocalSearch and GenAttack). These new insights facilitate defensive training schemes or reactive filtering methods, where the attack is detected and the input is discarded and/or cleaned. Experimental results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks (BNNs) such as XNOR-Net and ABC-Net, trained on the CIFAR-10 and ImageNet datasets. We present evaluation methods to simplify the comparison between CNNs under different attack schemes using loss/accuracy levels, stress-strain graphs, box-plots and class activation mapping (CAM). Our analysis reveals susceptible behavior of uncompressed and pruned CNNs against all kinds of attacks. The distilled models exhibit strength against all white-box attacks, with the exception of C&W. Furthermore, BNNs exhibit resilient behavior compared to their baselines and other compressed variants.
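For context on the attack families named in the abstract, the fast gradient sign method (FGSM) is the simplest white-box attack evaluated. The sketch below is a minimal PyTorch illustration under the assumption of a standard classifier with inputs normalized to [0, 1]; the model, inputs and epsilon are placeholders, not the paper's experimental setup.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon):
        """Single-step FGSM: perturb each input along the sign of its loss gradient."""
        model.eval()
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Step of size epsilon in the direction that increases the loss,
        # then clip back to the valid input range.
        adv = images + epsilon * images.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

Iterating this step with a small step size and projecting back into an epsilon-ball around the original input yields the PGD attack also listed above.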

Original language: English
Title of host publication: Intelligent Systems and Applications - Proceedings of the 2021 Intelligent Systems Conference IntelliSys
Editors: Kohei Arai
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 148-167
Number of pages: 20
ISBN (Print): 9783030821920
DOIs
State: Published - 2022
Event: Intelligent Systems Conference, IntelliSys 2021 - Virtual, Online
Duration: 2 Sep 2021 - 3 Sep 2021

Publication series

Name: Lecture Notes in Networks and Systems
Volume: 294
ISSN (Print): 2367-3370
ISSN (Electronic): 2367-3389

Conference

Conference: Intelligent Systems Conference, IntelliSys 2021
City: Virtual, Online
Period: 2/09/21 - 3/09/21

Keywords

  • Adversarial attacks
  • Convolutional neural networks
  • Model compression
  • Model robustness
