Robustness verification of ReLU networks via quadratic programming

Aleksei Kuvshinov, Stephan Günnemann

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Neural networks are known to be sensitive to adversarial perturbations. To investigate this undesired behavior, we consider the problem of computing the distance to the decision boundary (DtDB) from a given sample for a deep neural network classifier. In this work, we present a procedure in which we solve a convex quadratic programming (QP) problem to obtain a lower bound on the DtDB. This bound serves as a robustness certificate for the classifier around the given sample. We show that our approach yields results that are better than or competitive with a wide range of existing techniques.
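The abstract describes the method only at a high level. As a hedged illustration of the general idea, the Python sketch below (using numpy and cvxpy; the network weights, the sample, and the box radius are illustrative placeholders) lower-bounds the l2 DtDB of a toy one-hidden-layer ReLU classifier by minimizing the squared distance to the sample over the standard "triangle" convex relaxation of the ReLU constraints, which yields a convex QP. This is the generic relaxation from the verification literature, not the paper's own QP formulation.

```python
# A minimal sketch of the general idea (NOT the paper's exact formulation):
# lower-bound the l2 distance to the decision boundary (DtDB) of a toy
# one-hidden-layer ReLU classifier by minimizing ||x - x0||^2 subject to
# the standard "triangle" convex relaxation of the ReLU constraints and a
# misclassification constraint. The result is a convex QP whose optimal
# value's square root lower-bounds the true DtDB within the search box.
# Weights, the sample, and the box radius below are illustrative placeholders.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)  # hidden layer
W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)  # output logits
x0 = rng.standard_normal(4)                                   # input sample

c = int(np.argmax(W2 @ np.maximum(W1 @ x0 + b1, 0.0) + b2))   # predicted class

def dtdb_lower_bound(radius=5.0):
    # Interval bounds on the pre-activations z = W1 x + b1 over the box
    # ||x - x0||_inf <= radius (any sound pre-activation bounds would do).
    center = W1 @ x0 + b1
    slack = radius * np.abs(W1).sum(axis=1)
    lo = np.minimum(center - slack, 0.0)  # clamp so the triangle relaxation
    hi = np.maximum(center + slack, 0.0)  # stays feasible for stable neurons

    x = cp.Variable(4)
    y = cp.Variable(8)                    # relaxed post-activation values
    z = W1 @ x + b1
    slope = hi / np.maximum(hi - lo, 1e-9)
    cons = [
        cp.abs(x - x0) <= radius,         # stay inside the bounding box
        y >= 0, y >= z,                   # lower faces of the relaxation
        y <= cp.multiply(slope, z - lo),  # upper face (triangle relaxation)
    ]
    logits = W2 @ y + b2

    best = np.inf
    for k in range(3):                    # try every competing class
        if k == c:
            continue
        prob = cp.Problem(cp.Minimize(cp.sum_squares(x - x0)),
                          cons + [logits[k] >= logits[c]])
        prob.solve()
        if prob.value is not None:
            best = min(best, float(np.sqrt(max(prob.value, 0.0))))
    # The certificate is only sound up to the box radius used for the bounds;
    # any l2 perturbation of size <= radius also lies inside the l-inf box.
    return min(best, radius)

print("certified l2 radius:", dtdb_lower_bound())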

Original language: English
Pages (from-to): 2407-2433
Number of pages: 27
Journal: Machine Learning
Volume: 111
Issue number: 7
DOIs
State: Published - Jul 2022

Keywords

  • Machine learning
  • Minimal adversarial perturbation
  • Neural networks
  • Quadratic programming
  • Robustness verification
