Certifiable robustness and robust training for graph convolutional networks

Daniel Zügner, Stephan Günnemann

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

109 Scopus citations

Abstract

Recent works show that Graph Neural Networks (GNNs) are highly non-robust with respect to adversarial attacks on both the graph structure and the node attributes, making their outcomes unreliable. We propose the first method for certifiable (non-)robustness of graph convolutional networks with respect to perturbations of the node attributes. We consider the case of binary node attributes (e.g. bag-of-words) and perturbations that are L0-bounded. If a node has been certified with our method, it is guaranteed to be robust under any possible perturbation given the attack model. Likewise, we can certify non-robustness. Finally, we propose a robust semi-supervised training procedure that treats the labeled and unlabeled nodes jointly. As shown in our experimental evaluation, our method significantly improves the robustness of the GNN with only minimal effect on the predictive accuracy.
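To make the attack model concrete, the following toy sketch shows what an L0-bounded perturbation of binary attributes means and how a worst-case certificate can be computed exactly for a *linear* surrogate classifier (for a linear margin, the adversary's optimal move is simply to flip the q bits that hurt the margin most). This is an illustrative simplification, not the paper's GCN certification method; the function name and inputs are hypothetical.

```python
import numpy as np

def certify_linear_l0(x, w, b, q):
    """Exact worst-case margin of a linear classifier m(x) = w.x + b
    when an adversary may flip at most q bits of the binary vector x.
    Returns (worst_case_margin, is_certified_robust)."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    # Flipping bit i changes the margin by +w_i (0 -> 1) or -w_i (1 -> 0).
    deltas = np.where(x == 0, w, -w)
    # The optimal L0-bounded attack flips the q most harmful bits,
    # i.e. those with the most negative margin change.
    harmful = np.sort(deltas)[:q]
    worst = w @ x + b + harmful[harmful < 0].sum()
    # A positive worst-case margin certifies robustness under this attack model.
    return worst, bool(worst > 0)
```

For example, with `x = [1, 0, 1]`, `w = [2, -1, 0.5]`, `b = -1`, the clean margin is 1.5; allowing a single bit flip (q = 1) lets the adversary zero out the first attribute, dropping the margin to -0.5, so the node cannot be certified robust.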

Original language: English
Title of host publication: KDD 2019 - Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Publisher: Association for Computing Machinery
Pages: 246-256
Number of pages: 11
ISBN (Electronic): 9781450362016
State: Published - 25 Jul 2019
Event: 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2019 - Anchorage, United States
Duration: 4 Aug 2019 – 8 Aug 2019

Publication series

Name: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining

Conference

Conference: 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2019
Country/Territory: United States
City: Anchorage
Period: 4/08/19 – 8/08/19
