Adversarial attacks on graph neural networks via meta learning

Daniel Zügner, Stephan Günnemann

Publication: Conference contribution › Paper › Peer-reviewed

349 citations (Scopus)

Abstract

Deep learning models for graphs have advanced the state of the art on many tasks. Despite their recent success, little is known about their robustness. We investigate training time attacks on graph neural networks for node classification that perturb the discrete graph structure. Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks, essentially treating the graph as a hyperparameter to optimize. Our experiments show that small graph perturbations consistently lead to a strong decrease in performance for graph convolutional networks, and even transfer to unsupervised embeddings. Remarkably, the perturbations created by our algorithm can misguide the graph neural networks such that they perform worse than a simple baseline that ignores all relational information. Our attacks do not assume any knowledge about or access to the target classifiers.
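The bilevel idea can be made concrete in a few lines. The following is a minimal sketch, not the authors' released implementation: it assumes a dense, relaxed adjacency matrix, a two-layer GCN trained with a short differentiable inner loop, and one simple choice of attacker objective (maximizing the training loss after training). The names `gcn_forward` and `meta_gradient` and all hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def gcn_forward(A, X, W1, W2):
    # Two-layer GCN on a symmetrically normalized adjacency: D^-1/2 (A+I) D^-1/2
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(1).pow(-0.5)
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    H = torch.relu(A_norm @ X @ W1)
    return A_norm @ H @ W2                       # logits, shape [n_nodes, n_classes]

def meta_gradient(A, X, y, train_mask, inner_steps=10, lr=0.1, hidden=16):
    # Inner problem: train the GCN weights on the (perturbed) graph.
    # Outer problem: differentiate the attacker loss w.r.t. A THROUGH training,
    # i.e. treat the graph as a hyperparameter.
    A = A.clone().requires_grad_(True)
    W1 = (0.1 * torch.randn(X.size(1), hidden)).requires_grad_()
    W2 = (0.1 * torch.randn(hidden, int(y.max()) + 1)).requires_grad_()
    for _ in range(inner_steps):
        loss = F.cross_entropy(gcn_forward(A, X, W1, W2)[train_mask], y[train_mask])
        # create_graph=True keeps each weight update differentiable w.r.t. A
        g1, g2 = torch.autograd.grad(loss, (W1, W2), create_graph=True)
        W1, W2 = W1 - lr * g1, W2 - lr * g2
    atk_loss = F.cross_entropy(gcn_forward(A, X, W1, W2)[train_mask], y[train_mask])
    return torch.autograd.grad(atk_loss, A)[0]   # meta-gradient dL_atk/dA
```

A greedy attacker can then flip one edge at a time, scoring candidate flips by the sign-adjusted meta-gradient. Again a sketch, here on hypothetical toy data:

```python
n, f, c = 30, 8, 3
A = (torch.rand(n, n) < 0.1).float()
A = ((A + A.t()) > 0).float()                # undirected toy graph
A.fill_diagonal_(0)
X = torch.randn(n, f)
y = torch.randint(0, c, (n,))
train_mask = torch.zeros(n, dtype=torch.bool)
train_mask[:10] = True

grad = meta_gradient(A, X, y, train_mask)
score = grad * (1 - 2 * A)                   # adding where A_ij=0, removing where A_ij=1
score.fill_diagonal_(float('-inf'))          # never flip self-loops
i, j = divmod(int(score.argmax()), n)
A[i, j] = A[j, i] = 1.0 - A[i, j]            # one symmetric edge flip
```

Relaxing the discrete adjacency matrix to real values is what makes the meta-gradient well defined; the discrete perturbation is recovered by flipping the highest-scoring entry and repeating under an edge budget.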

Original language: English
Publication status: Published - 2019
Event: 7th International Conference on Learning Representations, ICLR 2019 - New Orleans, United States
Duration: 6 May 2019 → 9 May 2019

Conference

Conference: 7th International Conference on Learning Representations, ICLR 2019
Country/Territory: United States
City: New Orleans
Period: 6/05/19 → 9/05/19
