Adversarial attacks on neural networks for graph data

Daniel Zügner, Amir Akbarnejad, Stephan Günnemann

Publication: Contribution to book/report › Conference contribution › Peer-reviewed

22 citations (Scopus)

Abstract

Deep learning models for graphs have achieved strong performance for the task of node classification. Despite their proliferation, currently there is no study of their robustness to adversarial attacks. Yet, in domains where they are likely to be used, e.g. the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this extended abstract we summarize the key findings and contributions of our work [Zügner and Günnemann, 2019a], in which we introduce the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions. In addition to attacks at test time, we tackle the more challenging class of poisoning/causative attacks, which target the training phase of a machine learning model. We generate adversarial perturbations targeting the node's features and the graph structure, thus taking the dependencies between instances into account. Moreover, we ensure that the perturbations remain unnoticeable by preserving important data characteristics. To cope with the underlying discrete domain we propose NETTACK, an efficient algorithm exploiting incremental computations. Our experimental study shows that the accuracy of node classification drops significantly even when performing only a few perturbations. Even more, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and unsupervised approaches, and are likewise successful given only limited knowledge about the graph.
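The greedy idea behind such structure perturbations can be illustrated with a small sketch. Below is a minimal, self-contained example of a greedy edge-flip attack against a linearized two-layer GCN surrogate (logits Â²XW, as used as the surrogate model in the paper). This is an illustration in the spirit of NETTACK, not the authors' implementation: all data is synthetic, only edges incident to the target node are considered as candidates, and the paper's feature perturbations, unnoticeability constraints, and incremental computations are omitted.

```python
# Illustrative sketch of a greedy structure attack on a linearized
# two-layer GCN surrogate (in the spirit of NETTACK, not the authors'
# reference implementation). All data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 30, 8, 3                 # nodes, feature dim, classes
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T     # symmetric adjacency, no self-loops
X = rng.random((n, d))             # node features
W = rng.normal(size=(d, k))        # stands in for trained surrogate weights
labels = rng.integers(0, k, size=n)

def normalized_adj(A):
    """Symmetric normalization with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def margin(A, target):
    """Classification margin of the target node under the surrogate
    logits = Â² X W (linearized two-layer GCN)."""
    S = normalized_adj(A)
    logits = S @ S @ X @ W
    true = logits[target, labels[target]]
    others = np.delete(logits[target], labels[target])
    return true - others.max()

target, budget = 0, 3
for _ in range(budget):
    best_flip, best_margin = None, margin(A, target)
    for v in range(n):
        if v == target:
            continue
        A2 = A.copy()
        # flip edge (target, v): add it if absent, remove it if present
        A2[target, v] = A2[v, target] = 1.0 - A2[target, v]
        m = margin(A2, target)
        if m < best_margin:
            best_flip, best_margin = v, m
    if best_flip is None:
        break
    A[target, best_flip] = A[best_flip, target] = 1.0 - A[target, best_flip]
    print(f"flipped edge ({target}, {best_flip}), margin -> {best_margin:.3f}")
```

Each iteration spends one unit of the perturbation budget on the edge flip that most reduces the target node's classification margin; once the margin is negative, the surrogate misclassifies the node. The full method additionally scores feature perturbations and rejects flips that would noticeably change the graph's degree distribution or feature co-occurrences.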

Original language: English
Title: Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019
Editors: Sarit Kraus
Publisher: International Joint Conferences on Artificial Intelligence
Pages: 6246-6250
Number of pages: 5
ISBN (electronic): 9780999241141
DOIs
Publication status: Published - 2019
Event: 28th International Joint Conference on Artificial Intelligence, IJCAI 2019 - Macao, China
Duration: 10 Aug 2019 – 16 Aug 2019

Publication series

Name: IJCAI International Joint Conference on Artificial Intelligence
Volume: 2019-August
ISSN (Print): 1045-0823

Conference

Conference: 28th International Joint Conference on Artificial Intelligence, IJCAI 2019
Country/Territory: China
City: Macao
Period: 10/08/19 – 16/08/19
