Adversarial attacks on node embeddings via graph poisoning

Aleksandar Bojchevski, Stephan Günnemann

Publication: Contribution to book/report/conference proceedings › Conference paper › Peer-reviewed

118 citations (Scopus)

Abstract

The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks. However, despite the proliferation of such methods, there is currently no study of their robustness to adversarial attacks. We provide the first adversarial vulnerability analysis on the widely used family of methods based on random walks. We derive efficient adversarial perturbations that poison the network structure and have a negative effect on both the quality of the embeddings and the downstream tasks. We further show that our attacks are transferable since they generalize to many models and are successful even when the attacker is restricted.
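The poisoning setting summarized above can be illustrated with a small, self-contained sketch. The following is not the paper's attack on random-walk embeddings; it is an assumed brute-force baseline (the names spectral_embedding, downstream_accuracy, and greedy_poison, and the use of a spectral embedding plus logistic regression, are all illustrative choices) that greedily flips the edge whose change most degrades downstream node-classification accuracy, which conveys the idea of poisoning the graph structure rather than the model.

# Illustrative sketch only; not the method of Bojchevski & Günnemann (2019).
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression

def spectral_embedding(G, dim=8):
    # Stand-in for a random-walk embedding: eigenvectors of the adjacency
    # matrix belonging to the largest eigenvalues.
    A = nx.to_numpy_array(G, nodelist=sorted(G.nodes()))
    _, vecs = np.linalg.eigh(A)
    return vecs[:, -dim:]

def downstream_accuracy(G, labels, train_idx, test_idx):
    # Train a simple classifier on the embeddings and report test accuracy.
    Z = spectral_embedding(G)
    clf = LogisticRegression(max_iter=1000).fit(Z[train_idx], labels[train_idx])
    return clf.score(Z[test_idx], labels[test_idx])

def greedy_poison(G, labels, train_idx, test_idx, budget=5, candidates=200, seed=0):
    # Greedily flip (add or remove) the sampled node pair that hurts
    # downstream accuracy the most, up to a fixed edge budget.
    rng = np.random.default_rng(seed)
    G = G.copy()
    nodes = sorted(G.nodes())
    for _ in range(budget):
        best_pair = None
        best_acc = downstream_accuracy(G, labels, train_idx, test_idx)
        for _ in range(candidates):
            u, v = rng.choice(nodes, size=2, replace=False)
            trial = G.copy()
            if trial.has_edge(u, v):
                trial.remove_edge(u, v)
            else:
                trial.add_edge(u, v)
            acc = downstream_accuracy(trial, labels, train_idx, test_idx)
            if acc < best_acc:
                best_pair, best_acc = (u, v), acc
        if best_pair is None:
            break  # no sampled flip degraded accuracy further
        u, v = best_pair
        if G.has_edge(u, v):
            G.remove_edge(u, v)
        else:
            G.add_edge(u, v)
    return G

This brute-force loop retrains the embedding for every candidate flip and therefore scales poorly; the appeal of the attacks in the paper is precisely that they derive efficient perturbations instead of searching exhaustively.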

Original language: English
Title: 36th International Conference on Machine Learning, ICML 2019
Publisher: International Machine Learning Society (IMLS)
Pages: 1112-1123
Number of pages: 12
ISBN (electronic): 9781510886988
Publication status: Published - 2019
Event: 36th International Conference on Machine Learning, ICML 2019 - Long Beach, United States
Duration: 9 June 2019 - 15 June 2019

Publication series

Name: 36th International Conference on Machine Learning, ICML 2019
Volume: 2019-June

Conference

Conference: 36th International Conference on Machine Learning, ICML 2019
Country/Territory: United States
City: Long Beach
Period: 9/06/19 - 15/06/19
