Graph Neural Networks: Adversarial Robustness

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

32 Scopus citations

Abstract

Graph neural networks (GNNs) have achieved impressive results in various graph learning tasks and have found their way into many applications such as molecular property prediction, cancer classification, fraud detection, and knowledge graph reasoning. With the increasing number of GNN models deployed in scientific applications, safety-critical environments, or decision-making contexts involving humans, it is crucial to ensure their reliability. In this chapter, we provide an overview of the current research on adversarial robustness of GNNs. We introduce the unique challenges and opportunities that come along with the graph setting and give an overview of works showing the limitations of classic GNNs via adversarial example generation. Building upon these insights, we introduce and categorize methods that provide provable robustness guarantees for GNNs, as well as principles for improving their robustness. We conclude with a discussion of proper evaluation practices that take robustness into account.

Original language: English
Title of host publication: Graph Neural Networks
Subtitle of host publication: Foundations, Frontiers, and Applications
Publisher: Springer Nature
Pages: 149-176
Number of pages: 28
ISBN (Electronic): 9789811660542
ISBN (Print): 9789811660535
DOIs
State: Published - 1 Jan 2022

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 3 - Good Health and Well-being
