Abstract
Graph neural networks (GNNs) have achieved impressive results in various graph learning tasks and have found their way into many applications such as molecular property prediction, cancer classification, fraud detection, and knowledge graph reasoning. With the increasing number of GNN models deployed in scientific applications, safety-critical environments, or decision-making contexts involving humans, it is crucial to ensure their reliability. In this chapter, we provide an overview of the current research on adversarial robustness of GNNs. We introduce the unique challenges and opportunities that come along with the graph setting and give an overview of works showing the limitations of classic GNNs via adversarial example generation. Building upon these insights, we introduce and categorize methods that provide provable robustness guarantees for graph neural networks, as well as principles for improving the robustness of GNNs. We conclude with a discussion of proper evaluation practices that take robustness into account.
Original language | English |
---|---|
Title | Graph Neural Networks |
Subtitle | Foundations, Frontiers, and Applications |
Publisher | Springer Nature |
Pages | 149-176 |
Number of pages | 28 |
ISBN (electronic) | 9789811660542 |
ISBN (print) | 9789811660535 |
DOIs | |
Publication status | Published - 1 Jan 2022 |