Adversarial interference and its mitigations in privacy-preserving collaborative machine learning

Dmitrii Usynin, Alexander Ziller, Marcus Makowski, Rickmer Braren, Daniel Rueckert, Ben Glocker, Georgios Kaissis, Jonathan Passerat-Palmbach

Research output: Contribution to journal › Review article › Peer-reviewed

Abstract

Despite the rapid increase in the data available to train machine-learning algorithms in many domains, several applications suffer from a paucity of representative and diverse data. The medical and financial sectors are, for example, constrained by legal, ethical, regulatory and privacy concerns that prevent data sharing between institutions. Collaborative learning systems, such as federated learning, are designed to circumvent such restrictions and provide a privacy-preserving alternative by eschewing data sharing and relying instead on the distributed remote execution of algorithms. However, such systems are susceptible to malicious adversarial interference that attempts to undermine their utility or divulge confidential information. Here we present an overview and analysis of current adversarial attacks and their mitigations in the context of collaborative machine learning. We discuss the applicability of attack vectors to specific learning contexts and attempt to formulate a generic foundation for adversarial influence and mitigation mechanisms. Moreover, we show that a number of context-specific learning conditions are exploited in a similar fashion across all settings. Lastly, we provide a focused perspective on open challenges and promising areas of future research in the field.
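The collaborative pattern summarized above, in which clients train locally and share only model updates rather than raw data, can be made concrete with a minimal federated-averaging (FedAvg) sketch. The following Python/NumPy example is purely illustrative and is not drawn from the paper; the toy linear model, the client partitioning, and all function names (local_update, fed_avg) are assumptions introduced here.

import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on a linear model.
    Only the resulting weights leave the client; the data (X, y) do not."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def fed_avg(client_data, rounds=10, dim=3):
    """Server loop: broadcast global weights, collect local updates,
    and average them weighted by each client's sample count."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in client_data]
        sizes = np.array([len(y) for _, y in client_data])
        global_w = np.average(updates, axis=0, weights=sizes)
    return global_w

# Toy setup: three clients hold private samples from the same true model.
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for n in (40, 60, 80):
    X = rng.normal(size=(n, 3))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

print(fed_avg(clients))  # should approximate true_w

Note that the shared updates in this sketch are exactly the quantities an adversary can observe or manipulate, which is the attack surface the review analyses.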

Original language: English
Pages (from-to): 749-758
Number of pages: 10
Journal: Nature Machine Intelligence
Volume: 3
Issue number: 9
DOIs
State: Published - Sep 2021
