DISBELIEVE: Distance Between Client Models Is Very Essential for Effective Local Model Poisoning Attacks

Indu Joshi, Priyank Upadhya, Gaurav Kumar Nayak, Peter Schüffler, Nassir Navab

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Federated learning is a promising direction for tackling the privacy issues that arise from sharing patients’ sensitive data. Federated systems in the medical image analysis domain often assume that the participating local clients are honest. Several studies, however, report mechanisms through which a set of malicious clients can be introduced to poison the federated setup and degrade the performance of the global model. To counter this, robust aggregation methods have been proposed that defend against such attacks. We observe that most state-of-the-art robust aggregation methods depend heavily on the distance between the parameters or gradients of malicious clients and those of benign clients, which makes them vulnerable to local model poisoning attacks whenever the malicious and benign parameters or gradients are close. Leveraging this, we introduce DISBELIEVE, a local model poisoning attack that crafts malicious parameters or gradients whose distance to the benign clients’ parameters or gradients is low, yet whose adverse effect on the global model’s performance is high. Experiments on three publicly available medical image datasets demonstrate the efficacy of the proposed DISBELIEVE attack, which significantly lowers the performance of state-of-the-art robust aggregation methods for medical image analysis. Furthermore, compared to state-of-the-art local model poisoning attacks, the DISBELIEVE attack is also effective on natural images, where we observe a severe drop in the global model’s multi-class classification performance on the benchmark CIFAR-10 dataset.
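
The abstract does not detail DISBELIEVE's construction, so the sketch below only illustrates the stated idea under assumptions that are not taken from the paper: that colluding malicious clients can observe (or estimate) the benign clients' updates, that the sign-flipped benign mean is a sufficiently harmful direction, and that staying within the benign clients' own spread around their mean is the relevant distance constraint for evading distance-based robust aggregation. The function name and the NumPy toy setup are likewise illustrative.

import numpy as np

def craft_distance_constrained_update(benign_updates):
    """Toy construction of a malicious update that keeps a small distance
    to the benign updates while pointing in a harmful direction.

    benign_updates: array of shape (num_benign_clients, num_parameters),
    each row a flattened parameter update observed by the attacker (assumption).
    """
    mean_update = benign_updates.mean(axis=0)

    # Spread of the benign updates around their mean; distance-based robust
    # aggregators are assumed here to tolerate anything inside this radius.
    max_benign_dist = np.max(np.linalg.norm(benign_updates - mean_update, axis=1))

    # Harmful direction chosen purely for illustration: the sign-flipped mean,
    # i.e. pushing the global model opposite to the benign consensus.
    harmful_dir = -mean_update / (np.linalg.norm(mean_update) + 1e-12)

    # Place the malicious update on the boundary of the benign spread, so its
    # distance to the benign mean never exceeds that of the most extreme
    # benign client, yet it moves as far as allowed in the harmful direction.
    return mean_update + max_benign_dist * harmful_dir

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    benign = rng.normal(0.0, 0.1, size=(8, 1000))   # 8 benign clients, toy dimensionality
    malicious = craft_distance_constrained_update(benign)
    print("distance to benign mean:",
          float(np.linalg.norm(malicious - benign.mean(axis=0))))
    print("max benign distance to mean:",
          float(np.max(np.linalg.norm(benign - benign.mean(axis=0), axis=1))))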

Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 Workshops - ISIC 2023, Care-AI 2023, MedAGI 2023, DeCaF 2023, Held in Conjunction with MICCAI 2023, Proceedings
Editors: M. Emre Celebi, Md Sirajus Salekin, Hyunwoo Kim, Shadi Albarqouni
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 297-310
Number of pages: 14
ISBN (Print): 9783031474002
DOIs
State: Published - 2023
Event: 26th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2023 - Vancouver, Canada
Duration: 8 Oct 2023 - 12 Oct 2023

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14393
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 26th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2023
Country/Territory: Canada
City: Vancouver
Period: 8/10/23 - 12/10/23

Keywords

  • Deep Learning
  • Federated Learning
  • Model Poisoning Attacks
