DISBELIEVE: Distance Between Client Models Is Very Essential for Effective Local Model Poisoning Attacks

Indu Joshi, Priyank Upadhya, Gaurav Kumar Nayak, Peter Schüffler, Nassir Navab

Publication: Chapter in book/report/conference proceedings › Conference contribution › Peer-reviewed

Abstract

Federated learning is a promising direction for tackling the privacy issues involved in sharing patients’ sensitive data. Often, federated systems in the medical image analysis domain assume that the participating local clients are honest. Several studies report mechanisms through which a set of malicious clients can be introduced to poison the federated setup and hamper the performance of the global model. To counter this, robust aggregation methods have been proposed that defend against such attacks. We observe that most state-of-the-art robust aggregation methods depend heavily on the distance between the parameters or gradients of malicious clients and those of benign clients, which makes them vulnerable to local model poisoning attacks when the parameters or gradients of malicious and benign clients are close. Leveraging this, we introduce DISBELIEVE, a local model poisoning attack that crafts malicious parameters or gradients whose distance to the benign clients’ parameters or gradients is low, yet whose adverse effect on the global model’s performance is high. Experiments on three publicly available medical image datasets demonstrate the efficacy of the proposed DISBELIEVE attack, which significantly lowers the performance of state-of-the-art robust aggregation methods for medical image analysis. Furthermore, compared to state-of-the-art local model poisoning attacks, the DISBELIEVE attack is also effective on natural images, where we observe a severe drop in the global model’s multi-class classification performance on the benchmark dataset CIFAR-10.
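The attack’s core constraint, staying inside the benign clients’ distance envelope while doing as much damage as possible, can be illustrated in a few lines of Python. The sketch below is not the authors’ implementation: it assumes the attacker observes or controls several clients’ flattened updates, approximates “maximum damage” with a simple move-away-from-consensus heuristic (the paper instead optimizes a loss objective), and the function name disbelieve_style_update is hypothetical.

    import torch

    def disbelieve_style_update(benign_updates):
        """Craft a poisoned update that distance-based defenses accept.

        benign_updates: list of 1-D tensors holding the flattened
        parameters (or gradients) of the clients the attacker controls.
        """
        stacked = torch.stack(benign_updates)    # (n_clients, dim)
        consensus = stacked.mean(dim=0)          # benign "center"

        # Radius of the benign cluster: any update within this distance
        # of the consensus looks statistically plausible to defenses
        # that filter on inter-client distance.
        radius = torch.norm(stacked - consensus, dim=1).max()

        # Heuristic damage direction: step away from the consensus,
        # scaled so the poisoned update sits exactly on the benign
        # radius. (The paper instead searches for the in-radius point
        # maximizing training loss; this direction is a stand-in.)
        direction = -consensus / (torch.norm(consensus) + 1e-12)
        return consensus + radius * direction

    # Hypothetical usage: each malicious client submits the crafted
    # vector in place of its honest update.
    # poisoned = disbelieve_style_update([w1.flatten(), w2.flatten()])

Because the crafted vector lies no farther from the benign consensus than the most distant benign client, distance-filtering aggregators have no statistical basis for rejecting it.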

Original language: English
Title: Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 Workshops – ISIC 2023, Care-AI 2023, MedAGI 2023, DeCaF 2023, Held in Conjunction with MICCAI 2023, Proceedings
Editors: M. Emre Celebi, Md Sirajus Salekin, Hyunwoo Kim, Shadi Albarqouni
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 297-310
Number of pages: 14
ISBN (print): 9783031474002
Publication status: Published - 2023
Event: 26th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2023 – Vancouver, Canada
Duration: 8 Oct 2023 – 12 Oct 2023

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14393
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Conference

Conference: 26th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2023
Country/Territory: Canada
City: Vancouver
Period: 8/10/23 – 12/10/23

