Differentially Private Federated Learning: Privacy and Utility Analysis of Output Perturbation and DP-SGD

Anastasia Pustozerova, Jan Baumbach, Rudolf Mayer

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Federated Learning (FL) allows multiple entities to jointly train a machine learning model on data located in different places. Unlike the conventional approach of gathering private data from distributed locations in a central place, federated learning exchanges and aggregates only the machine learning models: each party shares a model trained locally on its private data, so the sensitive data remain within their respective silos throughout the process. However, the models shared in FL may still leak sensitive information about the training data, for example through membership disclosure. To mitigate these residual privacy risks in federated learning, one has to apply additional defence techniques such as Differential Privacy (DP), which introduces noise into the training data or the model. Differential Privacy provides a mathematical definition of privacy and can be applied in machine learning via different perturbation mechanisms. This work analyses Differential Privacy in federated learning through (i) output perturbation of the trained machine learning models and (ii) a differentially private form of stochastic gradient descent (DP-SGD). We consider these two approaches in various settings and analyse their performance in terms of model utility and achieved privacy. To evaluate a model's privacy risk, we empirically measure the success rate of a membership inference attack. We observe that DP-SGD allows for a better trade-off between privacy and utility in most of the considered settings. In some settings, however, output perturbation provides a similar or better privacy-utility trade-off, along with better communication and computational efficiency.
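For illustration, the sketch below contrasts the two mechanisms the abstract compares: output perturbation of a trained model versus per-step noise injection in DP-SGD. It is a minimal NumPy sketch under common formulations (the standard Gaussian-mechanism calibration for output perturbation, and per-example gradient clipping with Gaussian noise as in Abadi et al.'s DP-SGD); all function names and parameters (sensitivity, clip_norm, noise_multiplier, etc.) are illustrative assumptions, not the authors' actual implementation.

    # Illustrative sketch only: names and calibration choices are assumptions,
    # not the paper's implementation.
    import numpy as np

    rng = np.random.default_rng(0)

    def output_perturbation(weights, sensitivity, epsilon, delta):
        # Gaussian mechanism: perturb the trained weights once, after local
        # training, before sharing the model with the aggregator.
        # Standard calibration (valid for epsilon < 1):
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        return [w + rng.normal(0.0, sigma, size=w.shape) for w in weights]

    def dp_sgd_step(weights, per_example_grads, lr, clip_norm, noise_multiplier):
        # DP-SGD: clip each example's gradient to clip_norm, sum the clipped
        # gradients, add Gaussian noise with std noise_multiplier * clip_norm,
        # average, then take an ordinary SGD step.
        clipped = []
        for grad in per_example_grads:  # one list of arrays per example
            norm = np.sqrt(sum(np.sum(g ** 2) for g in grad))
            factor = min(1.0, clip_norm / (norm + 1e-12))
            clipped.append([g * factor for g in grad])
        n = len(per_example_grads)
        noisy_avg = [
            (sum(c[i] for c in clipped)
             + rng.normal(0.0, noise_multiplier * clip_norm, size=weights[i].shape)) / n
            for i in range(len(weights))
        ]
        return [w - lr * g for w, g in zip(weights, noisy_avg)]

In this framing, output perturbation pays its privacy cost once per shared model, whereas DP-SGD clips and adds noise at every optimisation step, which is consistent with the abstract's remark that output perturbation can offer better communication and computational efficiency in some settings.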

Original language: English
Title of host publication: Proceedings - 2023 IEEE International Conference on Big Data, BigData 2023
Editors: Jingrui He, Themis Palpanas, Xiaohua Hu, Alfredo Cuzzocrea, Dejing Dou, Dominik Slezak, Wei Wang, Aleksandra Gruca, Jerry Chun-Wei Lin, Rakesh Agrawal
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 5549-5558
Number of pages: 10
ISBN (Electronic): 9798350324457
State: Published - 2023
Externally published: Yes
Event: 2023 IEEE International Conference on Big Data, BigData 2023 - Sorrento, Italy
Duration: 15 Dec 2023 – 18 Dec 2023

Publication series

Name: Proceedings - 2023 IEEE International Conference on Big Data, BigData 2023

Conference

Conference: 2023 IEEE International Conference on Big Data, BigData 2023
Country/Territory: Italy
City: Sorrento
Period: 15/12/23 – 18/12/23

Keywords

  • DP-SGD
  • Differential Privacy
  • Federated Learning
  • Output Perturbation
