ResFed: Communication-Efficient Federated Learning with Deep Compressed Residuals

Rui Song, Liguo Zhou, Lingjuan Lyu, Andreas Festag, Alois Knoll

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

Federated learning allows for cooperative training among distributed clients by sharing their locally learned model parameters, such as weights or gradients. However, as model size increases, the communication bandwidth required for deployment in wireless networks becomes a bottleneck. To address this, we propose a residual-based federated learning framework (ResFed) that transmits residuals instead of gradients or weights. By predicting model updates at both clients and the server, residuals are calculated as the difference between updated and predicted models and contain denser information than weights or gradients. We find that residuals are less sensitive to an increasing compression ratio than other parameters, and hence apply lossy compression techniques to residuals to improve communication efficiency for training in federated settings. At the same compression ratio, ResFed outperforms current methods (weight- or gradient-based federated learning) by over 1.4× in client-to-server communication on federated data sets, including MNIST, FashionMNIST, SVHN, CIFAR-10, CIFAR-100, and FEMNIST, and can also be applied to reduce communication costs for server-to-client communication.
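The core idea in the abstract — both sides predict the next model, so only the residual between the updated and predicted model is transmitted, in compressed form — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the linear predictor `predict_model` and the top-k sparsifier standing in for the deep compression pipeline are assumptions for demonstration only.

```python
import numpy as np

def predict_model(prev_weights, prev_update):
    # Hypothetical predictor: both client and server extrapolate the next
    # model from the last shared state plus the last observed update, so
    # they agree on the prediction without extra communication.
    return prev_weights + prev_update

def top_k_compress(residual, ratio=0.25):
    # Generic lossy compressor (top-k sparsification) standing in for the
    # paper's deep compression: keep only the largest-magnitude entries.
    flat = residual.ravel()
    k = max(1, int(flat.size * ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(residual.shape)

# --- one toy communication round ------------------------------------
rng = np.random.default_rng(0)
prev_weights = rng.normal(size=(4, 4))
prev_update = 0.01 * rng.normal(size=(4, 4))

# Client: local training yields updated weights (simulated by noise).
updated = prev_weights + prev_update + 0.001 * rng.normal(size=(4, 4))

# Both sides compute the same prediction; the client only needs to
# transmit the (small, compression-friendly) residual.
predicted = predict_model(prev_weights, prev_update)
residual = updated - predicted

sent = top_k_compress(residual, ratio=0.25)   # client -> server payload
reconstructed = predicted + sent              # server-side model
```

Because the residual is small and concentrated, the reconstruction error after lossy compression stays bounded by the discarded residual entries, which is why (per the abstract) residuals tolerate higher compression ratios than raw weights or gradients.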

Original language: English
Pages (from-to): 9458-9472
Number of pages: 15
Journal: IEEE Internet of Things Journal
Volume: 11
Issue number: 6
DOIs
State: Published - 15 Mar 2024

Keywords

  • Communication efficiency
  • deep compression
  • federated learning
  • protocol design

