FedCode: Communication-Efficient Federated Learning via Transferring Codebooks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

2 Citations (Scopus)
25 Downloads (Pure)

Abstract

Federated Learning (FL) is a distributed machine learning paradigm that enables learning models from decentralized local data, offering significant benefits for clients' data privacy. Despite its appealing privacy properties, FL faces the challenge of a high communication burden, necessitated by the continuous exchange of model weights between the server and clients. To mitigate these issues, existing communication-efficient FL approaches employ model compression techniques, such as pruning and weight clustering; yet, the need to transmit the entire set of weight updates at each federated round, even in a compressed format, limits the potential for a substantial reduction in communication volume. In response, we propose FedCode, a novel FL training regime that directly utilizes codebooks, i.e., the cluster centers of updated model weight values, to significantly reduce the bidirectional communication load while minimizing computational overhead and preventing substantial degradation in model performance. To ensure a smooth learning curve and proper calibration of clusters between the server and clients, FedCode periodically transfers compressed model weights following multiple rounds of exclusive codebook communication. Our comprehensive evaluations across various publicly available vision and audio datasets on diverse neural architectures demonstrate that FedCode achieves a 12.4-fold reduction in data transmission on average while maintaining model performance on par with FedAvg, incurring an average accuracy loss of just 1.65%.
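The abstract's core idea, transmitting only the cluster centers of weight values (the codebook) rather than the weights themselves, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes a generic k-means clustering of flattened weights (via scikit-learn) and nearest-center reconstruction on the receiving side, with illustrative names `build_codebook` and `apply_codebook`. The paper's full protocol, including server-client cluster calibration and periodic compressed-weight rounds, is not reproduced here.

```python
# Hypothetical sketch of codebook-based weight compression, as described in the
# abstract: cluster weight values with k-means, transmit only the k centers,
# and snap the receiver's weights to the nearest received center.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(weights: np.ndarray, k: int = 16) -> np.ndarray:
    """Cluster flattened weight values; return the k cluster centers."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0)
    km.fit(weights.reshape(-1, 1))
    return km.cluster_centers_.ravel()

def apply_codebook(weights: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each weight to its nearest codebook entry (receiver side)."""
    idx = np.abs(weights.reshape(-1, 1) - codebook.reshape(1, -1)).argmin(axis=1)
    return codebook[idx].reshape(weights.shape)

# Example: a client compresses an updated weight matrix into a 16-value codebook.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128)).astype(np.float32)  # stand-in for one layer's weights
cb = build_codebook(w)               # only these 16 floats are communicated
w_restored = apply_codebook(w, cb)   # reconstruction from the codebook
print(cb.shape, float(np.abs(w - w_restored).mean()))
```

Transmitting k floating-point centers instead of the full weight tensor is what yields the order-of-magnitude reduction in communication volume the abstract reports.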
Original language: English
Title of host publication: 2024 IEEE International Conference on Edge Computing and Communications, IEEE EDGE 2024
Editors: Rong N. Chang, Carl K. Chang, Jingwei Yang, Zhi Jin, Michael Sheng, Jing Fan, Kenneth K. Fletcher, Qiang He, Nimanthi Atukorala, Hongyue Wu, Shiqiang Wang, Shuiguang Deng, Nirmit Desai, Gopal Pingali, Javid Taheri, K. V. Subramaniam, Feras Awaysheh, Kaouta El Maghaouri, Yingjie Wang
Publisher: Institute of Electrical and Electronics Engineers
Pages: 99-109
Number of pages: 11
ISBN (Electronic): 979-8-3503-6849-9
DOIs
Publication status: Published - 28 Aug 2024
Event: 2024 IEEE International Conference on Edge Computing and Communications, IEEE EDGE 2024 - Shenzhen, China
Duration: 7 Jul 2024 - 13 Jul 2024

Conference

Conference: 2024 IEEE International Conference on Edge Computing and Communications, IEEE EDGE 2024
Abbreviated title: IEEE EDGE 2024
Country/Territory: China
City: Shenzhen
Period: 7/07/24 - 13/07/24

Funding

This research is partially funded by the DAIS project, which has received funding from the KDT JU under grant agreement No 101007273.

Keywords

  • Federated learning
  • communication efficiency
  • weight clustering
  • model compression
  • codebook transfer
