An Efficient Federated Learning Framework for Training Semantic
Communication System
- URL: http://arxiv.org/abs/2310.13236v2
- Date: Thu, 9 Nov 2023 10:52:14 GMT
- Title: An Efficient Federated Learning Framework for Training Semantic
Communication System
- Authors: Loc X. Nguyen, Huy Q. Le, Ye Lin Tun, Pyae Sone Aung, Yan Kyaw Tun,
Zhu Han, Choong Seon Hong
- Abstract summary: Most semantic communication systems are built upon advanced deep learning models.
Due to privacy and security concerns, the transmission of data is restricted.
We introduce FedLol, a mechanism to aggregate the global model from the clients' local models.
- Score: 29.593406320684448
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic communication has emerged as a pillar for the next generation of
communication systems due to its capabilities in alleviating data redundancy.
Most semantic communication systems are built upon advanced deep learning
models whose training performance heavily relies on data availability. Existing
studies often make the unrealistic assumption of a readily accessible data
source, whereas in practice data is mainly created on the client side. Due to
privacy and security concerns, the transmission of this data, which
conventional centralized training schemes require, is restricted. To address this
challenge, we explore semantic communication in a federated learning (FL)
setting that utilizes client data without leaking privacy. Additionally, we
design our system to tackle the communication overhead by reducing the quantity
of information delivered in each global round. In this way, we can save
significant bandwidth for resource-limited devices and reduce overall network
traffic. Finally, we introduce FedLol, a mechanism for aggregating the global
model from the clients' local models. Extensive simulation results demonstrate the
effectiveness of our proposed technique compared to baseline methods.
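The abstract introduces FedLol by name only; its exact aggregation rule is defined in the paper. Purely as a hedged illustration of where such a mechanism sits, the sketch below shows a FedAvg-style server-side aggregation in which client models are weighted by their reported local loss. The inverse-loss weighting, the function names, and the PyTorch state-dict representation are all assumptions of ours, not the paper's definition.

```python
# Hypothetical sketch of loss-informed federated aggregation; NOT the
# paper's FedLol rule, which is specified in the paper itself.
from typing import Dict, List

import torch


def aggregate(client_states: List[Dict[str, torch.Tensor]],
              client_losses: List[float]) -> Dict[str, torch.Tensor]:
    """Weighted-average client models; lower local loss => larger weight."""
    inv = torch.tensor([1.0 / (loss + 1e-8) for loss in client_losses])
    weights = inv / inv.sum()  # normalize to a convex combination
    global_state = {}
    for name in client_states[0]:
        stacked = torch.stack([s[name].float() for s in client_states])
        # Broadcast the per-client weights over the parameter dimensions.
        w = weights.view(-1, *([1] * (stacked.dim() - 1)))
        global_state[name] = (w * stacked).sum(dim=0)
    return global_state
```

Any rule that maps client feedback to normalized weights can be slotted in place of the inverse-loss line; FedAvg itself is recovered by weighting each client by its local dataset size.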
Related papers
- Communication-Efficient Federated Learning through Adaptive Weight
Clustering and Server-Side Distillation [10.541541376305245]
Federated Learning (FL) is a promising technique for the collaborative training of deep neural networks across multiple devices.
FL is hindered by excessive communication costs due to repeated server-client communication during training.
We propose FedCompress, a novel approach that combines dynamic weight clustering and server-side knowledge distillation.
arXiv Detail & Related papers (2024-01-25T14:49:15Z)
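FedCompress pairs dynamic weight clustering with server-side distillation; the precise procedure is in the paper. As a rough sketch of the clustering half alone, the k-means quantization below (our own minimal construction, not FedCompress itself) maps a weight tensor onto k shared centroids, so a client can transmit small integer indices plus k floats instead of full-precision weights.

```python
# Illustrative k-means weight clustering for update compression (a generic
# sketch, not FedCompress's dynamic clustering procedure).
import numpy as np


def cluster_weights(w: np.ndarray, k: int = 16, iters: int = 10):
    """Assign each weight to one of k centroids; returns (indices, centroids)."""
    flat = w.ravel()
    # Initialize centroids at evenly spaced quantiles of the weights.
    centroids = np.quantile(flat, np.linspace(0.0, 1.0, k))
    for _ in range(iters):  # plain Lloyd iterations in 1-D
        idx = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            members = flat[idx == j]
            if members.size:
                centroids[j] = members.mean()
    return idx.reshape(w.shape), centroids


def reconstruct(idx: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """Server side: rebuild the (lossy) weight tensor from indices."""
    return centroids[idx]
```

With k = 16, each index fits in 4 bits versus 32 bits per float32 weight, before any entropy coding.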
- Coordination-free Decentralised Federated Learning on Complex Networks: Overcoming Heterogeneity [2.6849848612544]
Federated Learning (FL) is a framework for performing a learning task in an edge computing scenario.
We propose a communication-efficient Decentralised Federated Learning (DFL) algorithm able to cope with heterogeneity.
Our solution allows devices communicating only with their direct neighbours to train an accurate model.
arXiv Detail & Related papers (2023-12-07T18:24:19Z)
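The entry above states that devices communicate only with their direct neighbours. The paper's DFL algorithm is more involved, but the decentralised primitive such methods build on is neighbour (gossip) averaging, sketched here under that assumption with uniform weights over the closed neighbourhood.

```python
# Minimal gossip-averaging step on a communication graph; an assumed
# building block, not the paper's full DFL algorithm.
from typing import Dict, List

import numpy as np


def gossip_step(params: List[np.ndarray],
                neighbors: Dict[int, List[int]]) -> List[np.ndarray]:
    """Each node replaces its parameters with the mean over itself and
    its direct neighbours; no central server is involved."""
    new_params = []
    for i, theta in enumerate(params):
        group = [theta] + [params[j] for j in neighbors[i]]
        new_params.append(np.mean(group, axis=0))
    return new_params
```

Repeating this step between local training rounds lets information diffuse across the graph even though no node ever talks to a coordinator.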
- REFT: Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments [2.117841684082203]
Federated Learning (FL) plays a critical role in distributed systems.
FL has emerged as a privacy-enforcing sub-domain of machine learning.
We propose REFT, a "Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments".
arXiv Detail & Related papers (2023-08-25T20:33:30Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
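Analog over-the-air computation exploits the superposition property of a wireless multiple-access channel: when clients transmit pre-scaled updates simultaneously, the channel itself delivers their noisy sum, so aggregation costs one channel use rather than one upload per client. The toy simulation below makes our own simplifying assumptions (perfect power control, additive Gaussian noise) and is not the paper's system model.

```python
# Toy simulation of over-the-air aggregation: the receiver observes the
# superposed sum of all transmitted updates plus channel noise.
from typing import List

import numpy as np

rng = np.random.default_rng(0)


def over_the_air_average(updates: List[np.ndarray],
                         noise_std: float = 0.01) -> np.ndarray:
    superposed = np.sum(updates, axis=0)        # the channel adds the signals
    noise = rng.normal(0.0, noise_std, superposed.shape)
    return (superposed + noise) / len(updates)  # server rescales by 1/K
```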
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
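DisPFL's personalised sparse masks let each client keep and exchange only a subset of parameters, so communication scales with the number of non-zero entries. How the masks are learned and personalised is defined in the paper; the sketch below only illustrates the mechanics, using a magnitude-based mask of our own choosing.

```python
# Illustrative magnitude-based sparse mask; the mask-selection rule here is
# an assumption, not DisPFL's personalized mask construction.
import numpy as np


def top_k_mask(w: np.ndarray, density: float = 0.1) -> np.ndarray:
    """Binary mask keeping the `density` fraction of largest-|w| entries."""
    k = max(1, int(density * w.size))
    threshold = np.partition(np.abs(w).ravel(), -k)[-k]
    return (np.abs(w) >= threshold).astype(w.dtype)


w = np.random.randn(256, 128)
mask = top_k_mask(w, density=0.1)
sparse_w = w * mask  # only ~10% of the entries are non-zero and need sending
```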
- Federated Reinforcement Learning at the Edge [1.4271989597349055]
Modern cyber-physical architectures use data collected from systems at different physical locations to learn appropriate behaviors and adapt to uncertain environments.
This paper considers a setup where multiple agents need to communicate efficiently in order to jointly solve a reinforcement learning problem over time-series data collected in a distributed manner.
An algorithm for achieving communication efficiency is proposed, supported with theoretical guarantees, practical implementations, and numerical evaluations.
arXiv Detail & Related papers (2021-12-11T03:28:59Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring local data on these clients to a central server.
We propose a nonlinear quantization for compressed gradient descent, which can be easily utilized in federated learning.
Our system significantly reduces the communication cost by up to three orders of magnitude, while maintaining convergence and accuracy of the training process.
arXiv Detail & Related papers (2020-12-15T12:20:28Z)
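CosSGD's contribution is a specific nonlinear quantizer for gradients, defined in the paper. To make the general idea concrete, here is a generic log-spaced quantizer of our own: levels are packed more densely near zero, where most gradient magnitudes fall, which is the intuition behind nonlinear (rather than uniform) quantization.

```python
# Generic nonlinear (log-spaced) gradient quantizer, for illustration only;
# CosSGD's actual quantization function differs.
import numpy as np


def log_quantize(g: np.ndarray, bits: int = 4) -> np.ndarray:
    """Snap |g| onto log-spaced levels, keeping the sign separately."""
    levels = 2 ** (bits - 1)  # one bit reserved for the sign
    max_mag = np.abs(g).max() + 1e-12
    # Codebook: max_mag, max_mag/2, max_mag/4, ... (sorted ascending).
    codebook = np.sort(max_mag * 2.0 ** (-np.arange(levels, dtype=np.float64)))
    idx = np.argmin(np.abs(np.abs(g)[..., None] - codebook), axis=-1)
    return np.sign(g) * codebook[idx]
```

In a real system the transmitted payload would be the sign bits and level indices, not the dequantized floats returned here.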
- Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications [55.65768284748698]
Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond.
This article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
arXiv Detail & Related papers (2020-08-06T12:37:14Z)
- Ternary Compression for Communication-Efficient Federated Learning [17.97683428517896]
Federated learning provides a potential solution to privacy-preserving and secure machine learning.
We propose a ternary federated averaging protocol (T-FedAvg) to reduce the upstream and downstream communication of federated learning systems.
Our results show that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data.
arXiv Detail & Related papers (2020-03-07T11:55:34Z)
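Ternary compression restricts each transmitted value to {-1, 0, +1} times a per-tensor scale, so an update costs roughly 1.6 bits per weight instead of 32. The sketch below uses a common threshold heuristic from the ternary-quantization literature; T-FedAvg's exact upstream/downstream protocol is specified in the paper.

```python
# Threshold-based ternarization sketch; T-FedAvg's full protocol is in the
# paper, and the 0.7 threshold factor is a conventional heuristic.
import numpy as np


def ternarize(w: np.ndarray, delta_factor: float = 0.7):
    """Map w to scale * t with t in {-1, 0, +1}; returns (t, scale)."""
    delta = delta_factor * np.abs(w).mean()
    t = np.zeros_like(w, dtype=np.int8)
    t[w > delta] = 1
    t[w < -delta] = -1
    nonzero = t != 0
    scale = float(np.abs(w[nonzero]).mean()) if nonzero.any() else 0.0
    return t, scale


w = np.random.randn(1000)
t, scale = ternarize(w)
approx = scale * t  # what the receiver reconstructs
```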
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address open problems in this area, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)