Goal-Oriented Communications in Federated Learning via Feedback on
Risk-Averse Participation
- URL: http://arxiv.org/abs/2305.11633v1
- Date: Fri, 19 May 2023 12:20:37 GMT
- Title: Goal-Oriented Communications in Federated Learning via Feedback on
Risk-Averse Participation
- Authors: Shashi Raj Pandey, Van Phuc Bui, Petar Popovski
- Abstract summary: We treat the problem of client selection in a Federated Learning (FL) setup.
We incorporate the risk-averse nature of participants and achieve communication-efficient on-device performance.
- Score: 34.71061940229006
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We treat the problem of client selection in a Federated Learning (FL) setup,
where the learning objective and the local incentives of the participants are
used to formulate a goal-oriented communication problem. Specifically, we
incorporate the risk-averse nature of participants and achieve
communication-efficient on-device performance while relying on feedback from
the Parameter Server (\texttt{PS}). Each client has to decide on a transmission
plan, i.e., when not to participate in FL. This decision is based on its
intrinsic incentive, which is the value of the trained global model upon this
client's participation. Poor updates not only degrade the performance of the
global model while adding communication cost, but also propagate the
performance loss to the other participating devices. We cast the relevance of
local updates as \emph{semantic information} for developing local transmission
strategies, i.e., making a decision on when ``not to transmit''. The devices
use feedback about the state of the \texttt{PS} and evaluate their
contributions to training the learning model in each aggregation period, which
eventually lowers the number of occupied connections. Simulation results
validate the efficacy of the proposed approach, with up to $1.4\times$ gain in
communication-link utilization compared with the baselines.
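
To make the decision rule concrete, here is a minimal, self-contained Python sketch of one way a risk-averse client could act on \texttt{PS} feedback. It assumes the \texttt{PS} broadcasts its current global loss each round and that the client scores its candidate update by its local loss improvement plus a risk-aversion-weighted gap to the global loss; the scoring rule, threshold, and all names are illustrative assumptions, not the authors' exact formulation.

import random

RISK_AVERSION = 0.5   # weight on the client's residual gap to the global model
THRESHOLD = 0.05      # minimum estimated update value that justifies transmitting


def should_transmit(global_loss: float, loss_before: float, loss_after: float) -> bool:
    """Hypothetical relevance ('semantic') score for this round's local update."""
    improvement = loss_before - loss_after        # local progress this round
    gap = max(loss_after - global_loss, 0.0)      # how far this client lags the PS
    value = improvement + RISK_AVERSION * gap
    return value > THRESHOLD


def simulate(rounds: int = 10, seed: int = 0) -> None:
    rng = random.Random(seed)
    global_loss, local_loss = 1.0, 1.2
    used_links = 0
    for t in range(rounds):
        loss_before = local_loss
        local_loss *= rng.uniform(0.85, 0.99)     # pretend local training shrank the loss
        if should_transmit(global_loss, loss_before, local_loss):
            used_links += 1
            global_loss = 0.9 * global_loss + 0.1 * local_loss  # PS absorbs the update
        print(f"round {t}: global={global_loss:.3f} local={local_loss:.3f}")
    print(f"occupied uplinks: {used_links}/{rounds}")


if __name__ == "__main__":
    simulate()

Skipping rounds whose estimated value falls below the threshold is what frees uplink connections, mirroring the communication-link savings described above.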
Related papers
- FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models [56.21666819468249]
Federated Learning (FL) has garnered increasing attention due to its unique characteristic of allowing heterogeneous clients to process their private data locally and interact with a central server.
We introduce FedComLoc, integrating practical and effective compression into \emph{Scaffnew} to further enhance communication efficiency.
arXiv Detail & Related papers (2024-03-14T22:29:59Z) - Efficient Cross-Domain Federated Learning by MixStyle Approximation [0.3277163122167433]
We introduce a privacy-preserving, resource-efficient Federated Learning concept for client adaptation in hardware-constrained environments.
Our approach includes server model pre-training on source data and subsequent fine-tuning on target data via low-end clients.
Preliminary results indicate that our method reduces computational and transmission costs while maintaining competitive performance on downstream tasks.
arXiv Detail & Related papers (2023-12-12T08:33:34Z) - Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its own learning rate (a minimal sketch of such a per-client adaptive step appears after this list).
We show that client-specific, auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Asynchronous Online Federated Learning with Reduced Communication
Requirements [6.282767337715445]
We propose a communication-efficient asynchronous online federated learning (PAO-Fed) strategy.
By reducing the communication overhead of the participants, the proposed method renders participation in the learning task more accessible and efficient.
We conduct comprehensive simulations to study the performance of the proposed method on both synthetic and real-life datasets.
arXiv Detail & Related papers (2023-03-27T14:06:05Z) - On the Design of Communication-Efficient Federated Learning for Health
Monitoring [21.433739206682404]
We propose a communication-efficient federated learning (CEFL) framework that involves clients clustering and transfer learning.
CEFL saves up to 98.45% in communication cost while conceding less than 3% accuracy loss, compared with conventional FL.
arXiv Detail & Related papers (2022-11-30T12:52:23Z) - Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM [62.62684911017472]
Federated learning (FL) enables devices to jointly train shared models while keeping the training data local for privacy purposes.
We introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account.
VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-20T23:14:33Z) - DisPFL: Towards Communication-Efficient Personalized Federated Learning
via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z) - Over-The-Air Federated Learning under Byzantine Attacks [43.67333971183711]
Federated learning (FL) is a promising solution to enable many AI applications.
FL allows the clients to participate in the training phase, governed by a central server, without sharing their local data.
One of the main challenges of FL is the communication overhead; another is its vulnerability to Byzantine attacks.
We propose a transmission and aggregation framework to reduce the effect of such attacks.
arXiv Detail & Related papers (2022-05-05T22:09:21Z) - Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm, \texttt{AdaFL}, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
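
As noted in the FedLALR entry above, client-specific adaptive learning rates can be realized with a local AMSGrad-style update. Below is a minimal Python sketch of one client's optimizer state; the per-client base rate and the 1/sqrt(t) decay schedule are illustrative assumptions, not FedLALR's actual scheduling rule.

import numpy as np


class ClientAMSGrad:
    """One client's local AMSGrad state with its own base learning rate."""

    def __init__(self, dim: int, base_lr: float,
                 beta1: float = 0.9, beta2: float = 0.999, eps: float = 1e-8):
        self.base_lr = base_lr              # client-specific base rate
        self.beta1, self.beta2, self.eps = beta1, beta2, eps
        self.m = np.zeros(dim)              # first-moment (momentum) estimate
        self.v = np.zeros(dim)              # second-moment estimate
        self.v_hat = np.zeros(dim)          # running max of v (the AMSGrad fix)
        self.t = 0

    def step(self, params: np.ndarray, grad: np.ndarray) -> np.ndarray:
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)  # keeps effective step non-increasing
        lr = self.base_lr / np.sqrt(self.t)          # illustrative local decay schedule
        return params - lr * self.m / (np.sqrt(self.v_hat) + self.eps)


# Two clients tune their own rates while locally minimizing f(x) = ||x||^2.
for base_lr in (0.1, 0.5):
    opt = ClientAMSGrad(dim=2, base_lr=base_lr)
    x = np.ones(2)
    for _ in range(100):
        x = opt.step(x, 2 * x)               # gradient of ||x||^2
    print(f"base_lr={base_lr}: final params {x}")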