Risk-Aware Accelerated Wireless Federated Learning with Heterogeneous
Clients
- URL: http://arxiv.org/abs/2401.09267v1
- Date: Wed, 17 Jan 2024 15:15:52 GMT
- Title: Risk-Aware Accelerated Wireless Federated Learning with Heterogeneous
Clients
- Authors: Mohamed Ads, Hesham ElSawy and Hossam S. Hassanein
- Abstract summary: Wireless Federated Learning (FL) is an emerging distributed machine learning paradigm.
This paper proposes a novel risk-aware accelerated FL framework that accounts for the clients' heterogeneity in the amount of possessed data.
The proposed scheme is benchmarked against a conservative scheme (i.e., only allowing trustworthy devices) and an aggressive scheme (i.e., oblivious to the trust metric).
- Score: 21.104752782245257
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Wireless Federated Learning (FL) is an emerging distributed machine learning
paradigm, particularly gaining momentum in domains with confidential and
private data on mobile clients. However, the location-dependent performance, in
terms of transmission rates and susceptibility to transmission errors, poses
major challenges for wireless FL's convergence speed and accuracy. The
challenge is more acute for hostile environments without a metric that
authenticates the data quality and security profile of the clients. In this
context, this paper proposes a novel risk-aware accelerated FL framework that
accounts for the clients' heterogeneity in the amount of possessed data,
transmission rates, transmission errors, and trustworthiness. Classifying
clients according to their location-dependent performance and trustworthiness
profiles, we propose a dynamic risk-aware global model aggregation scheme that
allows clients to participate in descending order of their transmission rates
and under an ascending trustworthiness constraint. In particular, the transmission
rate is the dominant participation criterion for initial rounds to accelerate
the convergence speed. Our model then progressively relaxes the transmission
rate restriction to explore more training data at cell-edge clients. The
aggregation rounds incorporate a debiasing factor that accounts for
transmission errors. Risk-awareness is enabled by a validation set, where the
base station eliminates non-trustworthy clients at the fine-tuning stage. The
proposed scheme is benchmarked against a conservative scheme (i.e., only
allowing trustworthy devices) and an aggressive scheme (i.e., oblivious to the
trust metric). The numerical results highlight the superiority of the proposed
scheme in terms of accuracy and convergence speed when compared to both
benchmarks.
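The participation schedule and debiased aggregation described in the abstract can be sketched as a small simulation. Everything below is an illustrative assumption (the client counts, the decaying rate threshold, the exact form of the debiasing weight, and the validation cutoff), not the paper's precise algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical client population; all names and constants are illustrative.
NUM_CLIENTS, DIM, ROUNDS = 20, 5, 10
data_size = rng.integers(50, 500, NUM_CLIENTS)   # amount of possessed data
tx_rate = rng.uniform(1.0, 10.0, NUM_CLIENTS)    # location-dependent transmission rate
err_prob = rng.uniform(0.0, 0.4, NUM_CLIENTS)    # transmission error probability
trusted = rng.random(NUM_CLIENTS) > 0.2          # ground truth, unknown to the base station

global_model = np.zeros(DIM)
for t in range(ROUNDS):
    # Rate threshold decays over rounds: high-rate clients dominate early
    # (accelerating convergence), cell-edge clients are admitted later.
    rate_threshold = np.quantile(tx_rate, max(0.0, 0.9 - 0.1 * t))
    participants = np.flatnonzero(tx_rate >= rate_threshold)

    updates, weights = [], []
    for k in participants:
        local_update = rng.normal(0.1, 0.05, DIM)  # stand-in for local training
        if rng.random() < err_prob[k]:
            continue                               # update lost to a transmission error
        # Debiasing factor: dividing the data-size weight by the link success
        # probability keeps the aggregate unbiased despite lost updates.
        weights.append(data_size[k] / (1.0 - err_prob[k]))
        updates.append(local_update)
    if updates:
        w = np.array(weights)
        global_model += (w[:, None] * np.array(updates)).sum(axis=0) / w.sum()

# Fine-tuning stage: the base station scores clients on a validation set and
# eliminates non-trustworthy ones (scores here are simulated from the hidden flag).
val_score = np.where(trusted,
                     rng.uniform(0.7, 1.0, NUM_CLIENTS),
                     rng.uniform(0.0, 0.4, NUM_CLIENTS))
kept = np.flatnonzero(val_score >= 0.5)
```

In this sketch the `1 / (1 - err_prob)` factor plays the role of the paper's debiasing term: clients on error-prone links contribute less often, so their weight is inflated to keep the expected aggregate unbiased.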
Related papers
- BACSA: A Bias-Aware Client Selection Algorithm for Privacy-Preserving Federated Learning in Wireless Healthcare Networks [0.5524804393257919]
We propose the Bias-Aware Client Selection Algorithm (BACSA), which detects user bias and strategically selects clients based on their bias profiles.
BACSA is suitable for sensitive healthcare applications where Quality of Service (QoS), privacy and security are paramount.
arXiv Detail & Related papers (2024-11-01T21:34:43Z)
- FedGTST: Boosting Global Transferability of Federated Models via Statistics Tuning [26.093271475139417]
Federated Learning (FL) addresses these issues by facilitating collaboration among clients, indirectly expanding the dataset, distributing computational costs, and preserving privacy.
We propose two enhancements to FL. First, we introduce a client-server exchange protocol that leverages cross-client Jacobian norms to boost transferability.
Second, we increase the average Jacobian norm across clients at the server, using this as a local regularizer to reduce cross-client Jacobian variance.
arXiv Detail & Related papers (2024-10-16T21:13:52Z)
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
arXiv Detail & Related papers (2024-09-03T09:03:38Z) - Certifiably Byzantine-Robust Federated Conformal Prediction [49.23374238798428]
We introduce a novel framework, Rob-FCP, which executes robust federated conformal prediction, effectively countering malicious clients.
We empirically demonstrate the robustness of Rob-FCP against diverse proportions of malicious clients under a variety of Byzantine attacks.
arXiv Detail & Related papers (2024-06-04T04:43:30Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - PS-FedGAN: An Efficient Federated Learning Framework Based on Partially
Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named as PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z) - Over-the-Air Federated Learning with Privacy Protection via Correlated
Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - FedPrune: Towards Inclusive Federated Learning [1.308951527147782]
Federated learning (FL) is a distributed learning technique that trains a shared model over distributed data in a privacy-preserving manner.
We propose FedPrune, a system that tackles this challenge by pruning the global model for slow clients based on their device characteristics.
Using insights from the Central Limit Theorem, FedPrune incorporates a new aggregation technique that achieves robust performance over non-IID data.
arXiv Detail & Related papers (2021-10-27T06:33:38Z) - Quantized Federated Learning under Transmission Delay and Outage
Constraints [30.892724364965005]
Federated learning is a viable distributed learning paradigm which trains a machine learning model collaboratively with massive mobile devices in the wireless edge.
In practical systems with limited radio resources, transmission of a large number of model parameters inevitably suffers from quantization errors (QE) and transmission outage (TO).
We propose a robust FL scheme, named FedTOE, which performs joint allocation of wireless resources and quantization bits across the clients to minimize the QE while making the clients have the same TO probability.
arXiv Detail & Related papers (2021-06-17T11:29:12Z) - Multi-Armed Bandit Based Client Scheduling for Federated Learning [91.91224642616882]
Federated learning (FL) features ubiquitous properties such as reduction of communication overhead and preserving data privacy.
In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.
This work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL without knowing wireless channel state information and statistical characteristics of clients.
arXiv Detail & Related papers (2020-07-05T12:32:32Z)
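The bandit-based client scheduling described in the last entry can be illustrated with a standard UCB1 selector over unknown channel qualities. The setup below (client count, reward model, exploration bonus) is an assumption for illustration, not the cited paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: each client's reward when scheduled is a noisy sample
# of its unknown channel quality; the scheduler never observes true_quality.
NUM_CLIENTS, ROUNDS, PER_ROUND = 10, 200, 3
true_quality = rng.uniform(0.2, 0.9, NUM_CLIENTS)

counts = np.zeros(NUM_CLIENTS)
means = np.zeros(NUM_CLIENTS)
for t in range(1, ROUNDS + 1):
    # UCB1 index: empirical mean plus exploration bonus; never-scheduled
    # clients get infinite priority so every client is tried at least once.
    bonus = np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1))
    ucb = np.where(counts == 0, np.inf, means + bonus)
    chosen = np.argsort(ucb)[-PER_ROUND:]  # schedule the PER_ROUND highest indices
    for k in chosen:
        reward = rng.normal(true_quality[k], 0.05)  # noisy channel observation
        counts[k] += 1
        means[k] += (reward - means[k]) / counts[k]  # running-mean update
```

The exploration bonus shrinks as a client accumulates observations, so scheduling gradually concentrates on clients whose estimated channel quality is highest, without any prior channel state information.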
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.