Towards Fair, Robust and Efficient Client Contribution Evaluation in
Federated Learning
- URL: http://arxiv.org/abs/2402.04409v1
- Date: Tue, 6 Feb 2024 21:07:12 GMT
- Title: Towards Fair, Robust and Efficient Client Contribution Evaluation in
Federated Learning
- Authors: Meiying Zhang, Huan Zhao, Sheldon Ebron, Kan Yang
- Abstract summary: We introduce a novel method called Fair, Robust, and Efficient Client Assessment (FRECA) for quantifying client contributions in Federated Learning (FL).
FRECA employs a framework called FedTruth to estimate the global model's ground truth update, balancing contributions from all clients while filtering out impacts from malicious ones.
Our experimental results show that FRECA can accurately and efficiently quantify client contributions in a robust manner.
- Score: 16.543724155324938
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The performance of clients in Federated Learning (FL) can vary due to various
reasons. Assessing the contributions of each client is crucial for client
selection and compensation. It is challenging because clients often have
non-independent and identically distributed (non-iid) data, leading to
potentially noisy or divergent updates. The risk of malicious clients amplifies
the challenge, especially when there is no access to clients' local data or a
benchmark root dataset. In this paper, we introduce a novel method called Fair,
Robust, and Efficient Client Assessment (FRECA) for quantifying client
contributions in FL. FRECA employs a framework called FedTruth to estimate the
global model's ground truth update, balancing contributions from all clients
while filtering out impacts from malicious ones. This approach is robust
against Byzantine attacks and incorporates a Byzantine-resilient aggregation
algorithm. FRECA is also efficient, as it operates solely on local model
updates and requires no validation operations or datasets. Our experimental
results show that FRECA can accurately and efficiently quantify client
contributions in a robust manner.
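The abstract does not give FRECA's exact formulas, but the core idea (score each client's update against a Byzantine-resilient estimate of the ground-truth global update, using only the local model updates themselves) can be sketched as follows. This is a minimal illustration in Python: the helper names (estimate_ground_truth, score_contributions, trim_ratio) are hypothetical, and the coordinate-wise trimmed mean and cosine-similarity scoring are stand-ins for illustration, not the paper's actual FedTruth estimator or scoring rule.

import numpy as np

def estimate_ground_truth(updates, trim_ratio=0.2):
    # Stand-in for a Byzantine-resilient "ground truth" estimate (e.g. FedTruth):
    # a coordinate-wise trimmed mean that drops the most extreme values
    # before averaging across clients.
    U = np.stack(updates)                      # (n_clients, n_params)
    k = int(trim_ratio * U.shape[0])
    U_sorted = np.sort(U, axis=0)
    trimmed = U_sorted[k:U.shape[0] - k] if k > 0 else U_sorted
    return trimmed.mean(axis=0)

def score_contributions(updates, ground_truth, eps=1e-12):
    # Score each client by the cosine similarity between its update and the
    # estimated ground-truth update; negative similarities are clipped to zero
    # so divergent or malicious updates receive (near-)zero contribution.
    gt_norm = np.linalg.norm(ground_truth) + eps
    sims = [max(0.0, float(np.dot(u, ground_truth)) /
                (np.linalg.norm(u) * gt_norm + eps)) for u in updates]
    total = sum(sims) + eps
    return [s / total for s in sims]           # normalized contribution scores

# Toy usage: three honest clients and one sign-flipping attacker.
rng = np.random.default_rng(0)
true_update = rng.normal(size=1000)
updates = [true_update + 0.1 * rng.normal(size=1000) for _ in range(3)]
updates.append(-true_update)                   # malicious update
gt = estimate_ground_truth(updates)
print(score_contributions(updates, gt))        # attacker's score is ~0

Note that nothing in this sketch requires a validation dataset or access to client data, which is the efficiency property the abstract emphasizes.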
Related papers
- ConDa: Fast Federated Unlearning with Contribution Dampening [46.074452659791575]
ConDa is a framework that performs efficient unlearning by tracking down the parameters which affect the global model for each client.
We perform experiments on multiple datasets and demonstrate that ConDa is effective at forgetting a client's data.
arXiv Detail & Related papers (2024-10-05T12:45:35Z)
- FedAR: Addressing Client Unavailability in Federated Learning with Local Update Approximation and Rectification [8.747592727421596]
Federated learning (FL) enables clients to collaboratively train machine learning models under the coordination of a server.
FedAR involves all clients in the global model update, achieving a high-quality global model on the server.
FedAR also exhibits impressive performance in the presence of a large number of clients with severe unavailability.
arXiv Detail & Related papers (2024-07-26T21:56:52Z)
- Federated Graph-based Sampling with Arbitrary Client Availability [34.95352685954059]
We propose a framework named Federated Graph-based Sampling (FedGS) to simultaneously stabilize the global model update and mitigate long-term bias under arbitrary client availability.
Our experimental results confirm FedGS's advantage in both enabling a fair client-sampling scheme and improving the model performance under arbitrary client availability.
arXiv Detail & Related papers (2022-11-25T09:38:20Z)
- Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175]
We propose a federated active learning paradigm to efficiently learn a global model with limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
arXiv Detail & Related papers (2022-11-24T13:08:43Z)
- Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large quantities to amplify the impact of their model updates in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
arXiv Detail & Related papers (2022-05-22T15:13:23Z)
- Federated Learning Under Intermittent Client Availability and Time-Varying Communication Constraints [29.897785907692644]
Federated learning systems operate in settings with intermittent client availability and/or time-varying communication constraints.
We propose F3AST, an unbiased algorithm that learns an availability-dependent client selection strategy.
We show up to 186% and 8% accuracy improvements over FedAvg, and 8% and 7% over FedAdam on CIFAR100 and Shakespeare, respectively.
arXiv Detail & Related papers (2022-05-13T16:08:58Z)
- Communication-Efficient Federated Learning with Accelerated Client Gradient [46.81082897703729]
Federated learning often suffers from slow and unstable convergence due to the heterogeneous characteristics of participating client datasets.
We propose a simple but effective federated learning framework, which improves the consistency across clients and facilitates the convergence of the server model.
We provide the theoretical convergence rate of our algorithm and demonstrate remarkable performance gains in terms of accuracy and communication efficiency.
arXiv Detail & Related papers (2022-01-10T05:31:07Z)
- Federated Noisy Client Learning [105.00756772827066]
Federated learning (FL) collaboratively aggregates a shared global model from multiple local clients.
Standard FL methods ignore the noisy client issue, which may harm the overall performance of the aggregated model.
We propose Federated Noisy Client Learning (Fed-NCL), which is a plug-and-play algorithm and contains two main components.
arXiv Detail & Related papers (2021-06-24T11:09:17Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
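The last entry above does not describe how Influence is estimated; one common way to make the notion concrete is a leave-one-out comparison of the aggregated parameters with and without a given client. The sketch below illustrates that flavor only; the helpers (aggregate, client_influence) are hypothetical and are not the estimator proposed in that paper.

import numpy as np

def aggregate(updates):
    # Plain FedAvg-style aggregation: unweighted mean of client updates.
    return np.stack(updates).mean(axis=0)

def client_influence(updates, i):
    # Leave-one-out influence of client i: how far the aggregated parameters
    # move when client i's update is excluded from aggregation.
    full = aggregate(updates)
    without_i = aggregate([u for j, u in enumerate(updates) if j != i])
    return float(np.linalg.norm(full - without_i))

# Toy usage: five clients, one of which contributes a much larger update.
rng = np.random.default_rng(1)
updates = [rng.normal(size=500) for _ in range(4)]
updates.append(5.0 * rng.normal(size=500))     # outsized contributor
print([round(client_influence(updates, i), 3) for i in range(5)])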
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.