Efficient Client Contribution Evaluation for Horizontal Federated
Learning
- URL: http://arxiv.org/abs/2102.13314v1
- Date: Fri, 26 Feb 2021 06:01:42 GMT
- Title: Efficient Client Contribution Evaluation for Horizontal Federated
Learning
- Authors: Jie Zhao, Xinghua Zhu, Jianzong Wang, Jing Xiao
- Abstract summary: The paper focuses on the horizontal FL framework, where client servers calculate parameter gradients over their local data, and upload the gradients to the central server.
The proposed method consistently outperforms the conventional leave-one-out method in terms of valuation authenticity as well as time complexity.
- Score: 20.70853611040455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In federated learning (FL), fair and accurate measurement of the contribution
of each federated participant is of great significance. The level of
contribution not only provides a rational metric for distributing financial
benefits among federated participants, but also helps to discover malicious
participants that try to poison the FL framework. Previous methods for
contribution measurement were based on enumeration over possible combinations of
federated participants. Their computation costs increase drastically with the
number of participants or feature dimensions, making them inapplicable in
practical situations. In this paper, an efficient method is proposed to evaluate
the contributions of federated participants. This paper focuses on the
horizontal FL framework, where client servers calculate parameter gradients
over their local data, and upload the gradients to the central server. Before
aggregating the client gradients, the central server trains a data value
estimator of the gradients using reinforcement learning techniques. As shown by
experimental results, the proposed method consistently outperforms the
conventional leave-one-out method in terms of valuation authenticity as well as
time complexity.
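
As a point of reference for the comparison above, the following sketch contrasts the leave-one-out baseline with a per-gradient value estimator on a toy horizontal-FL linear-regression task. It is a minimal illustration, not the paper's method: the reinforcement-learning training of the data value estimator is not reproduced, and `value_estimator` (a simple gradient-alignment heuristic), the synthetic client data, and all hyperparameters are assumptions made for the example.

```python
# Toy horizontal FL: clients compute gradients on local data, the server
# aggregates them. We compare (a) leave-one-out contribution scores, which
# require one retraining per removed client, with (b) a hypothetical
# per-gradient value estimator applied before aggregation.
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(n_clients=4, n_per_client=50, d=5, noisy_client=3):
    """Each client holds rows of a shared linear-regression task; one client
    is given pure-noise labels to mimic a low-quality or poisoning participant."""
    w_true = rng.normal(size=d)
    clients = []
    for c in range(n_clients):
        X = rng.normal(size=(n_per_client, d))
        y = X @ w_true + 0.1 * rng.normal(size=n_per_client)
        if c == noisy_client:
            y = rng.normal(size=n_per_client)
        clients.append((X, y))
    return clients

def local_gradient(w, X, y):
    """Mean-squared-error gradient computed on one client's local data."""
    return 2 * X.T @ (X @ w - y) / len(y)

def global_loss(w, clients):
    return float(np.mean([np.mean((X @ w - y) ** 2) for X, y in clients]))

def train(clients, include, rounds=50, lr=0.05, d=5):
    """FedSGD over the subset of clients listed in `include`."""
    w = np.zeros(d)
    for _ in range(rounds):
        grads = [local_gradient(w, *clients[c]) for c in include]
        w -= lr * np.mean(grads, axis=0)
    return w

clients = make_client_data()
all_ids = list(range(len(clients)))

# (a) Leave-one-out: retrain once per removed client (cost grows with the
# number of participants, and combinatorially for enumeration-based methods).
full_loss = global_loss(train(clients, all_ids), clients)
loo_scores = {}
for c in all_ids:
    rest = [i for i in all_ids if i != c]
    loo_scores[c] = global_loss(train(clients, rest), clients) - full_loss

# (b) Hypothetical value estimator: score each uploaded gradient in one pass.
def value_estimator(grad, reference_grad):
    # Stand-in heuristic: alignment with the mean gradient. The paper instead
    # learns this estimator on the server with reinforcement learning.
    return float(grad @ reference_grad /
                 (np.linalg.norm(grad) * np.linalg.norm(reference_grad) + 1e-12))

w = train(clients, all_ids)
grads = [local_gradient(w, X, y) for X, y in clients]
ref = np.mean(grads, axis=0)
est_scores = {c: value_estimator(g, ref) for c, g in enumerate(grads)}

print("leave-one-out scores:", {c: round(s, 4) for c, s in loo_scores.items()})
print("estimator scores:    ", {c: round(s, 4) for c, s in est_scores.items()})
```

The sketch also shows why leave-one-out scales poorly: it retrains the global model once per removed client, whereas a value estimator scores each uploaded gradient within a single training run.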
Related papers
- Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z) - DPVS-Shapley: Faster and Universal Contribution Evaluation Component in Federated Learning [1.740992908651449]
We introduce a component called Dynamic Pruning Validation Set Shapley (DPVS-Shapley)
This method accelerates the contribution assessment process by dynamically pruning the original dataset without compromising the evaluation's accuracy (a generic Monte Carlo Shapley sketch appears after this list).
arXiv Detail & Related papers (2024-10-19T13:01:44Z) - Redefining Contributions: Shapley-Driven Federated Learning [3.9539878659683363]
Federated learning (FL) has emerged as a pivotal approach in machine learning.
It is challenging to ensure global model convergence when participants do not contribute equally and/or honestly.
This paper proposes a novel contribution assessment method called ShapFed for fine-grained evaluation of participant contributions in FL.
arXiv Detail & Related papers (2024-06-01T22:40:31Z) - Mitigating federated learning contribution allocation instability through randomized aggregation [1.827018440608344]
Federated learning (FL) is a collaborative and privacy-preserving Machine Learning paradigm.
A critical challenge in FL lies in fairly and accurately allocating contributions from diverse participants.
Inaccurate allocation can undermine trust, lead to unfair compensation, and leave participants without the incentive to join or actively contribute to the federation.
arXiv Detail & Related papers (2024-05-13T13:55:34Z) - Don't Forget What I did?: Assessing Client Contributions in Federated
Learning [9.56869689239781]
Federated Learning (FL) is a collaborative machine learning (ML) approach, where multiple clients participate in training an ML model without exposing their private data.
We propose a history-aware game-theoretic framework, called FLContrib, to assess client contributions when a subset of clients participate in each epoch of FL training.
To demonstrate the benefits of history-aware client contributions, we apply FLContrib to detect dishonest clients conducting data poisoning in FL training.
arXiv Detail & Related papers (2024-03-11T20:39:32Z) - FedImpro: Measuring and Improving Client Update in Federated Learning [77.68805026788836]
Federated Learning (FL) models often experience client drift caused by heterogeneous data.
We present an alternative perspective on client drift and aim to mitigate it by generating improved local models.
arXiv Detail & Related papers (2024-02-10T18:14:57Z) - Contribution Evaluation in Federated Learning: Examining Current
Approaches [1.3688201404977818]
In Federated Learning, clients with private and potentially heterogeneous data and compute resources come together to train a common model without raw data ever leaving their locale.
Quantitatively evaluating the worth of these contributions is termed the Contribution Evaluation (CE) problem.
We benchmark some of the most promising state-of-the-art approaches, along with a new one we introduce, on MNIST and CIFAR-10, to showcase their differences.
arXiv Detail & Related papers (2023-11-16T12:32:44Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - DPP-based Client Selection for Federated Learning with Non-IID Data [97.1195165400568]
This paper proposes a client selection (CS) method to tackle the communication bottleneck of federated learning (FL).
We first analyze the effect of CS in FL and show that FL training can be accelerated by adequately choosing participants to diversify the training dataset in each round of training.
We leverage data profiling and determinantal point process (DPP) sampling techniques to develop an algorithm termed Federated Learning with DPP-based Participant Selection (FL-DP$^3$S).
arXiv Detail & Related papers (2023-03-30T13:14:54Z) - Federated Robustness Propagation: Sharing Adversarial Robustness in
Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users has imposed significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z) - WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
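
Several entries above (DPVS-Shapley, ShapFed, FLContrib) build on Shapley values, the enumeration-based family of contribution measures that the abstract argues against on cost grounds. The sketch below is a generic Monte Carlo (permutation-sampling) Shapley estimator, not a reproduction of any of those methods; the `utility` function and its numbers are toy stand-ins for the validation performance of a model trained on a client subset.

```python
# Minimal sketch, assuming a toy utility function: Monte Carlo (permutation
# sampling) estimate of Shapley-style client contributions. Not DPVS-Shapley,
# ShapFed, or FLContrib -- just the generic construction they build on.
import numpy as np

rng = np.random.default_rng(1)
n_clients = 5

# Toy per-client quality: client 4 is assumed to hold noisy/poisoned data.
base_value = np.array([0.30, 0.25, 0.20, 0.15, -0.05])

def utility(subset):
    """Stand-in for 'validation performance of a model trained on `subset`'.
    Clients add diminishing returns; in practice this trains and evaluates a model."""
    if not subset:
        return 0.0
    vals = sorted((base_value[c] for c in subset), reverse=True)
    return float(sum(v * 0.9 ** i for i, v in enumerate(vals)))

def monte_carlo_shapley(n_clients, utility, n_perms=2000):
    """phi[i] = average marginal utility gain when client i joins a random ordering."""
    phi = np.zeros(n_clients)
    for _ in range(n_perms):
        subset, prev_u = [], 0.0
        for c in rng.permutation(n_clients):
            subset.append(int(c))
            u = utility(subset)
            phi[c] += u - prev_u
            prev_u = u
    return phi / n_perms

phi = monte_carlo_shapley(n_clients, utility)
print("estimated Shapley contributions:", np.round(phi, 3))
```

Exact Shapley values require evaluating the utility of all 2^n client coalitions; permutation sampling trades that exponential cost for estimator variance, and methods such as DPVS-Shapley further reduce the per-evaluation cost by dynamically pruning the validation data.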