A Reputation Mechanism Is All You Need: Collaborative Fairness and
Adversarial Robustness in Federated Learning
- URL: http://arxiv.org/abs/2011.10464v2
- Date: Tue, 27 Jul 2021 12:39:59 GMT
- Title: A Reputation Mechanism Is All You Need: Collaborative Fairness and
Adversarial Robustness in Federated Learning
- Authors: Xinyi Xu and Lingjuan Lyu
- Abstract summary: Federated learning (FL) is an emerging practical framework for effective and scalable machine learning.
In conventional FL, all participants receive the global model (equal rewards), which might be unfair to the high-contributing participants.
We propose a novel RFFL framework to achieve collaborative fairness and adversarial robustness simultaneously via a reputation mechanism.
- Score: 24.442595192268872
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an emerging practical framework for effective and
scalable machine learning among multiple participants, such as end users,
organizations and companies. However, most existing FL or distributed learning
frameworks have not well addressed two important issues together: collaborative
fairness and adversarial robustness (e.g. free-riders and malicious
participants). In conventional FL, all participants receive the global model
(equal rewards), which might be unfair to the high-contributing participants.
Furthermore, due to the lack of a safeguard mechanism, free-riders or malicious
adversaries could game the system to access the global model for free or to
sabotage it. In this paper, we propose a novel Robust and Fair Federated
Learning (RFFL) framework to achieve collaborative fairness and adversarial
robustness simultaneously via a reputation mechanism. RFFL maintains a
reputation for each participant by examining their contributions via their
uploaded gradients (using vector similarity) and thus identifies
non-contributing or malicious participants to be removed. Our approach
differentiates itself by not requiring any auxiliary/validation dataset.
Extensive experiments on benchmark datasets show that RFFL can achieve high
fairness and is very robust to different types of adversaries while achieving
competitive predictive accuracy.
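The reputation mechanism described in the abstract can be sketched roughly as follows: score each participant's uploaded gradient by its vector similarity to a reputation-weighted aggregate, smooth the score into a running reputation, and drop participants whose reputation falls below a cutoff. This is a minimal illustration, not the paper's exact algorithm; the function names, the cosine-similarity choice, the moving-average update, and the `alpha`/`threshold` parameters are all assumptions for the sketch.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two flattened gradient vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def reputation_round(gradients, reputations, alpha=0.9, threshold=0.05):
    """One aggregation round of a reputation-weighted scheme (illustrative).

    gradients:   dict participant_id -> flattened gradient (np.ndarray)
    reputations: dict participant_id -> current reputation weight
    Returns the aggregated gradient and the updated reputations.
    """
    # Reputation-weighted aggregate serves as the reference direction;
    # no auxiliary/validation dataset is needed.
    total = sum(reputations[p] for p in gradients)
    agg = sum(reputations[p] / total * gradients[p] for p in gradients)

    new_rep = {}
    for p, g in gradients.items():
        # Contribution score: alignment of the upload with the aggregate.
        score = max(cosine_sim(g, agg), 0.0)
        # Exponential moving average keeps reputation stable across rounds.
        new_rep[p] = alpha * reputations[p] + (1 - alpha) * score

    # Drop low-reputation (free-riding or malicious) participants,
    # then renormalize the survivors' weights.
    kept = {p: r for p, r in new_rep.items() if r >= threshold}
    z = sum(kept.values())
    kept = {p: r / z for p, r in kept.items()}
    return agg, kept
```

In this sketch, a participant uploading a gradient opposed to the consensus direction (e.g. a sign-flipping adversary) receives a near-zero score, so its reputation decays each round until it is removed from aggregation.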
Related papers
- Redefining Contributions: Shapley-Driven Federated Learning [3.9539878659683363]
Federated learning (FL) has emerged as a pivotal approach in machine learning.
It is challenging to ensure global model convergence when participants do not contribute equally and/or honestly.
This paper proposes a novel contribution assessment method called ShapFed for fine-grained evaluation of participant contributions in FL.
arXiv Detail & Related papers (2024-06-01T22:40:31Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmark and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- FedABC: Targeting Fair Competition in Personalized Federated Learning [76.9646903596757]
Federated learning aims to collaboratively train models without accessing clients' local private data.
We propose a novel and generic PFL framework termed Federated Averaging via Binary Classification, dubbed FedABC.
In particular, we adopt the "one-vs-all" training strategy in each client to alleviate unfair competition between classes.
arXiv Detail & Related papers (2023-02-15T03:42:59Z)
- Accelerating Fair Federated Learning: Adaptive Federated Adam [0.0]
When datasets are not independent and identically distributed (non-IID), models trained by naive federated algorithms may be biased towards certain participants.
This is known as the fairness problem in federated learning.
We propose Adaptive Federated Adam (AdaFedAdam) to accelerate fair federated learning with alleviated bias.
arXiv Detail & Related papers (2023-01-23T10:56:12Z)
- Improving Robust Fairness via Balance Adversarial Training [51.67643171193376]
Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce severe disparity of accuracy and robustness between different classes.
We propose Balance Adversarial Training (BAT) to address the robust fairness problem.
arXiv Detail & Related papers (2022-09-15T14:44:48Z)
- FairFed: Enabling Group Fairness in Federated Learning [22.913999279079878]
Federated learning has been viewed as a promising solution for learning machine learning models among multiple parties.
We propose FairFed, a novel algorithm to enhance group fairness via a fairness-aware aggregation method.
Our proposed method outperforms state-of-the-art fair federated learning frameworks under highly heterogeneous sensitive attribute distributions.
arXiv Detail & Related papers (2021-10-02T17:55:20Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending it to FL users poses significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-IID users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- Collaborative Fairness in Federated Learning [24.7378023761443]
We propose a novel Collaborative Fair Federated Learning (CFFL) framework for deep learning.
CFFL enforces convergence of participants to different models, thus achieving fairness without compromising predictive performance.
Experiments on benchmark datasets demonstrate that CFFL achieves high fairness and delivers comparable accuracy to the Distributed framework.
arXiv Detail & Related papers (2020-08-27T14:39:09Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.