FedVal: Different good or different bad in federated learning
- URL: http://arxiv.org/abs/2306.04040v1
- Date: Tue, 6 Jun 2023 22:11:13 GMT
- Title: FedVal: Different good or different bad in federated learning
- Authors: Viktor Valadi, Xinchi Qiu, Pedro Porto Buarque de Gusmão, Nicholas D. Lane, Mina Alibeigi
- Abstract summary: Federated learning (FL) systems are susceptible to attacks from malicious actors.
FL poses new challenges in addressing group bias, such as ensuring fair performance for different demographic groups.
Traditional methods used to address such biases require centralized access to the data, which FL systems do not have.
We present FedVal, a novel approach to both robustness and fairness that does not require any additional information from clients.
- Score: 9.558549875692808
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) systems are susceptible to attacks from malicious
actors who might attempt to corrupt the training model through various
poisoning attacks. FL also poses new challenges in addressing group bias, such
as ensuring fair performance for different demographic groups. Traditional
methods used to address such biases require centralized access to the data,
which FL systems do not have. In this paper, we present FedVal, a novel
approach to both robustness and fairness that requires no additional
information from clients that could raise privacy concerns and consequently
compromise the integrity of the FL system. To this end, we propose an
innovative score function based on a server-side validation method that
assesses client updates and determines the optimal aggregation balance between
locally-trained models. Our research shows that this approach not only provides
solid protection against poisoning attacks but can also be used to reduce group
bias and subsequently promote fairness while maintaining the system's
capability for differential privacy. Extensive experiments on the CIFAR-10,
FEMNIST, and PUMS ACSIncome datasets in different configurations demonstrate
the effectiveness of our method, resulting in state-of-the-art performance. We
demonstrate robustness in scenarios where 80% of participating clients are
malicious. Additionally, we show a significant increase in accuracy for
underrepresented labels, from 32% to 53%, and an increase in recall for
underrepresented features, from 19% to 50%.
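The core idea of scoring client updates on a server-held validation set and weighting aggregation accordingly can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the score function (inverse validation loss), the drop threshold, and the flat-list parameter representation are all assumptions made for clarity.

```python
# Hypothetical sketch of server-side, validation-weighted aggregation in the
# spirit of FedVal. Score function and threshold are illustrative assumptions.

def validation_score(update, server_val_fn):
    """Score a client update by its loss on a server-held validation set."""
    loss = server_val_fn(update)       # lower validation loss => better update
    return 1.0 / (1.0 + loss)          # map loss to (0, 1], higher is better

def aggregate(updates, server_val_fn, threshold=0.2):
    """Weight client updates by validation score; drop low-scoring updates."""
    scores = [validation_score(u, server_val_fn) for u in updates]
    kept = [(u, s) for u, s in zip(updates, scores) if s >= threshold]
    if not kept:
        raise ValueError("no client update passed validation")
    total = sum(s for _, s in kept)
    n = len(kept[0][0])                # updates are flat parameter lists
    agg = [0.0] * n
    for u, s in kept:
        w = s / total                  # normalized aggregation weight
        for i, p in enumerate(u):
            agg[i] += w * p
    return agg
```

With a toy validation function such as `lambda u: abs(sum(u)/len(u) - 2.0)`, two equally good updates `[1.0, 1.0]` and `[3.0, 3.0]` average to `[2.0, 2.0]`, while an outlier update like `[100.0, 100.0]` scores below the threshold and is excluded, which is the intuition behind using one mechanism for both poisoning defense and balancing aggregation.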
Related papers
- FedCert: Federated Accuracy Certification [8.34167718121698]
Federated Learning (FL) has emerged as a powerful paradigm for training machine learning models in a decentralized manner.
Previous studies have assessed the effectiveness of models in centralized training based on certified accuracy.
This study proposes a method named FedCert to take the first step toward evaluating the robustness of FL systems.
arXiv Detail & Related papers (2024-10-04T01:19:09Z)
- Fed-Credit: Robust Federated Learning with Credibility Management [18.349127735378048]
Federated Learning (FL) is an emerging machine learning approach enabling model training on decentralized devices or data sources.
We propose a robust FL approach based on the credibility management scheme, called Fed-Credit.
The results exhibit superior accuracy and resilience against adversarial attacks, all while maintaining comparatively low computational complexity.
arXiv Detail & Related papers (2024-05-20T03:35:13Z)
- Precision Guided Approach to Mitigate Data Poisoning Attacks in Federated Learning [4.907460152017894]
Federated Learning (FL) is a collaborative learning paradigm enabling participants to collectively train a shared machine learning model.
Current FL defense strategies against data poisoning attacks involve a trade-off between accuracy and robustness.
We present FedZZ, which harnesses a zone-based deviating update (ZBDU) mechanism to effectively counter data poisoning attacks in FL.
arXiv Detail & Related papers (2024-04-05T14:37:49Z)
- FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip to improve the client optima by periodically skipping federated averaging and scattering local models to the cross devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency and competing communication efficiency.
arXiv Detail & Related papers (2022-12-14T13:57:01Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Towards Multi-Objective Statistically Fair Federated Learning [1.2687030176231846]
Federated Learning (FL) has emerged as a result of data ownership and privacy concerns.
We propose a new FL framework that is able to satisfy multiple objectives including various statistical fairness metrics.
arXiv Detail & Related papers (2022-01-24T19:22:01Z)
- FedPrune: Towards Inclusive Federated Learning [1.308951527147782]
Federated learning (FL) is a distributed learning technique that trains a shared model over distributed data in a privacy-preserving manner.
We propose FedPrune, a system that tackles this challenge by pruning the global model for slow clients based on their device characteristics.
Using insights from the Central Limit Theorem, FedPrune incorporates a new aggregation technique that achieves robust performance over non-IID data.
arXiv Detail & Related papers (2021-10-27T06:33:38Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending it to FL users poses significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.