Towards Trustworthy Federated Learning
- URL: http://arxiv.org/abs/2503.03684v1
- Date: Wed, 05 Mar 2025 17:25:20 GMT
- Title: Towards Trustworthy Federated Learning
- Authors: Alina Basharat, Yijun Bian, Ping Xu, Zhi Tian
- Abstract summary: This paper develops a comprehensive framework to address three critical trustworthiness challenges in federated learning (FL). To improve the system's defense against Byzantine attacks, we develop a Two-sided Norm Based Screening mechanism. We also adopt a differential privacy-based scheme to prevent raw data at local clients from being inferred by curious parties.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper develops a comprehensive framework to address three critical trustworthiness challenges in federated learning (FL): robustness against Byzantine attacks, fairness, and privacy preservation. To defend against Byzantine attacks, in which participants send malicious information to bias the system's performance, we develop a Two-sided Norm Based Screening (TNBS) mechanism, which allows the central server to trim away the gradients with the l lowest norms and the h highest norms. TNBS functions as a screening tool that filters out potentially malicious participants whose gradients lie far from the honest ones. To promote egalitarian fairness, we adopt q-fair federated learning (q-FFL). Furthermore, we adopt a differential privacy-based scheme to prevent raw data at local clients from being inferred by curious parties. Convergence guarantees are provided for the proposed framework under different scenarios. Experimental results on real datasets demonstrate that the proposed framework effectively improves robustness and fairness while managing the trade-off between privacy and accuracy. This appears to be the first study to address fairness, privacy, and robustness in trustworthy FL both experimentally and theoretically.
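To make the screening step concrete, below is a minimal sketch of TNBS-style aggregation, assuming client gradients arrive as numpy arrays. The function name, signature, and the optional Gaussian-noise stand-in for the paper's differential-privacy scheme are illustrative assumptions, not the authors' reference implementation.

```python
# Hypothetical sketch of Two-sided Norm Based Screening (TNBS).
# Names and defaults are assumptions; only the trimming rule follows
# the abstract: drop the l lowest-norm and h highest-norm gradients.
import numpy as np

def tnbs_aggregate(gradients, l, h, dp_sigma=0.0, rng=None):
    """Trim the l lowest-norm and h highest-norm client gradients,
    average the survivors, and optionally add Gaussian noise as a
    stand-in for the paper's differential-privacy step."""
    assert l + h < len(gradients), "must keep at least one gradient"
    order = np.argsort([np.linalg.norm(g) for g in gradients])
    kept = order[l:len(gradients) - h]  # two-sided trim by norm
    aggregate = np.mean([gradients[i] for i in kept], axis=0)
    if dp_sigma > 0.0:
        rng = rng if rng is not None else np.random.default_rng()
        aggregate = aggregate + rng.normal(0.0, dp_sigma, aggregate.shape)
    return aggregate
```

The intuition mirrors trimmed-mean aggregation: Byzantine gradients tend to have outlying norms, so cutting both tails of the norm distribution removes them before averaging.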
Related papers
- RESFL: An Uncertainty-Aware Framework for Responsible Federated Learning by Balancing Privacy, Fairness and Utility in Autonomous Vehicles [6.3338980105224145]
Existing FL frameworks struggle to balance privacy, fairness, and robustness, leading to performance disparities across demographic groups.
This work explores the trade-off between privacy and fairness in FL-based object detection for AVs and introduces RESFL, an integrated solution optimizing both.
RESFL incorporates adversarial privacy disentanglement and uncertainty-guided fairness-aware aggregation.
We evaluate RESFL on the FACET dataset and CARLA simulator, assessing accuracy, fairness, privacy resilience, and robustness under varying conditions.
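The entry above names uncertainty-guided aggregation without detailing it. Purely as a hypothetical illustration of the idea (the names and the inverse-uncertainty weighting below are assumptions, not RESFL's published method), a server might down-weight clients that report high predictive uncertainty:

```python
# Hypothetical illustration only -- not RESFL's published algorithm.
# Clients reporting higher predictive uncertainty contribute less
# to the aggregated global update.
import numpy as np

def uncertainty_weighted_average(updates, uncertainties, eps=1e-8):
    """Weight each client update by the inverse of its uncertainty."""
    weights = 1.0 / (np.asarray(uncertainties, dtype=float) + eps)
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, updates))
```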
arXiv Detail & Related papers (2025-03-20T15:46:03Z)
- TPFL: A Trustworthy Personalized Federated Learning Framework via Subjective Logic [13.079535924498977]
Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy.
Most FL approaches focusing solely on privacy protection fall short in scenarios where trustworthiness is crucial.
We introduce the Trustworthy Personalized Federated Learning (TPFL) framework, designed for classification tasks via subjective logic.
arXiv Detail & Related papers (2024-10-16T07:33:29Z)
- PUFFLE: Balancing Privacy, Utility, and Fairness in Federated Learning [2.8304839563562436]
Training and deploying Machine Learning models that simultaneously adhere to principles of fairness and privacy poses a significant challenge.
We introduce PUFFLE, a high-level parameterised approach that can help in the exploration of the balance between utility, privacy, and fairness in FL scenarios.
We prove that PUFFLE can be effective across diverse datasets, models, and data distributions, reducing model unfairness by up to 75% with at most a 17% reduction in utility in the worst-case scenario.
arXiv Detail & Related papers (2024-07-21T17:22:18Z)
- Certifiably Byzantine-Robust Federated Conformal Prediction [49.23374238798428]
We introduce a novel framework, Rob-FCP, which executes robust federated conformal prediction, effectively countering malicious clients.
We empirically demonstrate the robustness of Rob-FCP against diverse proportions of malicious clients under a variety of Byzantine attacks.
arXiv Detail & Related papers (2024-06-04T04:43:30Z)
- FedFDP: Fairness-Aware Federated Learning with Differential Privacy [28.58589747796768]
Federated learning (FL) is an emerging machine learning paradigm designed to address the challenge of data silos. To tackle persistent issues related to fairness and data privacy, we propose a fairness-aware FL algorithm called FedFair. Building on FedFair, we introduce differential privacy to create the FedFDP algorithm, which addresses trade-offs among fairness, privacy protection, and model performance.
arXiv Detail & Related papers (2024-02-25T08:35:21Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (f-DP) framework, as well as the Byzantine resilience of the proposed algorithm.
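As a rough sketch of the two ingredients TernaryVote combines, under assumed details (the exact quantizer scaling, privacy noise, and tie-breaking in the paper may differ):

```python
# Sketch of TernaryVote's named ingredients under assumed details:
# stochastic ternary gradient compression plus a coordinate-wise
# majority vote at the server.
import numpy as np

def ternarize(grad, rng):
    """Stochastically map each coordinate to {-1, 0, +1}, keeping the
    sign with probability proportional to the coordinate's magnitude."""
    scale = np.max(np.abs(grad)) + 1e-12
    keep = rng.random(grad.shape) < np.abs(grad) / scale
    return np.sign(grad) * keep

def majority_vote(ternary_grads):
    """Coordinate-wise sign of the summed votes; a minority of
    Byzantine senders cannot flip coordinates the majority agrees on."""
    return np.sign(np.sum(ternary_grads, axis=0))
```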
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
- Federated Conformal Predictors for Distributed Uncertainty Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework.
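As a minimal sketch of what conformal prediction looks like when lifted to FL: the score pooling below simply concatenates client scores, a simplification of the paper's partial-exchangeability treatment, and all names are assumptions.

```python
# Minimal sketch, not the paper's exact framework: clients score local
# calibration data, the server forms a global quantile, and prediction
# sets keep every label whose score falls within that threshold.
import numpy as np

def client_scores(probs, labels):
    """Nonconformity score: 1 - model probability of the true label."""
    return 1.0 - probs[np.arange(len(labels)), labels]

def federated_quantile(all_client_scores, alpha=0.1):
    """Pool client scores and take the conformal (1 - alpha) quantile."""
    scores = np.concatenate(all_client_scores)
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level)

def prediction_set(probs, q):
    """Labels (for one example's probability vector) within threshold q."""
    return np.where(1.0 - probs <= q)[0]
```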
arXiv Detail & Related papers (2023-05-27T19:57:27Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmark and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method achieves improvement with guaranteed robustness.
Our results against eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Federated Neural Collaborative Filtering [0.0]
We present a federated version of the state-of-the-art Neural Collaborative Filtering (NCF) approach for item recommendations.
The system, named FedNCF, allows learning without requiring users to expose or transmit their raw data.
We discuss the peculiarities observed in applying FL to a collaborative filtering (CF) task, and we evaluate the privacy-preserving mechanism in terms of computational cost.
arXiv Detail & Related papers (2021-06-02T21:05:41Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations of current AI systems.
We propose a tutorial on Trustworthy AI that addresses six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)