Federated Learning Meets Fairness and Differential Privacy
- URL: http://arxiv.org/abs/2108.09932v1
- Date: Mon, 23 Aug 2021 04:59:16 GMT
- Title: Federated Learning Meets Fairness and Differential Privacy
- Authors: Manisha Padala, Sankarshan Damle and Sujit Gujar
- Abstract summary: This work presents an ethical federated learning model, incorporating all three measures simultaneously.
Experiments on the Adult, Bank and Dutch datasets highlight the resulting "empirical interplay" between accuracy, fairness, and privacy.
- Score: 12.033944769247961
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning's unprecedented success raises several ethical concerns ranging
from biased predictions to data privacy. Researchers tackle these issues by
introducing fairness metrics, or federated learning, or differential privacy. As a
first, this work presents an ethical federated learning model, incorporating
all three measures simultaneously. Experiments on the Adult, Bank and Dutch
datasets highlight the resulting "empirical interplay" between accuracy,
fairness, and privacy.
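The abstract combines three ingredients: federated aggregation, differential privacy, and a fairness measure. As a rough illustration of how these pieces can fit together (a minimal sketch, not the paper's actual algorithm; all function names, learning rates, and noise scales are illustrative assumptions), the following clips and noises client updates in the style of the Gaussian mechanism before FedAvg-style averaging, and computes a demographic-parity gap of the kind a fairness-aware objective would penalize:

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_and_noise(update, clip=1.0, sigma=0.5):
    """Clip a client update to L2 norm `clip`, then add Gaussian noise
    (the standard Gaussian-mechanism recipe used in DP training)."""
    norm = np.linalg.norm(update)
    scale = min(1.0, clip / max(norm, 1e-12))  # guard against zero norm
    return update * scale + rng.normal(0.0, sigma * clip, size=update.shape)

def fedavg_round(global_w, client_grads, lr=0.1):
    """One FedAvg round: average privatized client updates, take a step."""
    noised = [clip_and_noise(g) for g in client_grads]
    return global_w - lr * np.mean(noised, axis=0)

def parity_gap(preds, groups):
    """Demographic-parity gap: difference in mean prediction between
    two protected groups (0 and 1); a fairness loss would penalize this."""
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())
```

In a fairness-aware variant, each client's local loss would add a penalty proportional to its `parity_gap`, so the privatized updates already carry the fairness signal before aggregation.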
Related papers
- Concurrent vertical and horizontal federated learning with fuzzy cognitive maps [1.104960878651584]
This research introduces a novel federated learning framework employing fuzzy cognitive maps.
It is designed to comprehensively address the challenges posed by diverse data distributions and non-identically distributed features.
The results demonstrate the effectiveness of the approach in achieving the desired learning outcomes while maintaining privacy and confidentiality standards.
arXiv Detail & Related papers (2024-12-17T12:11:14Z) - A Multivocal Literature Review on Privacy and Fairness in Federated Learning [1.6124402884077915]
Federated learning presents a way to revolutionize AI applications by eliminating the necessity for data sharing.
Recent research has demonstrated an inherent tension between privacy and fairness.
We argue that the relationship between privacy and fairness has been neglected, posing a critical risk for real-world applications.
arXiv Detail & Related papers (2024-08-16T11:15:52Z) - Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives [48.48294460952039]
This survey offers comprehensive descriptions of the privacy, security, and fairness issues in federated learning.
We contend that there exists a trade-off between privacy and fairness and between security and sharing.
arXiv Detail & Related papers (2024-06-16T10:31:45Z) - Federated Transfer Learning with Differential Privacy [21.50525027559563]
We formulate the notion of "federated differential privacy", which offers privacy guarantees for each data set without assuming a trusted central server.
We show that federated differential privacy is an intermediate privacy model between the well-established local and central models of differential privacy.
arXiv Detail & Related papers (2024-03-17T21:04:48Z) - FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution, by consolidating collaborative training across multiple data owners.
FedIT encounters limitations such as scarcity of instructional data and risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z) - A chaotic maps-based privacy-preserving distributed deep learning for incomplete and Non-IID datasets [1.30536490219656]
Federated Learning is a machine learning approach that enables the training of a deep learning model among several participants with sensitive data.
In this research, the authors employ a secured Federated Learning method with an additional layer of privacy and propose a method for addressing the non-IID challenge.
arXiv Detail & Related papers (2024-02-15T17:49:50Z) - Exploring Federated Unlearning: Analysis, Comparison, and Insights [101.64910079905566]
Federated unlearning enables the selective removal of data from models trained in federated systems.
This paper surveys existing federated unlearning approaches, examining their algorithmic efficiency, impact on model accuracy, and effectiveness in preserving privacy.
We propose the OpenFederatedUnlearning framework, a unified benchmark for evaluating federated unlearning methods.
arXiv Detail & Related papers (2023-10-30T01:34:33Z) - Privacy and Fairness in Federated Learning: on the Perspective of
Trade-off [58.204074436129716]
Federated learning (FL) has been a hot topic in recent years.
As two crucial ethical notions, the interactions between privacy and fairness are comparatively less studied.
arXiv Detail & Related papers (2023-06-25T04:38:19Z) - Towards Federated Long-Tailed Learning [76.50892783088702]
Data privacy and class imbalance are the norm rather than the exception in many machine learning tasks.
Recent attempts have been launched to, on one side, address the problem of learning from pervasive private data, and on the other side, learn from long-tailed data.
This paper focuses on learning with long-tailed (LT) data distributions under the context of the popular privacy-preserved federated learning (FL) framework.
arXiv Detail & Related papers (2022-06-30T02:34:22Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of the individuals sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
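The Lagrangian-dual idea in the entry above can be sketched on a toy scalar problem: alternate a primal gradient step on the Lagrangian with dual ascent on the multiplier. In the paper the constraint would be a fairness measure on a neural network; here it is a simple inequality chosen purely for illustration, with all constants assumed:

```python
import numpy as np

def lagrangian_dual(steps=500, lr=0.05, dual_lr=0.05):
    """Minimize (w - 3)^2 subject to g(w) = w - 1 <= 0 via the
    Lagrangian L(w, lam) = (w - 3)^2 + lam * (w - 1)."""
    w, lam = 0.0, 0.0
    for _ in range(steps):
        # primal: gradient descent step on L with respect to w
        grad_w = 2.0 * (w - 3.0) + lam
        w -= lr * grad_w
        # dual: ascent on the multiplier, projected to stay nonnegative
        lam = max(0.0, lam + dual_lr * (w - 1.0))
    return w, lam
```

At the saddle point the constraint binds (w = 1) and the multiplier settles at the value that balances the objective's gradient (lam = 4); a fairness constraint would be handled the same way, with the dual variable automatically weighting the fairness penalty.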
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.