A Multivocal Literature Review on Privacy and Fairness in Federated Learning
- URL: http://arxiv.org/abs/2408.08666v2
- Date: Sun, 27 Oct 2024 11:08:39 GMT
- Title: A Multivocal Literature Review on Privacy and Fairness in Federated Learning
- Authors: Beatrice Balbierer, Lukas Heinlein, Domenique Zipperling, Niklas Kühl
- Abstract summary: Federated learning presents a way to revolutionize AI applications by eliminating the necessity for data sharing.
Recent research has demonstrated an inherent tension between privacy and fairness.
We argue that the relationship between privacy and fairness has been neglected, posing a critical risk for real-world applications.
- Score: 1.6124402884077915
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning presents a way to revolutionize AI applications by eliminating the necessity for data sharing. Yet, research has shown that information can still be extracted during training, making additional privacy-preserving measures such as differential privacy imperative. To implement real-world federated learning applications, fairness, ranging from a fair distribution of performance to non-discriminative behaviour, must be considered. Particularly in high-risk applications (e.g. healthcare), avoiding the repetition of past discriminatory errors is paramount. As recent research has demonstrated an inherent tension between privacy and fairness, we conduct a multivocal literature review to examine the current methods to integrate privacy and fairness in federated learning. Our analyses illustrate that the relationship between privacy and fairness has been neglected, posing a critical risk for real-world applications. We highlight the need to explore the relationship between privacy, fairness, and performance, advocating for the creation of integrated federated learning frameworks.
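The clipping-and-noise recipe that differential privacy adds to federated training can be made concrete with a short sketch. Below is a minimal, DP-FedAvg-style illustration in Python; the function name, hyperparameters, and toy updates are assumptions for this example, not taken from the review.

```python
# A minimal DP-FedAvg-style sketch: each client update is L2-clipped and
# Gaussian noise is added before server-side averaging. Hyperparameters are
# illustrative, not from the paper.
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    rng = np.random.default_rng(seed)
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale down any update whose L2 norm exceeds the clipping threshold.
        clipped.append(update * min(1.0, clip_norm / max(norm, 1e-12)))
    aggregate = np.mean(clipped, axis=0)
    # Gaussian mechanism: noise scale follows the sensitivity clip_norm / n_clients.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return aggregate + rng.normal(0.0, sigma, size=aggregate.shape)

# Three simulated client updates for a four-parameter model.
updates = [c * np.ones(4) for c in (0.5, 1.5, -0.8)]
print(dp_federated_average(updates))
```

The clipping bound caps each client's influence on the average, which is what lets the added Gaussian noise yield a differential-privacy guarantee.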
Related papers
- Concurrent vertical and horizontal federated learning with fuzzy cognitive maps [1.104960878651584]
This research introduces a novel federated learning framework employing fuzzy cognitive maps.
It is designed to comprehensively address the challenges posed by diverse data distributions and non-identically distributed features.
The results demonstrate the effectiveness of the approach in achieving the desired learning outcomes while maintaining privacy and confidentiality standards.
arXiv Detail & Related papers (2024-12-17T12:11:14Z)
- DEAN: Deactivating the Coupled Neurons to Mitigate Fairness-Privacy Conflicts in Large Language Models [34.73299042341535]
We introduce a training-free method to DEActivate the fairness and privacy coupled Neurons (DEAN), which theoretically and empirically decreases the mutual information between fairness and privacy awareness.
Extensive experimental results demonstrate that DEAN eliminates the trade-off phenomenon and significantly improves LLMs' fairness and privacy awareness simultaneously.
arXiv Detail & Related papers (2024-10-22T04:08:27Z)
- Towards Split Learning-based Privacy-Preserving Record Linkage [49.1574468325115]
Split Learning has been introduced to facilitate applications where user data privacy is a requirement.
In this paper, we investigate the potential of Split Learning for Privacy-Preserving Record Matching.
arXiv Detail & Related papers (2024-09-02T09:17:05Z)
- Differentially Private Federated Learning: A Systematic Review [35.13641504685795]
We propose a new taxonomy of differentially private federated learning based on the definitions and guarantees of the various differential privacy models and scenarios.
Our work provides valuable insights into privacy-preserving federated learning and suggests practical directions for future research.
arXiv Detail & Related papers (2024-05-14T03:49:14Z)
- Privacy for Fairness: Information Obfuscation for Fair Representation Learning with Local Differential Privacy [26.307780067808565]
This study introduces a theoretical framework that enables a comprehensive examination of the interplay between privacy and fairness.
We develop and analyze an information bottleneck (IB) based information obfuscation method with local differential privacy (LDP) for fair representation learning.
In contrast to many empirical studies on fairness in ML, we show that incorporating LDP randomizers during the encoding process can enhance the fairness of the learned representation (see the randomized-response sketch after this list).
arXiv Detail & Related papers (2024-02-16T06:35:10Z)
- Exploring Federated Unlearning: Analysis, Comparison, and Insights [101.64910079905566]
Federated unlearning enables the selective removal of data from models trained in federated systems.
This paper surveys existing federated unlearning approaches, examining their algorithmic efficiency, impact on model accuracy, and effectiveness in preserving privacy.
We propose the OpenFederatedUnlearning framework, a unified benchmark for evaluating federated unlearning methods.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models reveal private information in contexts that humans would not, doing so 39% and 57% of the time.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z)
- Holistic Survey of Privacy and Fairness in Machine Learning [10.399352534861292]
Privacy and fairness are crucial pillars of responsible Artificial Intelligence (AI) and trustworthy Machine Learning (ML).
Despite significant interest, there remains an immediate demand for more in-depth research to unravel how these two objectives can be simultaneously integrated into ML models.
We provide a thorough review of privacy and fairness in ML, including supervised, unsupervised, semi-supervised, and reinforcement learning.
arXiv Detail & Related papers (2023-07-28T23:39:29Z)
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning [58.107474025048866]
Forgetting refers to the loss or deterioration of previously acquired knowledge.
Forgetting is a prevalent phenomenon observed in various other research domains within deep learning.
arXiv Detail & Related papers (2023-07-16T16:27:58Z)
- Privacy and Fairness in Federated Learning: on the Perspective of Trade-off [58.204074436129716]
Federated learning (FL) has been a hot topic in recent years.
Although privacy and fairness are two crucial ethical notions, their interactions are comparatively less studied.
arXiv Detail & Related papers (2023-06-25T04:38:19Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints (a minimal primal-dual sketch follows this list).
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
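For the "Privacy for Fairness" entry above, a local differential privacy randomizer can be illustrated with classic randomized response on a binary sensitive attribute. This is a minimal sketch under assumed names; the paper's information-bottleneck obfuscation operates on learned representations rather than raw bits.

```python
# Epsilon-LDP randomized response: report the true bit with probability
# e^eps / (e^eps + 1), otherwise flip it. Smaller epsilon means reports
# are closer to uniform and the sensitive attribute is better hidden.
import math
import random

def randomized_response(bit, epsilon, rng=random.Random(0)):
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

# Ten noisy reports of the same true bit under a tight privacy budget.
print([randomized_response(1, epsilon=0.5) for _ in range(10)])
```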
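For the Lagrangian dual entry, the core idea of trading accuracy against a fairness constraint can be sketched with a toy one-parameter model: the primal step descends the loss plus a multiplier-weighted constraint violation, and the dual step raises the multiplier while the constraint is violated. All names and numbers below are illustrative rather than the paper's implementation, and the paper additionally combines this idea with differential privacy.

```python
# Toy primal-dual (Lagrangian) training: minimize MSE subject to a
# demographic-parity-style constraint |mean prediction gap| <= tau.
import numpy as np

rng = np.random.default_rng(0)
x_a, x_b = rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)  # two groups
y_a, y_b = x_a + 0.5, x_b + 1.5  # group-specific regression targets

w, lam, tau, lr = 0.0, 0.0, 0.1, 0.05
for _ in range(200):
    pred_gap = w * (x_b.mean() - x_a.mean())  # fairness gap between groups
    # Primal step: gradient of MSE plus lambda-weighted constraint gradient.
    grad_mse = np.mean(2 * (w * x_a - y_a) * x_a) + np.mean(2 * (w * x_b - y_b) * x_b)
    grad_con = lam * np.sign(pred_gap) * (x_b.mean() - x_a.mean())
    w -= lr * (grad_mse + grad_con)
    # Dual step: raise lambda while |gap| exceeds tau, never below zero.
    lam = max(0.0, lam + lr * (abs(pred_gap) - tau))

print(f"w={w:.3f}  lambda={lam:.3f}  |gap|={abs(w * (x_b.mean() - x_a.mean())):.3f}")
```

As the multiplier grows, the optimizer sacrifices some accuracy to keep the group gap within tau, which mirrors the accuracy-fairness trade-off the entry describes.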