Holistic Survey of Privacy and Fairness in Machine Learning
- URL: http://arxiv.org/abs/2307.15838v1
- Date: Fri, 28 Jul 2023 23:39:29 GMT
- Title: Holistic Survey of Privacy and Fairness in Machine Learning
- Authors: Sina Shaham, Arash Hajisafi, Minh K Quan, Dinh C Nguyen, Bhaskar
Krishnamachari, Charith Peris, Gabriel Ghinita, Cyrus Shahabi, Pubudu N.
Pathirana
- Abstract summary: Privacy and fairness are crucial pillars of responsible Artificial Intelligence (AI) and trustworthy Machine Learning (ML).
Despite significant interest, there remains an immediate demand for more in-depth research to unravel how these two objectives can be simultaneously integrated into ML models.
We provide a thorough review of privacy and fairness in ML, including supervised, unsupervised, semi-supervised, and reinforcement learning.
- Score: 10.399352534861292
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy and fairness are two crucial pillars of responsible Artificial
Intelligence (AI) and trustworthy Machine Learning (ML). Each objective has
been independently studied in the literature with the aim of reducing utility
loss in achieving them. Despite the significant interest attracted from both
academia and industry, there remains an immediate demand for more in-depth
research to unravel how these two objectives can be simultaneously integrated
into ML models. Unlike the well-accepted privacy-utility and fairness-utility
trade-offs, the interrelation between privacy and fairness is not well
understood. While some works suggest a trade-off between the two objectives,
others demonstrate that they align in certain scenarios. To fill this research
gap, we provide a thorough review
of privacy and fairness in ML, including supervised, unsupervised,
semi-supervised, and reinforcement learning. After examining and consolidating
the literature on both objectives, we present a holistic survey on the impact
of privacy on fairness, the impact of fairness on privacy, existing
architectures, their interaction in application domains, and algorithms that
aim to achieve both objectives while minimizing the utility sacrificed.
Finally, we identify research challenges in achieving privacy and fairness
concurrently in ML, particularly focusing on large language models.
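The interaction the abstract highlights can be made concrete with a toy experiment. The sketch below is our illustration, not an experiment from the survey: a logistic model is trained on synthetic data with DP-SGD-style per-example clipping and Gaussian noise, and per-group accuracy is compared against noiseless training. The data, group rules, and noise scale are all made up, and no calibrated (epsilon, delta) accounting is attempted.

```python
# Toy sketch (ours, not the survey's): DP-SGD-style clipping + noise
# tends to cost the underrepresented group more accuracy, one concrete
# instance of the privacy-fairness interaction the survey examines.
import numpy as np

rng = np.random.default_rng(0)

# Majority group (n=1000) follows the rule y = 1[x0 > 0]; the small
# minority group (n=50) follows a tilted rule y = 1[x0 + 2*x1 > 0].
Xa = rng.normal(0.0, 1.0, size=(1000, 2))
ya = (Xa[:, 0] > 0).astype(float)
Xb = rng.normal(0.0, 1.0, size=(50, 2))
yb = (Xb[:, 0] + 2.0 * Xb[:, 1] > 0).astype(float)
X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])
group = np.array([0] * 1000 + [1] * 50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(noise_mult, clip=1.0, lr=0.5, epochs=200):
    w = np.zeros(2)
    for _ in range(epochs):
        # Per-example gradients of the logistic loss.
        g = (sigmoid(X @ w) - y)[:, None] * X
        # Clip each per-example gradient to norm <= clip, then average.
        scale = np.maximum(np.linalg.norm(g, axis=1, keepdims=True) / clip, 1.0)
        g_bar = (g / scale).mean(axis=0)
        # Gaussian noise scaled as in DP-SGD (illustrative only; no
        # privacy accounting is done here).
        g_bar += rng.normal(0.0, noise_mult * clip / len(X), size=2)
        w -= lr * g_bar
    return w

for noise in (0.0, 50.0):
    w = train(noise)
    correct = (sigmoid(X @ w) > 0.5) == y
    print(f"noise={noise:5.1f}  majority acc={correct[group == 0].mean():.2f}"
          f"  minority acc={correct[group == 1].mean():.2f}")
```

On runs like this, the minority group typically loses more accuracy as the noise grows, consistent with reports in the DP literature that noisy training disproportionately affects underrepresented groups.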
Related papers
- A Multivocal Literature Review on Privacy and Fairness in Federated Learning [1.6124402884077915]
Federated learning offers a way to revolutionize AI applications by eliminating the need for data sharing.
Recent research has demonstrated an inherent tension between privacy and fairness.
We argue that the relationship between privacy and fairness has been neglected, posing a critical risk for real-world applications.
arXiv Detail & Related papers (2024-08-16T11:15:52Z)
- Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives [48.48294460952039]
This survey offers comprehensive descriptions of the privacy, security, and fairness issues in federated learning.
We contend that there exists a trade-off between privacy and fairness and between security and sharing.
arXiv Detail & Related papers (2024-06-16T10:31:45Z)
- Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs).
This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
arXiv Detail & Related papers (2024-02-13T20:51:58Z)
- Robustness, Efficiency, or Privacy: Pick Two in Machine Learning [7.278033100480175]
This paper examines the costs associated with achieving privacy and robustness in distributed machine learning architectures.
Traditional noise injection hurts accuracy because it also conceals poisoned inputs from detection, while cryptographic methods clash with poisoning defenses because those defenses are non-linear.
We outline future research directions aimed at reconciling this trade-off with efficiency by considering weaker threat models.
arXiv Detail & Related papers (2023-12-22T14:10:07Z)
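The concealment effect this entry describes can be sketched in a few lines. The construction below is ours, not the paper's experiment: honest client updates cluster tightly, a single poisoned update is an outlier, and a simple distance-to-median detector catches it until local privacy noise is layered on. The detector, threshold, and noise scale are all hypothetical.

```python
# Toy construction (ours, not the paper's experiment): privacy noise
# added to client updates can mask a poisoned update from a simple
# distance-to-median anomaly detector.
import numpy as np

rng = np.random.default_rng(1)
d, n_honest = 20, 30

honest = rng.normal(0.0, 0.1, size=(n_honest, d))  # tightly clustered updates
poisoned = np.full((1, d), 1.0)                    # adversarial outlier update
updates = np.vstack([honest, poisoned])

def flagged(upds, thresh=3.0):
    """Flag updates far from the coordinate-wise median, in MAD units."""
    med = np.median(upds, axis=0)
    dists = np.linalg.norm(upds - med, axis=1)
    mad = np.median(np.abs(dists - np.median(dists)))
    return dists > np.median(dists) + thresh * mad

print("no noise, poisoned update flagged: ", bool(flagged(updates)[-1]))

# Local-DP-style Gaussian noise on every update (scale is hypothetical).
noisy = updates + rng.normal(0.0, 2.0, size=updates.shape)
print("with noise, poisoned update flagged:", bool(flagged(noisy)[-1]))
```

The geometry is the point of the toy: noise wide enough to hide an individual contribution can also be wide enough to hide an attacker's deviation.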
- Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models reveal private information in contexts that humans would not, 39% and 57% of the time for the two most capable models evaluated.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z)
- Privacy and Fairness in Federated Learning: on the Perspective of Trade-off [58.204074436129716]
Federated learning (FL) has been a hot topic in recent years.
The interactions between privacy and fairness, two crucial ethical notions, are comparatively less studied.
arXiv Detail & Related papers (2023-06-25T04:38:19Z)
- Learning with Impartiality to Walk on the Pareto Frontier of Fairness, Privacy, and Utility [28.946180502706504]
We argue that machine learning pipelines should not favor one objective over another.
We propose impartially-specified models that show the inherent trade-offs between the objectives.
We provide an answer to the question of where fairness mitigation should be integrated within a privacy-aware ML pipeline.
arXiv Detail & Related papers (2023-02-17T23:23:45Z)
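The frontier idea in this entry can be illustrated with a toy sweep, under our own assumptions rather than the paper's actual method: a fairness penalty with weight lam is added to a logistic-regression loss, and each lam yields one point on an empirical accuracy-versus-parity-gap curve; a privacy mechanism would add the third axis the paper studies.

```python
# Toy sweep (ours, not the paper's method): each fairness weight lam
# yields one point on an empirical utility-fairness curve.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
s = rng.integers(0, 2, n)                      # sensitive attribute
X = rng.normal(0.0, 1.0, (n, 2)) + s[:, None]  # features correlate with s
y = (X[:, 0] - 0.5 * s + rng.normal(0.0, 0.5, n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, lr=0.2, epochs=300):
    w = np.zeros(2)
    for _ in range(epochs):
        p = sigmoid(X @ w)
        g_task = ((p - y)[:, None] * X).mean(axis=0)
        # Gradient of the demographic-parity gap |E[p|s=1] - E[p|s=0]|.
        gap = p[s == 1].mean() - p[s == 0].mean()
        dp = (p * (1.0 - p))[:, None] * X
        g_fair = np.sign(gap) * (dp[s == 1].mean(axis=0) - dp[s == 0].mean(axis=0))
        w -= lr * (g_task + lam * g_fair)
    return w

for lam in (0.0, 0.5, 2.0, 8.0):
    w = train(lam)
    pred = sigmoid(X @ w) > 0.5
    acc = (pred == y).mean()
    gap = abs(pred[s == 1].mean() - pred[s == 0].mean())
    print(f"lam={lam:4.1f}  accuracy={acc:.3f}  parity gap={gap:.3f}")
```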
- Understanding the origin of information-seeking exploration in probabilistic objectives for control [62.997667081978825]
An exploration-exploitation trade-off is central to the description of adaptive behaviour.
One approach to resolving this trade-off has been to equip agents with, or propose that they possess, an intrinsic 'exploratory drive'.
We show that this combination of utility-maximizing and information-seeking behaviour arises from the minimization of an entirely different class of objectives.
arXiv Detail & Related papers (2021-03-11T18:42:39Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
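The Lagrangian idea in this entry can be sketched compactly. The following is our simplified, non-private rendition: a logistic model and a demographic-parity constraint |gap| <= eps stand in for the paper's neural networks and fairness constraints, and the differentially private training step is omitted; a primal gradient step on the Lagrangian alternates with dual ascent on the multiplier. Data, eps, and step sizes are illustrative.

```python
# Simplified, non-private sketch of a Lagrangian-dual fairness
# constraint (ours; the paper pairs this style of dual update with
# differentially private neural-network training).
import numpy as np

rng = np.random.default_rng(3)
n = 2000
s = rng.integers(0, 2, n)
X = rng.normal(0.0, 1.0, (n, 2)) + s[:, None]
y = (X[:, 0] - 0.5 * s + rng.normal(0.0, 0.5, n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eps = 0.02   # allowed demographic-parity gap (illustrative)
lam = 0.0    # Lagrange multiplier
w = np.zeros(2)
for _ in range(500):
    p = sigmoid(X @ w)
    gap = p[s == 1].mean() - p[s == 0].mean()
    # Primal step on the Lagrangian: task loss + lam * constraint term.
    g_task = ((p - y)[:, None] * X).mean(axis=0)
    dp = (p * (1.0 - p))[:, None] * X
    g_gap = np.sign(gap) * (dp[s == 1].mean(axis=0) - dp[s == 0].mean(axis=0))
    w -= 0.2 * (g_task + lam * g_gap)
    # Dual ascent: raise lam while the constraint |gap| <= eps is violated.
    lam = max(0.0, lam + 0.5 * (abs(gap) - eps))

pred = sigmoid(X @ w) > 0.5
print(f"accuracy={(pred == y).mean():.3f}  "
      f"parity gap={abs(pred[s == 1].mean() - pred[s == 0].mean()):.3f}  "
      f"lam={lam:.2f}")
```

Unlike a fixed penalty weight, the dual variable adapts until the constraint is (approximately) met, which is what lets this formulation target a specific fairness level rather than a point on a sweep.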