Privacy and Fairness in Federated Learning: on the Perspective of Trade-off
- URL: http://arxiv.org/abs/2306.14123v1
- Date: Sun, 25 Jun 2023 04:38:19 GMT
- Title: Privacy and Fairness in Federated Learning: on the Perspective of Trade-off
- Authors: Huiqiang Chen, Tianqing Zhu, Tao Zhang, Wanlei Zhou, Philip S. Yu
- Abstract summary: Federated learning (FL) has been a hot topic in recent years.
Although privacy and fairness are two crucial ethical notions, their interactions are comparatively less studied.
- Score: 58.204074436129716
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) has been a hot topic in recent years. Ever since it
was introduced, researchers have endeavored to devise FL systems that protect
privacy or ensure fair results, with most research focusing on one or the
other. Although privacy and fairness are two crucial ethical notions, their
interactions are comparatively less studied. However, since privacy and fairness
compete, considering each in isolation inevitably comes at the cost of the
other. To provide a broad view of these two critical topics, we present a
detailed literature review of privacy and fairness issues, highlighting the
unique challenges posed by FL and solutions in federated settings. We further
systematically survey the different interactions between privacy and fairness,
aiming to reveal how privacy and fairness affect each other and to point out
new research directions in fair and private FL.
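The privacy–fairness tension the abstract alludes to can be made concrete with a toy sketch. In DP-style federated averaging, each client update is clipped to a fixed L2 norm before noisy aggregation; clients whose updates are atypically large (often minority subpopulations) lose proportionally more of their signal. The function name `dp_aggregate`, the synthetic client updates, and all parameter values below are invented for illustration and do not come from the surveyed paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_aggregate(updates, clip=1.0, noise_mult=0.5):
    """Clip each client update to L2 norm `clip`, average, then add
    Gaussian noise scaled to the clipping bound (DP-FedAvg-style)."""
    clipped = [u * min(1.0, clip / np.linalg.norm(u)) for u in updates]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip / len(updates), size=avg.shape)
    return avg + noise

# Nine "majority" clients push the model in one direction; one "minority"
# client sends a larger update in an orthogonal direction.
majority = [np.array([1.0, 0.0]) for _ in range(9)]
minority = [np.array([0.0, 3.0])]  # norm 3, so clipping shrinks it to norm 1

agg = dp_aggregate(majority + minority)
```

Without clipping, the minority direction would contribute 3/10 = 0.3 to the average; with a clip of 1.0 it contributes only 0.1, so the DP mechanics systematically under-represent the atypical client. This is one concrete mechanism behind the trade-off the survey discusses.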
Related papers
- A Multivocal Literature Review on Privacy and Fairness in Federated Learning [1.6124402884077915]
Federated learning presents a way to revolutionize AI applications by eliminating the need for data sharing.
Recent research has demonstrated an inherent tension between privacy and fairness.
We argue that the relationship between privacy and fairness has been neglected, posing a critical risk for real-world applications.
arXiv Detail & Related papers (2024-08-16T11:15:52Z)
- Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives [48.48294460952039]
This survey offers comprehensive descriptions of the privacy, security, and fairness issues in federated learning.
We contend that there exists a trade-off between privacy and fairness and between security and sharing.
arXiv Detail & Related papers (2024-06-16T10:31:45Z)
- Toward the Tradeoffs between Privacy, Fairness and Utility in Federated Learning [10.473137837891162]
Federated Learning (FL) is a novel privacy-protection distributed machine learning paradigm.
We propose a privacy-protection fairness FL method to protect the privacy of the client model.
We characterize the relationship between privacy, fairness, and utility, and show that there is a trade-off among them.
arXiv Detail & Related papers (2023-11-30T02:19:35Z)
- Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models reveal private information in contexts that humans would not, 39% and 57% of the time, respectively.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z)
- Evaluating Trade-offs in Computer Vision Between Attribute Privacy, Fairness and Utility [9.929258066313627]
This paper investigates tradeoffs between utility, fairness and attribute privacy in computer vision.
To create a variety of models with different preferences, we use adversarial methods to intervene on attributes relating to fairness and privacy.
arXiv Detail & Related papers (2023-02-15T19:20:51Z)
- Unraveling Privacy Risks of Individual Fairness in Graph Neural Networks [66.0143583366533]
Graph neural networks (GNNs) have gained significant attention due to their expansive real-world applications.
To build trustworthy GNNs, two aspects - fairness and privacy - have emerged as critical considerations.
Previous studies have separately examined the fairness and privacy aspects of GNNs, revealing their trade-off with GNN performance.
Yet, the interplay between these two aspects remains unexplored.
arXiv Detail & Related papers (2023-01-30T14:52:23Z)
- Differential Privacy and Fairness in Decisions and Learning Tasks: A Survey [50.90773979394264]
It reviews the conditions under which privacy and fairness may have aligned or contrasting goals.
It analyzes how and why DP may exacerbate bias and unfairness in decision problems and learning tasks.
arXiv Detail & Related papers (2022-02-16T16:50:23Z)
- Federated Learning Meets Fairness and Differential Privacy [12.033944769247961]
This work presents an ethical federated learning model, incorporating all three measures simultaneously.
Experiments on the Adult, Bank and Dutch datasets highlight the resulting empirical interplay between accuracy, fairness, and privacy.
arXiv Detail & Related papers (2021-08-23T04:59:16Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated information and is not responsible for any consequences of its use.