Evaluating Trade-offs in Computer Vision Between Attribute Privacy,
Fairness and Utility
- URL: http://arxiv.org/abs/2302.07917v1
- Date: Wed, 15 Feb 2023 19:20:51 GMT
- Title: Evaluating Trade-offs in Computer Vision Between Attribute Privacy,
Fairness and Utility
- Authors: William Paul, Philip Mathew, Fady Alajaji, Philippe Burlina
- Abstract summary: This paper investigates tradeoffs between utility, fairness and attribute privacy in computer vision.
To create a variety of models with different preferences, we use adversarial methods to intervene on attributes relating to fairness and privacy.
- Score: 9.929258066313627
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates to what degree and magnitude tradeoffs exist between
utility, fairness and attribute privacy in computer vision. Regarding privacy,
we look at this important problem specifically in the context of attribute
inference attacks, a less addressed form of privacy. To create a variety of
models with different preferences, we use adversarial methods to intervene on
attributes relating to fairness and privacy. We see that certain tradeoffs
exist between fairness and utility, privacy and utility, and between privacy
and fairness. The results also show that those tradeoffs and interactions are
more complex and nonlinear between the three goals than intuition would
suggest.
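The adversarial intervention described in the abstract can be illustrated with a minimal sketch. The encoder below is trained to minimize a task loss while *maximizing* an adversary's loss on a protected attribute (gradient reversal). All names, dimensions, and hyperparameters here are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # inputs
y = (X[:, 0] > 0).astype(float)      # task label
a = (X[:, 1] > 0).astype(float)      # protected/private attribute

W_enc = rng.normal(scale=0.1, size=(5, 3))   # linear encoder (hypothetical)
w_task = rng.normal(scale=0.1, size=3)       # task head
w_adv = rng.normal(scale=0.1, size=3)        # attribute adversary

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lam, lr = 0.5, 0.1   # illustrative tradeoff weight and learning rate
for _ in range(200):
    Z = X @ W_enc
    # gradients of binary cross-entropy with respect to the logits
    g_task = (sigmoid(Z @ w_task) - y) / len(y)
    g_adv = (sigmoid(Z @ w_adv) - a) / len(a)
    # both heads descend their own loss
    w_task -= lr * (Z.T @ g_task)
    w_adv -= lr * (Z.T @ g_adv)
    # encoder: descend the task loss but REVERSE the adversary's gradient,
    # pushing the representation away from encoding the attribute
    grad_Z = np.outer(g_task, w_task) - lam * np.outer(g_adv, w_adv)
    W_enc -= lr * (X.T @ grad_Z)
```

Sweeping `lam` is one way to produce the "variety of models with different preferences" the abstract mentions: larger values trade task utility for stronger suppression of the attribute.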
Related papers
- Centering Policy and Practice: Research Gaps around Usable Differential Privacy [12.340264479496375]
We argue that while differential privacy is a clean formulation in theory, it poses significant challenges in practice.
To bridge the gaps between differential privacy's promises and its real-world usability, researchers and practitioners must work together.
arXiv Detail & Related papers (2024-06-17T21:32:30Z)
- Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives [48.48294460952039]
This survey offers comprehensive descriptions of the privacy, security, and fairness issues in federated learning.
We contend that there exists a trade-off between privacy and fairness and between security and sharing.
arXiv Detail & Related papers (2024-06-16T10:31:45Z)
- Toward the Tradeoffs between Privacy, Fairness and Utility in Federated Learning [10.473137837891162]
Federated Learning (FL) is a novel privacy-protection distributed machine learning paradigm.
We propose a privacy-protection fairness FL method to protect the privacy of the client model.
We characterize the relationship between privacy, fairness, and utility, and conclude that there is a tradeoff among them.
arXiv Detail & Related papers (2023-11-30T02:19:35Z)
- Holistic Survey of Privacy and Fairness in Machine Learning [10.399352534861292]
Privacy and fairness are crucial pillars of responsible Artificial Intelligence (AI) and trustworthy Machine Learning (ML).
Despite significant interest, there remains an immediate demand for more in-depth research to unravel how these two objectives can be simultaneously integrated into ML models.
We provide a thorough review of privacy and fairness in ML, including supervised, unsupervised, semi-supervised, and reinforcement learning.
arXiv Detail & Related papers (2023-07-28T23:39:29Z)
- Privacy and Fairness in Federated Learning: on the Perspective of Trade-off [58.204074436129716]
Federated learning (FL) has been a hot topic in recent years.
Although privacy and fairness are two crucial ethical notions, their interactions are comparatively less studied.
arXiv Detail & Related papers (2023-06-25T04:38:19Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
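The two ingredients named here, gradient clipping and noise addition, can be sketched in a few lines. This is a hedged illustration of the generic DP-SGD recipe, not this paper's experimental code; `clip_norm` and `noise_multiplier` are assumed hyperparameter names:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private gradient estimate (illustrative sketch)."""
    rng = rng or np.random.default_rng(0)
    # 1) clip each per-example gradient to L2 norm <= clip_norm
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    g_sum = np.sum(clipped, axis=0)
    # 2) add Gaussian noise scaled to the clipping bound
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=g_sum.shape)
    return (g_sum + noise) / len(per_example_grads)
```

Both steps bias and perturb the gradient signal, which is exactly why the paper can probe how they affect model robustness.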
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while still learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
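The Lagrangian-duality idea can be sketched as follows: fold a fairness constraint (here, a hypothetical demographic-parity gap bounded by `eps`) into the training loss with a multiplier that is itself updated by dual ascent. Function and parameter names are illustrative, not the paper's API:

```python
def lagrangian_step(task_loss, dp_gap, lam, eps=0.05, dual_lr=0.01):
    """Return the combined (primal) loss and the updated multiplier.

    Constraint being enforced: dp_gap <= eps. When the constraint is
    violated, the multiplier grows, raising the penalty; when it is
    satisfied, the multiplier decays toward zero (projected at 0).
    """
    violation = dp_gap - eps
    combined = task_loss + lam * violation        # primal objective
    lam_new = max(0.0, lam + dual_lr * violation) # dual gradient ascent
    return combined, lam_new
```

Alternating this multiplier update with ordinary gradient steps on the network weights is the standard primal-dual pattern such methods follow.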
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
- More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.