Evaluating Trade-offs in Computer Vision Between Attribute Privacy,
Fairness and Utility
- URL: http://arxiv.org/abs/2302.07917v1
- Date: Wed, 15 Feb 2023 19:20:51 GMT
- Title: Evaluating Trade-offs in Computer Vision Between Attribute Privacy,
Fairness and Utility
- Authors: William Paul, Philip Mathew, Fady Alajaji, Philippe Burlina
- Abstract summary: This paper investigates tradeoffs between utility, fairness and attribute privacy in computer vision.
To create a variety of models with different preferences, we use adversarial methods to intervene on attributes relating to fairness and privacy.
- Score: 9.929258066313627
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates to what degree and magnitude tradeoffs exist between
utility, fairness and attribute privacy in computer vision. Regarding privacy,
we look at this important problem specifically in the context of attribute
inference attacks, a less addressed form of privacy. To create a variety of
models with different preferences, we use adversarial methods to intervene on
attributes relating to fairness and privacy. We see that certain tradeoffs
exist between fairness and utility, privacy and utility, and between privacy
and fairness. The results also show that those tradeoffs and interactions are
more complex and nonlinear between the three goals than intuition would
suggest.
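The abstract does not give implementation details for the adversarial intervention, but a common way to realize such an intervention is a gradient-reversal adversary that tries to infer the protected or private attribute from the model's internal representation. Below is a minimal, hypothetical PyTorch sketch of that general idea; all module names, dimensions and hyperparameters are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the paper's exact method): a shared encoder with a task
# head and an adversarial attribute head joined by a gradient-reversal layer, so the
# representation is pushed to be uninformative about the protected/private attribute.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class AdversarialModel(nn.Module):
    def __init__(self, feat_dim=128, num_classes=2, num_attr=2, lam=1.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.task_head = nn.Linear(feat_dim, num_classes)   # utility objective
        self.attr_head = nn.Linear(feat_dim, num_attr)      # adversary: attribute inference
        self.lam = lam                                       # strength of the intervention

    def forward(self, x):
        z = self.encoder(x)
        return self.task_head(z), self.attr_head(GradReverse.apply(z, self.lam))

# One hypothetical training step: the task loss is minimized while the reversed
# gradient from the attribute loss removes attribute information from z.
model = AdversarialModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 3 * 32 * 32)                  # dummy image batch
y_task = torch.randint(0, 2, (8,))               # task labels
y_attr = torch.randint(0, 2, (8,))               # sensitive attribute labels
task_logits, attr_logits = model(x)
loss = nn.functional.cross_entropy(task_logits, y_task) + \
       nn.functional.cross_entropy(attr_logits, y_attr)
loss.backward()
opt.step()
```

Varying the adversarial weight (lam above) is one way to trace out models with different preferences over utility, fairness, and attribute privacy.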
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive-temporal regions without DP application or combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
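As a rough illustration of the selective-application idea described in the entry above (not the authors' implementation), the sketch below adds Laplace noise only inside a binary mask marking sensitive pixels and leaves non-sensitive regions untouched; the epsilon and sensitivity values are placeholder assumptions.

```python
# Hypothetical sketch of selective noising: noise calibrated to a sensitivity/epsilon
# budget is applied only where a binary mask marks sensitive pixels; the rest of the
# sample passes through unchanged.
import numpy as np

def masked_laplace_noise(image: np.ndarray, mask: np.ndarray,
                         epsilon: float = 1.0, sensitivity: float = 1.0) -> np.ndarray:
    """Add Laplace noise (scale = sensitivity / epsilon) only inside the masked region."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon, size=image.shape)
    return np.where(mask.astype(bool), image + noise, image)

# Example: perturb only the top-left quadrant of a toy image.
img = np.random.rand(32, 32)
m = np.zeros_like(img)
m[:16, :16] = 1
private_img = masked_laplace_noise(img, m, epsilon=0.5)
```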
- Identifying Privacy Personas [27.301741710016223]
Privacy personas capture the differences in user segments with respect to one's knowledge, behavioural patterns, level of self-efficacy, and perception of the importance of privacy protection.
While various privacy personas have been derived in the literature, they group together people who differ from each other in terms of important attributes.
We propose eight personas that we derive by combining qualitative and quantitative analysis of the responses to an interactive educational questionnaire.
arXiv Detail & Related papers (2024-10-17T20:49:46Z)
- Centering Policy and Practice: Research Gaps around Usable Differential Privacy [12.340264479496375]
We argue that while differential privacy is a clean formulation in theory, it poses significant challenges in practice.
To bridge the gaps between differential privacy's promises and its real-world usability, researchers and practitioners must work together.
arXiv Detail & Related papers (2024-06-17T21:32:30Z)
- Toward the Tradeoffs between Privacy, Fairness and Utility in Federated Learning [10.473137837891162]
Federated Learning (FL) is a novel privacy-protection distributed machine learning paradigm.
We propose a privacy-protection fairness FL method to protect the privacy of the client model.
We characterize the relationship between privacy, fairness, and utility, and conclude that there is a tradeoff among them.
arXiv Detail & Related papers (2023-11-30T02:19:35Z)
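The summary above does not detail the privacy-protection mechanism, so the sketch below only illustrates one generic option: federated averaging in which each client perturbs its shared update with Gaussian noise, so the server never sees the exact client model. The fairness term mentioned in the paper is omitted, and all function names and noise scales are illustrative assumptions.

```python
# Illustrative sketch, not the cited paper's method: clients share noisy updates and
# the server averages them; a larger noise_std buys privacy at the cost of utility.
import numpy as np

def noisy_update(local_grad: np.ndarray, lr: float = 0.1, noise_std: float = 0.05) -> np.ndarray:
    """One local step's update plus Gaussian noise (privacy knob: noise_std)."""
    update = -lr * local_grad
    return update + np.random.normal(0.0, noise_std, size=update.shape)

def fed_avg_round(global_w: np.ndarray, client_grads: list) -> np.ndarray:
    """Server aggregates the noisy client updates by simple averaging."""
    return global_w + np.mean([noisy_update(g) for g in client_grads], axis=0)

w = np.zeros(10)
grads = [np.random.randn(10) for _ in range(4)]   # dummy per-client gradients
w = fed_avg_round(w, grads)
```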
- Privacy and Fairness in Federated Learning: on the Perspective of Trade-off [58.204074436129716]
Federated learning (FL) has been a hot topic in recent years.
As two crucial ethical notions, the interactions between privacy and fairness are comparatively less studied.
arXiv Detail & Related papers (2023-06-25T04:38:19Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
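PLIS itself is defined in the cited paper; the toy snippet below only illustrates the underlying intuition that a subject's gradient can be decomposed over input attributes to see which of them dominate that subject's contribution during gradient-based training. It is a hedged proxy, not the actual metric.

```python
# Toy illustration of per-attribute attribution of a single subject's gradient signal.
# This is an intuition-level proxy, not the PLIS metric from the cited paper.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                    # toy model over 4 input attributes
x = torch.randn(1, 4, requires_grad=True)  # one subject's record
y = torch.tensor([1])
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

per_feature = x.grad.abs().squeeze()       # sensitivity of the loss to each attribute
share = per_feature / per_feature.sum()    # normalized per-attribute share
print({f"attr_{i}": round(s.item(), 3) for i, s in enumerate(share)})
```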
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
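To make the two ingredients named in the entry above concrete, here is a minimal, self-contained sketch of per-sample gradient clipping followed by Gaussian noise addition, the core of DP-SGD. The clip norm and noise multiplier are illustrative placeholders; production code would normally rely on a library such as Opacus rather than this hand-rolled loop.

```python
# Minimal DP-SGD-style step: clip each sample's gradient, sum, add calibrated noise.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
clip_norm, noise_mult = 1.0, 1.1                          # illustrative values

xb, yb = torch.randn(16, 10), torch.randint(0, 2, (16,))
per_sample_grads = []
for x, y in zip(xb, yb):                                  # per-sample gradients
    model.zero_grad()
    nn.functional.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
    g = torch.cat([p.grad.flatten() for p in model.parameters()])
    g = g * min(1.0, clip_norm / (g.norm() + 1e-6))       # clip each sample's gradient
    per_sample_grads.append(g)

noisy = torch.stack(per_sample_grads).sum(0)
noisy += torch.normal(0.0, noise_mult * clip_norm, size=noisy.shape)  # add Gaussian noise
noisy /= len(xb)

offset = 0                                                # write the noisy gradient back
for p in model.parameters():
    p.grad = noisy[offset:offset + p.numel()].view_as(p)
    offset += p.numel()
opt.step()
```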
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
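The sketch below illustrates the Lagrangian-duality idea from the entry above in its simplest form: a demographic-parity constraint is added to the training loss with a multiplier updated by dual ascent. It omits the differential-privacy component of the cited paper, and the constraint, threshold, and learning rates are illustrative assumptions.

```python
# Hedged sketch of Lagrangian dual ascent for a fairness constraint (gap <= tau).
import torch
import torch.nn as nn

model = nn.Linear(5, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
lam, dual_lr, tau = torch.tensor(0.0), 0.1, 0.05

for step in range(100):
    x = torch.randn(64, 5)
    a = (torch.rand(64) < 0.5).float()           # protected group indicator
    y = (x[:, 0] + 0.5 * a > 0).float()          # synthetic labels correlated with a
    p = torch.sigmoid(model(x)).squeeze(1)

    gap = (p[a == 1].mean() - p[a == 0].mean()).abs()   # demographic-parity gap
    loss = nn.functional.binary_cross_entropy(p, y) + lam * (gap - tau)

    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                        # dual ascent on the multiplier
        lam = torch.clamp(lam + dual_lr * (gap.detach() - tau), min=0.0)
```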
- More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.