On the Impact of Multi-dimensional Local Differential Privacy on
Fairness
- URL: http://arxiv.org/abs/2312.04404v3
- Date: Wed, 21 Feb 2024 09:06:49 GMT
- Title: On the Impact of Multi-dimensional Local Differential Privacy on
Fairness
- Authors: Karima Makhlouf, Heber H. Arcolezi, Sami Zhioua, Ghassen Ben Brahim,
and Catuscia Palamidessi
- Abstract summary: We examine the impact of local differential privacy (LDP) in the presence of several sensitive attributes on fairness.
In particular, multi-dimensional LDP is an effective approach for reducing disparity.
We summarize our findings in the form of recommendations to guide practitioners in adopting effective privacy-preserving practices.
- Score: 5.237044436478256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated decision systems are increasingly used to make consequential
decisions in people's lives. Due to the sensitivity of the data being processed,
as well as of the resulting decisions, several ethical concerns need to be addressed
for the appropriate use of such technologies, in particular, fairness and
privacy. Unlike previous work, which focused on centralized differential
privacy (DP) or local DP (LDP) for a single sensitive attribute, in this paper,
we examine the impact of LDP in the presence of several sensitive attributes
(i.e., multi-dimensional data) on fairness. Detailed empirical analysis on
synthetic and benchmark datasets revealed several notable observations. In
particular, (1) multi-dimensional LDP is an effective approach for reducing
disparity; (2) the choice of multi-dimensional LDP approach (independent vs.
combined) matters only at low privacy guarantees; and (3) the distribution of
the outcome Y strongly affects which group is more sensitive to obfuscation. Finally,
we summarize our findings in the form of recommendations to guide practitioners
in adopting effective privacy-preserving practices while maintaining fairness
and utility in ML applications.
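To make the independent vs. combined distinction concrete, here is a minimal sketch using k-ary Generalized Randomized Response (GRR), a standard LDP protocol for categorical data. The attribute domains, epsilon value, and budget split are illustrative assumptions, not the paper's exact experimental setup.

```python
# Minimal sketch (not the paper's exact protocol) of k-ary Generalized
# Randomized Response (GRR). Domains, epsilon, and the budget split are
# illustrative assumptions.
import math
import random

def grr(value, domain, epsilon):
    """Report `value` truthfully with prob. e^eps / (e^eps + k - 1),
    otherwise report a uniformly random other element of `domain`."""
    k = len(domain)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return value
    return random.choice([v for v in domain if v != value])

race_domain = ["A", "B", "C"]   # hypothetical sensitive attribute 1
sex_domain = ["F", "M"]         # hypothetical sensitive attribute 2
epsilon = 2.0
record = ("B", "F")

# Independent approach: split the budget across the d = 2 attributes
# and perturb each one separately with epsilon / d.
independent_report = (
    grr(record[0], race_domain, epsilon / 2),
    grr(record[1], sex_domain, epsilon / 2),
)

# Combined approach: perturb the joint value once over the product
# domain (k = 3 * 2 = 6), spending the full budget in one invocation.
joint_domain = [(r, s) for r in race_domain for s in sex_domain]
combined_report = grr(record, joint_domain, epsilon)

print("independent:", independent_report, "combined:", combined_report)
```

The independent variant perturbs each attribute with a fraction of the budget, while the combined variant spends the full budget on a single perturbation over the larger joint domain; per the abstract, this choice matters only at low privacy guarantees.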
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows controlling the sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows defining non-sensitive spatio-temporal regions without DP application, or combining differential privacy with other privacy techniques within data samples; a toy sketch follows this entry.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
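A toy, hedged illustration of the masking idea: noise is injected only where a sensitivity mask is set, leaving the rest of the sample untouched. The Laplace mechanism and the function below are illustrative assumptions, not the paper's actual obfuscation pipeline.

```python
# Toy illustration only: perturb just the pixels flagged as sensitive.
# The Laplace mechanism here is an assumption for the sketch, not the
# paper's actual obfuscation method.
import numpy as np

def masked_laplace(frame, sensitive_mask, epsilon, sensitivity=1.0):
    """Add Laplace noise inside the mask; pass the rest through."""
    noise = np.random.laplace(scale=sensitivity / epsilon, size=frame.shape)
    return np.where(sensitive_mask, frame + noise, frame)

frame = np.random.rand(4, 4)        # stand-in for an image/video frame
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True               # mark a sensitive region
private_frame = masked_laplace(frame, mask, epsilon=1.0)
```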
- A Systematic and Formal Study of the Impact of Local Differential Privacy on Fairness: Preliminary Results [5.618541935188389]
Differential privacy (DP) is the predominant solution for privacy-preserving machine learning (ML) algorithms.
Recent experimental studies have shown that local DP can impact ML predictions differently across subgroups of individuals.
We study how the fairness of the decisions made by the ML model changes under local DP for different levels of privacy and data distributions.
arXiv Detail & Related papers (2024-05-23T15:54:03Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor with a majority-vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (f-DP) framework, as well as the Byzantine resilience of the proposed algorithm; a rough sketch of the pipeline follows this entry.
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
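A rough sketch of the compressor-plus-vote mechanics under stated assumptions: each client stochastically quantizes its clipped gradient to {-1, 0, +1}, and the server aggregates reports by coordinate-wise majority vote. The paper's f-DP noise calibration is not reproduced here.

```python
# Rough sketch of the mechanics only; the paper's f-DP calibration and
# resilience analysis are not reproduced here.
import numpy as np

def ternary_compress(grad, clip=1.0):
    """Stochastically map each clipped coordinate to {-1, 0, +1},
    keeping the sign with probability proportional to its magnitude."""
    g = np.clip(grad, -clip, clip) / clip       # scale into [-1, 1]
    keep = np.random.random(g.shape) < np.abs(g)
    return (np.sign(g) * keep).astype(int)

def majority_vote(reports):
    """Coordinate-wise majority vote over all clients' ternary reports."""
    return np.sign(reports.sum(axis=0)).astype(int)

grads = [np.random.randn(8) for _ in range(10)]  # 10 clients, toy gradients
reports = np.array([ternary_compress(g) for g in grads])
update_direction = majority_vote(reports)        # in {-1, 0, +1}^8
```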
- Privacy for Fairness: Information Obfuscation for Fair Representation Learning with Local Differential Privacy [26.307780067808565]
This study introduces a theoretical framework that enables a comprehensive examination of the interplay between privacy and fairness.
We develop and analyze an information bottleneck (IB) based information obfuscation method with local differential privacy (LDP) for fair representation learning.
In contrast to many empirical studies on fairness in ML, we show that the incorporation of LDP randomizers during the encoding process can enhance the fairness of the learned representation.
arXiv Detail & Related papers (2024-02-16T06:35:10Z)
- (Local) Differential Privacy has NO Disparate Impact on Fairness [4.157415305926584]
Local Differential Privacy (LDP) has gained widespread adoption in real-world applications.
This paper empirically studies the impact of collecting multiple sensitive attributes under LDP on fairness.
arXiv Detail & Related papers (2023-04-25T14:18:12Z)
- DP2-Pub: Differentially Private High-Dimensional Data Publication with Invariant Post Randomization [58.155151571362914]
We propose a differentially private high-dimensional data publication mechanism (DP2-Pub) that runs in two phases.
Splitting attributes into several low-dimensional clusters with high intra-cluster cohesion and low inter-cluster coupling helps obtain a reasonable privacy budget (an illustrative sketch of this splitting step follows this entry).
We also extend our DP2-Pub mechanism to the scenario with a semi-honest server which satisfies local differential privacy.
arXiv Detail & Related papers (2022-08-24T17:52:43Z)
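An illustrative-only sketch of the attribute-splitting phase, assuming pairwise mutual information as the cohesion measure and a greedy threshold rule; DP2-Pub's actual clustering algorithm may differ.

```python
# Illustrative greedy clustering by pairwise mutual information (MI);
# DP2-Pub's actual splitting algorithm may differ.
import numpy as np
from sklearn.metrics import mutual_info_score

def cluster_attributes(data, names, threshold=0.1):
    """Group columns whose MI with any cluster member exceeds threshold."""
    clusters = []
    for j, name in enumerate(names):
        for cluster in clusters:
            if any(mutual_info_score(data[:, j], data[:, k]) > threshold
                   for _, k in cluster):
                cluster.append((name, j))
                break
        else:
            clusters.append([(name, j)])
    return [[name for name, _ in cluster] for cluster in clusters]

rng = np.random.default_rng(0)
a = rng.integers(0, 3, 1000)
data = np.column_stack([a, a, rng.integers(0, 3, 1000)])
print(cluster_attributes(data, ["age", "income", "zip"]))
# -> [['age', 'income'], ['zip']]: correlated columns share a cluster.
```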
- Production of Categorical Data Verifying Differential Privacy: Conception and Applications to Machine Learning [0.0]
Differential privacy is a formal framework that allows quantifying the privacy-utility trade-off.
With the local DP (LDP) model, users can sanitize their data locally before transmitting it to the server, as sketched after this entry.
In all cases, we concluded that differentially private ML models achieve nearly the same utility metrics as non-private ones.
arXiv Detail & Related papers (2022-04-02T12:50:14Z)
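A minimal end-to-end sketch of this LDP workflow for categorical data: each user randomizes locally with GRR (the same mechanism as in the earlier sketch), and the server debiases the observed frequencies with the standard unbiased estimator. Domain, distribution, and epsilon are illustrative.

```python
# End-to-end LDP sketch for one categorical attribute: GRR on the user
# side, standard debiasing on the server side. All values illustrative.
import math
import random
from collections import Counter

def grr(value, domain, epsilon):
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

domain = ["low", "mid", "high"]
epsilon, n = 1.0, 10_000
truth = random.choices(domain, weights=[0.6, 0.3, 0.1], k=n)
reports = [grr(v, domain, epsilon) for v in truth]

# Server-side unbiased estimator: f_hat = (observed - q) / (p - q),
# where p = e^eps / (e^eps + k - 1) and q = (1 - p) / (k - 1).
k = len(domain)
p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
q = (1 - p) / (k - 1)
counts = Counter(reports)
estimates = {v: (counts[v] / n - q) / (p - q) for v in domain}
print(estimates)   # should be close to the true 0.6 / 0.3 / 0.1 split
```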
- Post-processing of Differentially Private Data: A Fairness Perspective [53.29035917495491]
This paper shows that post-processing causes disparate impacts on individuals or groups.
It analyzes two critical settings: the release of differentially private datasets and the use of such private datasets for downstream decisions.
It proposes a novel post-processing mechanism that is (approximately) optimal under different fairness metrics.
arXiv Detail & Related papers (2022-01-24T02:45:03Z)
- Partial sensitivity analysis in differential privacy [58.730520380312676]
We investigate the impact of each input feature on the individual's privacy loss.
We experimentally evaluate our approach on queries over private databases.
We also explore our findings in the context of neural network training on synthetic data.
arXiv Detail & Related papers (2021-09-22T08:29:16Z)
- Differentially Private Deep Learning under the Fairness Lens [34.28936739262812]
Differential Privacy (DP) is an important privacy-enhancing technology for private machine learning systems.
It allows one to measure and bound the risk associated with an individual's participation in a computation.
It was recently observed that DP learning systems may exacerbate bias and unfairness for different groups of individuals.
arXiv Detail & Related papers (2021-06-04T19:10:09Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints; a compact sketch of the dual update follows this entry.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
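A compact sketch of the Lagrangian-dual idea under stated assumptions: gradient descent on the model weights alternates with dual ascent on a multiplier attached to a demographic-parity constraint. The logistic model, constraint, and step sizes are illustrative, not the paper's DP-instrumented procedure.

```python
# Compact sketch: primal gradient descent on a logistic model, dual
# ascent on a multiplier for a demographic-parity constraint. All
# modelling choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
group = rng.integers(0, 2, n)        # sensitive attribute (0 or 1)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lam = 0.0                            # Lagrange multiplier

for step in range(500):
    p = sigmoid(X @ w)
    # Demographic-parity gap: difference in mean scores between groups.
    gap = p[group == 1].mean() - p[group == 0].mean()
    # Gradient of BCE loss + lam * |gap| with respect to w.
    grad_bce = X.T @ (p - y) / n
    s = p * (1 - p)
    g1 = (X[group == 1] * s[group == 1, None]).mean(axis=0)
    g0 = (X[group == 0] * s[group == 0, None]).mean(axis=0)
    w -= 0.5 * (grad_bce + lam * np.sign(gap) * (g1 - g0))  # primal step
    lam = max(0.0, lam + 0.1 * abs(gap))                    # dual ascent

print("final parity gap:", gap, "lambda:", lam)
```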