Privacy for Fairness: Information Obfuscation for Fair Representation Learning with Local Differential Privacy
- URL: http://arxiv.org/abs/2402.10473v1
- Date: Fri, 16 Feb 2024 06:35:10 GMT
- Title: Privacy for Fairness: Information Obfuscation for Fair Representation Learning with Local Differential Privacy
- Authors: Songjie Xie, Youlong Wu, Jiaxuan Li, Ming Ding, Khaled B. Letaief
- Abstract summary: This study introduces a theoretical framework that enables a comprehensive examination of the interplay between privacy and fairness.
We develop and analyze an information bottleneck (IB) based information obfuscation method with local differential privacy (LDP) for fair representation learning.
In contrast to many empirical studies on fairness in ML, we show that the incorporation of LDP randomizers during the encoding process can enhance the fairness of the learned representation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning (ML) becomes more prevalent in human-centric
applications, there is a growing emphasis on algorithmic fairness and privacy
protection. While previous research has treated these as separate objectives,
there is increasing recognition of the complex relationship between privacy
and fairness. However, prior work has examined this interplay primarily
through empirical investigation, with limited theoretical exploration. This
study bridges that gap by introducing a theoretical framework that enables a
comprehensive examination of their interrelation. We develop and analyze an
information bottleneck (IB) based information obfuscation method with local
differential privacy (LDP) for fair representation learning. In contrast to
many empirical studies of fairness in ML, we show that incorporating LDP
randomizers during the encoding process can enhance the fairness of the
learned representation. Our analysis demonstrates that the disclosure of
sensitive information is constrained by the privacy budget of the LDP
randomizer, which enables the optimization process within the IB framework to
suppress sensitive information while preserving the desired utility through
obfuscation. Building on this method, we further develop a variational
representation encoding approach that simultaneously achieves fairness and
LDP. This approach offers practical advantages: it is trained without
adversarial components and requires no variational prior. Extensive
experiments validate our theoretical results and demonstrate that the
proposed approach achieves both LDP and fairness while preserving adequate
utility.
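To make the mechanism concrete, here is a minimal, hedged sketch of the core idea in PyTorch: an encoder output is clipped and perturbed by a Laplace LDP randomizer, so anything downstream sees only an epsilon-LDP view of the representation. All names (`LDPEncoder`, `rep_dim`, the choice of L1 clipping and Laplace noise) are illustrative assumptions, not the authors' exact construction.

```python
import torch
import torch.nn as nn

class LDPEncoder(nn.Module):
    """Illustrative encoder with a Laplace LDP randomizer on its output.

    Clipping z to the unit L1 ball bounds the sensitivity by 2, so adding
    per-coordinate Laplace noise with scale 2/epsilon yields an epsilon-LDP
    representation. The IB-style training loss (not shown) would then trade
    utility against compression on top of this privatized encoding.
    """

    def __init__(self, in_dim: int, rep_dim: int, epsilon: float):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, rep_dim)
        )
        self.epsilon = epsilon

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.net(x)
        # Clip to the unit L1 ball so the Laplace mechanism's sensitivity
        # is bounded regardless of the input.
        z = z / z.abs().sum(dim=-1, keepdim=True).clamp(min=1.0)
        noise = torch.distributions.Laplace(0.0, 2.0 / self.epsilon)
        return z + noise.sample(z.shape)

enc = LDPEncoder(in_dim=10, rep_dim=4, epsilon=1.0)
z = enc(torch.randn(8, 10))  # each row is an epsilon-LDP report
```

The theoretical point in the abstract is that the randomizer's budget epsilon itself caps how much sensitive-attribute information z can disclose, so fairness does not have to be enforced with an adversarial discriminator.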
Related papers
- Data Obfuscation through Latent Space Projection (LSP) for Privacy-Preserving AI Governance: Case Studies in Medical Diagnosis and Finance Fraud Detection [0.0]
This paper introduces Data Obfuscation through Latent Space Projection (LSP), a novel technique aimed at enhancing AI governance and ensuring Responsible AI compliance.
LSP uses machine learning to project sensitive data into a latent space, effectively obfuscating it while preserving essential features for model training and inference.
We validate LSP's effectiveness through experiments on benchmark datasets and two real-world case studies: healthcare cancer diagnosis and financial fraud analysis.
arXiv Detail & Related papers (2024-10-22T22:31:03Z)
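As a rough illustration of the latent-space idea in the entry above, with PCA standing in for the learned projection the paper describes (an assumption, not LSP itself):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))        # synthetic stand-in for sensitive records

proj = PCA(n_components=8).fit(X)     # learned low-dimensional projection
Z = proj.transform(X)                 # only these latent codes are shared

# The projection is lossy, so raw feature values cannot be read back exactly,
# while the retained components still support downstream model training.
X_hat = proj.inverse_transform(Z)
print("mean reconstruction error:", float(((X - X_hat) ** 2).mean()))
```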
- A Comprehensive Survey on Evidential Deep Learning and Its Applications [64.83473301188138]
Evidential Deep Learning (EDL) provides reliable uncertainty estimation with minimal additional computation in a single forward pass.
We first delve into the theoretical foundation of EDL, the subjective logic theory, and discuss its distinctions from other uncertainty estimation frameworks.
We elaborate on its extensive applications across various machine learning paradigms and downstream tasks.
arXiv Detail & Related papers (2024-09-07T05:55:06Z)
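A compact sketch of the single-pass uncertainty estimate that EDL builds on, following the standard subjective-logic formulation (variable names are illustrative):

```python
import torch
import torch.nn.functional as F

def edl_uncertainty(logits: torch.Tensor):
    """Dirichlet-based uncertainty from one forward pass.

    Evidence e = softplus(logits) >= 0 gives Dirichlet parameters
    alpha = e + 1; with strength S = sum(alpha), the expected class
    probabilities are alpha / S and the vacuity u = K / S shrinks as
    the network accumulates evidence.
    """
    evidence = F.softplus(logits)
    alpha = evidence + 1.0
    strength = alpha.sum(dim=-1, keepdim=True)
    probs = alpha / strength
    u = logits.shape[-1] / strength
    return probs, u

probs, u = edl_uncertainty(torch.randn(4, 10))  # 4 samples, 10 classes
```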
- Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding [118.75567341513897]
Existing methods typically analyze target text in isolation or solely with non-member contexts.
We propose Con-ReCall, a novel approach that leverages the asymmetric distributional shifts induced by member and non-member contexts.
arXiv Detail & Related papers (2024-09-05T09:10:38Z)
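A hedged sketch of the contrastive scoring idea with a Hugging Face causal LM; the exact score Con-ReCall uses may differ, and `member_ctx`/`nonmember_ctx` are assumed to be short prefixes built from known member and non-member texts:

```python
import torch

@torch.no_grad()
def contrastive_score(model, tok, target: str,
                      member_ctx: str, nonmember_ctx: str) -> float:
    """Compare the target's per-token loss under member vs non-member prefixes.

    If a member prefix lowers the loss much more than a non-member prefix,
    the asymmetric shift suggests the target was in the pre-training data.
    (Assumes tokenization splits cleanly at the prefix boundary.)
    """
    def avg_nll(prefix: str) -> float:
        ids = tok(prefix + target, return_tensors="pt").input_ids
        n_prefix = tok(prefix, return_tensors="pt").input_ids.shape[1]
        logits = model(ids).logits[:, :-1]          # predict token t+1 from t
        nll = torch.nn.functional.cross_entropy(
            logits.transpose(1, 2), ids[:, 1:], reduction="none"
        )
        return nll[:, n_prefix - 1:].mean().item()  # loss on target tokens only

    return avg_nll(nonmember_ctx) - avg_nll(member_ctx)
```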
- A Multivocal Literature Review on Privacy and Fairness in Federated Learning [1.6124402884077915]
Federated learning promises to transform AI applications by removing the need to share raw data.
Recent research has demonstrated an inherent tension between privacy and fairness.
We argue that the relationship between privacy and fairness has been neglected, posing a critical risk for real-world applications.
arXiv Detail & Related papers (2024-08-16T11:15:52Z)
- On the Impact of Multi-dimensional Local Differential Privacy on Fairness [5.237044436478256]
We examine the impact of multi-dimensional local differential privacy (LDP) on fairness in the presence of several sensitive attributes (a baseline mechanism is sketched below).
In particular, multi-dimensional LDP is an efficient approach for reducing disparity.
We summarize our findings in the form of recommendations to guide practitioners in adopting effective privacy-preserving practices.
arXiv Detail & Related papers (2023-12-07T16:17:34Z)
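To ground the multi-dimensional setting, here is a small k-ary randomized response sketch: with several sensitive attributes, a total budget epsilon can be split across dimensions, and basic composition gives epsilon-LDP overall. The paper's mechanisms are more refined; this is only an assumption-laden baseline.

```python
import numpy as np

def k_rr(value: int, k: int, eps: float, rng) -> int:
    """k-ary randomized response: report the true value with probability
    e^eps / (e^eps + k - 1), otherwise a uniformly random other value."""
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p_keep:
        return value
    other = int(rng.integers(k - 1))
    return other if other < value else other + 1

rng = np.random.default_rng(0)
# Two sensitive attributes, total budget eps = 1.0 split evenly: perturbing
# each attribute independently with eps/2 is 1.0-LDP by basic composition.
report = [k_rr(v, k=4, eps=0.5, rng=rng) for v in (2, 0)]
```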
- Fair Off-Policy Learning from Observational Data [30.77874108094485]
We propose a novel framework for fair off-policy learning.
We first formalize different fairness notions for off-policy learning.
We then propose a neural network-based framework to learn optimal policies under different fairness notions.
arXiv Detail & Related papers (2023-03-15T10:47:48Z)
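A minimal sketch of what a fairness-regularized off-policy objective can look like; the names and the specific penalty are illustrative assumptions, not the paper's exact formulation. The inverse-propensity estimate of policy value is maximized while the gap in estimated value between two protected groups is penalized.

```python
import torch

def fair_offpolicy_loss(policy_probs, logged_probs, actions, rewards,
                        group, lam=1.0):
    """policy_probs: (B, A) action probabilities of the learned policy.
    logged_probs:  (B,)  propensities of the logged actions.
    actions/rewards/group: (B,) logged actions, rewards, group labels {0, 1}.
    """
    # Inverse-propensity-weighted per-sample value estimate.
    w = policy_probs.gather(1, actions[:, None]).squeeze(1) / logged_probs
    value = w * rewards
    # Penalize the gap in estimated value between the two groups.
    gap = value[group == 0].mean() - value[group == 1].mean()
    return -value.mean() + lam * gap.abs()
```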
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent (the data flow is sketched below).
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
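For intuition about what the privacy analysis scrutinizes, here is a crypto-free sketch of one VLR mini-batch step on synthetic data (names illustrative): the shared residual vector that coordinates the two parties' updates is exactly the kind of intermediate value whose leakage such analyses examine.

```python
import numpy as np

rng = np.random.default_rng(0)
xA, xB = rng.normal(size=(16, 3)), rng.normal(size=(16, 2))  # disjoint features
wA, wB = np.zeros(3), np.zeros(2)
y = rng.integers(2, size=16)

logit = xA @ wA + xB @ wB                    # partial logits are aggregated
residual = 1.0 / (1.0 + np.exp(-logit)) - y  # shared across parties each step
wA -= 0.1 * xA.T @ residual / len(y)         # each party updates its own block
wB -= 0.1 * xB.T @ residual / len(y)
```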
- Fair Representation Learning using Interpolation Enabled Disentanglement [9.043741281011304]
We propose a novel method to address two key questions: (a) can we simultaneously learn fair disentangled representations while ensuring the utility of the learned representation for downstream tasks, and (b) can we provide theoretical insights into when the proposed approach will be both fair and accurate?
To address the former, we propose the method FRIED, Fair Representation learning using Interpolation Enabled Disentanglement.
arXiv Detail & Related papers (2021-07-31T17:32:12Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
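Since BALD is named in the entry above, here is a short sketch of that acquisition score in its standard form from the active learning literature; the T stochastic passes would come from, e.g., MC dropout.

```python
import torch

def bald_score(mc_probs: torch.Tensor) -> torch.Tensor:
    """BALD = H[mean_t p_t] - mean_t H[p_t] for mc_probs of shape (T, B, C):
    predictive entropy minus expected entropy, i.e. the mutual information
    between the label and the model parameters. Higher = more informative."""
    mean_p = mc_probs.mean(dim=0)
    h_mean = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(-1)
    h_each = -(mc_probs * mc_probs.clamp_min(1e-12).log()).sum(-1).mean(0)
    return h_mean - h_each

scores = bald_score(torch.softmax(torch.randn(10, 5, 3), dim=-1))  # (B,) scores
```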
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while still learning non-discriminatory predictors (the dual pattern is sketched below).
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
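A minimal sketch of the Lagrangian-dual pattern the paper above builds on; the paper additionally privatizes training with differential privacy, which is omitted here, and all names are illustrative. A fairness gap is treated as a constraint gap <= tau, moved into the loss with a multiplier, and primal descent on the model alternates with projected dual ascent on the multiplier.

```python
import torch

class FairnessLagrangian:
    """Dual variable for the constraint fairness_gap <= tau."""

    def __init__(self, tau: float = 0.05, dual_lr: float = 0.01):
        self.lam, self.tau, self.dual_lr = 0.0, tau, dual_lr

    def loss(self, task_loss: torch.Tensor,
             fairness_gap: torch.Tensor) -> torch.Tensor:
        # Primal objective: task loss plus the weighted constraint violation.
        return task_loss + self.lam * (fairness_gap - self.tau)

    def dual_step(self, fairness_gap: torch.Tensor) -> None:
        # Projected gradient ascent on lam (kept non-negative).
        self.lam = max(0.0,
                       self.lam + self.dual_lr * (float(fairness_gap) - self.tau))
```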