Privacy for Fairness: Information Obfuscation for Fair Representation Learning with Local Differential Privacy
- URL: http://arxiv.org/abs/2402.10473v1
- Date: Fri, 16 Feb 2024 06:35:10 GMT
- Title: Privacy for Fairness: Information Obfuscation for Fair Representation Learning with Local Differential Privacy
- Authors: Songjie Xie, Youlong Wu, Jiaxuan Li, Ming Ding, Khaled B. Letaief
- Abstract summary: This study introduces a theoretical framework that enables a comprehensive examination of the interplay between privacy and fairness.
We shall develop and analyze an information bottleneck (IB) based information obfuscation method with local differential privacy (LDP) for fair representation learning.
In contrast to many empirical studies on fairness in ML, we show that the incorporation of LDP randomizers during the encoding process can enhance the fairness of the learned representation.
- Score: 26.307780067808565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning (ML) becomes more prevalent in human-centric
applications, there is a growing emphasis on algorithmic fairness and privacy
protection. While previous research has explored these areas as separate
objectives, there is a growing recognition of the complex relationship between
privacy and fairness. However, previous works have primarily focused on
examining the interplay between privacy and fairness through empirical
investigations, with limited attention given to theoretical exploration. This
study aims to bridge this gap by introducing a theoretical framework that
enables a comprehensive examination of their interrelation. We shall develop
and analyze an information bottleneck (IB) based information obfuscation method
with local differential privacy (LDP) for fair representation learning. In
contrast to many empirical studies on fairness in ML, we show that the
incorporation of LDP randomizers during the encoding process can enhance the
fairness of the learned representation. Our analysis will demonstrate that the
disclosure of sensitive information is constrained by the privacy budget of the
LDP randomizer, thereby enabling the optimization process within the IB
framework to effectively suppress sensitive information while preserving the
desired utility through obfuscation. Based on the proposed method, we further
develop a variational representation encoding approach that simultaneously
achieves fairness and LDP. Our variational encoding approach offers practical
advantages. It is trained using a non-adversarial method and does not require
the introduction of any variational prior. Extensive experiments will be
presented to validate our theoretical results and demonstrate the ability of
our proposed approach to achieve both LDP and fairness while preserving
adequate utility.
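To make the mechanism concrete, here is a minimal numpy sketch of the general idea, not the paper's exact construction: a deterministic encoder produces a binary code, and a binary randomized-response step (one standard LDP randomizer) obfuscates it before release. By the data-processing inequality, anything computed from the randomized code can reveal no more about the input, sensitive attributes included, than the privacy budget epsilon permits, which is the property that lets the IB optimization suppress sensitive information while retaining utility. The encoder, the 8-bit code width, and the choice of randomized response are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_response(bits: np.ndarray, epsilon: float) -> np.ndarray:
    """Binary epsilon-LDP randomizer: keep each bit with probability
    e^eps / (e^eps + 1) and flip it otherwise. Each coordinate is
    epsilon-LDP; releasing all d bits composes to (d * epsilon)-LDP."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    flip = rng.random(bits.shape) >= p_keep
    return np.where(flip, 1 - bits, bits)

def encode_with_ldp(x: np.ndarray, encoder, epsilon: float) -> np.ndarray:
    """Hypothetical release pipeline: encode, then obfuscate."""
    z = encoder(x)                        # deterministic binary code
    return randomized_response(z, epsilon)

# Toy usage with a random sign-based 8-bit encoder.
W = rng.standard_normal((16, 8))
encoder = lambda x: (x @ W > 0).astype(int)
x = rng.standard_normal((4, 16))
z_tilde = encode_with_ldp(x, encoder, epsilon=1.0)
```
A smaller epsilon pushes p_keep toward 1/2, so each released bit carries almost no information about the original code; the abstract's claim is that training the encoder under this constraint steers the information that survives obfuscation toward task utility rather than the sensitive attribute.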
Related papers
- Universally Harmonizing Differential Privacy Mechanisms for Federated Learning: Boosting Accuracy and Convergence [22.946928984205588]
Differentially private federated learning (DP-FL) is a promising technique for collaborative model training.
We propose the first DP-FL framework (namely, UDP-FL), which universally harmonizes any randomization mechanism.
We show that UDP-FL exhibits substantial resilience against different inference attacks.
arXiv Detail & Related papers (2024-07-20T00:11:59Z)
- On the Impact of Multi-dimensional Local Differential Privacy on Fairness [5.237044436478256]
We examine the impact of local differential privacy (LDP) on fairness in the presence of several sensitive attributes.
In particular, multi-dimensional LDP is an efficient approach to reducing disparity; a toy sketch of the standard multi-dimensional strategies follows this entry.
We summarize our findings in the form of recommendations to guide practitioners in adopting effective privacy-preserving practices.
arXiv Detail & Related papers (2023-12-07T16:17:34Z)
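For intuition, here is a toy numpy sketch of the two standard ways to spend a budget epsilon across d categorical attributes under LDP (my own illustration, not that paper's algorithm): splitting the budget across all attributes, or perturbing one randomly sampled attribute with the full budget.
```python
import numpy as np

rng = np.random.default_rng(0)

def k_rr(value: int, k: int, epsilon: float) -> int:
    """Generalized (k-ary) randomized response: report the true value
    with probability e^eps / (e^eps + k - 1), else a uniform other value."""
    p_true = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_true:
        return value
    other = int(rng.integers(k - 1))   # uniform over the k-1 other values
    return other if other < value else other + 1

def perturb_record(record, domain_sizes, epsilon, strategy="sample"):
    """'split' divides epsilon over all d attributes; 'sample' spends the
    full budget on one random attribute (often lower estimation error)."""
    d = len(record)
    if strategy == "split":
        return [k_rr(v, k, epsilon / d) for v, k in zip(record, domain_sizes)]
    j = int(rng.integers(d))           # report only attribute j
    return j, k_rr(record[j], domain_sizes[j], epsilon)
```
Both strategies satisfy epsilon-LDP for the full record; the sampling strategy avoids dividing the budget d ways, which is one reason multi-dimensional mechanisms can behave differently from naive per-attribute perturbation.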
- Fair Off-Policy Learning from Observational Data [30.77874108094485]
We propose a novel framework for fair off-policy learning.
We first formalize different fairness notions for off-policy learning.
We then propose a neural network-based framework to learn optimal policies under different fairness notions.
arXiv Detail & Related papers (2023-03-15T10:47:48Z)
- Modeling Multiple Views via Implicitly Preserving Global Consistency and Local Complementarity [61.05259660910437]
We propose a global consistency and complementarity network (CoCoNet) to learn representations from multiple views.
On the global stage, we posit that crucial knowledge is implicitly shared among views, and enhancing the encoder to capture such knowledge improves the discriminability of the learned representations.
On the local stage, we propose a complementarity factor that integrates cross-view discriminative knowledge, guiding the encoders to learn not only view-wise discriminability but also cross-view complementary information.
arXiv Detail & Related papers (2022-09-16T09:24:00Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorous theoretical guarantees, our approach enables IB to capture the intrinsic correlation between observations and semantic labels.
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- Fair Representation Learning using Interpolation Enabled Disentanglement [9.043741281011304]
We propose a novel method to address two key questions: (a) Can we simultaneously learn fair disentangled representations while ensuring the utility of the learned representation for downstream tasks, and (b) Can we provide theoretical insights into when the proposed approach will be both fair and accurate?
To address the former, we propose the method FRIED, Fair Representation learning using Interpolation Enabled Disentanglement.
arXiv Detail & Related papers (2021-07-31T17:32:12Z)
- Understanding the Interplay between Privacy and Robustness in Federated Learning [15.673448030003788]
Federated Learning (FL) is emerging as a promising paradigm of privacy-preserving machine learning.
Recent works have highlighted several privacy and robustness weaknesses in FL.
It is still not clear how LDP affects adversarial robustness in FL.
arXiv Detail & Related papers (2021-06-13T16:01:35Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class; a sketch of the BALD acquisition follows this entry.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
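As a pointer for the BALD acquisition referenced above, here is a minimal numpy sketch (an illustration under the usual MC-dropout setup, not that paper's code): points are scored by the mutual information between the predicted label and the model parameters, i.e., predictive entropy minus expected per-sample entropy.
```python
import numpy as np

def bald_scores(mc_probs: np.ndarray) -> np.ndarray:
    """BALD: I(y; theta | x) = H[E_t p_t] - E_t H[p_t], estimated from
    T stochastic forward passes. mc_probs has shape (T, N, C)."""
    mean_p = mc_probs.mean(axis=0)                           # (N, C)
    h_mean = -(mean_p * np.log(mean_p + 1e-12)).sum(-1)      # predictive entropy
    h_each = -(mc_probs * np.log(mc_probs + 1e-12)).sum(-1)  # (T, N)
    return h_mean - h_each.mean(axis=0)                      # epistemic part

# Acquire the most informative unlabeled points (batch_size is assumed):
# next_idx = np.argsort(-bald_scores(mc_probs))[:batch_size]
```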
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints; a minimal sketch of the primal-dual pattern follows this entry.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
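Below is a minimal numpy sketch of that general primal-dual pattern (my own illustration: a logistic model under a demographic-parity constraint; the original paper's differential-privacy mechanism and network architecture are omitted).
```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train_fair_lagrangian(X, y, s, alpha=0.05, lr=0.1, lr_dual=0.5, steps=2000):
    """Enforce |E[h(x)|s=1] - E[h(x)|s=0]| <= alpha via a dual variable
    lam that prices constraint violations during gradient training."""
    w = np.zeros(X.shape[1])
    lam = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w)
        gap = p[s == 1].mean() - p[s == 0].mean()
        g_loss = X.T @ (p - y) / len(y)          # logistic-loss gradient
        dp = p * (1 - p)
        g_gap = (X[s == 1] * dp[s == 1, None]).mean(0) \
              - (X[s == 0] * dp[s == 0, None]).mean(0)
        # Primal descent on loss + lam * |gap|, dual ascent on lam.
        w -= lr * (g_loss + lam * np.sign(gap) * g_gap)
        lam = max(0.0, lam + lr_dual * (abs(gap) - alpha))
    return w, lam
```
The dual update raises lam only while the parity gap exceeds alpha, so the fairness penalty self-tunes instead of being a hand-picked constant, which is the practical appeal of the dual approach.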