Fair Representation Learning using Interpolation Enabled Disentanglement
- URL: http://arxiv.org/abs/2108.00295v1
- Date: Sat, 31 Jul 2021 17:32:12 GMT
- Title: Fair Representation Learning using Interpolation Enabled Disentanglement
- Authors: Akshita Jha, Bhanukiran Vinzamuri, Chandan K. Reddy
- Abstract summary: We propose a novel method to address two key issues: (a) Can we simultaneously learn fair disentangled representations while ensuring the utility of the learned representation for downstream tasks, and (b) Can we provide theoretical insights into when the proposed approach will be both fair and accurate.
To address the former, we propose the method FRIED, Fair Representation learning using Interpolation Enabled Disentanglement.
- Score: 9.043741281011304
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the growing interest in the machine learning community in solving
real-world problems, it has become crucial to uncover the hidden reasoning
behind the decisions of black-box models by focusing on fairness and auditing
their predictions. In this paper, we propose a novel method to
address two key issues: (a) Can we simultaneously learn fair disentangled
representations while ensuring the utility of the learned representation for
downstream tasks, and (b) Can we provide theoretical insights into when the
proposed approach will be both fair and accurate. To address the former, we
propose the method FRIED, Fair Representation learning using Interpolation
Enabled Disentanglement. In our architecture, we impose a critic-based
adversarial framework that constrains interpolated points in the latent space
to be more realistic. This helps capture the data manifold effectively and
enhances the utility of the learned representation for downstream prediction
tasks. We address the latter question by developing a theory on
fairness-accuracy trade-offs using classifier-based conditional mutual
information estimation. We demonstrate the effectiveness of FRIED on datasets
of different modalities: tabular, text, and image. We observe that
the representations learned by FRIED are overall fairer in comparison to
existing baselines and also accurate for downstream prediction tasks.
Additionally, we evaluate FRIED on a real-world healthcare claims dataset,
where we conduct an expert-aided model auditing study that provides useful
insights into opioid addiction patterns.
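The architectural idea in the abstract, using a critic to push latent interpolations toward the data manifold, follows the spirit of adversarially constrained autoencoder interpolation. Below is a minimal sketch of that interpolation-critic training step under assumed module names and hyperparameters; it is an illustration, not the authors' FRIED implementation, and the fairness and conditional-mutual-information components of the paper are not shown.

```python
# Hypothetical sketch of a critic-based adversarial interpolation objective
# (ACAI-style). All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=64, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=16, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))
    def forward(self, z):
        return self.net(z)

class Critic(nn.Module):
    """Regresses the interpolation coefficient alpha from a decoded sample."""
    def __init__(self, in_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, x):
        return torch.sigmoid(self.net(x)).squeeze(-1)

def training_step(x, enc, dec, critic, lam=0.5):
    # Encode and reconstruct the batch.
    z = enc(x)
    x_hat = dec(z)
    recon_loss = ((x - x_hat) ** 2).mean()

    # Interpolate latent codes of shuffled pairs with random alpha in [0, 0.5].
    alpha = 0.5 * torch.rand(x.size(0), 1)
    z_mix = alpha * z + (1 - alpha) * z[torch.randperm(x.size(0))]
    x_mix = dec(z_mix)

    # Autoencoder tries to make interpolants indistinguishable from real data
    # (critic should predict alpha = 0); the critic tries to recover the true
    # alpha from the detached interpolants. The full ACAI objective adds a
    # regularization term on real inputs, omitted here for brevity.
    ae_loss = recon_loss + lam * (critic(x_mix) ** 2).mean()
    critic_loss = ((critic(x_mix.detach()) - alpha.squeeze(-1)) ** 2).mean()
    return ae_loss, critic_loss
```

In this sketch the two losses would be minimized by separate optimizers for the autoencoder and the critic; the adversarial pressure on interpolated points is what makes the latent space traversals stay close to the data manifold, which is the property the abstract credits for improved downstream utility.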
Related papers
- Rethinking Fair Representation Learning for Performance-Sensitive Tasks [19.40265690963578]
We use causal reasoning to define and formalise different sources of dataset bias.
We run experiments across a range of medical modalities to examine the performance of fair representation learning under distribution shifts.
arXiv Detail & Related papers (2024-10-05T11:01:16Z) - Debiasing Graph Representation Learning based on Information Bottleneck [18.35405511009332]
We present the design and implementation of GRAFair, a new framework based on a variational graph auto-encoder.
The crux of GRAFair is the Conditional Fairness Bottleneck, where the objective is to capture the trade-off between the utility of representations and sensitive information of interest.
Experiments on various real-world datasets demonstrate the effectiveness of our proposed method in terms of fairness, utility, robustness, and stability.
arXiv Detail & Related papers (2024-09-02T16:45:23Z) - Disentangled Representation Learning with Transmitted Information Bottleneck [57.22757813140418]
We present DisTIB (Transmitted Information Bottleneck for Disentangled representation learning), a novel objective that navigates the balance between information compression and preservation.
arXiv Detail & Related papers (2023-11-03T03:18:40Z) - An Operational Perspective to Fairness Interventions: Where and How to
Intervene [9.833760837977222]
We present a holistic framework for evaluating and contextualizing fairness interventions.
We demonstrate our framework with a case study on predictive parity.
We find predictive parity is difficult to achieve without using group data.
arXiv Detail & Related papers (2023-02-03T07:04:33Z) - Disentangled Representation with Causal Constraints for Counterfactual
Fairness [25.114619307838602]
This work theoretically demonstrates that using structured representations enables downstream predictive models to achieve counterfactual fairness.
We propose the Counterfactual Fairness Variational AutoEncoder (CF-VAE) to obtain structured representations with respect to domain knowledge.
The experimental results show that the proposed method achieves better fairness and accuracy performance than the benchmark fairness methods.
arXiv Detail & Related papers (2022-08-19T04:47:58Z) - Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorous theoretical guarantees, our approach enables the IB to capture the intrinsic correlation between observations and semantic labels.
arXiv Detail & Related papers (2022-06-20T03:09:46Z) - Conditional Supervised Contrastive Learning for Fair Text Classification [59.813422435604025]
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
arXiv Detail & Related papers (2022-05-23T17:38:30Z) - Learning Bias-Invariant Representation by Cross-Sample Mutual
Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2021-08-11T21:17:02Z) - Which Mutual-Information Representation Learning Objectives are
Sufficient for Control? [80.2534918595143]
Mutual information provides an appealing formalism for learning representations of data.
This paper formalizes the sufficiency of a state representation for learning and representing the optimal policy.
Surprisingly, we find that two of these objectives can yield insufficient representations given mild and common assumptions on the structure of the MDP.
arXiv Detail & Related papers (2021-06-14T10:12:34Z) - Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.