Fairness by Learning Orthogonal Disentangled Representations
- URL: http://arxiv.org/abs/2003.05707v3
- Date: Sat, 4 Jul 2020 09:04:10 GMT
- Title: Fairness by Learning Orthogonal Disentangled Representations
- Authors: Mhd Hasan Sarhan, Nassir Navab, Abouzar Eslami, Shadi Albarqouni
- Abstract summary: We propose a novel disentanglement approach to the invariant representation problem.
We enforce the meaningful representation to be agnostic to sensitive information via entropy maximization.
The proposed approach is evaluated on five publicly available datasets.
- Score: 50.82638766862974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning powerful discriminative representations is a crucial step for
machine learning systems. Introducing invariance against arbitrary nuisance or
sensitive attributes while performing well on specific tasks is an important
problem in representation learning. It is mostly approached by purging the
sensitive information from learned representations. In this paper, we propose a
novel disentanglement approach to the invariant representation problem. We
disentangle the meaningful and sensitive representations by enforcing
orthogonality constraints as a proxy for independence. We explicitly enforce
the meaningful representation to be agnostic to sensitive information via
entropy maximization. The proposed approach is evaluated on five publicly
available datasets and compared with state-of-the-art methods for learning
fairness and invariance, achieving state-of-the-art performance on three
datasets and comparable performance on the rest. Further, we perform an
ablation study to evaluate the effect of each component.
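The two constraints described in the abstract (orthogonality between the meaningful and sensitive representations as a proxy for independence, and entropy maximization to make the meaningful representation uninformative about the sensitive attribute) can be sketched as two loss terms. This is a minimal NumPy illustration under assumed shapes and function names, not the authors' implementation:

```python
import numpy as np

def orthogonality_loss(z_t, z_s):
    """Penalize alignment between the meaningful representation z_t and the
    sensitive representation z_s (each of shape (batch, dim)).
    The mean squared cosine similarity is zero exactly when the paired
    vectors are orthogonal -- a tractable proxy for independence."""
    z_t = z_t / np.linalg.norm(z_t, axis=1, keepdims=True)
    z_s = z_s / np.linalg.norm(z_s, axis=1, keepdims=True)
    return np.mean(np.sum(z_t * z_s, axis=1) ** 2)

def entropy_maximization_loss(probs):
    """Negative mean entropy of a sensitive-attribute classifier's predicted
    probabilities (shape (batch, n_classes)) computed from z_t.
    Minimizing this term pushes the predictions toward uniform, i.e. z_t
    carries no usable information about the sensitive attribute."""
    eps = 1e-12  # numerical guard against log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return -np.mean(entropy)
```

A sanity check of the intended behavior: orthogonal pairs give zero orthogonality loss, and uniform sensitive-attribute predictions reach the entropy term's minimum of -log(n_classes).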
Related papers
- Debiasing Graph Representation Learning based on Information Bottleneck [18.35405511009332]
We present the design and implementation of GRAFair, a new framework based on a variational graph auto-encoder.
The crux of GRAFair is the Conditional Fairness Bottleneck, where the objective is to capture the trade-off between the utility of representations and sensitive information of interest.
Experiments on various real-world datasets demonstrate the effectiveness of our proposed method in terms of fairness, utility, robustness, and stability.
arXiv Detail & Related papers (2024-09-02T16:45:23Z)
- Provable Optimization for Adversarial Fair Self-supervised Contrastive Learning [49.417414031031264]
This paper studies learning fair encoders in a self-supervised learning setting.
All data are unlabeled and only a small portion of them are annotated with sensitive attributes.
arXiv Detail & Related papers (2024-06-09T08:11:12Z)
- Efficient Information Extraction in Few-Shot Relation Classification through Contrastive Representation Learning [23.992247765851204]
We introduce a novel approach to enhance information extraction by combining multiple sentence representations with contrastive learning.
Our method employs contrastive learning to extract complementary discriminative information from these individual representations.
arXiv Detail & Related papers (2024-03-25T08:36:06Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorous theoretical guarantees, our approach enables the IB to capture the intrinsic correlation between observations and semantic labels.
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- Learning Fair Representation via Distributional Contrastive Disentanglement [9.577369164287813]
Learning fair representations is crucial for achieving fairness and debiasing sensitive information.
We propose a new approach, learning FAir Representation via distributional CONtrastive Variational AutoEncoder (FarconVAE).
We show superior performance on fairness, pretrained model debiasing, and domain generalization tasks from various modalities.
arXiv Detail & Related papers (2022-06-17T12:58:58Z)
- Adversarial Stacked Auto-Encoders for Fair Representation Learning [1.061960673667643]
We propose a new fair representation learning approach that leverages different levels of representation of data to tighten the fairness bounds of the learned representation.
Our results show that stacking different auto-encoders and enforcing fairness at different latent spaces results in improved fairness compared to other existing approaches.
arXiv Detail & Related papers (2021-07-27T13:49:18Z)
- Conditional Contrastive Learning: Removing Undesirable Information in Self-Supervised Representations [108.29288034509305]
We develop conditional contrastive learning to remove undesirable information in self-supervised representations.
We demonstrate empirically that our methods can successfully learn self-supervised representations for downstream tasks.
arXiv Detail & Related papers (2021-06-05T10:51:26Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.