Fair Representations by Compression
- URL: http://arxiv.org/abs/2105.14044v1
- Date: Fri, 28 May 2021 18:22:07 GMT
- Title: Fair Representations by Compression
- Authors: Xavier Gitiaux, Huzefa Rangwala
- Abstract summary: We show that a parsimonious representation should filter out information related to sensitive attributes if they are provided directly to the decoder.
Explicit control of the entropy of the representation bit stream allows the user to move smoothly and simultaneously along both rate-distortion and rate-fairness curves.
- Score: 19.26754855778295
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Organizations that collect and sell data face increasing scrutiny for the
discriminatory use of data. We propose a novel unsupervised approach to
transform data into a compressed binary representation independent of sensitive
attributes. We show that in an information bottleneck framework, a parsimonious
representation should filter out information related to sensitive attributes if
they are provided directly to the decoder. Empirical results show that the
proposed method, FBC, achieves a state-of-the-art accuracy-fairness
trade-off. Explicit control of the entropy of the representation bit stream
allows the user to move smoothly and simultaneously along both the rate-distortion
and rate-fairness curves.
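The rate-distortion-fairness idea above can be illustrated with a toy objective: penalize the empirical entropy of the binary code (the "rate") alongside reconstruction error, while the decoder receives the sensitive attribute directly, so a parsimonious code has no incentive to duplicate it. This is only a minimal sketch; `bit_entropy`, `fbc_style_loss`, and the weight `lam` are illustrative names, not the paper's actual implementation.

```python
import math

def bit_entropy(codes):
    """Empirical entropy in bits of each code position across a batch, summed.

    `codes` is a list of equal-length lists of 0/1 bits. Penalizing this
    'rate' term pushes the encoder toward a parsimonious representation.
    """
    n = len(codes)
    width = len(codes[0])
    total = 0.0
    for j in range(width):
        p = sum(c[j] for c in codes) / n  # fraction of 1s in bit position j
        for q in (p, 1 - p):
            if q > 0:
                total -= q * math.log2(q)
    return total

def fbc_style_loss(codes, reconstructions, targets, lam):
    """Toy rate-distortion objective: distortion + lam * rate.

    In the FBC setting the decoder also sees the sensitive attribute
    directly, so minimizing the rate term discourages the code from
    carrying that information redundantly.
    """
    distortion = sum((r - t) ** 2 for r, t in zip(reconstructions, targets)) / len(targets)
    return distortion + lam * bit_entropy(codes)
```

For example, a batch of codes where every bit is constant has zero rate, while a batch whose two bits each split 50/50 contributes two bits of entropy; `lam` trades that rate against reconstruction distortion.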
Related papers
- Provable Optimization for Adversarial Fair Self-supervised Contrastive Learning [49.417414031031264]
This paper studies learning fair encoders in a self-supervised learning setting.
All data are unlabeled and only a small portion of them are annotated with sensitive attributes.
arXiv Detail & Related papers (2024-06-09T08:11:12Z)
- Closed-Loop Unsupervised Representation Disentanglement with $\beta$-VAE Distillation and Diffusion Probabilistic Feedback [45.68054456449699]
Representation disentanglement may help AI fundamentally understand the real world and thus benefit both discrimination and generation tasks.
We propose a Closed-Loop Disentanglement approach dubbed CL-Dis.
Experiments demonstrate the superiority of CL-Dis on applications like real image manipulation and visual analysis.
arXiv Detail & Related papers (2024-02-04T05:03:22Z)
- Disentangled Representation Learning with Transmitted Information Bottleneck [57.22757813140418]
We present DisTIB (Transmitted Information Bottleneck for Disentangled representation learning), a novel objective that navigates the balance between information compression and preservation.
arXiv Detail & Related papers (2023-11-03T03:18:40Z)
- FedCiR: Client-Invariant Representation Learning for Federated Non-IID Features [15.555538379806135]
Federated learning (FL) is a distributed learning paradigm that maximizes the potential of data-driven models for edge devices without sharing their raw data.
We propose FedCiR, a client-invariant representation learning framework that enables clients to extract informative and client-invariant features.
arXiv Detail & Related papers (2023-08-30T06:36:32Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- Disentangling representations in Restricted Boltzmann Machines without adversaries [0.0]
We propose a simple, effective way of disentangling representations without any need to train adversarial discriminators.
We show how our framework allows for computing the cost, in terms of the log-likelihood of the data, associated with disentangling the representations.
arXiv Detail & Related papers (2022-06-23T10:24:20Z)
- Semi-FairVAE: Semi-supervised Fair Representation Learning with Adversarial Variational Autoencoder [92.67156911466397]
We propose a semi-supervised fair representation learning approach based on an adversarial variational autoencoder.
We use a bias-aware model to capture inherent bias information on sensitive attributes.
We also use a bias-free model to learn debiased fair representations by using adversarial learning to remove bias information from them.
arXiv Detail & Related papers (2022-04-01T15:57:47Z)
- Semi-supervised Long-tailed Recognition using Alternate Sampling [95.93760490301395]
The main challenges in long-tailed recognition come from the imbalanced data distribution and the scarcity of samples in its tail classes.
We propose a new recognition setting, namely semi-supervised long-tailed recognition.
We demonstrate significant accuracy improvements over other competitive methods on two datasets.
arXiv Detail & Related papers (2021-05-01T00:43:38Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation (NDA) samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
- Learning Smooth and Fair Representations [24.305894478899948]
This paper explores the ability to preemptively remove the correlations between features and sensitive attributes by mapping features to a fair representation space.
Empirically, we find that smoothing the representation distribution provides generalization guarantees for fairness certificates.
We also observe that smoothing the representation distribution does not degrade the accuracy of downstream tasks compared to state-of-the-art methods in fair representation learning.
arXiv Detail & Related papers (2020-06-15T21:51:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.