Fair Attribute Classification through Latent Space De-biasing
- URL: http://arxiv.org/abs/2012.01469v3
- Date: Fri, 2 Apr 2021 17:57:47 GMT
- Title: Fair Attribute Classification through Latent Space De-biasing
- Authors: Vikram V. Ramaswamy, Sunnie S. Y. Kim and Olga Russakovsky
- Abstract summary: We introduce a method for training accurate target classifiers while mitigating biases that stem from correlations between target labels and protected attributes.
We use GANs to generate realistic-looking images, and perturb these images in the underlying latent space to generate training data that is balanced for each protected attribute.
We conduct a thorough evaluation across multiple target labels and protected attributes in the CelebA dataset, and provide an in-depth analysis and comparison to existing literature in the space.
- Score: 17.647146032798005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness in visual recognition is becoming a prominent and critical topic of
discussion as recognition systems are deployed at scale in the real world.
Models trained from data in which target labels are correlated with protected
attributes (e.g., gender, race) are known to learn and exploit those
correlations. In this work, we introduce a method for training accurate target
classifiers while mitigating biases that stem from these correlations. We use
GANs to generate realistic-looking images, and perturb these images in the
underlying latent space to generate training data that is balanced for each
protected attribute. We augment the original dataset with this perturbed
generated data, and empirically demonstrate that target classifiers trained on
the augmented dataset exhibit a number of both quantitative and qualitative
benefits. We conduct a thorough evaluation across multiple target labels and
protected attributes in the CelebA dataset, and provide an in-depth analysis
and comparison to existing literature in the space.
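The latent-space perturbation at the core of the method can be sketched in a few lines. This is a minimal NumPy illustration, assuming the protected attribute is approximately linearly separable in the GAN's latent space by a hyperplane (w, b) (learned in practice from attribute-labelled latent codes); the function and variable names are illustrative, not from the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512

# Hypothetical hyperplane (w, b) separating the protected attribute in
# latent space; in practice it would be fit (e.g., with a linear SVM)
# on latent codes labelled with the protected attribute.
w = rng.normal(size=latent_dim)
w /= np.linalg.norm(w)
b = 0.1

def perturb_latent(z, w, b):
    """Reflect latent code z across the hyperplane w^T z + b = 0,
    flipping the protected attribute while staying close to the
    original code (so the target label is likely preserved)."""
    dist = z @ w + b           # signed distance to the hyperplane
    return z - 2.0 * dist * w  # mirror image across the hyperplane

z = rng.normal(size=latent_dim)
z_prime = perturb_latent(z, w, b)

# The perturbed code lies on the opposite side of the hyperplane,
# at the same distance from it.
assert np.sign(z @ w + b) != np.sign(z_prime @ w + b)
assert np.isclose(abs(z @ w + b), abs(z_prime @ w + b))
```

Decoding both z and z_prime through the GAN then yields image pairs that differ in the protected attribute but agree elsewhere, which is what makes the augmented training set balanced per protected attribute.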
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z)
- Mitigating Bias Using Model-Agnostic Data Attribution [2.9868610316099335]
Mitigating bias in machine learning models is a critical endeavor for ensuring fairness and equity.
We propose a novel approach to address bias by leveraging pixel image attributions to identify and regularize regions of images containing bias attributes.
arXiv Detail & Related papers (2024-05-08T13:00:56Z)
- Leveraging vision-language models for fair facial attribute classification [19.93324644519412]
A general-purpose vision-language model (VLM) is a rich knowledge source for common sensitive attributes.
We analyze the correspondence between the VLM-predicted and human-defined sensitive attribute distributions.
Experiments on multiple benchmark facial attribute classification datasets show fairness gains of the model over existing unsupervised baselines.
arXiv Detail & Related papers (2024-03-15T18:37:15Z)
- Memory Consistency Guided Divide-and-Conquer Learning for Generalized Category Discovery [56.172872410834664]
Generalized category discovery (GCD) aims at addressing a more realistic and challenging setting of semi-supervised learning.
We propose a Memory Consistency guided Divide-and-conquer Learning framework (MCDL).
Our method outperforms state-of-the-art models by a large margin on both seen and unseen classes in generic image recognition.
arXiv Detail & Related papers (2024-01-24T09:39:45Z)
- A Self Supervised StyleGAN for Image Annotation and Classification with Extremely Limited Labels [35.43549147657739]
We propose SS-StyleGAN, a self-supervised approach for image annotation and classification suitable for extremely small annotated datasets.
We show that the proposed method attains strong classification results using small labeled datasets of sizes 50 and even 10.
arXiv Detail & Related papers (2023-12-26T09:46:50Z)
- Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with unrelated attributes in downstream tasks.
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z)
- Data AUDIT: Identifying Attribute Utility- and Detectability-Induced Bias in Task Models [8.420252576694583]
We present a first technique for the rigorous, quantitative screening of medical image datasets.
Our method decomposes the risks associated with dataset attributes in terms of their detectability and utility.
We show that our screening method reliably identifies nearly imperceptible bias-inducing artifacts.
arXiv Detail & Related papers (2023-04-06T16:50:15Z)
- Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations [50.33975968859988]
Contrastive learning is a highly effective method that uses unlabeled data to produce representations that are linearly separable for downstream classification tasks.
Recent works have shown that contrastive representations are not only useful when data come from a single domain, but are also effective for transferring across domains.
arXiv Detail & Related papers (2022-04-06T09:10:23Z)
- Combining Feature and Instance Attribution to Detect Artifacts [62.63504976810927]
We propose methods to facilitate identification of training data artifacts.
We show that this proposed training-feature attribution approach can be used to uncover artifacts in training data.
We execute a small user study to evaluate whether these methods are useful to NLP researchers in practice.
arXiv Detail & Related papers (2021-07-01T09:26:13Z)
- SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
arXiv Detail & Related papers (2021-06-29T08:08:33Z)
- Matched sample selection with GANs for mitigating attribute confounding [30.488267816304177]
We propose a matching approach that selects a subset of images from the full dataset with balanced attribute distributions across protected attributes.
Our matching approach first projects real images onto a generative network's latent space in a manner that preserves semantic attributes.
It then finds adversarial matches in this latent space across a chosen protected attribute, yielding a dataset where semantic and perceptual attributes are balanced across the protected attribute.
arXiv Detail & Related papers (2021-03-24T19:18:44Z)
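The matching idea in the last entry above can be sketched in NumPy. This is a toy simplification, assuming latent codes and binary protected-attribute labels are given; it uses plain nearest-neighbour pairing in latent space rather than the paper's adversarial matching, and the function names are illustrative:

```python
import numpy as np

def match_across_attribute(latents, protected):
    """Pair each latent code with protected attribute 0 to its nearest
    (Euclidean) latent code with attribute 1. The resulting paired subset
    is balanced across the protected attribute by construction."""
    g0 = latents[protected == 0]
    g1 = latents[protected == 1]
    # Pairwise squared distances between the two groups: shape (n0, n1).
    d2 = ((g0[:, None, :] - g1[None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)   # closest attribute-1 code per row of g0
    return g0, g1[nearest]

rng = np.random.default_rng(0)
latents = rng.normal(size=(10, 4))
protected = np.array([0, 1] * 5)
matched_a, matched_b = match_across_attribute(latents, protected)
assert matched_a.shape == matched_b.shape == (5, 4)
```

A faithful implementation would match without replacement and on semantically meaningful latent directions, but the sketch shows why matching yields a subset where other attributes are balanced across the protected one.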
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.