Contrastive Examples for Addressing the Tyranny of the Majority
- URL: http://arxiv.org/abs/2004.06524v1
- Date: Tue, 14 Apr 2020 14:06:44 GMT
- Title: Contrastive Examples for Addressing the Tyranny of the Majority
- Authors: Viktoriia Sharmanska, Lisa Anne Hendricks, Trevor Darrell, Novi
Quadrianto
- Abstract summary: We propose to create a balanced training dataset, consisting of the original dataset plus new data points in which the group memberships are intervened.
We show that current generative adversarial networks are a powerful tool for learning these data points, called contrastive examples.
- Score: 83.93825214500131
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer vision algorithms, e.g. for face recognition, favour groups of
individuals that are better represented in the training data. This happens
because of the generalization that classifiers have to make: it is simpler to
fit the majority groups, since this fit matters more for the overall error. We
propose to create a balanced training dataset, consisting of the original
dataset plus new data points in which the group memberships are intervened on,
so that minorities become majorities and vice versa. We show that current
generative adversarial networks are a powerful tool for learning these data
points, called contrastive examples. We experiment with the equalized odds bias
measure on tabular data as well as image data (the CelebA and Diversity in
Faces datasets). Contrastive examples allow us to expose correlations between
group membership and other seemingly neutral features. Whenever a causal graph
is available, we can relate these contrastive examples to counterfactuals.
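The abstract combines two concrete ingredients: a training set balanced by adding contrastive examples in which group membership is intervened on, and the equalized odds measure used to quantify bias. Below is a minimal sketch (not the authors' code) of how these pieces might fit together for tabular data with a binary protected attribute; `generate_contrastive` is a hypothetical stand-in for the GAN-based generator described in the paper, and the TPR/FPR gap shown is one common way to score equalized odds, not necessarily the authors' exact formulation.

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest difference in positive-prediction rates between the two groups,
    conditioned on the true label (TPR gap for y=1, FPR gap for y=0)."""
    gaps = []
    for y in (0, 1):
        mask = (y_true == y)
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)

def balance_with_contrastive(X, y, group, generate_contrastive):
    """Append contrastive examples in which the group membership is flipped,
    so that minorities become majorities and vice versa; labels are reused."""
    X_contrastive = generate_contrastive(X, group)  # hypothetical GAN-based generator
    X_bal = np.concatenate([X, X_contrastive])
    y_bal = np.concatenate([y, y])
    g_bal = np.concatenate([group, 1 - group])
    return X_bal, y_bal, g_bal
```

For image data such as CelebA, the generator would be an image-to-image GAN that changes the protected attribute while leaving the other features of the example intact, as described in the abstract.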
Related papers
- Dataset Representativeness and Downstream Task Fairness [24.570493924073524]
We demonstrate that there is a natural tension between dataset representativeness and group-fairness of classifiers trained on that dataset.
We also find that over-sampling underrepresented groups can result in classifiers that exhibit greater bias towards those groups.
arXiv Detail & Related papers (2024-06-28T18:11:16Z)
- Mitigating Algorithmic Bias on Facial Expression Recognition [0.0]
Biased datasets are ubiquitous and present a challenge for machine learning.
The problem of biased datasets is especially sensitive when dealing with minority groups of people.
This work explores one way to mitigate bias using a debiasing variational autoencoder with experiments on facial expression recognition.
arXiv Detail & Related papers (2023-12-23T17:41:30Z)
- Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness [15.210232622716129]
Data-driven predictive solutions predominant in commercial applications tend to suffer from biases and stereotypes.
Data augmentation reduces gender bias by adding counterfactual examples to the training dataset.
We show that some of the examples in the augmented dataset can be unimportant, or even harmful, for fairness.
arXiv Detail & Related papers (2022-11-20T22:42:30Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Towards Group Robustness in the presence of Partial Group Labels [61.33713547766866]
Spurious correlations between input samples and the target labels wrongly direct the neural network predictions.
We propose an algorithm that optimizes for the worst-off group assignments from a constraint set (a minimal sketch of a worst-group objective appears after this list).
We show improvements in the minority group's performance while preserving overall aggregate accuracy across groups.
arXiv Detail & Related papers (2022-01-10T22:04:48Z)
- Examining and Combating Spurious Features under Distribution Shift [94.31956965507085]
We define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics.
We prove that even when there is only bias in the input distribution, models can still pick up spurious features from their training data.
Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations.
arXiv Detail & Related papers (2021-06-14T05:39:09Z)
- Enhancing Facial Data Diversity with Style-based Face Aging [59.984134070735934]
In particular, face datasets are typically biased in terms of attributes such as gender, age, and race.
We propose a novel, generative style-based architecture for data augmentation that captures fine-grained aging patterns.
We show that the proposed method outperforms state-of-the-art algorithms for age transfer.
arXiv Detail & Related papers (2020-06-06T21:53:44Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
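Several of the entries above revolve around optimizing for the worst-off group (group DRO and related methods). As a point of reference, here is a minimal sketch of such a worst-group objective in Python; the function and variable names are illustrative and not taken from any of the listed papers.

```python
import numpy as np

def worst_group_loss(per_example_loss, group_ids):
    """Group DRO-style objective: average the loss within each group and
    return the largest (worst-off) group average, which is then minimized."""
    group_losses = [per_example_loss[group_ids == g].mean()
                    for g in np.unique(group_ids)]
    return max(group_losses)
```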
This list is automatically generated from the titles and abstracts of the papers on this site.