Right for the Right Latent Factors: Debiasing Generative Models via
Disentanglement
- URL: http://arxiv.org/abs/2202.00391v1
- Date: Tue, 1 Feb 2022 13:16:18 GMT
- Title: Right for the Right Latent Factors: Debiasing Generative Models via
Disentanglement
- Authors: Xiaoting Shao, Karl Stelzner, Kristian Kersting
- Abstract summary: A key assumption of most statistical machine learning methods is that they have access to independent samples from the distribution of data they encounter at test time.
In particular, machine learning models have been shown to exhibit Clever-Hans-like behaviour, meaning that spurious correlations in the training set are inadvertently learnt.
We propose to debias generative models by disentangling their internal representations, which is achieved via human feedback.
- Score: 20.41752850243945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A key assumption of most statistical machine learning methods is that they
have access to independent samples from the distribution of data they encounter
at test time. As such, these methods often perform poorly in the face of biased
data, which breaks this assumption. In particular, machine learning models have
been shown to exhibit Clever-Hans-like behaviour, meaning that spurious
correlations in the training set are inadvertently learnt. A number of methods
have been proposed to revise deep classifiers so that they learn the right
correlations.
However, generative models have been overlooked so far. We observe that
generative models are also prone to Clever-Hans-like behaviour. To counteract
this issue, we propose to debias generative models by disentangling their
internal representations, which is achieved via human feedback. Our experiments
show that this is effective at removing bias even when human feedback covers
only a small fraction of the desired distribution. In addition, we achieve
strong disentanglement results in a quantitative comparison with recent
methods.
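The abstract describes the approach only at a high level, so the following is a minimal sketch of one plausible reading, not the paper's actual method: a beta-VAE-style generative model whose KL weight encourages disentanglement, plus a small supervised term that aligns one latent dimension with human-provided factor annotations on a few examples. The names `SmallVAE`, `loss_with_feedback` and the weights `beta`/`gamma` are illustrative assumptions.

```python
# Hedged sketch (assumed formulation, not the paper's code): a VAE nudged toward
# disentanglement, with human feedback supplied as factor labels for a few samples.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallVAE(nn.Module):
    def __init__(self, x_dim=64, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation trick
        return self.dec(z), mu, logvar

def loss_with_feedback(model, x, feedback_idx=None, feedback_vals=None, beta=4.0, gamma=10.0):
    """ELBO with a beta-weighted KL term (encourages disentanglement) plus a
    supervised term tying latent dimension 0 to human-annotated factor values
    for the (typically few) examples that carry feedback."""
    x_hat, mu, logvar = model(x)
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon + beta * kl
    if feedback_idx is not None:
        # Hypothetical feedback format: indices of annotated samples and the
        # factor value each should take along latent dimension 0.
        loss = loss + gamma * F.mse_loss(mu[feedback_idx, 0], feedback_vals)
    return loss

if __name__ == "__main__":
    model = SmallVAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(32, 64)                  # stand-in for biased training data
    fb_idx = torch.tensor([0, 1, 2])         # human feedback on ~10% of the batch
    fb_val = torch.tensor([1.0, -1.0, 0.0])  # desired factor values for those samples
    loss = loss_with_feedback(model, x, fb_idx, fb_val)
    loss.backward()
    opt.step()
    print(float(loss))
```

The last lines run one gradient step on synthetic data. The design point this sketch illustrates is that the feedback term touches only the few annotated examples, matching the abstract's claim that bias can be reduced even when human feedback covers only a small fraction of the desired distribution.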
Related papers
- Debiasing Multimodal Models via Causal Information Minimization [65.23982806840182]
We study bias arising from confounders in a causal graph for multimodal data.
Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data.
We use these features as confounder representations and use them via methods motivated by causal theory to remove bias from models.
arXiv Detail & Related papers (2023-11-28T16:46:14Z)
- Stubborn Lexical Bias in Data and Models [50.79738900885665]
We use a new statistical method to examine whether spurious patterns in data appear in models trained on the data.
We apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations.
Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models.
arXiv Detail & Related papers (2023-06-03T20:12:27Z)
- An Exploration of How Training Set Composition Bias in Machine Learning Affects Identifying Rare Objects [0.0]
It is common to up-weight the examples of the rare class to ensure it isn't ignored.
It is also a frequent practice to train on restricted data where the balance of source types is closer to equal.
Here we show that these practices can bias the model toward over-assigning sources to the rare class.
arXiv Detail & Related papers (2022-07-07T10:26:55Z)
- Don't Discard All the Biased Instances: Investigating a Core Assumption in Dataset Bias Mitigation Techniques [19.252319300590656]
Existing techniques for mitigating dataset bias often leverage a biased model to identify biased instances.
The role of these biased instances is then reduced during the training of the main model to enhance its robustness to out-of-distribution data.
In this paper, we show that this assumption does not hold in general.
arXiv Detail & Related papers (2021-09-01T10:25:46Z)
- A Generative Approach for Mitigating Structural Biases in Natural Language Inference [24.44419010439227]
In this work, we reformulate the NLI task as a generative task, where a model is conditioned on the biased subset of the input and the label.
We show that this approach is highly robust to large amounts of bias.
We find that such generative models are difficult to train and generally perform worse than discriminative baselines.
arXiv Detail & Related papers (2021-08-31T17:59:45Z)
- Why do classifier accuracies show linear trends under distribution shift? [58.40438263312526]
Accuracies of models on one data distribution are approximately linear functions of the accuracies on another distribution.
We assume the probability that two models agree in their predictions is higher than what we can infer from their accuracy levels alone.
We show that a linear trend must occur when evaluating models on two distributions unless the size of the distribution shift is large.
arXiv Detail & Related papers (2020-12-31T07:24:30Z)
- Learning from others' mistakes: Avoiding dataset biases without modeling them [111.17078939377313]
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended task.
Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available.
We show a method for training models that learn to ignore these problematic correlations.
arXiv Detail & Related papers (2020-12-02T16:10:54Z)
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set, which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)