A Generative Approach for Mitigating Structural Biases in Natural
Language Inference
- URL: http://arxiv.org/abs/2108.14006v1
- Date: Tue, 31 Aug 2021 17:59:45 GMT
- Authors: Dimion Asael, Zachary Ziegler, Yonatan Belinkov
- Abstract summary: In this work, we reformulate the NLI task as a generative task, where a model is conditioned on the biased subset of the input and the label.
We show that this approach is highly robust to large amounts of bias.
We find that generative models are difficult to train and they generally perform worse than discriminative baselines.
- Score: 24.44419010439227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many natural language inference (NLI) datasets contain biases that allow
models to perform well by only using a biased subset of the input, without
considering the remaining features. For instance, models are able to make a
classification decision by only using the hypothesis, without learning the true
relationship between it and the premise. These structural biases lead
discriminative models to learn unintended superficial features and to
generalize poorly out of the training distribution. In this work, we
reformulate the NLI task as a generative task, where a model is conditioned on
the biased subset of the input and the label and generates the remaining subset
of the input. We show that by imposing a uniform prior, we obtain a provably
unbiased model. Through synthetic experiments, we find that this approach is
highly robust to large amounts of bias. We then demonstrate empirically on two
types of natural bias that this approach leads to fully unbiased models in
practice. However, we find that generative models are difficult to train and
they generally perform worse than discriminative baselines. We highlight the
difficulty of the generative modeling task in the context of NLI as a cause for
this worse performance. Finally, by fine-tuning the generative model with a
discriminative objective, we reduce the performance gap between the generative
model and the discriminative baseline, while allowing for a small amount of
bias.
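The classification rule behind the abstract's reformulation can be sketched concretely. Under Bayes' rule, p(label | input) is proportional to p(generated subset | biased subset, label) times p(label); with the uniform prior the paper imposes, the prior term cancels and the label posterior is just a softmax over per-label generative log-likelihoods. The sketch below assumes those log-likelihoods have already been produced by some conditional generative model; the scores and label set are hypothetical illustrations, not the paper's actual numbers.

```python
import math

# Hypothetical NLI label set (standard three-way labels).
LABELS = ("entailment", "neutral", "contradiction")

def posterior_uniform_prior(log_likelihoods):
    """Bayes' rule with a uniform prior p(y) = 1/|Y|.

    p(y | x) is proportional to p(x_generated | x_biased, y) * p(y); since
    p(y) is constant across labels, it cancels, and a numerically stable
    softmax over the log-likelihoods gives the posterior directly.
    """
    m = max(log_likelihoods.values())  # subtract max for numerical stability
    unnorm = {y: math.exp(ll - m) for y, ll in log_likelihoods.items()}
    z = sum(unnorm.values())
    return {y: v / z for y, v in unnorm.items()}

# Hypothetical per-label generative scores log p(premise | hypothesis, y)
# for a single example; in the paper these come from a trained generator.
scores = {"entailment": -12.3, "neutral": -15.1, "contradiction": -16.0}
posterior = posterior_uniform_prior(scores)
prediction = max(posterior, key=posterior.get)
```

Because the prior is uniform by construction, no label can be favored independently of the generated subset, which is the sense in which the resulting classifier is provably unbiased.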
Related papers
- Improving Bias Mitigation through Bias Experts in Natural Language
Understanding [10.363406065066538]
We propose a new debiasing framework that introduces binary classifiers between the auxiliary model and the main model.
Our proposed strategy improves the bias identification ability of the auxiliary model.
arXiv Detail & Related papers (2023-12-06T16:15:00Z)
- Echoes: Unsupervised Debiasing via Pseudo-bias Labeling in an Echo
Chamber [17.034228910493056]
This paper presents experimental analyses revealing that the existing biased models overfit to bias-conflicting samples in the training data.
We propose a straightforward and effective method called Echoes, which trains a biased model and a target model with a different strategy.
Our approach achieves superior debiasing results compared to the existing baselines on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-06T13:13:18Z)
- Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
arXiv Detail & Related papers (2023-01-31T20:09:33Z)
- Feature-Level Debiased Natural Language Understanding [86.8751772146264]
Existing natural language understanding (NLU) models often rely on dataset biases to achieve high performance on specific datasets.
We propose debiasing contrastive learning (DCT) to mitigate biased latent features, addressing the dynamic nature of bias that existing methods neglect.
DCT outperforms state-of-the-art baselines on out-of-distribution datasets while maintaining in-distribution performance.
arXiv Detail & Related papers (2022-12-11T06:16:14Z)
- Investigating Ensemble Methods for Model Robustness Improvement of Text
Classifiers [66.36045164286854]
We analyze a set of existing bias features and demonstrate there is no single model that works best for all the cases.
By choosing an appropriate bias model, we can obtain better robustness than baselines with more sophisticated model designs.
arXiv Detail & Related papers (2022-10-28T17:52:10Z)
- Learning from others' mistakes: Avoiding dataset biases without modeling
them [111.17078939377313]
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended task.
Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available.
We show a method for training models that learn to ignore these problematic correlations.
arXiv Detail & Related papers (2020-12-02T16:10:54Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Towards Robustifying NLI Models Against Lexical Dataset Biases [94.79704960296108]
This paper explores both data-level and model-level debiasing methods to robustify models against lexical dataset biases.
First, we debias the dataset through data augmentation and enhancement, but show that the model bias cannot be fully removed via this method.
The second approach employs a bag-of-words sub-model to capture the features that are likely to exploit the bias and prevents the original model from learning these biased features.
arXiv Detail & Related papers (2020-05-10T17:56:10Z)
- HypoNLI: Exploring the Artificial Patterns of Hypothesis-only Bias in
Natural Language Inference [38.14399396661415]
We derive adversarial examples in terms of the hypothesis-only bias.
We investigate two debiasing approaches which exploit the artificial pattern modeling to mitigate such hypothesis-only bias.
arXiv Detail & Related papers (2020-03-05T16:46:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed summaries (including all information) and is not responsible for any consequences of their use.