NLI Data Sanity Check: Assessing the Effect of Data Corruption on Model
Performance
- URL: http://arxiv.org/abs/2104.04751v1
- Date: Sat, 10 Apr 2021 12:28:07 GMT
- Title: NLI Data Sanity Check: Assessing the Effect of Data Corruption on Model
Performance
- Authors: Aarne Talman, Marianna Apidianaki, Stergios Chatzikyriakidis, Jörg Tiedemann
- Abstract summary: We propose a new diagnostic test suite which allows us to assess whether a dataset constitutes a good testbed for evaluating the models' meaning understanding capabilities.
We specifically apply controlled corruption transformations to widely used benchmarks (MNLI and ANLI).
A large decrease in model accuracy indicates that the original dataset provides a proper challenge to the models' reasoning capabilities.
- Score: 3.7024660695776066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-trained neural language models give high performance on natural language
inference (NLI) tasks. But whether they actually understand the meaning of the
processed sequences remains unclear. We propose a new diagnostic test suite
which allows us to assess whether a dataset constitutes a good testbed for
evaluating the models' meaning understanding capabilities. We specifically
apply controlled corruption transformations to widely used benchmarks (MNLI and
ANLI), which involve removing entire word classes and often lead to
non-sensical sentence pairs. If model accuracy on the corrupted data remains
high, then the dataset is likely to contain statistical biases and artefacts
that guide prediction. Conversely, a large decrease in model accuracy indicates
that the original dataset provides a proper challenge to the models' reasoning
capabilities. Hence, our proposed controls can serve as a crash test for
developing high quality data for NLI tasks.
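As a rough illustration of the corruption transformations described above, the sketch below removes an entire word class (verbs by default) from an NLI sentence pair. It is an illustrative reimplementation rather than the authors' released code, and it assumes NLTK and its tokenizer/POS-tagger data are installed.

```python
# Minimal sketch of a word-class corruption control (not the authors' code).
# Assumes NLTK data is available, e.g.:
#   nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
import nltk

def remove_word_class(sentence: str, pos_prefix: str = "VB") -> str:
    """Drop every token whose Penn Treebank tag starts with pos_prefix (verbs by default)."""
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    return " ".join(tok for tok, tag in tagged if not tag.startswith(pos_prefix))

# Corrupted (often nonsensical) premise-hypothesis pair: if a model still predicts
# the original label reliably across many such pairs, the dataset likely contains
# statistical artefacts that guide prediction.
premise = "A man is playing a guitar on the street."
hypothesis = "A man performs music outdoors."
print(remove_word_class(premise), "|||", remove_word_class(hypothesis))
```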
Related papers
- Measuring and Improving Attentiveness to Partial Inputs with
Counterfactuals [95.5442607785241]
We propose a new evaluation method, the Counterfactual Attentiveness Test (CAT).
CAT uses counterfactuals by replacing part of the input with its counterpart from a different example, expecting an attentive model to change its prediction.
We show that GPT3 becomes less attentive with an increased number of demonstrations, while its accuracy on the test data improves.
arXiv Detail & Related papers (2023-11-16T06:27:35Z)
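To make the counterfactual construction described in the entry above concrete, here is a rough sketch; the premise/hypothesis field names and the choice to replace the premise are illustrative assumptions, not details taken from the paper.

```python
# Rough sketch of a CAT-style counterfactual: replace part of the input (here, the
# premise) with its counterpart from a different example. An attentive model should
# change its prediction; a model that ignores the premise will not.
def make_counterfactual(example_a: dict, example_b: dict) -> dict:
    """Keep example_a's hypothesis but pair it with example_b's premise."""
    return {"premise": example_b["premise"], "hypothesis": example_a["hypothesis"]}

a = {"premise": "A dog runs in the park.", "hypothesis": "An animal is outside."}
b = {"premise": "A chef cooks pasta.", "hypothesis": "Someone prepares food."}
print(make_counterfactual(a, b))
```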
Towards preserving word order importance through Forced Invalidation [80.33036864442182]
We show that pre-trained language models are insensitive to word order.
We propose Forced Invalidation to help preserve the importance of word order.
Our experiments demonstrate that Forced Invalidation significantly improves the sensitivity of the models to word order.
arXiv Detail & Related papers (2023-04-11T13:42:10Z)
WeCheck: Strong Factual Consistency Checker via Weakly Supervised Learning [40.5830891229718]
We propose a weakly supervised framework that aggregates multiple resources to train a precise and efficient factual metric, namely WeCheck.
Comprehensive experiments on a variety of tasks demonstrate the strong performance of WeCheck, which achieves a 3.4% absolute improvement over previous state-of-the-art methods on TRUE benchmark on average.
arXiv Detail & Related papers (2022-12-20T08:04:36Z)
Discover, Explanation, Improvement: An Automatic Slice Detection Framework for Natural Language Processing [72.14557106085284]
Slice detection models (SDMs) automatically identify underperforming groups of datapoints.
This paper proposes a benchmark named "Discover, Explain, Improve (DEIM)" for classification NLP tasks.
Our evaluation shows that Edisa can accurately select error-prone datapoints with informative semantic features.
arXiv Detail & Related papers (2022-11-08T19:00:00Z)
Robust self-healing prediction model for high dimensional data [0.685316573653194]
This work proposes a robust self-healing (RSH) hybrid prediction model.
It functions by using the data in its entirety, removing errors and inconsistencies rather than discarding any data.
The proposed method is compared with some of the existing high performing models and the results are analyzed.
arXiv Detail & Related papers (2022-10-04T17:55:50Z)
Falsesum: Generating Document-level NLI Examples for Recognizing Factual Inconsistency in Summarization [63.21819285337555]
We show that NLI models can be effective for this task when the training data is augmented with high-quality task-oriented examples.
We introduce Falsesum, a data generation pipeline leveraging a controllable text generation model to perturb human-annotated summaries.
We show that models trained on a Falsesum-augmented NLI dataset improve the state-of-the-art performance across four benchmarks for detecting factual inconsistency in summarization.
arXiv Detail & Related papers (2022-05-12T10:43:42Z)
How Does Data Corruption Affect Natural Language Understanding Models? A Study on GLUE datasets [4.645287693363387]
We show that performance remains high for most GLUE tasks when the models are fine-tuned or tested on corrupted data.
Our proposed data transformations can be used as a diagnostic tool for assessing the extent to which a specific dataset constitutes a proper testbed for evaluating models' language understanding capabilities.
arXiv Detail & Related papers (2022-01-12T13:35:53Z)
Automatically Identifying Semantic Bias in Crowdsourced Natural Language Inference Datasets [78.6856732729301]
We introduce a model-driven, unsupervised technique to find "bias clusters" in a learned embedding space of hypotheses in NLI datasets.
Interventions and additional rounds of labeling can be performed to ameliorate the semantic bias of the hypothesis distribution of a dataset.
arXiv Detail & Related papers (2021-12-16T22:49:01Z)
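The bias-cluster idea in the entry above can be sketched as follows; the TF-IDF encoder, the KMeans clusterer, and the toy examples are stand-ins chosen for brevity, not the components used in the cited paper.

```python
# Hedged sketch: embed hypotheses, cluster the embedding space, and flag clusters
# whose gold-label distribution is heavily skewed (a candidate semantic bias).
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

hypotheses = ["A man is sleeping.", "Nobody is outside.", "The dog is not barking.",
              "A woman plays guitar.", "People are at a party.", "No one is eating."]
labels = ["contradiction", "contradiction", "contradiction",
          "entailment", "entailment", "contradiction"]

# TF-IDF stands in for a learned sentence encoder.
X = TfidfVectorizer().fit_transform(hypotheses)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Clusters dominated by one label (e.g., negation words -> contradiction) suggest
# where re-annotation or rebalancing interventions could help.
for c in sorted(set(clusters)):
    dist = Counter(lab for lab, k in zip(labels, clusters) if k == c)
    print(f"cluster {c}: {dict(dist)}")
```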
Evaluating the Robustness of Neural Language Models to Input Perturbations [7.064032374579076]
In this study, we design and implement various types of character-level and word-level perturbation methods to simulate noisy input texts.
We investigate the ability of high-performance language models such as BERT, XLNet, RoBERTa, and ELMo to handle different types of input perturbations.
The results suggest that language models are sensitive to input perturbations and their performance can decrease even when small changes are introduced.
arXiv Detail & Related papers (2021-08-27T12:31:17Z)
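Below is a minimal sketch of the kind of character-level and word-level perturbations such robustness studies apply; the specific operations and rates in the cited paper may differ.

```python
# Simple noise injections for robustness probing (illustrative, not the paper's exact methods).
import random

def char_swap(text: str, rate: float = 0.05) -> str:
    """Swap adjacent characters inside words with a small probability (typo-like noise)."""
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and random.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def word_drop(text: str, rate: float = 0.1) -> str:
    """Randomly delete words to simulate noisy or truncated input."""
    kept = [w for w in text.split() if random.random() > rate]
    return " ".join(kept) if kept else text

print(char_swap("The model should be robust to noisy input."))
print(word_drop("The model should be robust to noisy input."))
```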
Comparing Test Sets with Item Response Theory [53.755064720563]
We evaluate 29 datasets using predictions from 18 pretrained Transformer models on individual test examples.
We find that Quoref, HellaSwag, and MC-TACO are best suited for distinguishing among state-of-the-art models.
We also observe that the span selection task format, which is used for QA datasets like QAMR or SQuAD2.0, is effective in differentiating between strong and weak models.
arXiv Detail & Related papers (2021-06-01T22:33:53Z)
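For background on the method named in the entry above, Item Response Theory models the probability that a given test-taker (here, a model) answers a given item correctly from the test-taker's ability and the item's parameters. The two-parameter logistic (2PL) form sketched below is standard IRT notation; the parameter values are illustrative, not estimates from the paper.

```python
# Standard 2PL item response function: P(correct) = sigmoid(a * (theta - b)),
# where theta is model ability, a is item discrimination, and b is item difficulty.
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A highly discriminative item (large a) separates strong and weak models sharply.
print(p_correct(theta=1.0, a=2.0, b=0.5))   # stronger model: ~0.73
print(p_correct(theta=-1.0, a=2.0, b=0.5))  # weaker model:   ~0.05
```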
Benchmarking Popular Classification Models' Robustness to Random and Targeted Corruptions [9.564145822310897]
Text classification models, especially neural networks based models, have reached very high accuracy on many popular benchmark datasets.
Yet such models, when deployed in real-world applications, tend to perform poorly.
This emphasizes the need for a model agnostic test dataset, which consists of various corruptions that are natural to appear in the wild.
arXiv Detail & Related papers (2020-01-31T11:54:46Z)