Looking Beyond Sentence-Level Natural Language Inference for Downstream
Tasks
- URL: http://arxiv.org/abs/2009.09099v1
- Date: Fri, 18 Sep 2020 21:44:35 GMT
- Title: Looking Beyond Sentence-Level Natural Language Inference for Downstream
Tasks
- Authors: Anshuman Mishra, Dhruvesh Patel, Aparna Vijayakumar, Xiang Li, Pavan
Kapanipathi, Kartik Talamadupula
- Abstract summary: In recent years, the Natural Language Inference (NLI) task has garnered significant attention.
We study this unfulfilled promise from the lens of two downstream tasks: question answering (QA), and text summarization.
We conjecture that a key difference between the NLI datasets and these downstream tasks concerns the length of the premise.
- Score: 15.624486319943015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, the Natural Language Inference (NLI) task has garnered
significant attention, with new datasets and models achieving near human-level
performance on it. However, the full promise of NLI -- particularly that it
learns knowledge that should be generalizable to other downstream NLP tasks --
has not been realized. In this paper, we study this unfulfilled promise from
the lens of two downstream tasks: question answering (QA), and text
summarization. We conjecture that a key difference between the NLI datasets and
these downstream tasks concerns the length of the premise; and that creating
new long premise NLI datasets out of existing QA datasets is a promising avenue
for training a truly generalizable NLI model. We validate our conjecture by
showing competitive results on the task of QA and obtaining the best reported
results on the task of Checking Factual Correctness of Summaries.
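To make the proposed conversion concrete, a QA example can be recast as a long-premise NLI pair roughly as follows (a minimal sketch with illustrative field names and a deliberately naive hypothesis construction; the paper does not prescribe this exact recipe):

```python
# Sketch: reuse a QA passage as a long NLI premise and turn the
# (question, answer) pair into a hypothesis. Field names and the naive
# declarativization are illustrative assumptions, not the authors' pipeline.

def qa_to_nli(passage: str, question: str, answer: str, correct: bool) -> dict:
    # Deliberately naive declarativization: strip the question mark and
    # append the answer; real pipelines use rule-based or learned rewriting.
    hypothesis = f"{question.rstrip('?').strip()}: {answer}."
    return {
        "premise": passage,  # the entire passage serves as a long premise
        "hypothesis": hypothesis,
        "label": "entailment" if correct else "not_entailment",
    }

example = qa_to_nli(
    passage="The Amazon rainforest spans nine countries, mostly within Brazil.",
    question="How many countries does the Amazon rainforest span?",
    answer="nine",
    correct=True,
)
```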
Related papers
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
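A rough sketch of SELF-GUIDE's multi-stage idea, with `student.generate` and `student.finetune` as placeholder methods that this listing does not specify:

```python
# Hypothetical sketch of self-synthetic finetuning; `student` stands in
# for an LLM wrapper whose API is assumed, not taken from the paper.

def self_guide(student, task_instruction: str, n_pairs: int = 100):
    pairs = []
    while len(pairs) < n_pairs:
        # Stage 1: the student synthesizes a candidate input for the task.
        x = student.generate(f"Write one input for this task:\n{task_instruction}")
        # Stage 2: the student answers its own synthesized input.
        y = student.generate(f"{task_instruction}\nInput: {x}\nOutput:")
        # Stage 3: keep only non-degenerate pairs (a crude quality filter;
        # the paper applies stronger selection than this).
        if x.strip() and y.strip():
            pairs.append((x, y))
    # Stage 4: finetune the same student on its self-generated data.
    student.finetune(pairs)
    return student
```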
- MinPrompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering [64.6741991162092]
We present MinPrompt, a minimal data augmentation framework for open-domain question answering.
We transform the raw text into a graph structure to build connections between different factual sentences.
We then apply graph algorithms to identify the minimal set of sentences needed to cover the most information in the raw text.
We generate QA pairs based on the identified sentence subset and train the model on the selected sentences to obtain the final model.
arXiv Detail & Related papers (2023-10-08T04:44:36Z)
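The "minimal set of sentences" step is essentially a set-cover problem; a greedy approximation over token overlap might look like the sketch below (a simplification for illustration, not MinPrompt's actual graph algorithm):

```python
# Greedy approximate set cover: pick sentences until the selected subset
# covers the document's vocabulary. A crude stand-in for MinPrompt's
# graph-based selection.

def minimal_sentence_cover(sentences: list[str]) -> list[str]:
    sent_tokens = [set(s.lower().split()) for s in sentences]
    universe = set().union(*sent_tokens)  # tokens to be covered
    covered: set[str] = set()
    chosen: list[str] = []
    while covered != universe:
        # Greedily pick the sentence contributing the most uncovered tokens.
        best = max(range(len(sentences)),
                   key=lambda i: len(sent_tokens[i] - covered))
        if not sent_tokens[best] - covered:
            break  # no sentence adds anything new
        chosen.append(sentences[best])
        covered |= sent_tokens[best]
    return chosen
```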
- With a Little Push, NLI Models can Robustly and Efficiently Predict Faithfulness [19.79160738554967]
Conditional language models still generate unfaithful output that is not supported by their input.
We show that pure NLI models can outperform more complex metrics when combining task-adaptive data augmentation with robust inference procedures.
arXiv Detail & Related papers (2023-05-26T11:00:04Z)
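One way to realize such an inference procedure with an off-the-shelf MNLI model is to chunk the source and max-pool the entailment probability; the paper's exact augmentation and inference choices may differ from this baseline:

```python
# Sketch: score a generated claim against its source with a public MNLI
# checkpoint; chunking the source and taking the max entailment
# probability is one simple robustness heuristic, assumed here.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"  # label index 2 is "entailment" for this model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

def faithfulness(source_chunks: list[str], claim: str) -> float:
    scores = []
    for chunk in source_chunks:
        enc = tok(chunk, claim, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs = model(**enc).logits.softmax(-1)[0]
        scores.append(probs[2].item())  # P(entailment)
    # The claim counts as faithful if some chunk of the source entails it.
    return max(scores)
```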
- Falsesum: Generating Document-level NLI Examples for Recognizing Factual Inconsistency in Summarization [63.21819285337555]
We show that NLI models can be effective for this task when the training data is augmented with high-quality task-oriented examples.
We introduce Falsesum, a data generation pipeline leveraging a controllable text generation model to perturb human-annotated summaries.
We show that models trained on a Falsesum-augmented NLI dataset improve the state-of-the-art performance across four benchmarks for detecting factual inconsistency in summarization.
arXiv Detail & Related papers (2022-05-12T10:43:42Z)
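Falsesum's perturbations come from a learned, controllable generator; as a crude rule-based stand-in, an inconsistent negative can be fabricated by swapping a single entity (purely illustrative, not the Falsesum pipeline):

```python
# Swapping one entity typically flips the label from "entailed by the
# document" to "not entailed", yielding a negative NLI example.

def make_negative(summary: str, entity: str, replacement: str) -> str:
    return summary.replace(entity, replacement, 1)

document = "Acme Corp. reported record profits in 2021."
positive = "Acme Corp. reported record profits."               # consistent
negative = make_negative(positive, "Acme Corp.", "Apex Inc.")  # inconsistent
```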
- Stretching Sentence-pair NLI Models to Reason over Long Documents and Clusters [35.103851212995046]
Natural Language Inference (NLI) has been extensively studied by the NLP community as a framework for estimating the semantic relation between sentence pairs.
We explore the direct zero-shot applicability of NLI models to real applications, beyond the sentence-pair setting they were trained on.
We develop new aggregation methods to allow operating over full documents, reaching state-of-the-art performance on the ContractNLI dataset.
arXiv Detail & Related papers (2022-04-15T12:56:39Z)
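The simplest such aggregation scores the hypothesis against every premise sentence with an ordinary sentence-pair model and pools the results; the paper's own aggregation methods are more refined than this baseline:

```python
# Sketch of document-level NLI via per-sentence scoring plus pooling.
# `score_pair` is assumed to be any trained sentence-pair NLI scorer
# returning P(entailment) for one (premise, hypothesis) pair.

def doc_entailment(score_pair, premise_sentences: list[str],
                   hypothesis: str) -> float:
    # Max-pooling: the document entails the hypothesis if some sentence does.
    return max(score_pair(s, hypothesis) for s in premise_sentences)
```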
- DocNLI: A Large-scale Dataset for Document-level Natural Language Inference [55.868482696821815]
Natural language inference (NLI) is formulated as a unified framework for solving various NLP problems.
This work presents DocNLI -- a newly-constructed large-scale dataset for document-level NLI.
arXiv Detail & Related papers (2021-06-17T13:02:26Z)
- An Empirical Survey of Data Augmentation for Limited Data Learning in NLP [88.65488361532158]
The dependence on abundant data prevents NLP models from being applied to low-resource settings or novel tasks.
Data augmentation methods have been explored as a means of improving data efficiency in NLP.
We provide an empirical survey of recent progress on data augmentation for NLP in the limited labeled data setting.
arXiv Detail & Related papers (2021-06-14T15:27:22Z)
- Reading Comprehension as Natural Language Inference: A Semantic Analysis [15.624486319943015]
We explore the utility of Natural Language Inference (NLI) for Question Answering (QA).
We transform one of the largest available MRC datasets (RACE) into an NLI form, and compare the performance of a state-of-the-art model (RoBERTa) on both forms.
We highlight clear categories for which the model performs better when the data is presented in a coherent entailment form, and others for which a structured question-answer concatenation form works better.
arXiv Detail & Related papers (2020-10-04T22:50:59Z)
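The RACE-to-NLI transformation can be sketched as follows (field names are illustrative; RACE questions are often cloze-style, with a blank that the answer option fills):

```python
# Sketch: cast a multiple-choice MRC item as NLI, one premise/hypothesis
# pair per answer option. Not necessarily the paper's exact transformation.

def race_to_nli(passage: str, question: str, options: list[str], gold: int):
    for i, option in enumerate(options):
        # Cloze questions carry a "_" placeholder that the option fills;
        # otherwise fall back to naive concatenation.
        hypothesis = (question.replace("_", option) if "_" in question
                      else f"{question} {option}")
        yield {
            "premise": passage,
            "hypothesis": hypothesis,
            "label": "entailment" if i == gold else "not_entailment",
        }
```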
- Exploring and Predicting Transferability across NLP Tasks [115.6278033699853]
We study the transferability between 33 NLP tasks across three broad classes of problems.
Our results show that transfer learning is more beneficial than previously thought.
We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task.
arXiv Detail & Related papers (2020-05-02T09:39:36Z)
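Given such task embeddings, predicting the most transferable source tasks reduces to nearest-neighbor ranking; how the embeddings themselves are computed is the paper's contribution, so plain cosine similarity is assumed below:

```python
# Sketch: rank candidate source tasks for a target task by cosine
# similarity of precomputed task embeddings (an assumed similarity
# measure, not necessarily the paper's).
import numpy as np

def rank_source_tasks(task_emb: dict[str, np.ndarray], target: str) -> list[str]:
    t = task_emb[target]

    def cos(name: str) -> float:
        v = task_emb[name]
        return float(v @ t / (np.linalg.norm(v) * np.linalg.norm(t)))

    # Highest similarity first: predicted best source tasks.
    return sorted((n for n in task_emb if n != target), key=cos, reverse=True)
```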