Leveraging Affirmative Interpretations from Negation Improves Natural
Language Understanding
- URL: http://arxiv.org/abs/2210.14486v1
- Date: Wed, 26 Oct 2022 05:22:27 GMT
- Title: Leveraging Affirmative Interpretations from Negation Improves Natural
Language Understanding
- Authors: Md Mosharaf Hossain and Eduardo Blanco
- Abstract summary: Negation poses a challenge in many natural language understanding tasks.
We show that inferring affirmative interpretations from negated statements benefits models for three natural language understanding tasks.
We build a plug-and-play neural generator that, given a negated statement, generates an affirmative interpretation.
- Score: 10.440501875161003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Negation poses a challenge in many natural language understanding tasks.
Inspired by the fact that understanding a negated statement often requires
humans to infer affirmative interpretations, in this paper we show that doing
so benefits models for three natural language understanding tasks. We present
an automated procedure to collect pairs of sentences with negation and their
affirmative interpretations, resulting in over 150,000 pairs. Experimental
results show that leveraging these pairs helps (a) T5 generate affirmative
interpretations from negations in a previous benchmark, and (b) a RoBERTa-based
classifier solve the task of natural language inference. We also leverage our
pairs to build a plug-and-play neural generator that, given a negated statement,
generates an affirmative interpretation. Then, we incorporate the pretrained
generator into a RoBERTa-based classifier for sentiment analysis and show that
doing so improves the results. Crucially, our proposal does not require any
manual effort.
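As a rough illustration of the pipeline described above, the sketch below wires a T5 generator into a RoBERTa sentiment classifier using Hugging Face transformers. The checkpoint names, the "affirm:" task prefix, and the sentence-pair input scheme are assumptions for illustration, not the authors' released code; in the paper, the generator is first trained on the 150,000+ collected pairs.

```python
# Sketch of the plug-and-play setup: a seq2seq generator produces an
# affirmative interpretation of a negated sentence, and a RoBERTa
# classifier consumes the original sentence together with that
# interpretation. Checkpoints and the "affirm:" prefix are placeholders.
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    T5ForConditionalGeneration,
)

gen_tok = AutoTokenizer.from_pretrained("t5-base")
generator = T5ForConditionalGeneration.from_pretrained("t5-base")

clf_tok = AutoTokenizer.from_pretrained("roberta-base")
classifier = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2  # binary sentiment, for illustration
)

def affirmative_interpretation(negated: str) -> str:
    # In the paper, T5 is fine-tuned on (negation, affirmative) pairs;
    # here we only show the inference call.
    inputs = gen_tok("affirm: " + negated, return_tensors="pt")
    output_ids = generator.generate(**inputs, max_new_tokens=64)
    return gen_tok.decode(output_ids[0], skip_special_tokens=True)

def classify_sentiment(sentence: str) -> int:
    paraphrase = affirmative_interpretation(sentence)
    # Feed the original sentence and its affirmative interpretation as
    # a sentence pair, so the classifier sees both views of the input.
    enc = clf_tok(sentence, paraphrase, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = classifier(**enc).logits
    return int(logits.argmax(dim=-1))

print(classify_sentiment("The plot was not without its charms."))
```

With this wiring, the generator can be swapped in or out without changing the classifier interface, which is what makes the component plug-and-play.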
Related papers
- Paraphrasing in Affirmative Terms Improves Negation Understanding [9.818585902859363]
Negation is a common linguistic phenomenon.
We show improvements with CondaQA, a large corpus requiring reasoning with negation, and five natural language understanding tasks.
arXiv Detail & Related papers (2024-06-11T17:30:03Z)
- Negation Triplet Extraction with Syntactic Dependency and Semantic Consistency [37.99421732397288]
SSENE is built on a generative pretrained language model (PLM) with an Encoder-Decoder architecture and a multi-task learning framework.
We construct a high-quality Chinese dataset, NegComment, based on users' reviews from the real-world platform Meituan.
arXiv Detail & Related papers (2024-04-15T14:28:33Z)
- Deep Natural Language Feature Learning for Interpretable Prediction [1.6114012813668932]
We propose a method to break down a main complex task into a set of intermediary easier sub-tasks.
Our method allows for representing each example by a vector consisting of the answers to these questions.
We have successfully applied this method to two completely different tasks: detecting incoherence in students' answers to open-ended mathematics exam questions, and screening abstracts for a systematic literature review of scientific papers on climate change and agroecology.
arXiv Detail & Related papers (2023-11-09T21:43:27Z)
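The "vector of answers" representation described in the entry above can be pictured as follows; the sub-questions and the answer function are hypothetical stand-ins of ours, not the paper's.

```python
# Minimal sketch of representing an example by the answers an auxiliary
# model gives to a fixed set of binary sub-questions. The questions and
# the answer() function are hypothetical placeholders.
from typing import Callable, List

def feature_vector(
    text: str,
    sub_questions: List[str],
    answer: Callable[[str, str], float],  # returns P("yes" | text, question)
) -> List[float]:
    # Each coordinate is the model's answer to one intermediary
    # sub-task; a downstream classifier is trained on these vectors.
    return [answer(text, q) for q in sub_questions]

if __name__ == "__main__":
    questions = ["Does the answer mention a formula?",
                 "Does the answer contradict itself?"]
    dummy = lambda text, q: float(len(text) % 2)  # placeholder model
    print(feature_vector("x = 2y + 1", questions, dummy))
```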
- We're Afraid Language Models Aren't Modeling Ambiguity [136.8068419824318]
Managing ambiguity is a key part of human language understanding.
We characterize ambiguity in a sentence by its effect on entailment relations with another sentence.
We show that a multilabel NLI model can flag political claims in the wild that are misleading due to ambiguity.
arXiv Detail & Related papers (2023-04-27T17:57:58Z)
- CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation [21.56001677478673]
We present the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs.
CONDAQA features 14,182 question-answer pairs with over 200 unique negation cues.
The best-performing model on CONDAQA (UnifiedQA-v2-3b) achieves only 42% on our consistency metric, well below human performance of 81%.
arXiv Detail & Related papers (2022-11-01T06:10:26Z)
- Not another Negation Benchmark: The NaN-NLI Test Suite for Sub-clausal Negation [59.307534363825816]
Negation is poorly captured by current language models, although the extent of this problem is not widely understood.
We introduce a natural language inference (NLI) test suite that enables probing the capabilities of NLP methods on sub-clausal negation.
arXiv Detail & Related papers (2022-10-06T23:39:01Z)
- A Question-Answer Driven Approach to Reveal Affirmative Interpretations from Verbal Negations [6.029488932793797]
We create a new corpus consisting of 4,472 verbal negations and discover that 67.1% of them convey that an event actually occurred.
Annotators generate and answer 7,277 questions for the 3,001 negations that convey an affirmative interpretation.
arXiv Detail & Related papers (2022-05-23T17:08:30Z)
- The Unreliability of Explanations in Few-Shot In-Context Learning [50.77996380021221]
We focus on two NLP tasks that involve reasoning over text, namely question answering and natural language inference.
We show that explanations judged as good by humans (those that are logically consistent with the input) usually indicate more accurate predictions.
We present a framework for calibrating model predictions based on the reliability of the explanations.
arXiv Detail & Related papers (2022-05-06T17:57:58Z)
- Probing as Quantifying the Inductive Bias of Pre-trained Representations [99.93552997506438]
We present a novel framework for probing where the goal is to evaluate the inductive bias of representations for a particular task.
We apply our framework to a series of token-, arc-, and sentence-level tasks.
arXiv Detail & Related papers (2021-10-15T22:01:16Z)
- Understanding by Understanding Not: Modeling Negation in Language Models [81.21351681735973]
Negation is a core construction in natural language.
We propose to augment the language modeling objective with an unlikelihood objective that is based on negated generic sentences.
We reduce the mean top-1 error rate to 4% on the negated LAMA dataset.
arXiv Detail & Related papers (2021-05-07T21:58:35Z)
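A minimal sketch of the unlikelihood idea from the entry above, assuming the standard formulation in which the model is penalized for probability mass placed on tokens it should not predict; this is a reconstruction, not the paper's code.

```python
# Unlikelihood loss sketch: at positions where the model should NOT
# predict the given (undesired) tokens, minimize -log(1 - p).
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits: torch.Tensor,
                      undesired_ids: torch.Tensor) -> torch.Tensor:
    """logits: (seq_len, vocab_size); undesired_ids: (seq_len,) token ids."""
    probs = F.softmax(logits, dim=-1)
    p_bad = probs.gather(1, undesired_ids.unsqueeze(1)).squeeze(1)
    # Clamp for numerical stability when p_bad approaches 1.
    return -torch.log((1.0 - p_bad).clamp_min(1e-6)).mean()

# Toy usage: random logits over a 10-word vocabulary, 5 positions.
loss = unlikelihood_loss(torch.randn(5, 10), torch.randint(0, 10, (5,)))
print(float(loss))
```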
- Syntactic Structure Distillation Pretraining For Bidirectional Encoders [49.483357228441434]
We introduce a knowledge distillation strategy for injecting syntactic biases into BERT pretraining.
We distill the approximate marginal distribution over words in context from the syntactic LM.
Our findings demonstrate the benefits of syntactic biases, even in representation learners that exploit large amounts of data.
arXiv Detail & Related papers (2020-05-27T16:44:01Z)
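The distillation step in the last entry amounts to matching a student's predictive distribution over words in context to the syntactic LM's. Below is a generic token-level distillation loss as one way to do that; the KL criterion and temperature are common knowledge-distillation choices, not necessarily the paper's exact objective.

```python
# Token-level distillation sketch: make a student's distribution over
# the vocabulary at each (masked) position match a teacher's.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """Both tensors: (num_positions, vocab_size)."""
    t_probs = F.softmax(teacher_logits / temperature, dim=-1)
    s_logp = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student), averaged over positions.
    return F.kl_div(s_logp, t_probs, reduction="batchmean")

loss = distillation_loss(torch.randn(4, 100), torch.randn(4, 100))
print(float(loss))
```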