A Question-Answer Driven Approach to Reveal Affirmative Interpretations
from Verbal Negations
- URL: http://arxiv.org/abs/2205.11467v1
- Date: Mon, 23 May 2022 17:08:30 GMT
- Title: A Question-Answer Driven Approach to Reveal Affirmative Interpretations
from Verbal Negations
- Authors: Md Mosharaf Hossain, Luke Holman, Anusha Kakileti, Tiffany Iris Kao,
Nathan Raul Brito, Aaron Abraham Mathews, and Eduardo Blanco
- Abstract summary: We create a new corpus consisting of 4,472 verbal negations and discover that 67.1% of them convey that an event actually occurred.
Annotators generate and answer 7,277 questions for the 3,001 negations that convey an affirmative interpretation.
- Score: 6.029488932793797
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper explores a question-answer driven approach to reveal affirmative
interpretations from verbal negations (i.e., when a negation cue grammatically
modifies a verb). We create a new corpus consisting of 4,472 verbal negations
and discover that 67.1% of them convey that an event actually occurred.
Annotators generate and answer 7,277 questions for the 3,001 negations that
convey an affirmative interpretation. We first cast the problem of revealing
affirmative interpretations from negations as a natural language inference
(NLI) classification task. Experimental results show that state-of-the-art
transformers trained with existing NLI corpora are insufficient to reveal
affirmative interpretations. We also observe, however, that fine-tuning brings
small improvements. In addition to NLI classification, we also explore the more
realistic task of generating affirmative interpretations directly from
negations with the T5 transformer. We conclude that the generation task remains
a challenge as T5 substantially underperforms humans.
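For readers who want to see how the two tasks described in the abstract can be wired up, a minimal sketch follows. It assumes the HuggingFace transformers and PyTorch libraries; the off-the-shelf roberta-large-mnli checkpoint, the example sentences, and the "generate affirmative interpretation:" task prefix are illustrative placeholders, not the fine-tuned models or input formats used in the paper.

```python
# Sketch only: checkpoints, sentences, and the T5 prompt format are assumptions,
# not the paper's released artifacts.
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    T5ForConditionalGeneration,
    T5Tokenizer,
)

# --- 1) Affirmative interpretation cast as NLI classification ---------------
# Premise: a sentence with a verbal negation.
# Hypothesis: a candidate affirmative interpretation.
# An off-the-shelf MNLI model scores whether the premise entails the hypothesis.
nli_name = "roberta-large-mnli"  # generic NLI checkpoint, not the paper's model
nli_tokenizer = AutoTokenizer.from_pretrained(nli_name)
nli_model = AutoModelForSequenceClassification.from_pretrained(nli_name)

premise = "The doctor did not prescribe antibiotics until the test came back."
hypothesis = "The doctor prescribed antibiotics after the test came back."

inputs = nli_tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = nli_model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()
for idx, label in nli_model.config.id2label.items():
    print(f"{label}: {probs[idx].item():.3f}")

# --- 2) Generating affirmative interpretations with T5 ----------------------
# The paper fine-tunes T5 on its corpus; plain "t5-base" is used here only to
# show the input/output plumbing, and the task prefix is an assumed format.
t5_tokenizer = T5Tokenizer.from_pretrained("t5-base")
t5_model = T5ForConditionalGeneration.from_pretrained("t5-base")

prompt = f"generate affirmative interpretation: {premise}"
input_ids = t5_tokenizer(prompt, return_tensors="pt").input_ids
output_ids = t5_model.generate(input_ids, max_length=64, num_beams=4)
print(t5_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In the NLI framing, a high entailment probability would mean the model recovers the affirmative interpretation from the negated premise; the abstract's finding is that models trained only on existing NLI corpora largely fail at this, and that fine-tuning yields only small gains.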
Related papers
- Generating Diverse Negations from Affirmative Sentences [0.999726509256195]
Negations are important in real-world applications as they encode negative polarity in verb phrases, clauses, or other expressions.
We propose NegVerse, a method that tackles the lack of negation datasets by producing a diverse range of negation types.
We provide new rules for masking parts of sentences where negations are most likely to occur, based on syntactic structure.
We also propose a filtering mechanism to identify negation cues and remove degenerate examples, producing a diverse range of meaningful perturbations.
arXiv Detail & Related papers (2024-10-30T21:25:02Z)
- Paraphrasing in Affirmative Terms Improves Negation Understanding [9.818585902859363]
Negation is a common linguistic phenomenon.
We show improvements with CondaQA, a large corpus requiring reasoning with negation, and five natural language understanding tasks.
arXiv Detail & Related papers (2024-06-11T17:30:03Z)
- Revisiting subword tokenization: A case study on affixal negation in large language models [57.75279238091522]
We measure the impact of affixal negation on modern English large language models (LLMs).
We conduct experiments using LLMs with different subword tokenization methods.
We show that models can, on the whole, reliably recognize the meaning of affixal negation.
arXiv Detail & Related papers (2024-04-03T03:14:27Z)
- CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation [21.56001677478673]
We present the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs.
CONDAQA features 14,182 question-answer pairs with over 200 unique negation cues.
The best performing model on CONDAQA (UnifiedQA-v2-3b) achieves only 42% on our consistency metric, well below the human performance of 81%.
arXiv Detail & Related papers (2022-11-01T06:10:26Z)
- Leveraging Affirmative Interpretations from Negation Improves Natural Language Understanding [10.440501875161003]
Negation poses a challenge in many natural language understanding tasks.
We build a plug-and-play neural generator that, given a negated statement, generates an affirmative interpretation.
We show that doing so benefits models for three natural language understanding tasks.
arXiv Detail & Related papers (2022-10-26T05:22:27Z)
- Not another Negation Benchmark: The NaN-NLI Test Suite for Sub-clausal Negation [59.307534363825816]
Negation is poorly captured by current language models, although the extent of this problem is not widely understood.
We introduce a natural language inference (NLI) test suite to enable probing the capabilities of NLP methods.
arXiv Detail & Related papers (2022-10-06T23:39:01Z)
- Improving negation detection with negation-focused pre-training [58.32362243122714]
Negation is a common linguistic feature that is crucial in many language understanding tasks.
Recent work has shown that state-of-the-art NLP models underperform on samples containing negation.
We propose a new negation-focused pre-training strategy, involving targeted data augmentation and negation masking.
arXiv Detail & Related papers (2022-05-09T02:41:11Z)
- An Analysis of Negation in Natural Language Understanding Corpora [10.692655009160742]
We show that popular corpora have few negations compared to general-purpose English.
Experiments show that state-of-the-art transformers trained with these corpora obtain substantially worse results with instances that contain negation.
arXiv Detail & Related papers (2022-03-16T20:31:53Z)
- Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? [87.20342701232869]
We investigate the abilities of ungrounded systems to acquire meaning.
We study whether assertions enable a system to emulate representations preserving semantic relations like equivalence.
We find that assertions enable semantic emulation if all expressions in the language are referentially transparent.
However, if the language uses non-transparent patterns like variable binding, we show that emulation can become an uncomputable problem.
arXiv Detail & Related papers (2021-04-22T01:00:17Z)
- Did they answer? Subjective acts and intents in conversational discourse [48.63528550837949]
We present the first discourse dataset with multiple and subjective interpretations of English conversation.
We show disagreements are nuanced and require a deeper understanding of the different contextual factors.
arXiv Detail & Related papers (2021-04-09T16:34:19Z)
- My Teacher Thinks The World Is Flat! Interpreting Automatic Essay Scoring Mechanism [71.34160809068996]
Recent work shows that automated scoring systems are vulnerable to even common-sense adversarial samples.
We utilize recent advances in interpretability to find the extent to which features such as coherence, content and relevance are important for automated scoring mechanisms.
We also find that since the models are not semantically grounded with world-knowledge and common sense, adding false facts such as "the world is flat" actually increases the score instead of decreasing it.
arXiv Detail & Related papers (2020-12-27T06:19:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.