Conversational Negation using Worldly Context in Compositional
Distributional Semantics
- URL: http://arxiv.org/abs/2105.05748v1
- Date: Wed, 12 May 2021 16:04:36 GMT
- Title: Conversational Negation using Worldly Context in Compositional
Distributional Semantics
- Authors: Benjamin Rodatz, Razin A. Shaikh and Lia Yeh
- Abstract summary: Given a word, our framework can create its negation similar to how humans perceive negation.
We propose and motivate a new logical negation using matrix inverse.
We conclude that the combination of subtraction negation and phaser in the basis of the negated word yields the highest Pearson correlation of 0.635 with human ratings.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a framework to model an operational conversational negation by
applying worldly context (prior knowledge) to logical negation in compositional
distributional semantics. Given a word, our framework can create its negation
that is similar to how humans perceive negation. The framework adjusts logical
negation so that meanings closer in the entailment hierarchy are weighted more
heavily than those further apart. The proposed framework is flexible enough to accommodate
different choices of logical negations, compositions, and worldly context
generation. In particular, we propose and motivate a new logical negation using
matrix inverse.
We validate the sensibility of our conversational negation framework by
performing experiments, leveraging density matrices to encode graded entailment
information. We conclude that the combination of subtraction negation and
phaser in the basis of the negated word yields the highest Pearson correlation
of 0.635 with human ratings.
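The abstract names two logical negations (subtraction and matrix inverse) acting on density matrices that encode graded entailment. A minimal numpy sketch, assuming plausible forms for both operations; the paper's exact definitions, its composition operators, and the phaser operation are not reproduced here, so treat these functions as illustrative reconstructions rather than the authors' implementation:

```python
# Hedged sketch of two negation operations named in the abstract.
# The concrete forms below are assumptions, not the paper's definitions.
import numpy as np

def density_matrix(vectors, weights):
    """Build a density matrix as a weighted sum of outer products of
    normalized word vectors -- a standard construction for encoding
    graded meaning/entailment information."""
    dim = vectors.shape[1]
    rho = np.zeros((dim, dim))
    for v, w in zip(vectors, weights):
        v = v / np.linalg.norm(v)
        rho += w * np.outer(v, v)
    return rho / np.trace(rho)  # normalize to trace 1

def subtraction_negation(rho):
    """Subtraction negation (assumed form): the complement of the
    word's density matrix within the identity, renormalized."""
    neg = np.eye(rho.shape[0]) - rho
    return neg / np.trace(neg)

def inverse_negation(rho, eps=1e-9):
    """Matrix-inverse negation (assumed form): the regularized inverse
    swaps large and small eigenvalues, so the meanings that dominated
    the original word become the least weighted ones."""
    inv = np.linalg.inv(rho + eps * np.eye(rho.shape[0]))
    return inv / np.trace(inv)

rng = np.random.default_rng(0)
vecs = rng.normal(size=(4, 3))          # toy word vectors
rho = density_matrix(vecs, np.ones(4))  # density matrix for a toy word
neg = subtraction_negation(rho)
inv = inverse_negation(rho)
# In both outputs the eigenvalue ordering of rho is reversed:
# the word's dominant meanings carry the least weight after negation.
```

Both outputs remain trace-1 positive semidefinite matrices, so they can feed back into whatever composition or entailment measure is in use; the worldly-context weighting described in the abstract would be applied on top of these raw logical negations.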
Related papers
- Generating Diverse Negations from Affirmative Sentences
Negations are important in real-world applications as they encode negative polarity in verb phrases, clauses, or other expressions.
We propose NegVerse, a method that tackles the lack of negation datasets by producing a diverse range of negation types.
We provide new rules for masking parts of sentences where negations are most likely to occur, based on syntactic structure.
We also propose a filtering mechanism to identify negation cues and remove degenerate examples, producing a diverse range of meaningful perturbations.
arXiv Detail & Related papers (2024-10-30T21:25:02Z)
- SHINE: Saliency-aware HIerarchical NEgative Ranking for Compositional Temporal Grounding
Temporal grounding, also known as video moment retrieval, aims at locating video segments corresponding to a given query sentence.
We propose a large language model-driven method for negative query construction, utilizing GPT-3.5-Turbo.
We introduce a coarse-to-fine saliency ranking strategy, which encourages the model to learn the multi-granularity semantic relationships between videos and hierarchical negative queries.
arXiv Detail & Related papers (2024-07-06T16:08:17Z)
- Paraphrasing in Affirmative Terms Improves Negation Understanding
Negation is a common linguistic phenomenon.
We show improvements with CondaQA, a large corpus requiring reasoning with negation, and five natural language understanding tasks.
arXiv Detail & Related papers (2024-06-11T17:30:03Z)
- Revisiting subword tokenization: A case study on affixal negation in large language models
We measure the impact of affixal negation on modern English large language models (LLMs).
We conduct experiments using LLMs with different subword tokenization methods.
We show that models can, on the whole, reliably recognize the meaning of affixal negation.
arXiv Detail & Related papers (2024-04-03T03:14:27Z)
- CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation
We present the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs.
CONDAQA features 14,182 question-answer pairs with over 200 unique negation cues.
The best-performing model on CONDAQA (UnifiedQA-v2-3b) achieves only 42% on our consistency metric, well below human performance of 81%.
arXiv Detail & Related papers (2022-11-01T06:10:26Z)
- Not another Negation Benchmark: The NaN-NLI Test Suite for Sub-clausal Negation
Negation is poorly captured by current language models, although the extent of this problem is not widely understood.
We introduce a natural language inference (NLI) test suite to enable probing the capabilities of NLP methods.
arXiv Detail & Related papers (2022-10-06T23:39:01Z)
- Improving negation detection with negation-focused pre-training
Negation is a common linguistic feature that is crucial in many language understanding tasks.
Recent work has shown that state-of-the-art NLP models underperform on samples containing negation.
We propose a new negation-focused pre-training strategy, involving targeted data augmentation and negation masking.
arXiv Detail & Related papers (2022-05-09T02:41:11Z)
- Composing Conversational Negation
We compose the negations of single words to capture the negation of sentences.
We also describe how to model the negation of words whose meanings evolve in the text.
arXiv Detail & Related papers (2021-07-14T16:24:41Z)
- Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand?
We investigate the abilities of ungrounded systems to acquire meaning.
We study whether assertions enable a system to emulate representations preserving semantic relations like equivalence.
We find that assertions enable semantic emulation if all expressions in the language are referentially transparent.
However, if the language uses non-transparent patterns like variable binding, we show that emulation can become an uncomputable problem.
arXiv Detail & Related papers (2021-04-22T01:00:17Z)
- Negation in Cognitive Reasoning
Negation is an operation in formal logic and in natural language.
One task of cognitive reasoning is answering questions given by sentences in natural language.
arXiv Detail & Related papers (2020-12-23T13:22:53Z)
- Logical Neural Networks
We propose a novel framework that seamlessly provides key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.