Improving negation detection with negation-focused pre-training
- URL: http://arxiv.org/abs/2205.04012v1
- Date: Mon, 9 May 2022 02:41:11 GMT
- Title: Improving negation detection with negation-focused pre-training
- Authors: Thinh Hung Truong, Timothy Baldwin, Trevor Cohn, Karin Verspoor
- Abstract summary: Negation is a common linguistic feature that is crucial in many language understanding tasks.
Recent work has shown that state-of-the-art NLP models underperform on samples containing negation.
We propose a new negation-focused pre-training strategy, involving targeted data augmentation and negation masking.
- Score: 58.32362243122714
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Negation is a common linguistic feature that is crucial in many language
understanding tasks, yet it remains a hard problem due to diversity in its
expression in different types of text. Recent work has shown that
state-of-the-art NLP models underperform on samples containing negation in
various tasks, and that negation detection models do not transfer well across
domains. We propose a new negation-focused pre-training strategy, involving
targeted data augmentation and negation masking, to better incorporate negation
information into language models. Extensive experiments on common benchmarks
show that our proposed approach improves negation detection performance and
generalizability over the strong baseline NegBERT (Khandelwal and Sawant, 2020).
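The abstract describes the strategy only at a high level; as a rough illustration of what negation masking might look like in a BERT-style masked-language-modeling setup, the sketch below masks negation-cue tokens with a higher probability than other tokens, so the model must recover negation markers from context more often than under uniform random masking. The cue list, probabilities, and function name are illustrative assumptions, not the authors' implementation.

```python
import random

# Hypothetical subset of negation cues; the paper's actual cue lexicon is not given here.
NEGATION_CUES = {"not", "no", "never", "n't", "without", "nobody", "nothing", "neither", "nor"}

def negation_masking(tokens, mask_token="[MASK]", cue_prob=0.8, base_prob=0.15, seed=None):
    """Sketch of negation-focused masking for MLM pre-training.

    Negation-cue tokens are masked with probability cue_prob, all other
    tokens with the usual base_prob, biasing the pre-training signal
    toward predicting negation from context.
    """
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        p = cue_prob if tok.lower() in NEGATION_CUES else base_prob
        if rng.random() < p:
            masked.append(mask_token)
            labels.append(tok)      # token the model must predict
        else:
            masked.append(tok)
            labels.append(None)     # not a prediction target
    return masked, labels

if __name__ == "__main__":
    sentence = "The scan showed no evidence of infection and the patient did not report pain .".split()
    print(negation_masking(sentence, seed=0))
```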
Related papers
- Vision-Language Models Do Not Understand Negation [50.27667000027403]
NegBench is a benchmark designed to evaluate negation understanding across 18 task variations and 79k examples spanning image, video, and medical datasets.
We show that this approach can result in a 10% increase in recall on negated queries and a 40% boost in accuracy on multiple-choice questions with negated captions.
arXiv Detail & Related papers (2025-01-16T09:55:42Z)
- Revisiting subword tokenization: A case study on affixal negation in large language models [57.75279238091522]
We measure the impact of affixal negation on modern English large language models (LLMs).
We conduct experiments using LLMs with different subword tokenization methods.
We show that models can, on the whole, reliably recognize the meaning of affixal negation.
arXiv Detail & Related papers (2024-04-03T03:14:27Z)
- Language models are not naysayers: An analysis of language models on negation benchmarks [58.32362243122714]
We evaluate the ability of current-generation auto-regressive language models to handle negation.
We show that LLMs have several limitations including insensitivity to the presence of negation, an inability to capture the lexical semantics of negation, and a failure to reason under negation.
arXiv Detail & Related papers (2023-06-14T01:16:37Z)
- CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation [21.56001677478673]
We present the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs.
CONDAQA features 14,182 question-answer pairs with over 200 unique negation cues.
The best-performing model on CONDAQA (UnifiedQA-v2-3b) achieves only 42% on our consistency metric, well below human performance of 81%.
arXiv Detail & Related papers (2022-11-01T06:10:26Z)
- Not another Negation Benchmark: The NaN-NLI Test Suite for Sub-clausal Negation [59.307534363825816]
Negation is poorly captured by current language models, although the extent of this problem is not widely understood.
We introduce a natural language inference (NLI) test suite to enable probing the capabilities of NLP methods.
arXiv Detail & Related papers (2022-10-06T23:39:01Z)
- Understanding by Understanding Not: Modeling Negation in Language Models [81.21351681735973]
Negation is a core construction in natural language.
We propose to augment the language modeling objective with an unlikelihood objective that is based on negated generic sentences.
We reduce the mean top-1 error rate to 4% on the negated LAMA dataset.
arXiv Detail & Related papers (2021-05-07T21:58:35Z)
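The last entry above mentions augmenting the language modeling objective with an unlikelihood term over negated generic sentences. As a rough, generic illustration (not the authors' exact formulation), a token-level unlikelihood loss penalizes probability mass the model assigns to continuations that should not follow a negated context:

```python
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, negative_targets, pad_id=-100):
    """Generic token-level unlikelihood loss sketch.

    logits: (batch, seq_len, vocab) model outputs for a negated sentence.
    negative_targets: (batch, seq_len) token ids whose probability should be
    pushed down; positions equal to pad_id are ignored.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    mask = negative_targets.ne(pad_id).float()
    safe_targets = negative_targets.clamp(min=0)
    p = log_probs.gather(-1, safe_targets.unsqueeze(-1)).squeeze(-1).exp()
    # -log(1 - p(token)): large when the model assigns high probability
    # to the undesired token; clamped for numerical stability.
    loss = -torch.log((1.0 - p).clamp(min=1e-6))
    return (loss * mask).sum() / mask.sum().clamp(min=1)
```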
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.