ConjNLI: Natural Language Inference Over Conjunctive Sentences
- URL: http://arxiv.org/abs/2010.10418v2
- Date: Wed, 21 Oct 2020 21:49:00 GMT
- Title: ConjNLI: Natural Language Inference Over Conjunctive Sentences
- Authors: Swarnadeep Saha, Yixin Nie, Mohit Bansal
- Abstract summary: Reasoning about conjuncts in conjunctive sentences is important for a deeper understanding of conjunctions.
Existing NLI stress tests do not consider non-boolean usages of conjunctions.
We introduce ConjNLI, a challenge stress-test for natural language inference over conjunctive sentences.
- Score: 89.50542552451368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reasoning about conjuncts in conjunctive sentences is important for a deeper
understanding of conjunctions in English and also how their usages and
semantics differ from conjunctive and disjunctive boolean logic. Existing NLI
stress tests do not consider non-boolean usages of conjunctions and use
templates for testing such model knowledge. Hence, we introduce ConjNLI, a
challenge stress-test for natural language inference over conjunctive
sentences, where the premise differs from the hypothesis by conjuncts removed,
added, or replaced. These sentences contain single and multiple instances of
coordinating conjunctions ("and", "or", "but", "nor"), with quantifiers and
negations, and require diverse boolean and non-boolean inferences over
conjuncts. We find that large-scale pre-trained language models like RoBERTa do
not understand conjunctive semantics well and resort to shallow heuristics to
make inferences over such sentences. As some initial solutions, we first
present an iterative adversarial fine-tuning method that uses synthetically
created training data based on boolean and non-boolean heuristics. We also
propose a direct model advancement by making RoBERTa aware of predicate
semantic roles. While we observe some performance gains, ConjNLI is still
challenging for current methods, thus encouraging interesting future work for
better understanding of conjunctions. Our data and code are publicly available
at: https://github.com/swarnaHub/ConjNLI
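To make the premise-hypothesis construction concrete, here is a minimal sketch of deriving hypotheses by removing one conjunct. This is an illustration only, not the authors' pipeline, which builds pairs from dependency parses of Wikipedia sentences with manual annotation:
```python
# A minimal sketch of ConjNLI-style conjunct removal (illustration only).
COORDINATORS = {"and", "or", "but", "nor"}

def remove_conjunct(premise: str):
    """Yield hypotheses formed by dropping one conjunct of a simple
    'A <coord> B' coordination, together with the dropped word."""
    tokens = premise.rstrip(".").split()
    for i, tok in enumerate(tokens):
        if tok in COORDINATORS and 0 < i < len(tokens) - 1:
            # drop the conjunct to the right of the coordinator
            yield " ".join(tokens[:i] + tokens[i + 2:]) + ".", tokens[i + 1]
            # drop the conjunct to the left of the coordinator
            yield " ".join(tokens[:i - 1] + tokens[i + 1:]) + ".", tokens[i - 1]

premise = "He has acted in TV and films."
for hypothesis, dropped in remove_conjunct(premise):
    print(f"{premise} -> {hypothesis} (dropped: {dropped})")
```
A boolean "and" makes both derived hypotheses entailments; the challenging cases are non-boolean uses (e.g., "mixed and baked the dough"), where dropping a conjunct yields a neutral pair instead.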
Related papers
- Generating Diverse Negations from Affirmative Sentences [0.999726509256195]
Negations are important in real-world applications as they encode negative polarity in verb phrases, clauses, or other expressions.
We propose NegVerse, a method that tackles the lack of negation datasets by producing a diverse range of negation types.
We provide new rules for masking parts of sentences where negations are most likely to occur, based on syntactic structure.
We also propose a filtering mechanism to identify negation cues and remove degenerate examples, producing a diverse range of meaningful perturbations.
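As a rough illustration of such syntax-based masking (a hypothetical simplification assuming the spaCy en_core_web_sm model; the paper's actual rules are more elaborate):
```python
# Sketch of syntax-based mask placement for negation: insert a mask
# immediately before verbs/auxiliaries, where cues such as "not" or
# "never" are most likely to occur. Hypothetical simplification of the
# paper's rules, not its implementation.
import spacy

nlp = spacy.load("en_core_web_sm")

def mask_negation_sites(sentence: str, mask: str = "[MASK]"):
    doc = nlp(sentence)
    variants = []
    for tok in doc:
        if tok.pos_ in ("VERB", "AUX"):
            words = [t.text for t in doc]
            words.insert(tok.i, mask)
            variants.append(" ".join(words))
    return variants

print(mask_negation_sites("She always arrives on time."))
# -> ['She always [MASK] arrives on time .']  (whitespace-joined); a
# masked LM then proposes fills, and filtering keeps genuine negation cues.
```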
arXiv Detail & Related papers (2024-10-30T21:25:02Z) - Conjunctive categorial grammars and Lambek grammars with additives [49.1574468325115]
A new family of categorial grammars is proposed, defined by enriching basic categorial grammars with a conjunction operation.
It is also shown that categorial grammars with conjunction can be naturally embedded into the Lambek calculus with conjunction and disjunction operations.
arXiv Detail & Related papers (2024-05-26T18:53:56Z) - LINC: A Neurosymbolic Approach for Logical Reasoning by Combining
Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
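A toy sketch of that modular split, with a hard-coded stand-in for the LLM parser and a naive propositional forward-chaining engine in place of a real first-order prover (all names here are hypothetical):
```python
# Toy neurosymbolic split in the spirit of LINC: a "parser" maps language
# to logic, and a separate symbolic engine does the reasoning. Here the
# parser is hard-coded and the prover is naive propositional forward
# chaining over Horn rules; LINC itself uses an LLM and a FOL prover.
def parse(premises: list[str], conclusion: str):
    """Stand-in for the LLM semantic parser (hard-coded for the demo)."""
    facts = {"man_socrates"}
    rules = [({"man_socrates"}, "mortal_socrates")]  # body -> head
    goal = "mortal_socrates"
    return facts, rules, goal

def forward_chain(facts, rules, goal):
    """Derive new facts until fixpoint; report whether the goal follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return goal in facts

facts, rules, goal = parse(["Socrates is a man.", "All men are mortal."],
                           "Socrates is mortal.")
print(forward_chain(facts, rules, goal))  # True
```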
arXiv Detail & Related papers (2023-10-23T17:58:40Z) - Language Models Can Learn Exceptions to Syntactic Rules [22.810889064523167]
We show that artificial neural networks can generalize productively to novel contexts.
We also show that the relative acceptability of a verb in the active vs. passive voice is positively correlated with the relative frequency of its occurrence in those voices.
arXiv Detail & Related papers (2023-06-09T15:35:11Z) - Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm to further explore the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z) - Subject Verb Agreement Error Patterns in Meaningless Sentences: Humans
vs. BERT [64.40111510974957]
We test whether meaning interferes with subject-verb number agreement in English.
We generate semantically well-formed and nonsensical items.
We find that BERT and humans are both sensitive to our semantic manipulation.
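A minimal sketch of that kind of probe (assuming the Hugging Face transformers library and the bert-base-uncased checkpoint; the paper's items and metric differ):
```python
# Compare BERT's masked-LM preference for a singular vs. plural verb in
# matched sensical and nonsensical frames. Sketch only; model name and
# item wording are assumptions, not the paper's materials.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def verb_preference(sentence: str, singular: str = "is", plural: str = "are"):
    """Return P(singular) - P(plural) at the [MASK] position."""
    inputs = tok(sentence, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tok.mask_token_id).nonzero().item()
    with torch.no_grad():
        probs = model(**inputs).logits[0, mask_pos].softmax(-1)
    return (probs[tok.convert_tokens_to_ids(singular)]
            - probs[tok.convert_tokens_to_ids(plural)]).item()

# same syntax, well-formed vs. nonsensical content:
print(verb_preference("The key to the cabinets [MASK] on the table."))
print(verb_preference("The idea near the cabinets [MASK] above the cloud."))
```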
arXiv Detail & Related papers (2022-09-21T17:57:23Z) - Provable Limitations of Acquiring Meaning from Ungrounded Form: What
will Future Language Models Understand? [87.20342701232869]
We investigate the abilities of ungrounded systems to acquire meaning.
We study whether assertions enable a system to emulate representations preserving semantic relations like equivalence.
We find that assertions enable semantic emulation if all expressions in the language are referentially transparent.
However, if the language uses non-transparent patterns like variable binding, we show that emulation can become an uncomputable problem.
arXiv Detail & Related papers (2021-04-22T01:00:17Z) - Are Natural Language Inference Models IMPPRESsive? Learning IMPlicature
and PRESupposition [17.642255516887968]
Natural language inference (NLI) is an increasingly important task for natural language understanding.
The ability of NLI models to make pragmatic inferences remains understudied.
We evaluate whether BERT, InferSent, and BOW NLI models trained on MultiNLI learn to make pragmatic inferences.
arXiv Detail & Related papers (2020-04-07T01:20:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.