Supporting Context Monotonicity Abstractions in Neural NLI Models
- URL: http://arxiv.org/abs/2105.08008v1
- Date: Mon, 17 May 2021 16:43:43 GMT
- Title: Supporting Context Monotonicity Abstractions in Neural NLI Models
- Authors: Julia Rozanova, Deborah Ferreira, Mokanarangan Thayaparan, Marco Valentino, André Freitas
- Abstract summary: In certain NLI problems, the entailment label depends only on the context monotonicity and the relation between the substituted concepts.
We introduce a sound and complete simplified monotonicity logic formalism which describes our treatment of contexts as abstract units.
Using the notions in our formalism, we adapt targeted challenge sets to investigate whether an intermediate context monotonicity classification task can aid NLI models' performance.
- Score: 2.624902795082451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Natural language contexts display logical regularities with respect to
substitutions of related concepts: these are captured in a functional
order-theoretic property called monotonicity. For a certain class of NLI
problems where the resulting entailment label depends only on the context
monotonicity and the relation between the substituted concepts, we build on
previous techniques that aim to improve the performance of NLI models for these
problems, as consistent performance across both upward and downward monotone
contexts still seems difficult to attain even for state-of-the-art models. To
this end, we reframe the problem of context monotonicity classification to make
it compatible with transformer-based pre-trained NLI models and add this task
to the training pipeline. Furthermore, we introduce a sound and complete
simplified monotonicity logic formalism which describes our treatment of
contexts as abstract units. Using the notions in our formalism, we adapt
targeted challenge sets to investigate whether an intermediate context
monotonicity classification task can aid NLI models' performance on examples
exhibiting monotonicity reasoning.
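To make the abstraction concrete, the following is a minimal illustrative sketch (not code from the paper; the function and label names are assumptions) of how, for the class of NLI problems described above, the gold label is fully determined by the context's monotonicity and the relation between the substituted concepts.

# Minimal sketch (illustrative only, not the authors' implementation) of the
# abstraction above: for insertion-based NLI pairs, the gold label is a
# function of (i) the context's monotonicity and (ii) the relation between
# the substituted concepts.

def entailment_label(context_monotonicity: str, concept_relation: str) -> str:
    """Derive the NLI label for premise = context(A), hypothesis = context(B).

    context_monotonicity: "up" (upward monotone) or "down" (downward monotone).
    concept_relation: "A<=B" if the premise concept A entails the hypothesis
        concept B (e.g. "poodle" <= "dog"), "B<=A" for the reverse direction.
    """
    if context_monotonicity == "up":
        # Upward contexts preserve the direction of the concept relation.
        return "entailment" if concept_relation == "A<=B" else "non-entailment"
    if context_monotonicity == "down":
        # Downward contexts reverse the direction of the concept relation.
        return "entailment" if concept_relation == "B<=A" else "non-entailment"
    raise ValueError("label not determined by this rule for non-monotone contexts")


# Upward context "A _ is sleeping.": "A poodle is sleeping." entails "A dog is sleeping."
assert entailment_label("up", "A<=B") == "entailment"
# Downward context "Nobody ate any _.": "Nobody ate any fruit." entails "Nobody ate any apples."
assert entailment_label("down", "B<=A") == "entailment"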
Related papers
- MonoKAN: Certified Monotonic Kolmogorov-Arnold Network [48.623199394622546]
In certain applications, model predictions must align with expert-imposed requirements, sometimes exemplified by partial monotonicity constraints.
We introduce MonoKAN, a novel ANN architecture based on the KAN architecture that achieves certified partial monotonicity while enhancing interpretability.
Our experiments demonstrate that MonoKAN not only enhances interpretability but also improves predictive performance across the majority of benchmarks, outperforming state-of-the-art monotonic approaches.
arXiv Detail & Related papers (2024-09-17T11:10:59Z)
- Sequential Representation Learning via Static-Dynamic Conditional Disentanglement [58.19137637859017]
This paper explores self-supervised disentangled representation learning within sequential data, focusing on separating time-independent and time-varying factors in videos.
We propose a new model that breaks the usual independence assumption between those factors by explicitly accounting for the causal relationship between the static/dynamic variables.
Experiments show that the proposed approach outperforms previous complex state-of-the-art techniques in scenarios where the dynamics of a scene are influenced by its content.
arXiv Detail & Related papers (2024-08-10T17:04:39Z)
- Learning Disentangled Representations for Natural Language Definitions [0.0]
We argue that recurrent syntactic and semantic regularities in textual data can be used to provide the models with both structural biases and generative factors.
We leverage the semantic structures present in a representative and semantically dense category of sentence types, definitional sentences, for training a Variational Autoencoder to learn disentangled representations.
arXiv Detail & Related papers (2022-09-22T14:31:55Z)
- Uni-Perceiver-MoE: Learning Sparse Generalist Models with Conditional MoEs [63.936622239286685]
We find that interference among different tasks and modalities is the main factor behind this phenomenon.
We introduce Conditional Mixture-of-Experts (Conditional MoEs) for generalist models.
Code and pre-trained generalist models will be released.
arXiv Detail & Related papers (2022-06-09T17:59:59Z)
- Does BERT really agree? Fine-grained Analysis of Lexical Dependence on a Syntactic Task [70.29624135819884]
We study the extent to which BERT is able to perform lexically-independent subject-verb number agreement (NA) on targeted syntactic templates.
Our results on nonce sentences suggest that the model generalizes well for simple templates, but fails to perform lexically-independent syntactic generalization when even a single attractor is present.
arXiv Detail & Related papers (2022-04-14T11:33:15Z)
- Can Unsupervised Knowledge Transfer from Social Discussions Help Argument Mining? [25.43442712037725]
We propose a novel transfer learning strategy that exploits unsupervised, argumentative discourse-aware knowledge.
We utilize argumentation-rich social discussions from the ChangeMyView subreddit as a source of this knowledge.
We introduce a novel prompt-based strategy for inter-component relation prediction that complements our proposed fine-tuning method.
arXiv Detail & Related papers (2022-03-24T06:48:56Z)
- Decomposing Natural Logic Inferences in Neural NLI [9.606462437067984]
We investigate whether neural NLI models capture the crucial semantic features central to natural logic: monotonicity and concept inclusion.
We find that monotonicity information is notably weak in the representations of popular NLI models which achieve high scores on benchmarks.
arXiv Detail & Related papers (2021-12-15T17:35:30Z)
- Exploring Transitivity in Neural NLI Models through Veridicality [39.845425535943534]
We focus on the transitivity of inference relations, a fundamental property for systematically drawing inferences.
A model capturing transitivity can compose basic inference patterns and draw new inferences.
We find that current NLI models do not perform consistently well on transitivity inference tasks.
arXiv Detail & Related papers (2021-01-26T11:18:35Z)
- SLM: Learning a Discourse Language Representation with Sentence Unshuffling [53.42814722621715]
We introduce Sentence-level Language Modeling, a new pre-training objective for learning a discourse language representation.
We show that this pre-training objective improves the performance of the original BERT by large margins.
arXiv Detail & Related papers (2020-10-30T13:33:41Z)
- Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation [14.431925736607043]
We present Monotonicity NLI (MoNLI), a new naturalistic dataset focused on lexical entailment and negation.
In behavioral evaluations, we find that models trained on general-purpose NLI datasets fail systematically on MoNLI examples containing negation.
In structural evaluations, we look for evidence that our top-performing BERT-based model has learned to implement the monotonicity algorithm behind MoNLI.
arXiv Detail & Related papers (2020-04-30T07:53:20Z)
- Exact Hard Monotonic Attention for Character-Level Transduction [76.66797368985453]
We show that neural sequence-to-sequence models that use non-monotonic soft attention often outperform popular monotonic models.
We develop a hard attention sequence-to-sequence model that enforces strict monotonicity and learns a latent alignment jointly while learning to transduce.
arXiv Detail & Related papers (2019-05-15T17:51:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.