Combining Event Semantics and Degree Semantics for Natural Language Inference
- URL: http://arxiv.org/abs/2011.00961v1
- Date: Mon, 2 Nov 2020 13:27:21 GMT
- Title: Combining Event Semantics and Degree Semantics for Natural Language Inference
- Authors: Izumi Haruta, Koji Mineshima, and Daisuke Bekki
- Abstract summary: We implement a logic-based NLI system that combines event semantics and degree semantics and their interaction with lexical knowledge.
We evaluate the system on various NLI datasets containing linguistically challenging problems.
- Score: 16.536018920603176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In formal semantics, there are two well-developed semantic frameworks: event
semantics, which treats verbs and adverbial modifiers using the notion of
event, and degree semantics, which analyzes adjectives and comparatives using
the notion of degree. However, it is not obvious whether these frameworks can
be combined to handle cases in which the phenomena in question are interacting
with each other. Here, we study this issue by focusing on natural language
inference (NLI). We implement a logic-based NLI system that combines event
semantics and degree semantics and their interaction with lexical knowledge. We
evaluate the system on various NLI datasets containing linguistically
challenging problems. The results show that the system achieves high accuracies
on these datasets in comparison with previous logic-based systems and
deep-learning-based systems. This suggests that the two semantic frameworks can
be combined consistently to handle various combinations of linguistic phenomena
without compromising the advantage of either framework.
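To make the interaction concrete, consider a sentence such as "Bob ran faster than Ann": the verb calls for an event variable, while the comparative calls for a degree variable. A textbook-style logical form combining Neo-Davidsonian event semantics with the classical "A-not-A" analysis of comparatives (our own illustration, not a representation taken from the paper) is:

    \exists d\, \bigl( \exists e_1\, (\mathit{run}(e_1) \land \mathit{agent}(e_1, \mathit{bob}) \land \mathit{fast}(e_1, d))
      \land \lnot\, \exists e_2\, (\mathit{run}(e_2) \land \mathit{agent}(e_2, \mathit{ann}) \land \mathit{fast}(e_2, d)) \bigr)

Here \mathit{fast}(e, d) reads "event e is fast to degree d": some degree is reached by Bob's running but not by Ann's. Deriving such forms compositionally, and reasoning over them together with lexical knowledge, is exactly what a combined system must support.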
Related papers
- Contextualized word senses: from attention to compositionality [0.10878040851637999]
We propose a transparent, interpretable, and linguistically motivated strategy for encoding the contextual sense of words.
Particular attention is given to dependency relations and semantic notions such as selection preferences and paradigmatic classes.
arXiv Detail & Related papers (2023-12-01T16:04:00Z)
- Syntax and Semantics Meet in the "Middle": Probing the Syntax-Semantics Interface of LMs Through Agentivity [68.8204255655161]
We present the semantic notion of agentivity as a case study for probing interactions at the syntax-semantics interface.
The results suggest that LMs may serve as useful tools for linguistic annotation, theory testing, and discovery.
arXiv Detail & Related papers (2023-05-29T16:24:01Z)
- A Comprehensive Empirical Evaluation of Existing Word Embedding Approaches [5.065947993017158]
We present the characteristics of existing word embedding approaches and analyze them with respect to a range of classification tasks.
Traditional approaches mostly use matrix factorization to produce word representations, and they do not capture the semantic and syntactic regularities of language very well.
On the other hand, Neural-network-based approaches can capture sophisticated regularities of the language and preserve the word relationships in the generated word representations.
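For concreteness, here is a minimal sketch of the matrix-factorization family the summary refers to: LSA-style truncated SVD over a small co-occurrence matrix. The vocabulary and counts are invented purely for illustration.

    import numpy as np

    # Toy word-word co-occurrence counts (rows/cols indexed by vocabulary).
    vocab = ["cat", "dog", "pet", "car", "road"]
    counts = np.array([
        [0, 4, 6, 0, 0],
        [4, 0, 5, 0, 0],
        [6, 5, 0, 1, 0],
        [0, 0, 1, 0, 7],
        [0, 0, 0, 7, 0],
    ], dtype=float)

    # Truncated SVD of the co-occurrence matrix yields dense word vectors,
    # as in LSA-style matrix-factorization embeddings.
    u, s, _ = np.linalg.svd(counts, full_matrices=False)
    k = 2
    vectors = u[:, :k] * s[:k]  # each row is a k-dimensional word vector

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    i, j = vocab.index("cat"), vocab.index("dog")
    print(cosine(vectors[i], vectors[j]))  # related words end up close together

Such factorization captures first-order distributional similarity; the neural approaches discussed next learn subtler regularities from context windows or full sentences.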
arXiv Detail & Related papers (2023-03-13T15:34:19Z)
- Embracing Ambiguity: Improving Similarity-oriented Tasks with Contextual Synonym Knowledge [30.010315144903885]
Contextual synonym knowledge is crucial for similarity-oriented tasks.
Most Pre-trained Language Models (PLMs) lack synonym knowledge due to inherent limitations of their pre-training objectives.
We propose PICSO, a flexible framework that supports the injection of contextual synonym knowledge from multiple domains into PLMs.
arXiv Detail & Related papers (2022-11-20T15:25:19Z)
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not represent natural language semantics well.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- The Whole Truth and Nothing But the Truth: Faithful and Controllable Dialogue Response Generation with Dataflow Transduction and Constrained Decoding [65.34601470417967]
We describe a hybrid architecture for dialogue response generation that combines the strengths of neural language modeling and rule-based generation.
Our experiments show that this system outperforms both rule-based and learned approaches in human evaluations of fluency, relevance, and truthfulness.
arXiv Detail & Related papers (2022-09-16T09:00:49Z)
- Multi-sense embeddings through a word sense disambiguation process [2.2344764434954256]
Most Suitable Sense Annotation (MSSA) disambiguates and annotates each word by its specific sense, considering the semantic effects of its context.
We test our approach on six different benchmarks for the word similarity task, showing that our approach can produce state-of-the-art results.
arXiv Detail & Related papers (2021-01-21T16:22:34Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- Logical Inferences with Comparatives and Generalized Quantifiers [18.58482811176484]
A logical inference system for comparatives has not been sufficiently developed for use in the Natural Language Inference task.
We present a compositional semantics that maps various comparative constructions in English to semantic representations via Combinatory Categorial Grammar (CCG).
We show that the system outperforms previous logic-based systems as well as recent deep learning-based models.
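As a rough illustration of the target representations involved (our own example in textbook degree semantics, not output from the system), a sentence combining a generalized quantifier with a comparative, such as "Every student is taller than Bob", can be analyzed as:

    \forall x\, \bigl( \mathit{student}(x) \rightarrow \exists d\, (\mathit{tall}(x, d) \land \lnot\, \mathit{tall}(\mathit{bob}, d)) \bigr)

where \mathit{tall}(x, d) reads "x is tall to at least degree d", so each student reaches some degree of tallness that Bob does not.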
arXiv Detail & Related papers (2020-05-16T11:11:48Z)
- Semantics-Aware Inferential Network for Natural Language Understanding [79.70497178043368]
We propose a Semantics-Aware Inferential Network (SAIN) for natural language understanding.
Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues.
Our model achieves significant improvement on 11 tasks including machine reading comprehension and natural language inference.
arXiv Detail & Related papers (2020-04-28T07:24:43Z)
- Interpretability Analysis for Named Entity Recognition to Understand System Predictions and How They Can Improve [49.878051587667244]
We examine the performance of several variants of LSTM-CRF architectures for named entity recognition.
We find that context representations do contribute to system performance, but that the main factor driving high performance is learning the name tokens themselves.
We enlist human annotators to evaluate the feasibility of inferring entity types from the context alone and find that, although humans also fail to infer the entity type for the majority of the errors made by the context-only system, there is some room for improvement.
arXiv Detail & Related papers (2020-04-09T14:37:12Z)