Superlatives in Context: Modeling the Implicit Semantics of Superlatives
- URL: http://arxiv.org/abs/2405.20967v2
- Date: Thu, 17 Oct 2024 17:19:32 GMT
- Title: Superlatives in Context: Modeling the Implicit Semantics of Superlatives
- Authors: Valentina Pyatkin, Bonnie Webber, Ido Dagan, Reut Tsarfaty
- Abstract summary: Superlatives are used to single out elements with a maximal/minimal property.
Superlatives provide an ideal phenomenon for studying implicit phenomena and discourse restrictions.
We show that the fine-grained semantics of superlatives in context can be challenging for contemporary models.
- Score: 31.063753498947346
- Abstract: Superlatives are used to single out elements with a maximal/minimal property. Semantically, superlatives perform a set comparison: something (or some things) has the min/max property out of a set. As such, superlatives provide an ideal phenomenon for studying implicit phenomena and discourse restrictions. While this comparison set is often not explicitly defined, its (implicit) restrictions can be inferred from the discourse context the expression appears in. In this work we provide an extensive computational study on the semantics of superlatives. We propose a unified account of superlative semantics which allows us to derive a broad-coverage annotation schema. Using this unified schema we annotated a multi-domain dataset of superlatives and their semantic interpretations. We specifically focus on interpreting implicit or ambiguous superlative expressions, by analyzing how the discourse context restricts the set of interpretations. In a set of experiments we then analyze how well models perform at variations of predicting superlative semantics, with and without context. We show that the fine-grained semantics of superlatives in context can be challenging for contemporary models, including GPT-4.
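As a rough illustration of the with/without-context setup the abstract describes, the following is a minimal sketch, assuming an OpenAI-style chat API; the example sentence, discourse context, prompt wording, and model name are illustrative assumptions, not the authors' released code or annotation schema.

```python
# Minimal sketch: ask a model to make the implicit comparison set of a
# superlative explicit, once without and once with discourse context.
# All strings below are hypothetical; the paper's actual prompts differ.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

SENTENCE = "The tallest player scored twice."          # hypothetical example
CONTEXT = "The match between Ajax and PSV ended 3-1."  # hypothetical context

def comparison_set(sentence: str, context: str | None = None) -> str:
    """Ask the model to spell out the comparison set of the superlative."""
    prompt = (
        "A superlative singles out elements with a maximal/minimal property "
        "from a comparison set. For the superlative in the sentence below, "
        "state the comparison set, making any implicit restriction explicit.\n"
        + (f"Context: {context}\n" if context else "")
        + f"Sentence: {sentence}\nComparison set:"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Without context the set is underspecified (players in general);
# with context it is plausibly restricted to the players in that match.
print(comparison_set(SENTENCE))
print(comparison_set(SENTENCE, CONTEXT))
```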
Related papers
- Explicating the Implicit: Argument Detection Beyond Sentence Boundaries [24.728886446551577]
We reformulate the problem of argument detection through textual entailment to capture semantic relations across sentence boundaries.
Our method does not require direct supervision, which is generally absent due to dataset scarcity.
We demonstrate it on a recent document-level benchmark, outperforming some supervised methods and contemporary language models.
arXiv Detail & Related papers (2024-08-08T06:18:24Z) - Syntax and Semantics Meet in the "Middle": Probing the Syntax-Semantics Interface of LMs Through Agentivity [68.8204255655161]
We present the semantic notion of agentivity as a case study for probing such interactions.
This suggests LMs may serve as more useful tools for linguistic annotation, theory testing, and discovery.
arXiv Detail & Related papers (2023-05-29T16:24:01Z) - Simple Linguistic Inferences of Large Language Models (LLMs): Blind Spots and Blinds [59.71218039095155]
We evaluate language understanding capacities on simple inference tasks that most humans find trivial.
We target (i) grammatically-specified entailments, (ii) premises with evidential adverbs of uncertainty, and (iii) monotonicity entailments.
The models exhibit moderate to low performance on these evaluation sets.
arXiv Detail & Related papers (2023-05-24T06:41:09Z) - Semantic-aware Contrastive Learning for More Accurate Semantic Parsing [32.74456368167872]
We propose a semantic-aware contrastive learning algorithm, which can learn to distinguish fine-grained meaning representations.
Experiments on two standard datasets show that our approach achieves significant improvements over MLE baselines.
arXiv Detail & Related papers (2023-01-19T07:04:32Z) - SDA: Simple Discrete Augmentation for Contrastive Sentence Representation Learning [14.028140579482688]
As reported, the continuous dropout-based augmentation used in SimCSE surprisingly dominates discrete augmentations such as cropping, word deletion, and synonym replacement.
We develop three simple yet effective discrete sentence augmentation schemes: punctuation insertion, modal verbs, and double negation.
Results consistently support the superiority of the proposed methods.
arXiv Detail & Related papers (2022-10-08T08:07:47Z) - UCTopic: Unsupervised Contrastive Learning for Phrase Representations and Topic Mining [27.808028645942827]
UCTopic is a novel unsupervised contrastive learning framework for context-aware phrase representations and topic mining.
It is pretrained at large scale to distinguish whether the contexts of two phrase mentions have the same semantics.
It outperforms the state-of-the-art phrase representation model by 38.2% NMI on average across four entity clustering tasks.
arXiv Detail & Related papers (2022-02-27T22:43:06Z) - Clustering and Network Analysis for the Embedding Spaces of Sentences and Sub-Sentences [69.3939291118954]
This paper reports research on a set of comprehensive clustering and network analyses targeting sentence and sub-sentence embedding spaces.
Results show that one method generates the most clusterable embeddings.
In general, the embeddings of span sub-sentences have better clustering properties than the original sentences.
arXiv Detail & Related papers (2021-10-02T00:47:35Z) - Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z) - Recursive Rules with Aggregation: A Simple Unified Semantics [0.6662800021628273]
This paper describes a unified semantics for recursion with aggregation.
We present a formal definition of the semantics, prove important properties of the semantics, and compare with prior semantics.
We show that our semantics is simple and matches the desired results in all cases.
arXiv Detail & Related papers (2020-07-26T04:42:44Z) - How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context [59.13515950353125]
We present a grammar-based decoding semantic parsing framework and adapt typical context modeling methods on top of it.
We evaluate 13 context modeling methods on two large cross-domain datasets, and our best model achieves state-of-the-art performances.
arXiv Detail & Related papers (2020-02-03T11:28:10Z)