Unsupervised Mapping of Arguments of Deverbal Nouns to Their Corresponding Verbal Labels
- URL: http://arxiv.org/abs/2306.13922v1
- Date: Sat, 24 Jun 2023 10:07:01 GMT
- Title: Unsupervised Mapping of Arguments of Deverbal Nouns to Their Corresponding Verbal Labels
- Authors: Aviv Weinstein and Yoav Goldberg
- Abstract summary: Deverbal nouns are nominal forms of verbs commonly used in written English texts to describe events or actions, as well as their arguments.
The solutions that do exist for handling arguments of nominalized constructions are based on semantic annotation.
We propose to adopt a more syntactic approach, which maps the arguments of deverbal nouns to the corresponding verbal construction.
- Score: 52.940886615390106
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deverbal nouns are nominal forms of verbs commonly used in written English
texts to describe events or actions, as well as their arguments. However, many
NLP systems, and in particular pattern-based ones, neglect to handle such
nominalized constructions. The solutions that do exist for handling arguments
of nominalized constructions are based on semantic annotation and require
semantic ontologies, making their applications restricted to a small set of
nouns. We propose to adopt instead a more syntactic approach, which maps the
arguments of deverbal nouns to the universal-dependency relations of the
corresponding verbal construction. We present an unsupervised mechanism,
based on contextualized word representations, which enriches
universal-dependency trees with dependency arcs denoting arguments of
deverbal nouns, using the same labels as in the corresponding verbal cases.
Because the label set is shared with the verbal case, patterns developed
for verbs can be applied to the nominal constructions without modification
and with high accuracy.
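The paper describes the mechanism itself; the sketch below is only a loose, hypothetical illustration of the underlying intuition (not the authors' implementation): compare contextualized vectors of a deverbal noun's arguments against the argument slots of a verbal paraphrase, and copy the most similar slot's UD label. The model choice, example sentences, and the nsubj/obj slot inventory are all illustrative assumptions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, target: str) -> torch.Tensor:
    """Mean of the contextualized vectors of `target`'s subword pieces."""
    words = sentence.split()
    enc = tok(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    widx = words.index(target)
    pieces = [i for i, w in enumerate(enc.word_ids(0)) if w == widx]
    return hidden[pieces].mean(dim=0)

# Verbal construction whose argument labels are known (in practice they
# would come from an off-the-shelf UD parser).
verbal = "the enemy destroyed the city"
verbal_slots = {"nsubj": "enemy", "obj": "city"}

# Nominal construction whose arguments we want to label with the same set.
nominal = "the destruction of the city by the enemy"
for arg in ("city", "enemy"):
    vec = word_vector(nominal, arg)
    scores = {
        label: torch.cosine_similarity(vec, word_vector(verbal, word), dim=0).item()
        for label, word in verbal_slots.items()
    }
    print(arg, "->", max(scores, key=scores.get), scores)
```

To the extent the representations behave as the abstract suggests, "enemy" maps to nsubj and "city" to obj, so a pattern written over verbal nsubj/obj arcs also fires on "the destruction of the city by the enemy".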
Related papers
- A Compositional Typed Semantics for Universal Dependencies [26.65442947858347]
We introduce UD Type Calculus, a compositional, principled, and language-independent system of semantic types and logical forms for lexical items.
We explain the essential features of UD Type Calculus, which all involve giving dependency relations denotations just like those of words.
We present results on a large existing corpus of sentences and their logical forms, showing that UD-TC can produce meanings comparable with our baseline.
arXiv Detail & Related papers (2024-03-02T11:58:24Z)
- Dynamic Syntax Mapping: A New Approach to Unsupervised Syntax Parsing [0.0]
This study investigates the premise that language models, specifically their attention distributions, can encapsulate syntactic dependencies.
We introduce Dynamic Syntax Mapping (DSM), an innovative approach for the induction of these structures.
Our findings reveal that the use of an increasing array of substitutions notably enhances parsing precision on natural language data.
arXiv Detail & Related papers (2023-12-18T10:34:29Z)
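Dynamic Syntax Mapping's precise procedure (including its substitution-based induction) is in the paper above; the snippet below is a generic, hypothetical rendition of the attention-as-syntax premise it builds on: average a pretrained LM's attention maps and extract a maximum spanning tree over words as an induced parse. Model and sentence are arbitrary choices.

```python
import numpy as np
import torch
from scipy.sparse.csgraph import minimum_spanning_tree
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

words = "the enemy destroyed the city".split()
enc = tok(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    # One (seq, seq) map: mean over all layers and heads.
    att = torch.stack(model(**enc).attentions).mean(dim=(0, 2))[0].numpy()

# Pool subword-level attention up to word level (mean over piece pairs).
wid = enc.word_ids(0)
n = len(words)
W, cnt = np.zeros((n, n)), np.zeros((n, n))
for i, wi in enumerate(wid):
    for j, wj in enumerate(wid):
        if wi is not None and wj is not None:
            W[wi, wj] += att[i, j]
            cnt[wi, wj] += 1
W /= np.maximum(cnt, 1)

# Symmetrize, drop self-loops, and take a *maximum* spanning tree
# (scipy only provides minimum, so negate the weights).
sym = (W + W.T) / 2
np.fill_diagonal(sym, 0.0)
tree = minimum_spanning_tree(-sym).toarray()
for i, j in zip(*np.nonzero(tree)):
    print(words[i], "--", words[j])  # undirected induced "dependency" edges
```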
- Interpretable Word Sense Representations via Definition Generation: The Case of Semantic Change Analysis [3.515619810213763]
We propose using automatically generated natural language definitions of contextualised word usages as interpretable word and word sense representations.
We demonstrate how the resulting sense labels can make existing approaches to semantic change analysis more interpretable.
arXiv Detail & Related papers (2023-05-19T20:36:21Z)
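A minimal sketch of the definition-as-representation idea in the entry above, assuming a generic instruction-tuned seq2seq model; the prompt wording and the flan-t5 checkpoint are stand-ins, not the paper's setup.

```python
from transformers import pipeline

# Any instruction-tuned seq2seq model can stand in; flan-t5-base is an
# arbitrary illustrative choice.
definer = pipeline("text2text-generation", model="google/flan-t5-base")

def define_in_context(word: str, context: str) -> str:
    prompt = f'What is the definition of "{word}" in: "{context}"'
    return definer(prompt, max_new_tokens=32)[0]["generated_text"]

# The same surface form should receive different glosses per context,
# which is what makes the generated labels usable for sense analysis
# and for tracking semantic change over time.
print(define_in_context("bank", "She sat on the bank of the river."))
print(define_in_context("bank", "The bank raised its interest rates."))
```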
- Semantic Role Labeling Meets Definition Modeling: Using Natural Language to Describe Predicate-Argument Structures [104.32063681736349]
We present an approach to describe predicate-argument structures using natural language definitions instead of discrete labels.
Our experiments and analyses on PropBank-style and FrameNet-style, dependency-based and span-based SRL also demonstrate that a flexible model with an interpretable output does not necessarily come at the expense of performance.
arXiv Detail & Related papers (2022-12-02T11:19:16Z)
- Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings [11.475144702935568]
We quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models.
We find that word representations are position-biased, where the first words in different contexts tend to be more similar.
arXiv Detail & Related papers (2022-08-20T12:27:25Z)
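In the spirit of the sense-wise variance measurements above, here is a small hypothetical probe (not the paper's protocol): embed one surface form in contexts of two senses and compare within-sense to across-sense cosine similarity. Model, sentences, and sense groupings are invented for illustration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence: str, target: str) -> torch.Tensor:
    """Contextualized vector of `target` (mean over its subword pieces)."""
    words = sentence.split()
    enc = tok(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    idx = [i for i, w in enumerate(enc.word_ids(0)) if w == words.index(target)]
    return hidden[idx].mean(dim=0)

contexts = {
    "financial": ["the bank approved the loan", "she deposited cash at the bank"],
    "river": ["they fished from the bank", "the bank of the river flooded"],
}
vecs = {sense: [embed(c, "bank") for c in cs] for sense, cs in contexts.items()}

cos = torch.nn.functional.cosine_similarity
for sense, (a, b) in vecs.items():
    print(f"within {sense}: {cos(a, b, dim=0).item():.3f}")
print(f"across senses: {cos(vecs['financial'][0], vecs['river'][0], dim=0).item():.3f}")
```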
- More Than Words: Collocation Tokenization for Latent Dirichlet Allocation Models [71.42030830910227]
We propose a new metric for measuring the clustering quality in settings where the models differ.
We show that topics trained with merged tokens result in topic keys that are clearer, more coherent, and more effective at distinguishing topics than those of unmerged models.
arXiv Detail & Related papers (2021-08-24T14:08:19Z)
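The merged-token setup evaluated in the entry above can be approximated with off-the-shelf tools; the toy below uses gensim's Phrases to merge collocations before LDA training (the paper's own merging metric and clustering-quality measure differ).

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Phrases

docs = [
    "the supreme court issued a ruling".split(),
    "the supreme court heard the case".split(),
    "machine learning models need data".split(),
    "machine learning improves with data".split(),
]

# Low thresholds because the toy corpus is tiny; real corpora need tuning.
bigrams = Phrases(docs, min_count=1, threshold=0.1)
merged = [bigrams[d] for d in docs]  # e.g., "supreme_court", "machine_learning"

dictionary = Dictionary(merged)
corpus = [dictionary.doc2bow(d) for d in merged]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
for tid in range(2):
    print(lda.print_topic(tid, topn=4))
```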
- Verb Sense Clustering using Contextualized Word Representations for Semantic Frame Induction [9.93359829907774]
Contextualized word representations have proven useful for various natural language processing tasks.
In this paper, we focus on verbs that evoke different frames depending on the context.
We investigate how well contextualized word representations can distinguish the different frames that the same verb evokes.
arXiv Detail & Related papers (2021-05-27T21:53:40Z)
- Compositional Generalization via Semantic Tagging [81.24269148865555]
We propose a new decoding framework that preserves the expressivity and generality of sequence-to-sequence models.
We show that the proposed approach consistently improves compositional generalization across model architectures, domains, and semantic formalisms.
arXiv Detail & Related papers (2020-10-22T15:55:15Z)
- Unsupervised Distillation of Syntactic Information from Contextualized Word Representations [62.230491683411536]
We tackle the task of unsupervised disentanglement between semantics and structure in neural language representations.
To this end, we automatically generate groups of sentences which are structurally similar but semantically different.
We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.
arXiv Detail & Related papers (2020-10-11T15:13:18Z)
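The key ingredient in the entry above is automatically built groups of sentences that share structure but not meaning; a deliberately simplified, purely templatic generator of such a group is sketched below (the paper itself derives substitutes from a language model rather than hand-written pools).

```python
import random

random.seed(0)
# Pools of same-POS replacements keep the syntax fixed while changing lexemes.
pools = {
    "NOUN": ["dog", "committee", "engine", "violin"],
    "VERB": ["chased", "approved", "ignored", "repaired"],
}
template = ["the", "NOUN", "quietly", "VERB", "the", "NOUN"]

def sample_sentence() -> str:
    """One structurally identical, lexically fresh sentence per call."""
    return " ".join(random.choice(pools[t]) if t in pools else t for t in template)

# All members of the group share one parse skeleton; a representation that
# clusters them together is encoding structure rather than lexical meaning.
group = [sample_sentence() for _ in range(4)]
print("\n".join(group))
```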
- Investigating Cross-Linguistic Adjective Ordering Tendencies with a Latent-Variable Model [66.84264870118723]
We present the first purely corpus-driven model of multi-lingual adjective ordering in the form of a latent-variable model.
We provide strong converging evidence for the existence of universal, cross-linguistic, hierarchical adjective ordering tendencies.
arXiv Detail & Related papers (2020-10-09T18:27:55Z)
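As a back-of-the-envelope contrast to the latent-variable model above, pairwise precedence counts over observed adjective sequences already surface ordering tendencies; the tiny "corpus" below is invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Toy corpus of prenominal adjective sequences (in practice these would
# be extracted from parsed text).
phrases = [
    ["big", "old", "red"],
    ["nice", "big", "wooden"],
    ["old", "red"],
    ["big", "red"],
    ["nice", "old", "wooden"],
]

before = Counter()
for adjs in phrases:
    for a, b in combinations(adjs, 2):  # a precedes b in this phrase
        before[(a, b)] += 1

for (a, b), n in sorted(before.items()):
    rev = before.get((b, a), 0)
    if n > rev:
        print(f"{a} < {b}  ({n} vs {rev})")
# A hierarchy like size < age < color emerges even from these few counts.
```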
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.