Syntactic Nuclei in Dependency Parsing -- A Multilingual Exploration
- URL: http://arxiv.org/abs/2101.11959v2
- Date: Fri, 29 Jan 2021 18:45:52 GMT
- Title: Syntactic Nuclei in Dependency Parsing -- A Multilingual Exploration
- Authors: Ali Basirat and Joakim Nivre
- Abstract summary: We show how the concept of nucleus can be defined in the framework of Universal Dependencies.
Experiments on 12 languages show that nucleus composition gives small but significant improvements in parsing accuracy.
- Score: 8.25332300240617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Standard models for syntactic dependency parsing take words to be the
elementary units that enter into dependency relations. In this paper, we
investigate whether there are any benefits from enriching these models with the
more abstract notion of nucleus proposed by Tesnière. We do this by showing
how the concept of nucleus can be defined in the framework of Universal
Dependencies and how we can use composition functions to make a
transition-based dependency parser aware of this concept. Experiments on 12
languages show that nucleus composition gives small but significant
improvements in parsing accuracy. Further analysis reveals that the improvement
mainly concerns a small number of dependency relations, including nominal
modifiers, relations of coordination, main predicates, and direct objects.
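To make the mechanism concrete: the idea is that function words (e.g., determiners, auxiliaries, adpositions) form a nucleus together with their content-word head, and the parser composes their vectors into one representation when the arc is created. Below is a minimal sketch under stated assumptions, not the authors' implementation; the relation set, the single tanh composition function, and all names are illustrative.

```python
# Minimal sketch of nucleus composition in a transition-based parser.
# Assumptions: the UD relations below are treated as nucleus-forming, and a
# single learned composition function updates the head vector at arc time.
import torch
import torch.nn as nn

FUNCTIONAL_RELATIONS = {"case", "det", "aux", "cop", "mark", "clf"}  # assumed set

class NucleusComposer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # one shared composition function; relation-specific variants are
        # an equally plausible design
        self.combine = nn.Linear(2 * dim, dim)

    def forward(self, head_vec: torch.Tensor, dep_vec: torch.Tensor) -> torch.Tensor:
        # fuse head and functional dependent into a single nucleus vector
        return torch.tanh(self.combine(torch.cat([head_vec, dep_vec], dim=-1)))

def attach(head_vec, dep_vec, relation, composer):
    """Called when the parser adds an arc head --relation--> dependent.
    Nucleus-forming relations fold the dependent into the head's
    representation; all other relations leave the head unchanged."""
    if relation in FUNCTIONAL_RELATIONS:
        return composer(head_vec, dep_vec)
    return head_vec
```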
Related papers
- Dynamic Syntax Mapping: A New Approach to Unsupervised Syntax Parsing [0.0]
This study investigates the premise that language models, specifically their attention distributions, can encapsulate syntactic dependencies.
We introduce Dynamic Syntax Mapping (DSM), an innovative approach for the induction of these structures.
Our findings reveal that the use of an increasing array of substitutions notably enhances parsing precision on natural language data.
arXiv Detail & Related papers (2023-12-18T10:34:29Z)
- A Pilot Study on Dialogue-Level Dependency Parsing for Chinese [21.698966896156087]
We develop a high-quality human-annotated corpus, which contains 850 dialogues and 199,803 dependencies.
Considering that such tasks suffer from high annotation costs, we investigate zero-shot and few-shot scenarios.
Based on an existing syntactic treebank, we adopt a signal-based method to transform seen syntactic dependencies into unseen ones.
arXiv Detail & Related papers (2023-05-21T12:20:13Z)
- Syntactic Substitutability as Unsupervised Dependency Syntax [31.488677474152794]
We model a more general property implicit in the definition of dependency relations, syntactic substitutability.
This property captures the fact that words at either end of a dependency can be substituted with words from the same category.
We show that increasing the number of substitutions used improves parsing accuracy on natural data.
arXiv Detail & Related papers (2022-11-29T09:01:37Z)
- Dependency Induction Through the Lens of Visual Perception [81.91502968815746]
We propose an unsupervised grammar induction model that leverages word concreteness and a structural vision-based heuristic to jointly learn constituency-structure and dependency-structure grammars.
Our experiments show that the proposed extension outperforms the current state-of-the-art visually grounded models in constituency parsing even with a smaller grammar size.
arXiv Detail & Related papers (2021-09-20T18:40:37Z)
- Linguistic dependencies and statistical dependence [76.89273585568084]
We use pretrained language models to estimate probabilities of words in context.
We find that maximum-CPMI trees correspond to linguistic dependencies more often than trees extracted from a non-contextual PMI estimate.
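As a rough illustration of the tree-extraction step described in this entry (the LM-based CPMI estimation itself is omitted, and all names are assumptions), one can take a matrix of pairwise PMI scores and read off the maximum spanning tree:

```python
# Toy sketch: extract a maximum spanning tree from a pairwise PMI matrix.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def max_pmi_tree(pmi: np.ndarray) -> list:
    """Return the undirected maximum spanning tree over word pairs.
    pmi[i, j] is an (assumed) PMI estimate for words i and j."""
    sym = (pmi + pmi.T) / 2.0                 # symmetrize the estimates
    # minimum_spanning_tree minimizes, so negate to maximize total PMI;
    # the shift keeps all edge weights positive (zeros are read as "no edge")
    weights = sym.max() - sym + 1.0
    np.fill_diagonal(weights, 0.0)            # no self-loops
    tree = minimum_spanning_tree(weights).tocoo()
    return sorted(zip(tree.row.tolist(), tree.col.tolist()))

scores = np.array([[0.0, 2.1, 0.3],
                   [2.1, 0.0, 1.7],
                   [0.3, 1.7, 0.0]])
print(max_pmi_tree(scores))   # e.g. [(0, 1), (1, 2)]
```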
arXiv Detail & Related papers (2021-04-18T02:43:37Z)
- Prototypical Representation Learning for Relation Extraction [56.501332067073065]
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models.
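As a loose, hypothetical illustration of the prototype idea (not the paper's model, which learns prototypes jointly with the encoder), a relation can be represented by the mean of its instance embeddings and new instances classified by nearest prototype:

```python
# Sketch: mean-of-instances prototypes and nearest-prototype classification.
# The contextual embedding step is assumed given; all names are illustrative.
import numpy as np

def build_prototypes(instances: np.ndarray, labels) -> dict:
    """instances: (n, d) contextual embeddings; labels: n relation names."""
    labels = np.asarray(labels)
    return {r: instances[labels == r].mean(axis=0) for r in set(labels.tolist())}

def classify(x: np.ndarray, prototypes: dict) -> str:
    # assign the relation whose prototype is most cosine-similar to x
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(prototypes, key=lambda r: cos(x, prototypes[r]))
```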
arXiv Detail & Related papers (2021-03-22T08:11:43Z)
- StructFormer: Joint Unsupervised Induction of Dependency and Constituency Structure from Masked Language Modeling [45.96663013609177]
We introduce a novel model, StructFormer, that can induce dependency and constituency structure at the same time.
We integrate the induced dependency relations into the transformer, in a differentiable manner, through a novel dependency-constrained self-attention mechanism.
Experimental results show that our model can achieve strong results on unsupervised constituency parsing, unsupervised dependency parsing, and masked language modeling.
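The following is a deliberately simplified sketch of what a dependency-constrained attention head could look like (single head, numpy; the gating form and names are assumptions rather than StructFormer's exact formulation): ordinary attention weights are gated by an induced soft dependency distribution and renormalized.

```python
# Sketch: self-attention gated by a soft dependency matrix.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def constrained_attention(q, k, v, dep_prob):
    """q, k, v: (n, d) arrays. dep_prob[i, j]: assumed soft probability that
    token j is a syntactic neighbour of token i; rows sum to 1."""
    scores = q @ k.T / np.sqrt(q.shape[-1])         # scaled dot-product
    weights = softmax(scores) * dep_prob            # gate by induced dependencies
    weights /= weights.sum(axis=-1, keepdims=True)  # renormalize rows
    return weights @ v
```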
arXiv Detail & Related papers (2020-12-01T21:54:51Z)
- Learning Relation Prototype from Unlabeled Texts for Long-tail Relation Extraction [84.64435075778988]
We propose a general approach to learn relation prototypes from unlabeled texts.
We learn relation prototypes as an implicit factor between entities.
We conduct experiments on two publicly available datasets: New York Times and Google Distant Supervision.
arXiv Detail & Related papers (2020-11-27T06:21:12Z)
- High-order Semantic Role Labeling [86.29371274587146]
This paper introduces a high-order graph structure for the neural semantic role labeling model.
It enables the model to explicitly consider not only the isolated predicate-argument pairs but also the interaction between the predicate-argument pairs.
Experimental results on 7 languages of the CoNLL-2009 benchmark show that the high-order structural learning techniques are beneficial to strong-performing SRL models.
arXiv Detail & Related papers (2020-10-09T15:33:54Z)
- A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures [14.714725860010724]
We focus on two of the most popular formalizations of parsing: constituent parsing and dependency parsing.
This article briefly reviews the representative models of constituent parsing and dependency parsing, and also dependency parsing with rich semantics.
arXiv Detail & Related papers (2020-06-19T10:21:17Z)
- Multiplex Word Embeddings for Selectional Preference Acquisition [70.33531759861111]
We propose a multiplex word embedding model, which can be easily extended according to various relations among words.
Our model can effectively distinguish words with respect to different relations without introducing unnecessary sparseness.
arXiv Detail & Related papers (2020-01-09T04:47:14Z)
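To illustrate the multiplex idea in the roughest terms (dimensions, the additive combination, and all names are assumptions, not the paper's formulation): each word keeps one shared center vector plus a lightweight relation-specific vector per relation, so supporting a new relation only adds one small table.

```python
# Sketch: shared center embedding plus per-relation offset embeddings.
import numpy as np

class MultiplexEmbeddings:
    def __init__(self, vocab_size, relations, dim=100, seed=0):
        rng = np.random.default_rng(seed)
        self.center = rng.normal(size=(vocab_size, dim))
        # one lightweight table per relation ("layer" of the multiplex)
        self.relational = {r: rng.normal(scale=0.1, size=(vocab_size, dim))
                           for r in relations}

    def vector(self, word_id, relation):
        # relation-specific view of the word: shared center + relation offset
        return self.center[word_id] + self.relational[relation][word_id]

emb = MultiplexEmbeddings(vocab_size=5000, relations=["nsubj", "dobj", "amod"])
v = emb.vector(42, "dobj")   # representation of word 42 as a direct object
```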