Dependency Graph Parsing as Sequence Labeling
- URL: http://arxiv.org/abs/2410.17972v1
- Date: Wed, 23 Oct 2024 15:37:02 GMT
- Title: Dependency Graph Parsing as Sequence Labeling
- Authors: Ana Ezquerro, David Vilares, Carlos Gómez-Rodríguez
- Abstract summary: We define a range of unbounded and bounded linearizations that can be used to cast graph parsing as a tagging task.
Experimental results on semantic dependency and enhanced UD parsing show that with a good choice of encoding, sequence-labeling dependency graph parsers combine high efficiency with accuracies close to the state of the art.
- Score: 18.079016557290338
- License:
- Abstract: Various linearizations have been proposed to cast syntactic dependency parsing as sequence labeling. However, these approaches do not support more complex graph-based representations, such as semantic dependencies or enhanced universal dependencies, as they cannot handle reentrancy or cycles. By extending them, we define a range of unbounded and bounded linearizations that can be used to cast graph parsing as a tagging task, enlarging the toolbox of problems that can be solved under this paradigm. Experimental results on semantic dependency and enhanced UD parsing show that with a good choice of encoding, sequence-labeling dependency graph parsers combine high efficiency with accuracies close to the state of the art, in spite of their simplicity.
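The core idea of the abstract, encoding a graph (rather than a tree) as one tag per token, can be sketched in a few lines. The encoding below is a hypothetical illustration, not one of the paper's actual linearizations: each token's tag lists all of its incoming edges as "offset:relation" pairs, so a token with several heads (reentrancy) simply carries several pairs.

```python
# Sketch of casting dependency *graph* parsing as sequence labeling.
# Hypothetical encoding (not the paper's): each token's tag lists its
# incoming edges as "offset:relation" pairs (offset = head - dependent),
# joined by "|". Multiple pairs per token allow reentrancy.

def encode(n_tokens, edges):
    """edges: list of (head, dependent, relation), 0-indexed tokens."""
    tags = [[] for _ in range(n_tokens)]
    for head, dep, rel in edges:
        tags[dep].append(f"{head - dep}:{rel}")
    # A token with no incoming edge gets the empty tag "O".
    return ["|".join(sorted(t)) if t else "O" for t in tags]

def decode(tags):
    """Invert encode(): recover the (head, dependent, relation) edge set."""
    edges = []
    for dep, tag in enumerate(tags):
        if tag == "O":
            continue
        for pair in tag.split("|"):
            offset, rel = pair.split(":", 1)
            edges.append((dep + int(offset), dep, rel))
    return sorted(edges)

# A 3-token graph with a reentrant node: token 2 has two heads (0 and 1).
graph = [(0, 1, "arg1"), (0, 2, "arg2"), (1, 2, "mod")]
tags = encode(3, graph)
assert decode(tags) == sorted(graph)
```

Because the tag alphabet grows with the offsets seen in training, this toy version is "unbounded" in the paper's sense; a bounded variant would restrict which heads each tag can express.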
Related papers
- A Semi-Autoregressive Graph Generative Model for Dependency Graph
Parsing [24.829141650007273]
We show that dependency graphs fail to capture the explicit dependencies among nodes and edges.
We design a semi-autoregressive dependency parser that generates dependency graphs by adding node groups and edge groups autoregressively, predicting all elements within a group in parallel.
arXiv Detail & Related papers (2023-06-21T05:07:40Z) - Visual Dependency Transformers: Dependency Tree Emerges from Reversed
Attention [106.67741967871969]
We propose Visual Dependency Transformers (DependencyViT) that can induce visual dependencies without any labels.
We formulate it as a dependency graph where a child token in reversed attention is trained to attend to its parent tokens and send information.
DependencyViT works well on both self- and weakly-supervised pretraining paradigms on ImageNet.
arXiv Detail & Related papers (2023-04-06T17:59:26Z) - Self-Sufficient Framework for Continuous Sign Language Recognition [75.60327502570242]
The goal of this work is to develop a self-sufficient framework for Continuous Sign Language Recognition.
Key challenges include the need for complex multi-scale features, such as hands, face, and mouth, and the absence of frame-level annotations.
We propose Divide and Focus Convolution (DFConv) which extracts both manual and non-manual features without the need for additional networks or annotations.
DPLR propagates non-spiky frame-level pseudo-labels by combining the ground truth gloss sequence labels with the predicted sequence.
arXiv Detail & Related papers (2023-03-21T11:42:57Z) - Graph Adaptive Semantic Transfer for Cross-domain Sentiment
Classification [68.06496970320595]
Cross-domain sentiment classification (CDSC) aims to use the transferable semantics learned from the source domain to predict the sentiment of reviews in the unlabeled target domain.
We present the Graph Adaptive Semantic Transfer (GAST) model, an adaptive syntactic graph embedding method that is able to learn domain-invariant semantics from both word sequences and syntactic graphs.
arXiv Detail & Related papers (2022-05-18T07:47:01Z) - Uncertain Label Correction via Auxiliary Action Unit Graphs for Facial
Expression Recognition [46.99756911719854]
We achieve uncertain label correction of facial expressions using auxiliary action unit (AU) graphs, called ULC-AG.
Experiments show that our ULC-AG achieves 89.31% and 61.57% accuracy on RAF-DB and AffectNet datasets, respectively.
arXiv Detail & Related papers (2022-04-23T11:09:43Z) - Fine-grained Entity Typing via Label Reasoning [41.05579329042479]
We propose the Label Reasoning Network (LRN), which sequentially reasons fine-grained entity labels.
Experiments show that LRN achieves the state-of-the-art performance on standard ultra fine-grained entity typing benchmarks.
arXiv Detail & Related papers (2021-09-13T07:08:47Z) - Learning compositional structures for semantic graph parsing [81.41592892863979]
We show how AM dependency parsing can be trained directly on a neural latent-variable model.
Our model picks up on several linguistic phenomena on its own and achieves comparable accuracy to supervised training.
arXiv Detail & Related papers (2021-06-08T14:20:07Z) - GRACE: Gradient Harmonized and Cascaded Labeling for Aspect-based
Sentiment Analysis [90.43089622630258]
We propose a GRadient hArmonized and CascadEd labeling model (GRACE) to solve these problems.
The proposed model achieves consistency improvement on multiple benchmark datasets and generates state-of-the-art results.
arXiv Detail & Related papers (2020-09-22T13:55:34Z) - Learn to Propagate Reliably on Noisy Affinity Graphs [69.97364913330989]
Recent works have shown that exploiting unlabeled data through label propagation can substantially reduce the labeling cost.
How to propagate labels reliably, especially on a dataset with unknown outliers, remains an open question.
We propose a new framework that allows labels to be propagated reliably on large-scale real-world data.
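The label-propagation setup this entry builds on can be sketched compactly. The code below is the classic iterative scheme (spread labels along affinity edges, then re-clamp the known labels each round), shown as a baseline; it is not the reliability-aware framework the paper proposes.

```python
import numpy as np

# Classic label propagation on a weighted affinity graph: each node
# repeatedly averages its neighbors' label distributions, while nodes
# with known labels are re-clamped to the ground truth every iteration.
# This is the textbook baseline, not the paper's proposed framework.

def label_propagation(W, y, labeled, n_classes, iters=50):
    """W: (n, n) symmetric affinity matrix; y: labels of `labeled` nodes."""
    n = W.shape[0]
    # Row-normalize so each node averages over its neighbors.
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    F = np.zeros((n, n_classes))
    F[labeled, y] = 1.0
    clamp = F[labeled].copy()
    for _ in range(iters):
        F = P @ F
        F[labeled] = clamp  # re-clamp ground-truth labels
    return F.argmax(axis=1)

# Toy chain graph 0-1-2-3: node 0 labeled class 0, node 3 labeled class 1.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
pred = label_propagation(W, y=[0, 1], labeled=[0, 3], n_classes=2)
assert list(pred) == [0, 0, 1, 1]
```

On noisy graphs with outliers (the setting of the paper), this naive scheme spreads errors along spurious edges, which is exactly what the proposed framework is designed to avoid.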
arXiv Detail & Related papers (2020-07-17T07:55:59Z) - Transition-based Semantic Dependency Parsing with Pointer Networks [0.34376560669160383]
We propose a transition system that can straightforwardly produce labelled directed acyclic graphs and perform semantic dependency parsing.
We enhance our approach with deep contextualized word embeddings extracted from BERT.
The resulting system not only outperforms all existing transition-based models, but also matches the best fully-supervised accuracy to date on the SemEval 2015 Task 18 English datasets.
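The kind of transition system this entry describes can be sketched as follows. The actions below are hypothetical and illustrative, not the paper's actual system: because attaching a head never removes the focus token, a word can take several heads, so the derivation produces a labelled graph rather than a tree (acyclicity is not enforced in this sketch).

```python
# Hypothetical left-to-right transition system producing a labelled
# dependency graph (illustrative; not the paper's exact system).
# ATTACH adds one labelled head edge to the current focus token;
# SHIFT advances the focus. Tokens are never removed, so a token
# may receive several heads.

def parse(n_tokens, actions):
    """actions: list of ('attach', head, label) or ('shift',)."""
    edges, focus = [], 0
    for act in actions:
        if act[0] == "shift":
            focus += 1
        else:
            _, head, label = act
            assert head != focus, "no self-loops"
            edges.append((head, focus, label))
    return edges

# Derive the reentrant graph 0->1 (arg1), 0->2 (arg2), 1->2 (mod):
actions = [("shift",), ("attach", 0, "arg1"),
           ("shift",), ("attach", 0, "arg2"), ("attach", 1, "mod")]
print(parse(3, actions))  # [(0, 1, 'arg1'), (0, 2, 'arg2'), (1, 2, 'mod')]
```

In a pointer-network parser, the head index of each ATTACH would be predicted by pointing into the sentence rather than read from an oracle action list as here.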
arXiv Detail & Related papers (2020-05-27T13:18:27Z) - Factorized Graph Representations for Semi-Supervised Learning from
Sparse Data [8.875598257768846]
We show that a method called distant compatibility estimation works even on extremely sparsely labeled graphs.
Our estimator is orders of magnitude faster than an alternative approach, and its end-to-end classification accuracy is comparable to using gold-standard compatibilities.
arXiv Detail & Related papers (2020-03-05T18:57:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.