A Semi-Autoregressive Graph Generative Model for Dependency Graph
Parsing
- URL: http://arxiv.org/abs/2306.12018v1
- Date: Wed, 21 Jun 2023 05:07:40 GMT
- Title: A Semi-Autoregressive Graph Generative Model for Dependency Graph
Parsing
- Authors: Ye Ma, Mingming Sun, Ping Li
- Abstract summary: We show that both autoregressive and non-autoregressive parsers fail to precisely capture the explicit dependencies among the nodes and edges of a dependency graph.
We design a Semi-Autoregressive Dependency Parser that generates dependency graphs by adding node groups and edge groups autoregressively while pouring out all group elements in parallel.
- Score: 24.829141650007273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed impressive progress in Neural Dependency
Parsing. According to how they factorize the graph joint probability, existing
parsers can be roughly divided into autoregressive and non-autoregressive
patterns. The former factorizes the graph into multiple sequentially dependent
components and builds it up component by component, while the latter assumes
these components to be independent so that they can be output in a one-shot
manner. However, when treating the directed edge as an explicit dependency
relationship, we discover that the dependency graph contains a mixture of
independent and interdependent components, signifying that neither of the
aforementioned models precisely captures the explicit dependencies among nodes
and edges. Based on this property, we design a Semi-Autoregressive Dependency
Parser that generates dependency graphs by adding node groups and edge groups
autoregressively while pouring out all group elements in parallel. The model
strikes a trade-off between non-autoregression and autoregression, which
respectively suffer from the lack of target inter-dependencies and the
uncertainty of graph generation orders. Experiments show the proposed parser
outperforms strong baselines on Enhanced Universal Dependencies in multiple
languages, achieving in particular a $4\%$ average improvement in graph-level
accuracy. The performance of model variants further shows the importance of
specific components.
Related papers
- Dependency Graph Parsing as Sequence Labeling [18.079016557290338]
We define a range of unbounded and bounded linearizations that can be used to cast graph parsing as a tagging task.
Experimental results on semantic dependency and enhanced UD parsing show that with a good choice of encoding, sequence-labeling dependency graphs combine high efficiency with accuracies close to the state of the art.
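One way to picture casting graph parsing as tagging is to give each token a tag that lists its head positions as relative offsets (tokens may have several heads in graph parsing). The encoding below is a simplified illustration of the idea, not one of the paper's specific linearizations.

```python
# Toy linearization of a dependency *graph* as per-token tags: each token's
# tag is the sorted list of relative head offsets. A token may have several
# heads, which is what distinguishes graph parsing from tree parsing.
# This particular encoding is illustrative only.

def encode(n_tokens, arcs):
    """arcs: set of (head, dependent) index pairs; index 0 serves as root."""
    tags = []
    for dep in range(1, n_tokens + 1):
        offsets = sorted(h - dep for (h, d) in arcs if d == dep)
        tags.append(offsets)
    return tags

def decode(tags):
    arcs = set()
    for dep, offsets in enumerate(tags, start=1):
        for off in offsets:
            arcs.add((dep + off, dep))
    return arcs

arcs = {(2, 1), (0, 2), (2, 4), (4, 3)}  # toy graph over 4 tokens
tags = encode(4, arcs)
assert decode(tags) == arcs              # the linearization round-trips
```

An unbounded offset vocabulary like this one grows with sentence length; the bounded variants studied in the paper trade coverage for a fixed tag set.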
arXiv Detail & Related papers (2024-10-23T15:37:02Z) - Online-to-PAC generalization bounds under graph-mixing dependencies [9.763215134790478]
We propose a framework where dependencies decay with graph distance.
We derive generalization bounds leveraging the online-to-PAC framework.
The resulting high-probability generalization guarantees depend on both the mixing rate and the graph's chromatic number.
arXiv Detail & Related papers (2024-10-11T16:49:01Z) - GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first it models the distribution of features associated with the nodes of the given graph, in the second it complements the edge features conditionally on the node features.
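The two-phase factorization can be sketched as follows. The "generators" here are deterministic stand-ins, not the paper's adversarial models; what the sketch shows is only the ordering: node features first, then edge features conditioned on their endpoints.

```python
# Two-phase sketch of the GrannGAN idea: given a fixed graph skeleton,
# phase 1 samples node features, phase 2 samples edge features conditioned
# on the endpoint node features. The generators below are placeholders.
import random

def node_generator(node, noise):
    return noise  # stand-in for a learned node-feature generator

def edge_generator(feat_u, feat_v, noise):
    return (feat_u + feat_v) / 2 + noise  # conditioned on both endpoints

def annotate(skeleton_edges, nodes):
    node_feats = {v: node_generator(v, random.gauss(0, 1))
                  for v in nodes}                                # phase 1
    edge_feats = {(u, v): edge_generator(node_feats[u], node_feats[v],
                                         random.gauss(0, 0.1))
                  for (u, v) in skeleton_edges}                  # phase 2
    return node_feats, edge_feats
```

Conditioning the second phase on the first is what keeps the generated edge features coherent with the node features on the same skeleton.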
arXiv Detail & Related papers (2022-12-01T11:49:07Z) - Explanation Graph Generation via Pre-trained Language Models: An
Empirical Study with Contrastive Learning [84.35102534158621]
We study pre-trained language models that generate explanation graphs in an end-to-end manner.
We propose simple yet effective ways of graph perturbations via node and edge edit operations.
Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs.
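Graph perturbation via edit operations can be illustrated with two minimal edits on a labeled graph; the specific edits shown (deleting an edge, relabeling a node) are illustrative choices, not the paper's exact perturbation set.

```python
# Sketch of simple graph perturbations via node and edge edit operations,
# in the spirit of constructing contrastive negatives for explanation
# graphs. Each edit returns a perturbed copy; the original is untouched.
import copy

def delete_edge(graph, edge):
    g = copy.deepcopy(graph)
    g["edges"].remove(edge)
    return g

def relabel_node(graph, node, new_label):
    g = copy.deepcopy(graph)
    g["labels"][node] = new_label
    return g

g = {"edges": [("a", "b"), ("b", "c")],
     "labels": {"a": "cause", "b": "effect", "c": "entity"}}
neg = delete_edge(g, ("a", "b"))  # one structural negative example
```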
arXiv Detail & Related papers (2022-04-11T00:58:27Z) - Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised
Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which shows that our model can effectively improve the performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z) - Prototypical Graph Contrastive Learning [141.30842113683775]
We propose a Prototypical Graph Contrastive Learning (PGCL) approach to mitigate the critical sampling bias issue.
Specifically, PGCL models the underlying semantic structure of the graph data via clustering semantically similar graphs into the same group, and simultaneously encourages the clustering consistency for different augmentations of the same graph.
For a query, PGCL further reweights its negative samples based on the distance between their prototypes (cluster centroids) and the query prototype.
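The reweighting step can be sketched numerically: negatives whose cluster prototype lies farther from the query's prototype receive larger weight, so likely "false negatives" from the query's own semantic cluster count less. The normalized-distance scheme below is an illustration, not PGCL's exact weighting formula.

```python
# Sketch of prototype-based negative reweighting: weight each negative
# sample by the distance between its cluster prototype and the query's
# prototype, normalized to sum to 1. Illustrative weighting only.
import math

def reweight_negatives(query_proto, negative_protos):
    dists = [math.dist(query_proto, p) for p in negative_protos]
    total = sum(dists)
    return [d / total for d in dists]  # weights sum to 1

w = reweight_negatives((0.0, 0.0), [(1.0, 0.0), (3.0, 0.0)])
# the farther prototype receives the larger weight
```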
arXiv Detail & Related papers (2021-06-17T16:45:31Z) - Graph Ensemble Learning over Multiple Dependency Trees for Aspect-level
Sentiment Classification [37.936820137442254]
We propose a simple yet effective graph ensemble technique, GraphMerge, to make use of the predictions from different parsers.
Instead of assigning one set of model parameters to each dependency tree, we first combine the dependency relations from different parses before applying GNNs over the resulting graph.
Our experiments on the SemEval 2014 Task 4 and ACL 14 Twitter datasets show that our GraphMerge model not only outperforms models with a single dependency tree, but also beats other ensemble models without adding model parameters.
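The merging step itself is simple to sketch: take the union of the dependency edges produced by several parsers and run one GNN over the merged graph. The sketch below covers only the edge merging; the GNN is out of scope here.

```python
# Sketch of the GraphMerge idea: instead of running one GNN per parse
# tree, merge the edges from several dependency parses into a single
# graph and apply one GNN to the result (no extra model parameters).

def graph_merge(parses):
    """parses: list of edge sets, one per dependency parser."""
    merged = set()
    for edges in parses:
        merged |= edges  # union keeps every relation any parser predicted
    return merged

parse_a = {(2, 1), (2, 4)}  # two hypothetical parses of the same sentence
parse_b = {(2, 1), (4, 3)}
merged = graph_merge([parse_a, parse_b])
```

Because the ensemble happens at the graph level rather than the parameter level, adding another parser adds edges but no weights.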
arXiv Detail & Related papers (2021-03-12T22:27:23Z) - Factorizable Graph Convolutional Networks [90.59836684458905]
We introduce a novel graph convolutional network (GCN) that explicitly disentangles intertwined relations encoded in a graph.
FactorGCN takes a simple graph as input, and disentangles it into several factorized graphs.
We evaluate the proposed FactorGCN both qualitatively and quantitatively on the synthetic and real-world datasets.
arXiv Detail & Related papers (2020-10-12T03:01:40Z) - Connecting the Dots: Multivariate Time Series Forecasting with Graph
Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
arXiv Detail & Related papers (2020-05-24T04:02:18Z) - Residual Correlation in Graph Neural Network Regression [39.54530450932135]
We show that the conditional independence assumption severely limits predictive power.
We address this problem with an interpretable and efficient framework.
Our framework achieves substantially higher accuracy than competing baselines.
arXiv Detail & Related papers (2020-02-19T16:32:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.