Tree Decomposition Attention for AMR-to-Text Generation
- URL: http://arxiv.org/abs/2108.12300v1
- Date: Fri, 27 Aug 2021 14:24:25 GMT
- Title: Tree Decomposition Attention for AMR-to-Text Generation
- Authors: Lisa Jin, Daniel Gildea
- Abstract summary: We use a graph's tree decomposition to locally constrain vertex self-attention in a graph encoder.
We apply dynamic programming to derive a forest of tree decompositions, choosing the most structurally similar tree to the AMR.
Our system outperforms a self-attentive baseline by 1.6 BLEU and 1.8 chrF++.
- Score: 12.342043849587613
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text generation from AMR requires mapping a semantic graph to a string that
it annotates. Transformer-based graph encoders, however, poorly capture vertex
dependencies that may benefit sequence prediction. To impose order on an
encoder, we locally constrain vertex self-attention using a graph's tree
decomposition. Instead of forming a full query-key bipartite graph, we restrict
attention to vertices in parent, subtree, and same-depth bags of a vertex. This
hierarchical context lends both sparsity and structure to vertex state updates.
We apply dynamic programming to derive a forest of tree decompositions,
choosing the most structurally similar tree to the AMR. Our system outperforms
a self-attentive baseline by 1.6 BLEU and 1.8 chrF++.
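Concretely, the constraint described in the abstract amounts to masking the query-key matrix of vertex self-attention using the bags of a tree decomposition. The sketch below is a rough illustration under simplifying assumptions, not the paper's implementation: each AMR vertex is assigned a single "home" bag, and the allowed context for a query vertex is taken to be its own bag, its bag's parent bag, all bags in its bag's subtree, and all bags at the same depth. The data structures (`bag_members`, `bag_parent`, `vertex_bag`) and function names are hypothetical.

```python
# Minimal sketch of a tree-decomposition attention mask (assumptions noted above).
from collections import defaultdict

import numpy as np


def build_td_attention_mask(num_vertices, bag_members, bag_parent, vertex_bag):
    """Return a boolean [num_vertices, num_vertices] mask; mask[q, k] is True
    when query vertex q is allowed to attend to key vertex k.

    bag_members: dict bag_id -> set of vertex ids contained in that bag
    bag_parent:  dict bag_id -> parent bag_id (None for the root bag)
    vertex_bag:  dict vertex id -> its home bag_id
    """
    # Depth of each bag in the decomposition tree (root has depth 0).
    depth = {}

    def bag_depth(b):
        if b not in depth:
            parent = bag_parent[b]
            depth[b] = 0 if parent is None else bag_depth(parent) + 1
        return depth[b]

    for b in bag_members:
        bag_depth(b)

    # Child lists, used to enumerate the subtree below a bag.
    children = defaultdict(list)
    for b, parent in bag_parent.items():
        if parent is not None:
            children[parent].append(b)

    def subtree(b):
        stack, out = [b], []
        while stack:
            cur = stack.pop()
            out.append(cur)
            stack.extend(children[cur])
        return out

    mask = np.zeros((num_vertices, num_vertices), dtype=bool)
    for q in range(num_vertices):
        home = vertex_bag[q]
        allowed = set(subtree(home))                  # own bag and its subtree
        if bag_parent[home] is not None:
            allowed.add(bag_parent[home])             # parent bag
        allowed |= {b for b in bag_members
                    if depth[b] == depth[home]}       # same-depth bags
        for b in allowed:
            for k in bag_members[b]:
                mask[q, k] = True
    return mask
```

In use, disallowed query-key pairs would be masked by adding negative infinity to their attention scores before the softmax, which yields the sparse, hierarchical context for vertex state updates that the abstract describes.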
Related papers
- Refinement Module based on Parse Graph of Feature Map for Human Pose Estimation [31.603231536312688]
Parse graphs of the human body can be used to help improve human pose estimation.
We design a Refinement Module based on the Parse Graph of feature map (RMPG), which includes two stages: top-down decomposition and bottom-up combination.
Our network achieves excellent results on multiple mainstream human pose datasets.
arXiv Detail & Related papers (2025-01-19T15:05:15Z) - Graph Generation with $K^2$-trees [13.281380233427287]
We introduce a novel graph generation method leveraging the $K^2$-tree representation.
We also present a sequential $K^2$-tree representation that incorporates pruning, flattening, and tokenization processes.
We extensively evaluate our algorithm on four general and two molecular graph datasets to confirm its superiority for graph generation.
arXiv Detail & Related papers (2023-05-30T15:36:37Z) - Hierarchical clustering with dot products recovers hidden tree structure [53.68551192799585]
In this paper we offer a new perspective on the well established agglomerative clustering algorithm, focusing on recovery of hierarchical structure.
We recommend a simple variant of the standard algorithm, in which clusters are merged by maximum average dot product and not, for example, by minimum distance or within-cluster variance.
We demonstrate that the tree output by this algorithm provides a bona fide estimate of generative hierarchical structure in data, under a generic probabilistic graphical model.
arXiv Detail & Related papers (2023-05-24T11:05:12Z) - Structural Optimization Makes Graph Classification Simpler and Better [5.770986723520119]
We investigate the feasibility of improving graph classification performance while simplifying the model learning process.
Inspired by progress in structural information assessment, we optimize the given data sample from graphs to encoding trees.
We present an implementation of the scheme in a tree kernel and a convolutional network to perform graph classification.
arXiv Detail & Related papers (2021-09-05T08:54:38Z) - Latent Tree Decomposition Parsers for AMR-to-Text Generation [12.342043849587613]
By clustering edges into a hierarchy, a tree decomposition summarizes graph structure.
Our model encodes a forest of tree decompositions and extracts an expected tree.
It surpasses a convolutional baseline for molecular property prediction by 1.92% ROC-AUC.
arXiv Detail & Related papers (2021-08-27T14:30:35Z) - TD-GEN: Graph Generation With Tree Decomposition [31.751200416677225]
TD-GEN is a graph generation framework based on tree decomposition.
Tree nodes are supernodes, each representing a cluster of nodes in the graph.
arXiv Detail & Related papers (2021-06-20T08:57:43Z) - Visualizing hierarchies in scRNA-seq data using a density tree-biased autoencoder [50.591267188664666]
We propose an approach for identifying a meaningful tree structure from high-dimensional scRNA-seq data.
We then introduce DTAE, a tree-biased autoencoder that emphasizes the tree structure of the data in low dimensional space.
arXiv Detail & Related papers (2021-02-11T08:48:48Z) - TreeRNN: Topology-Preserving Deep Graph Embedding and Learning [24.04035265351755]
We study methods to convert graphs into trees so that explicit orders are learned to direct feature integration from local to global.
To best learn the patterns from the graph-tree-images, we propose TreeRNN, a 2D RNN architecture that recurrently integrates the image pixels by rows and columns to help classify the graph categories.
arXiv Detail & Related papers (2020-06-21T15:22:24Z) - Graph Neural Networks with Composite Kernels [60.81504431653264]
We re-interpret node aggregation from the perspective of kernel weighting.
We present a framework to consider feature similarity in an aggregation scheme.
We propose feature aggregation as the composition of the original neighbor-based kernel and a learnable kernel to encode feature similarities in a feature space.
arXiv Detail & Related papers (2020-05-16T04:44:29Z) - Iterative Context-Aware Graph Inference for Visual Dialog [126.016187323249]
We propose a novel Context-Aware Graph (CAG) neural network.
Each node in the graph corresponds to a joint semantic feature, including both object-based (visual) and history-related (textual) context representations.
arXiv Detail & Related papers (2020-04-05T13:09:37Z) - Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods are proposed: unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE).
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z) - Graph Inference Learning for Semi-supervised Classification [50.55765399527556]
We propose a Graph Inference Learning framework to boost the performance of semi-supervised node classification.
For learning the inference process, we introduce meta-optimization on structure relations from training nodes to validation nodes.
Comprehensive evaluations on four benchmark datasets demonstrate the superiority of our proposed GIL when compared against state-of-the-art methods.
arXiv Detail & Related papers (2020-01-17T02:52:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.