Graph-to-Tree Neural Networks for Learning Structured Input-Output
Translation with Applications to Semantic Parsing and Math Word Problem
- URL: http://arxiv.org/abs/2004.13781v2
- Date: Tue, 6 Oct 2020 09:07:57 GMT
- Title: Graph-to-Tree Neural Networks for Learning Structured Input-Output
Translation with Applications to Semantic Parsing and Math Word Problem
- Authors: Shucheng Li, Lingfei Wu, Shiwei Feng, Fangli Xu, Fengyuan Xu and Sheng
Zhong
- Abstract summary: We present a novel Graph-to-Tree Neural Network, namely Graph2Tree, consisting of a graph encoder and a hierarchical tree decoder, which encodes an augmented graph-structured input and decodes a tree-structured output.
Our experiments demonstrate that our Graph2Tree model outperforms or matches the performance of other state-of-the-art models on these tasks.
- Score: 33.610361579567794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The celebrated Seq2Seq technique and its numerous variants achieve excellent
performance on many tasks such as neural machine translation, semantic parsing,
and math word problem solving. However, these models either only consider input
objects as sequences while ignoring the important structural information for
encoding, or they simply treat output objects as sequence outputs instead of
structural objects for decoding. In this paper, we present a novel
Graph-to-Tree Neural Network, namely Graph2Tree, consisting of a graph encoder
and a hierarchical tree decoder, which encodes an augmented graph-structured
input and decodes a tree-structured output. In particular, we investigate our
model on two problems: neural semantic parsing and math word problem solving.
Our extensive experiments demonstrate that our Graph2Tree model outperforms or
matches the performance of other state-of-the-art models on these tasks.
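To make the encode-then-decode pipeline concrete, here is a minimal PyTorch sketch of a graph encoder feeding a tree decoder. It illustrates the general Graph2Tree shape only, not the authors' released implementation; the module names, sizes, and fixed branching factor are assumptions.

```python
# Minimal sketch of a graph-encoder / tree-decoder pipeline.
# Illustrative only; not the paper's released implementation.
import torch
import torch.nn as nn

class GraphEncoder(nn.Module):
    """One round of message passing over a row-normalized adjacency matrix."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.GRUCell(dim, dim)

    def forward(self, x, adj):
        # x: (num_nodes, dim), adj: (num_nodes, num_nodes)
        messages = adj @ self.msg(x)   # aggregate neighbor messages
        return self.upd(messages, x)   # update node states

class TreeDecoder(nn.Module):
    """Expands one parent state into a fixed number of child states."""
    def __init__(self, dim, branching=2):
        super().__init__()
        self.expand = nn.Linear(dim, dim * branching)
        self.branching = branching

    def forward(self, parent):
        children = self.expand(parent)         # (dim * branching,)
        return children.view(self.branching, -1)

dim = 16
enc, dec = GraphEncoder(dim), TreeDecoder(dim)
x = torch.randn(5, dim)                # 5 input graph nodes
adj = torch.full((5, 5), 0.2)          # toy row-normalized adjacency
root = enc(x, adj).mean(dim=0)         # pool node states into a root state
left, right = dec(root)                # decode the root into two child states
```

A real model would decode children recursively, conditioned on parent states and attention over the encoded graph, until a complete output tree is emitted.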
Related papers
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
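A hedged sketch of the linear-time attention idea behind such an all-pair scheme: positive random features approximate softmax attention without materializing the N x N matrix, and Gumbel noise gives a differentiable relaxation of edge sampling. The feature map and temperature below are Performer-style assumptions, not NodeFormer's exact operator.

```python
# Sketch of kernelized all-pair message passing with Gumbel perturbation.
# Feature map and temperature are assumptions, not the paper's exact operator.
import torch

def random_feature_map(x, proj):
    # Positive random features phi(x) ~ exp(w^T x - ||x||^2 / 2)
    return torch.exp(x @ proj - (x ** 2).sum(-1, keepdim=True) / 2)

def kernelized_message_passing(q, k, v, proj, tau=0.5):
    # Gumbel noise on keys: a differentiable relaxation of edge sampling
    gumbel = -torch.log(-torch.log(torch.rand_like(k)))
    phi_q = random_feature_map(q / tau, proj)
    phi_k = random_feature_map((k + gumbel) / tau, proj)
    # Linear-time attention: phi_q (phi_k^T v) instead of an N x N matrix
    num = phi_q @ (phi_k.transpose(0, 1) @ v)
    den = phi_q @ phi_k.sum(0, keepdim=True).transpose(0, 1)
    return num / (den + 1e-6)

n, d, m = 8, 16, 32                    # nodes, hidden dim, random features
proj = torch.randn(d, m) / d ** 0.5
h = torch.randn(n, d)
out = kernelized_message_passing(h, h, h, proj)  # (n, d) updated node signals
```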
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Conversational Semantic Parsing using Dynamic Context Graphs [68.72121830563906]
We consider the task of conversational semantic parsing over general-purpose knowledge graphs (KGs) with millions of entities and thousands of relation types.
We focus on models which are capable of interactively mapping user utterances into executable logical forms.
arXiv Detail & Related papers (2023-05-04T16:04:41Z)
- Graph-to-Text Generation with Dynamic Structure Pruning [19.37474618180399]
We propose a Structure-Aware Cross-Attention (SACA) mechanism to re-encode the input graph representation conditioned on the newly generated context.
We achieve new state-of-the-art results on two graph-to-text datasets, LDC2020T02 and ENT-DESC, with only a minor increase in computational cost.
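A minimal sketch of the re-encoding step, assuming standard cross-attention: the graph nodes query the freshly generated prefix, so the input representation shifts as decoding proceeds. The residual update and shapes are illustrative; the paper's actual gating and pruning are omitted.

```python
# Re-encoding a graph representation conditioned on the generated prefix.
# Illustrative stand-in for SACA; gating/pruning details omitted.
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)

graph_nodes = torch.randn(1, 10, 32)   # encoder states for 10 graph nodes
generated = torch.randn(1, 4, 32)      # decoder states for 4 generated tokens

# Nodes query the generated context, so their representations
# shift as decoding proceeds
reencoded, _ = attn(query=graph_nodes, key=generated, value=generated)
graph_nodes = graph_nodes + reencoded  # residual update of the graph encoding
```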
arXiv Detail & Related papers (2022-09-15T12:48:10Z)
- Neural Topological Ordering for Computation Graphs [23.225391263047364]
We propose an end-to-end machine-learning-based approach to topological ordering using an encoder-decoder framework.
We show that our model outperforms or is on par with several topological ordering baselines while being significantly faster on synthetic graphs with up to 2k nodes.
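One way to picture such an approach, as a hedged sketch: a learned scorer assigns each node a priority, and decoding emits nodes in Kahn-style order, always taking the highest-priority node whose predecessors are already placed. This shows the framing only; the paper's encoder-decoder is more involved.

```python
# Learned tie-breaking on top of Kahn's algorithm; a framing sketch only.
import torch
import torch.nn as nn

def neural_topological_order(features, edges, scorer):
    n = features.shape[0]
    indeg = [0] * n
    for u, v in edges:
        indeg[v] += 1
    scores = scorer(features).squeeze(-1)  # one learned priority per node
    order, placed = [], [False] * n
    for _ in range(n):
        ready = [i for i in range(n) if indeg[i] == 0 and not placed[i]]
        nxt = max(ready, key=lambda i: scores[i].item())  # pick by priority
        placed[nxt] = True
        order.append(nxt)
        for u, v in edges:          # release successors of the placed node
            if u == nxt:
                indeg[v] -= 1
    return order

scorer = nn.Linear(8, 1)
feats = torch.randn(5, 8)
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
print(neural_topological_order(feats, edges, scorer))  # e.g. [0, 2, 1, 3, 4]
```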
arXiv Detail & Related papers (2022-07-13T00:12:02Z)
- Compositionality-Aware Graph2Seq Learning [2.127049691404299]
In many graph2seq tasks, compositionality in the input graph can be associated with compositionality in the output sequence.
We adopt the multi-level attention pooling (MLAP) architecture, which aggregates graph representations from multiple levels of information locality.
We demonstrate that the model with the MLAP architecture outperforms the previous state-of-the-art model while using more than seven times fewer parameters.
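A hedged sketch of the MLAP idea: pool node states with attention after every message-passing layer, then aggregate the per-layer graph vectors, so several levels of information locality contribute. The layer choices and the final sum are assumptions, not the paper's exact model.

```python
# Multi-level attention pooling across message-passing layers; a sketch.
import torch
import torch.nn as nn

class AttnPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, 1)
    def forward(self, x):                       # x: (num_nodes, dim)
        w = torch.softmax(self.gate(x), dim=0)  # attention over nodes
        return (w * x).sum(0)                   # one graph vector

dim, n_layers = 16, 3
layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_layers)])
pools = nn.ModuleList([AttnPool(dim) for _ in range(n_layers)])

x = torch.randn(7, dim)                     # 7 nodes
adj = torch.full((7, 7), 1 / 7)             # toy normalized adjacency
graph_vecs = []
for layer, pool in zip(layers, pools):
    x = torch.relu(layer(adj @ x))          # one round of message passing
    graph_vecs.append(pool(x))              # pool at this locality level
graph_repr = torch.stack(graph_vecs).sum(0)  # aggregate across levels
```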
arXiv Detail & Related papers (2022-01-28T15:22:39Z)
- Computing Steiner Trees using Graph Neural Networks [1.0159681653887238]
In this paper, we tackle the Steiner Tree Problem.
We employ four learning frameworks to compute low-cost Steiner trees.
Our findings suggest that the out-of-the-box application of GNN methods performs worse than the classic 2-approximation algorithm.
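For reference, the classic 2-approximation baseline builds the metric closure over the terminals, takes its minimum spanning tree, and expands tree edges back into shortest paths; networkx ships this construction:

```python
# The metric-closure 2-approximation for Steiner trees via networkx.
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 1), ("b", "c", 1), ("a", "c", 3),
    ("c", "d", 1), ("b", "d", 4),
])
terminals = ["a", "c", "d"]
T = steiner_tree(G, terminals, weight="weight")  # 2-approximation
print(sorted(T.edges()))  # e.g. [('a', 'b'), ('b', 'c'), ('c', 'd')], cost 3
```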
arXiv Detail & Related papers (2021-08-18T19:55:16Z)
- Structural Information Preserving for Graph-to-Text Generation [59.00642847499138]
The task of graph-to-text generation aims at producing sentences that preserve the meaning of input graphs.
We propose to tackle this problem by leveraging richer training signals that can guide our model for preserving input information.
Experiments on two benchmarks for graph-to-text generation show the effectiveness of our approach over a state-of-the-art baseline.
arXiv Detail & Related papers (2021-02-12T20:09:01Z)
- Promoting Graph Awareness in Linearized Graph-to-Text Generation [72.83863719868364]
We study the ability of linearized models to encode local graph structures.
Our findings motivate solutions to enrich the quality of models' implicit graph encodings.
We find that these denoising scaffolds lead to substantial improvements in downstream generation in low-resource settings.
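A hedged illustration of a denoising scaffold over a linearized graph: serialize the triples into a token sequence, corrupt part of it, and train the model to recover the original, which encourages an implicit encoding of local structure. The tags and corruption scheme below are stand-ins, not the paper's exact setup.

```python
# Graph linearization plus a toy masking corruption for denoising training.
import random

triples = [("museum", "locatedIn", "Paris"), ("museum", "opened", "1977")]
tokens = []
for head, rel, tail in triples:
    tokens += ["<H>", head, "<R>", rel, "<T>", tail]  # linearized graph

random.seed(0)
# Keep structural tags, randomly mask content tokens
noised = [t if t.startswith("<") or random.random() > 0.3 else "<mask>"
          for t in tokens]
print(" ".join(noised))   # scaffold input: model must recover masked tokens
print(" ".join(tokens))   # reconstruction target
```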
arXiv Detail & Related papers (2020-12-31T18:17:57Z)
- Recursive Tree Grammar Autoencoders [3.791857415239352]
We propose a novel autoencoder approach that encodes trees via a bottom-up grammar and decodes trees via a tree grammar.
We show experimentally that our proposed method improves the autoencoding error, training time, and optimization score on four benchmark datasets.
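A minimal sketch of the bottom-up half of such an autoencoder: each grammar rule gets its own small network, and a tree is encoded by recursively combining child codes through the rule that produced the node. The toy grammar and binary shape are assumptions.

```python
# Bottom-up grammar encoding of a tree into one latent vector; a sketch.
import torch
import torch.nn as nn

dim = 8
rule_nets = nn.ModuleDict({
    "plus": nn.Linear(2 * dim, dim),   # Expr -> Expr '+' Expr
    "leaf": nn.Linear(1, dim),         # Expr -> number
})

def encode(tree):
    # tree is ("leaf", value) or ("plus", left_subtree, right_subtree)
    if tree[0] == "leaf":
        return torch.tanh(rule_nets["leaf"](torch.tensor([[float(tree[1])]])))
    left, right = encode(tree[1]), encode(tree[2])
    return torch.tanh(rule_nets["plus"](torch.cat([left, right], dim=-1)))

code = encode(("plus", ("leaf", 1), ("plus", ("leaf", 2), ("leaf", 3))))
print(code.shape)  # torch.Size([1, 8]): one latent vector for the whole tree
```

The decoder would run the grammar in reverse: predict a rule from the latent code, expand it into child codes, and recurse until only terminals remain.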
arXiv Detail & Related papers (2020-12-03T17:37:25Z)
- My Body is a Cage: the Role of Morphology in Graph-Based Incompatible Control [65.77164390203396]
We present a series of ablations on existing methods that show that morphological information encoded in the graph does not improve their performance.
Motivated by the hypothesis that any benefits GNNs extract from the graph structure are outweighed by difficulties they create for message passing, we also propose Amorpheus.
arXiv Detail & Related papers (2020-10-05T08:37:11Z)
- Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, which motivates their automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, rely on the text of graph triples and triple-level contextualized representations.
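The TransE idea mentioned above fits in a few lines: relations act as translations in embedding space, so a triple (h, r, t) is plausible when h + r lands close to t. The entities, relation, and dimensions below are illustrative; real models learn the embeddings by minimizing this distance on true triples.

```python
# TransE scoring: relations as translations in embedding space.
import torch

torch.manual_seed(0)
entities = {name: torch.randn(8) for name in ["paris", "france", "tokyo"]}
relations = {"capital_of": torch.randn(8)}

def transe_score(h, r, t):
    # Lower distance = more plausible triple (after training)
    return torch.norm(entities[h] + relations[r] - entities[t], p=1)

print(transe_score("paris", "capital_of", "france"))  # trained: small
print(transe_score("tokyo", "capital_of", "france"))  # trained: large
```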
arXiv Detail & Related papers (2020-04-30T13:50:34Z)