Modeling Graph Structure via Relative Position for Text Generation from
Knowledge Graphs
- URL: http://arxiv.org/abs/2006.09242v3
- Date: Tue, 27 Apr 2021 09:13:08 GMT
- Title: Modeling Graph Structure via Relative Position for Text Generation from
Knowledge Graphs
- Authors: Martin Schmitt, Leonardo F. R. Ribeiro, Philipp Dufter, Iryna
Gurevych, Hinrich Schütze
- Abstract summary: We present Graformer, a novel Transformer-based encoder-decoder architecture for graph-to-text generation.
With our novel graph self-attention, the encoding of a node relies on all nodes in the input graph - not only direct neighbors - facilitating the detection of global patterns.
Graformer learns to weight these node-node relations differently for different attention heads, thus virtually learning differently connected views of the input graph.
- Score: 54.176285420428776
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Graformer, a novel Transformer-based encoder-decoder architecture
for graph-to-text generation. With our novel graph self-attention, the encoding
of a node relies on all nodes in the input graph - not only direct neighbors -
facilitating the detection of global patterns. We represent the relation
between two nodes as the length of the shortest path between them. Graformer
learns to weight these node-node relations differently for different attention
heads, thus virtually learning differently connected views of the input graph.
We evaluate Graformer on two popular graph-to-text generation benchmarks,
AGENDA and WebNLG, where it achieves strong performance while using many fewer
parameters than other approaches.
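As a rough illustration of the mechanism described in the abstract, the following PyTorch-style sketch adds a learned per-head bias, indexed by shortest-path distance, to ordinary scaled dot-product self-attention over all nodes of a graph. This is a minimal sketch of the idea, not the authors' implementation: the class name, the hyperparameters, and the bucketing of long or unreachable distances into a single embedding slot are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of self-attention with a learned
# per-head bias indexed by the shortest-path distance between nodes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShortestPathSelfAttention(nn.Module):
    """One graph self-attention layer in which every node attends to every
    other node, and the attention logit for a node pair is shifted by a
    learned scalar that depends on their shortest-path distance and on the
    attention head (hyperparameters are illustrative, not the paper's)."""

    def __init__(self, d_model=256, n_heads=8, max_dist=8):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.max_dist = max_dist
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One learnable bias per (distance bucket, head); distances longer
        # than max_dist (or unreachable pairs, encoded with a large value)
        # share the last bucket -- an assumption made for this sketch.
        self.dist_bias = nn.Embedding(max_dist + 1, n_heads)

    def forward(self, x, sp_dist):
        # x: (n_nodes, d_model) node embeddings
        # sp_dist: (n_nodes, n_nodes) LongTensor of shortest-path lengths
        n = x.size(0)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(n, self.n_heads, self.d_head).transpose(0, 1)   # (h, n, d)
        k = k.view(n, self.n_heads, self.d_head).transpose(0, 1)
        v = v.view(n, self.n_heads, self.d_head).transpose(0, 1)

        logits = q @ k.transpose(-2, -1) / self.d_head ** 0.5      # (h, n, n)
        buckets = sp_dist.clamp(0, self.max_dist)                  # (n, n)
        logits = logits + self.dist_bias(buckets).permute(2, 0, 1)  # per-head bias

        attn = F.softmax(logits, dim=-1)
        out = (attn @ v).transpose(0, 1).reshape(n, -1)             # (n, d_model)
        return self.out(out)
```

The shortest-path matrix sp_dist can be precomputed once per input graph, for example with a breadth-first search from every node, and reused across layers.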
Related papers
- Graph Transformer GANs with Graph Masked Modeling for Architectural
Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attention in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z) - Half-Hop: A graph upsampling approach for slowing down message passing [31.26080679115766]
We introduce a framework for improving learning in message passing neural networks.
Our approach essentially upsamples edges in the original graph by adding "slow nodes" at each edge (a minimal sketch of this upsampling appears after this list).
Our method only modifies the input graph, making it plug-and-play and easy to use with existing models.
arXiv Detail & Related papers (2023-08-17T22:24:15Z) - Graph Transformer GANs for Graph-Constrained House Generation [223.739067413952]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The GTGAN learns effective graph node relations in an end-to-end fashion for the challenging graph-constrained house generation task.
arXiv Detail & Related papers (2023-03-14T20:35:45Z) - Stage-wise Fine-tuning for Graph-to-Text Generation [25.379346921398326]
Graph-to-text generation has benefited from pre-trained language models (PLMs) in achieving better performance than structured graph encoders.
We propose a structured graph-to-text model with a two-step fine-tuning mechanism, which first fine-tunes the model on Wikipedia before adapting it to graph-to-text generation.
arXiv Detail & Related papers (2021-05-17T17:15:29Z) - Iterative Context-Aware Graph Inference for Visual Dialog [126.016187323249]
We propose a novel Context-Aware Graph (CAG) neural network.
Each node in the graph corresponds to a joint semantic feature, including both object-based (visual) and history-related (textual) context representations.
arXiv Detail & Related papers (2020-04-05T13:09:37Z) - Modeling Global and Local Node Contexts for Text Generation from
Knowledge Graphs [63.12058935995516]
Recent graph-to-text models generate text from graph-based data using either global or local aggregation.
We propose novel neural models which encode an input graph combining both global and local node contexts.
Our approaches lead to significant improvements on two graph-to-text datasets.
arXiv Detail & Related papers (2020-01-29T18:24:14Z) - Bridging Knowledge Graphs to Generate Scene Graphs [49.69377653925448]
We propose a novel graph-based neural network that iteratively propagates information between the two graphs, as well as within each of them.
Our Graph Bridging Network, GB-Net, successively infers edges and nodes, allowing it to simultaneously exploit and refine the rich, heterogeneous structure of the interconnected scene and commonsense graphs.
arXiv Detail & Related papers (2020-01-07T23:35:52Z)
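The Half-Hop entry above describes upsampling edges by inserting "slow nodes". The sketch below illustrates that transformation on a plain edge list; the function name, the choice to reroute every edge, and the mean-of-endpoints initialization of slow-node features are illustrative assumptions, not the paper's exact procedure (the actual method decides which edges to upsample and how to initialize the new nodes).

```python
# Minimal sketch (not the authors' code) of "slow node" edge upsampling:
# each original edge (u, v) is rerouted through a new intermediate node,
# so a message needs two hops instead of one to travel from u to v.
import numpy as np

def half_hop_upsample(x, edges):
    """x: (n, d) node features; edges: list of (u, v) index pairs.
    Returns augmented features and edges. The slow node's feature is
    initialized here as the mean of its endpoints -- an illustrative
    choice, not necessarily the paper's exact initialization."""
    x = list(map(np.asarray, x))        # one feature vector per node
    new_edges = []
    for u, v in edges:
        s = len(x)                      # index of the new slow node
        x.append((x[u] + x[v]) / 2.0)   # interpolated slow-node feature
        new_edges += [(u, s), (s, v)]   # reroute the edge through s
    return np.stack(x), new_edges

# Tiny usage example on a 3-node path graph 0 - 1 - 2.
features = np.eye(3, dtype=np.float32)
aug_x, aug_edges = half_hop_upsample(features, [(0, 1), (1, 2)])
print(aug_x.shape)   # (5, 3): two slow nodes were added
print(aug_edges)     # [(0, 3), (3, 1), (1, 4), (4, 2)]
```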
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.