Modeling Global and Local Node Contexts for Text Generation from
Knowledge Graphs
- URL: http://arxiv.org/abs/2001.11003v2
- Date: Mon, 22 Jun 2020 16:34:10 GMT
- Title: Modeling Global and Local Node Contexts for Text Generation from
Knowledge Graphs
- Authors: Leonardo F. R. Ribeiro, Yue Zhang, Claire Gardent and Iryna Gurevych
- Abstract summary: Recent graph-to-text models generate text from graph-based data using either global or local aggregation.
We propose novel neural models which encode an input graph combining both global and local node contexts.
Our approaches lead to significant improvements on two graph-to-text datasets.
- Score: 63.12058935995516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent graph-to-text models generate text from graph-based data using either
global or local aggregation to learn node representations. Global node encoding
allows explicit communication between two distant nodes, but neglects graph
topology, since every node is treated as directly connected to every other. In
contrast, local node encoding considers the relations between neighboring
nodes, capturing the graph structure, but it can fail to capture long-range
relations. In this work, we
gather both encoding strategies, proposing novel neural models which encode an
input graph combining both global and local node contexts, in order to learn
better contextualized node embeddings. In our experiments, we demonstrate that
our approaches lead to significant improvements on two graph-to-text datasets
achieving BLEU scores of 18.01 on the AGENDA dataset and 63.69 on the WebNLG
dataset for seen categories, outperforming state-of-the-art models by 3.7 and
3.1 points, respectively.
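The abstract's core idea, combining a topology-aware local aggregation with a fully connected global aggregation, can be illustrated with a minimal sketch. This is not the authors' architecture: the mean-pooling local aggregator, the single-head dot-product attention, and the concatenation step are all simplifying assumptions.

```python
import numpy as np

def local_context(X, A):
    # Mean-pool each node's direct neighbors: local aggregation,
    # respecting graph topology via the adjacency matrix A.
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    return (A @ X) / deg

def global_context(X):
    # Dot-product self-attention over *all* node pairs:
    # global aggregation, ignoring topology entirely.
    scores = X @ X.T / np.sqrt(X.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X

def combined_node_embeddings(X, A):
    # Concatenate both contexts so each node sees its neighborhood
    # structure *and* long-range relations.
    return np.concatenate([local_context(X, A), global_context(X)], axis=1)

# Toy path graph 0-1-2 with 4-dimensional node features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.random.RandomState(0).randn(3, 4)
H = combined_node_embeddings(X, A)
print(H.shape)  # (3, 8)
```

Note that for node 0, the local context is exactly the feature of its only neighbor (node 1), while the global context mixes in node 2 as well, which is the long-range signal local encoding alone would miss.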
Related papers
- Differential Encoding for Improved Representation Learning over Graphs [15.791455338513815]
Message-passing and global attention are the two fundamental mechanisms for generating node embeddings.
It is unknown whether the dominant information comes from a node itself or from the node's neighbours.
We present a differential encoding method to address this issue of information loss.
arXiv Detail & Related papers (2024-07-03T02:23:33Z) - Bridging Local Details and Global Context in Text-Attributed Graphs [62.522550655068336]
GraphBridge is a framework that bridges local and global perspectives by leveraging contextual textual information.
Our method achieves state-of-the-art performance, while our graph-aware token reduction module significantly enhances efficiency and solves scalability issues.
arXiv Detail & Related papers (2024-06-18T13:35:25Z) - Graph Transformer GANs with Graph Masked Modeling for Architectural
Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z) - Local Structure-aware Graph Contrastive Representation Learning [12.554113138406688]
We propose a Local Structure-aware Graph Contrastive representation Learning method (LS-GCL) to model the structural information of nodes from multiple views.
For the local view, the semantic subgraph of each target node is input into a shared GNN encoder to obtain the target node embeddings at the subgraph-level.
For the global view, considering the original graph preserves indispensable semantic information of nodes, we leverage the shared GNN encoder to learn the target node embeddings at the global graph-level.
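The two-view, shared-encoder idea described above can be sketched in a few lines. This is a toy illustration, not LS-GCL's actual design: the one-layer mean aggregator, the 1-hop subgraph choice, and the weight matrix `W` are assumptions made for brevity.

```python
import numpy as np

def gnn_encoder(X, A, W):
    # One shared message-passing layer: the SAME weights W encode
    # both the local subgraph view and the full-graph view.
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    return np.tanh((A_hat / deg) @ X @ W)   # row-normalized propagation

# Full graph (global view): 4 nodes in a path 0-1-2-3.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
rng = np.random.RandomState(1)
X, W = rng.randn(4, 5), rng.randn(5, 3)
z_global = gnn_encoder(X, A, W)[0]          # target node 0, graph-level

# Local view: 1-hop subgraph around target node 0 (nodes {0, 1}).
sub = [0, 1]
z_local = gnn_encoder(X[sub], A[np.ix_(sub, sub)], W)[0]
print(z_global.shape, z_local.shape)  # (3,) (3,)
```

With a single propagation layer and a 1-hop subgraph the two views coincide for the target node; a deeper encoder or a larger subgraph makes them genuinely complementary, since the global view then pulls in nodes outside the subgraph.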
arXiv Detail & Related papers (2023-08-07T03:23:46Z) - DigNet: Digging Clues from Local-Global Interactive Graph for
Aspect-level Sentiment Classification [0.685316573653194]
In aspect-level sentiment classification (ASC), state-of-the-art models encode either the syntax graph or the relation graph.
In this paper, we propose a novel neural network termed DigNet, whose core module is a stack of local-global interactive layers.
We design a novel local-global interactive graph that marries the advantages of both graphs by stitching them together via interactive edges.
arXiv Detail & Related papers (2022-01-04T05:34:02Z) - Node-wise Localization of Graph Neural Networks [52.04194209002702]
Graph neural networks (GNNs) have emerged as a powerful family of representation learning models on graphs.
We propose a node-wise localization of GNNs by accounting for both global and local aspects of the graph.
We conduct extensive experiments on four benchmark graphs, and consistently obtain promising performance surpassing the state-of-the-art GNNs.
arXiv Detail & Related papers (2021-10-27T10:02:03Z) - GraphFormers: GNN-nested Transformers for Representation Learning on
Textual Graph [53.70520466556453]
We propose GraphFormers, where layerwise GNN components are nested alongside the transformer blocks of language models.
With the proposed architecture, the text encoding and the graph aggregation are fused into an iterative workflow.
In addition, a progressive learning strategy is introduced, where the model is successively trained on manipulated data and original data to reinforce its capability of integrating information on the graph.
arXiv Detail & Related papers (2021-05-06T12:20:41Z) - Modeling Graph Structure via Relative Position for Text Generation from
Knowledge Graphs [54.176285420428776]
We present Graformer, a novel Transformer-based encoder-decoder architecture for graph-to-text generation.
With our novel graph self-attention, the encoding of a node relies on all nodes in the input graph - not only direct neighbors - facilitating the detection of global patterns.
Graformer learns to weight these node-node relations differently for different attention heads, thus virtually learning differently connected views of the input graph.
arXiv Detail & Related papers (2020-06-16T15:20:04Z)
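Graformer's "attend to all nodes, but weight node-node relations differently per head" idea can be sketched by biasing attention scores with graph distance. This is a minimal illustration, not the Graformer implementation: the per-head distance scalar and the subtraction-style bias are assumptions.

```python
import numpy as np
from itertools import product

def shortest_path_lengths(A):
    # Floyd-Warshall over the adjacency matrix (k is the outermost loop).
    n = A.shape[0]
    D = np.where(A > 0, 1.0, np.inf)
    np.fill_diagonal(D, 0.0)
    for k, i, j in product(range(n), repeat=3):
        D[i, j] = min(D[i, j], D[i, k] + D[k, j])
    return D

def graph_self_attention(X, A, head_weights):
    # Every node attends to ALL nodes, not only direct neighbors.
    # Each head subtracts a head-specific multiple of graph distance
    # from the scores, giving each head a differently connected
    # "view" of the input graph.
    D = shortest_path_lengths(A)
    finite_max = D[np.isfinite(D)].max()
    D = np.where(np.isinf(D), finite_max + 1.0, D)  # cap unreachable pairs
    heads = []
    for w in head_weights:  # one distance scalar per head (assumed)
        scores = X @ X.T / np.sqrt(X.shape[1]) - w * D
        p = np.exp(scores - scores.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        heads.append(p @ X)
    return np.concatenate(heads, axis=1)

# Path graph 0-1-2: node 0 can still attend to node 2 directly.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.random.RandomState(0).randn(3, 4)
out = graph_self_attention(X, A, head_weights=[0.0, 2.0])
print(out.shape)  # (3, 8)
```

Here the first head (weight 0.0) ignores distance entirely, behaving like purely global attention, while the second head (weight 2.0) concentrates its attention mass on nearby nodes, behaving more locally.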
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.