Promoting Graph Awareness in Linearized Graph-to-Text Generation
- URL: http://arxiv.org/abs/2012.15793v1
- Date: Thu, 31 Dec 2020 18:17:57 GMT
- Title: Promoting Graph Awareness in Linearized Graph-to-Text Generation
- Authors: Alexander Hoyle, Ana Marasović, Noah Smith
- Abstract summary: We study the ability of linearized models to encode local graph structures.
Our findings motivate graph-denoising scaffold objectives that enrich the quality of models' implicit graph encodings.
We find that these denoising scaffolds lead to substantial improvements in downstream generation in low-resource settings.
- Score: 72.83863719868364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating text from structured inputs, such as meaning representations or
RDF triples, has often involved the use of specialized graph-encoding neural
networks. However, recent applications of pretrained transformers to
linearizations of graph inputs have yielded state-of-the-art generation results
on graph-to-text tasks. Here, we explore the ability of these linearized models
to encode local graph structures, in particular their invariance to the graph
linearization strategy and their ability to reconstruct corrupted inputs. Our
findings motivate solutions to enrich the quality of models' implicit graph
encodings via scaffolding. Namely, we use graph-denoising objectives
implemented in a multi-task text-to-text framework. We find that these
denoising scaffolds lead to substantial improvements in downstream generation
in low-resource settings.
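
To make the setup concrete, below is a minimal sketch (not the authors' code) of how a graph-to-text example and a graph-denoising scaffold example might be built for a text-to-text model. The <H>/<R>/<T> delimiter tokens, the triple-dropping corruption, and the example facts are illustrative assumptions, not the paper's exact format.

```python
# Minimal sketch: linearizing RDF triples for a text-to-text model and
# constructing a graph-denoising "scaffold" training pair. The delimiters
# and corruption scheme are assumptions for illustration only.
import random

def linearize(triples):
    """Flatten (head, relation, tail) triples into a single string."""
    return " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

def corrupt(triples, drop_prob=0.3, seed=0):
    """Corrupt the graph by randomly dropping triples; the denoising task
    is to reconstruct the full linearization from the corrupted one."""
    rng = random.Random(seed)
    kept = [t for t in triples if rng.random() > drop_prob]
    return kept or triples[:1]  # keep at least one triple

triples = [
    ("Alan_Bean", "occupation", "Test_pilot"),
    ("Alan_Bean", "mission", "Apollo_12"),
]

# Downstream generation task: linearized graph -> target text.
source_gen = linearize(triples)
target_gen = "Alan Bean was a test pilot who flew on Apollo 12."

# Scaffold (auxiliary) task: corrupted linearization -> original linearization.
source_denoise = "<denoise> " + linearize(corrupt(triples))
target_denoise = linearize(triples)

print(source_gen)
print(source_denoise, "->", target_denoise)
```

In a multi-task text-to-text setup, batches from the generation task and the denoising scaffold task would be mixed during fine-tuning, which is how the scaffold can improve downstream generation in low-resource settings.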
Related papers
- Greener GRASS: Enhancing GNNs with Encoding, Rewiring, and Attention [12.409982249220812]
We introduce Graph Attention with Structures (GRASS), a novel GNN architecture, to enhance graph relative attention.
GRASS rewires the input graph by superimposing a random regular graph to achieve long-range information propagation.
It also employs a novel additive attention mechanism tailored for graph-structured data.
arXiv Detail & Related papers (2024-07-08T06:21:56Z)
- Deep Prompt Tuning for Graph Transformers [55.2480439325792]
Fine-tuning is resource-intensive and requires storing multiple copies of large models.
We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning.
By freezing the pre-trained parameters and only updating the added tokens, our approach reduces the number of free parameters and eliminates the need for multiple model copies.
arXiv Detail & Related papers (2023-09-18T20:12:17Z)
- Explanation Graph Generation via Generative Pre-training over Synthetic Graphs [6.25568933262682]
Explanation graph generation is a significant task that aims to produce graph-structured explanations in response to user input.
Current research commonly fine-tunes a text-based pre-trained language model on a small downstream dataset that is annotated with labeled graphs.
We propose EG3P (Explanation Graph Generation via Generative Pre-training over synthetic graphs), a novel pre-training framework for the explanation graph generation task.
arXiv Detail & Related papers (2023-06-01T13:20:22Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- Stage-wise Fine-tuning for Graph-to-Text Generation [25.379346921398326]
Graph-to-text generation has benefited from pre-trained language models (PLMs) in achieving better performance than structured graph encoders.
We propose a structured graph-to-text model with a two-step fine-tuning mechanism that first fine-tunes the model on Wikipedia before adapting it to graph-to-text generation.
arXiv Detail & Related papers (2021-05-17T17:15:29Z)
- GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph [53.70520466556453]
We propose GraphFormers, where layerwise GNN components are nested alongside the transformer blocks of language models.
With the proposed architecture, the text encoding and the graph aggregation are fused into an iterative workflow.
In addition, a progressive learning strategy is introduced, where the model is successively trained on manipulated data and original data to reinforce its capability to integrate information from the graph.
arXiv Detail & Related papers (2021-05-06T12:20:41Z)
- Structural Information Preserving for Graph-to-Text Generation [59.00642847499138]
The task of graph-to-text generation aims at producing sentences that preserve the meaning of input graphs.
We propose to tackle this problem by leveraging richer training signals that can guide our model for preserving input information.
Experiments on two benchmarks for graph-to-text generation show the effectiveness of our approach over a state-of-the-art baseline.
arXiv Detail & Related papers (2021-02-12T20:09:01Z)
- Scene Graph Modification Based on Natural Language Commands [90.0662899539489]
Structured representations like graphs and parse trees play a crucial role in many Natural Language Processing systems.
In this paper, we explore the novel problem of graph modification, where systems need to learn how to update an existing graph given a user's command.
arXiv Detail & Related papers (2020-10-06T10:01:19Z)