Online Back-Parsing for AMR-to-Text Generation
- URL: http://arxiv.org/abs/2010.04520v1
- Date: Fri, 9 Oct 2020 12:08:14 GMT
- Title: Online Back-Parsing for AMR-to-Text Generation
- Authors: Xuefeng Bai, Linfeng Song and Yue Zhang
- Abstract summary: AMR-to-text generation aims to recover a text containing the same meaning as an input AMR graph.
We propose a decoder that back-predicts the projected AMR graph on the target sentence during text generation.
- Score: 29.12944601513491
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: AMR-to-text generation aims to recover a text containing the same meaning as
an input AMR graph. Current research develops increasingly powerful graph
encoders to better represent AMR graphs, with decoders based on standard
language modeling being used to generate outputs. We propose a decoder that
back-predicts the projected AMR graph on the target sentence during text
generation. As a result, our outputs can better preserve the input meaning
than standard decoders. Experiments on two AMR benchmarks show the superiority
of our model over the previous state-of-the-art system based on graph
Transformer.
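To make the mechanism concrete, here is a minimal sketch (in PyTorch, not the authors' released code) of a decoder head that, at each target position, predicts the next token together with the AMR concept aligned to that position and an incoming arc from an earlier position, so the projected graph is rebuilt online during generation. Module names, sizes, and the arc-scoring scheme are illustrative assumptions.
```python
import torch
import torch.nn as nn


class BackParsingHead(nn.Module):
    """Joint text + back-parsed graph prediction on top of any autoregressive decoder."""

    def __init__(self, hidden, vocab_size, n_concepts):
        super().__init__()
        self.token_out = nn.Linear(hidden, vocab_size)    # standard LM head
        self.concept_out = nn.Linear(hidden, n_concepts)  # AMR concept per target position
        self.arc_q = nn.Linear(hidden, hidden)            # bilinear arc scorer (query)
        self.arc_k = nn.Linear(hidden, hidden)            # bilinear arc scorer (key)
        self.root = nn.Parameter(torch.zeros(1, 1, hidden))  # dummy root to attach to

    def forward(self, dec_states):
        # dec_states: [batch, tgt_len, hidden] decoder states
        b, t, _ = dec_states.shape
        token_logits = self.token_out(dec_states)
        concept_logits = self.concept_out(dec_states)
        keys = torch.cat([self.root.expand(b, 1, -1), dec_states], dim=1)
        arc_scores = self.arc_q(dec_states) @ self.arc_k(keys).transpose(1, 2)
        # each position may attach only to the root or to strictly earlier positions,
        # so the projected graph can be predicted incrementally during decoding
        allowed = torch.zeros(t, t + 1, dtype=torch.bool, device=dec_states.device)
        allowed[:, 0] = True
        allowed[:, 1:] = torch.tril(
            torch.ones(t, t, dtype=torch.bool, device=dec_states.device), diagonal=-1
        )
        arc_scores = arc_scores.masked_fill(~allowed, float("-inf"))
        return token_logits, concept_logits, arc_scores
```
Training would add cross-entropy terms for the concept and arc predictions to the usual token-level loss; this auxiliary graph-reconstruction signal is what encourages the generated text to preserve the input meaning.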
Related papers
- $ε$-VAE: Denoising as Visual Decoding [61.29255979767292]
In generative modeling, tokenization simplifies complex data into compact, structured representations, creating a more efficient, learnable space.
Current visual tokenization methods rely on a traditional autoencoder framework, where the encoder compresses data into latent representations, and the decoder reconstructs the original input.
We propose denoising as decoding, shifting from single-step reconstruction to iterative refinement. Specifically, we replace the decoder with a diffusion process that iteratively refines noise to recover the original image, guided by the latents provided by the encoder.
We evaluate our approach by assessing both reconstruction quality (rFID) and generation quality.
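As an illustration of the denoising-as-decoding idea above, the sketch below replaces a one-shot decoder with a denoiser that iteratively refines Gaussian noise into an image while conditioned on the encoder latent; the network, schedule, and update rule are simplified assumptions, not the $ε$-VAE implementation.
```python
import torch
import torch.nn as nn


class LatentConditionedDenoiser(nn.Module):
    def __init__(self, img_dim, latent_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + latent_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, img_dim),
        )

    def forward(self, x_t, z, t):
        # predict the clean image from the noisy image x_t, the latent z, and step t
        t_feat = t.expand(x_t.size(0), 1)
        return self.net(torch.cat([x_t, z, t_feat], dim=-1))


@torch.no_grad()
def decode(denoiser, z, img_dim, steps=50):
    """Iteratively refine noise into an image guided by the encoder latent z."""
    x = torch.randn(z.size(0), img_dim)
    for i in reversed(range(steps)):
        t = torch.full((1, 1), i / steps)
        x0_hat = denoiser(x, z, t)
        # crude interpolation toward the current clean estimate; a real diffusion
        # sampler would follow its noise schedule here
        x = x0_hat + (i / steps) * torch.randn_like(x)
    return x
```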
arXiv Detail & Related papers (2024-10-05T08:27:53Z) - mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval [67.50604814528553]
We first introduce a text encoder enhanced with RoPE and unpadding, pre-trained in a native 8192-token context.
Then we construct a hybrid text representation model (TRM) and a cross-encoder reranker by contrastive learning.
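As a rough illustration of the contrastive-learning step, embedding models of this kind are commonly trained with an InfoNCE loss over in-batch negatives, sketched below; the temperature and pooling choices are assumptions rather than the mGTE recipe.
```python
import torch
import torch.nn.functional as F


def info_nce(query_emb, doc_emb, temperature=0.05):
    """query_emb, doc_emb: [batch, dim]; row i of doc_emb is the positive for query i."""
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.t() / temperature          # other rows act as in-batch negatives
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)
```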
arXiv Detail & Related papers (2024-07-29T03:12:28Z) - AMR Parsing with Causal Hierarchical Attention and Pointers [54.382865897298046]
We introduce new target forms of AMR parsing and a novel model, CHAP, which is equipped with causal hierarchical attention and the pointer mechanism.
Experiments show that our model outperforms baseline models on four out of five benchmarks when no additional data is used.
arXiv Detail & Related papers (2023-10-18T13:44:26Z) - Incorporating Graph Information in Transformer-based AMR Parsing [34.461828101932184]
LeakDistill is a model and method that explores a modification to the Transformer architecture.
We show how, by employing word-to-node alignment to embed graph structural information into the encoder at training time, we can obtain state-of-the-art AMR parsing.
arXiv Detail & Related papers (2023-06-23T12:12:08Z) - Neural Machine Translation with Dynamic Graph Convolutional Decoder [32.462919670070654]
We propose an end-to-end translation architecture from the (graph & sequence) structural inputs to the (graph & sequence) outputs, where the target translation and its corresponding syntactic graph are jointly modeled and generated.
We conduct extensive experiments on five widely acknowledged translation benchmarks, verifying our proposal achieves consistent improvements over baselines and other syntax-aware variants.
arXiv Detail & Related papers (2023-05-28T11:58:07Z) - Diffsound: Discrete Diffusion Model for Text-to-sound Generation [78.4128796899781]
We propose a novel text-to-sound generation framework that consists of a text encoder, a Vector Quantized Variational Autoencoder (VQ-VAE), a decoder, and a vocoder.
The decoder first transfers the text features extracted by the text encoder into a mel-spectrogram with the help of the VQ-VAE, and the vocoder then transforms the generated mel-spectrogram into a waveform.
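The pipeline amounts to a simple composition, sketched below with assumed component interfaces (these are not the Diffsound APIs): text encoder, token decoder (a discrete diffusion model in Diffsound), VQ-VAE decoder, and vocoder.
```python
from typing import Callable

import torch


def text_to_sound(
    text: str,
    text_encoder: Callable[[str], torch.Tensor],            # text -> text features
    token_decoder: Callable[[torch.Tensor], torch.Tensor],   # text features -> VQ code indices
    vqvae_decode: Callable[[torch.Tensor], torch.Tensor],    # VQ code indices -> mel-spectrogram
    vocoder: Callable[[torch.Tensor], torch.Tensor],         # mel-spectrogram -> waveform
) -> torch.Tensor:
    text_feats = text_encoder(text)
    codes = token_decoder(text_feats)   # realized as a discrete diffusion model in Diffsound
    mel = vqvae_decode(codes)
    return vocoder(mel)
```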
arXiv Detail & Related papers (2022-07-20T15:41:47Z) - Graph Pre-training for AMR Parsing and Generation [14.228434699363495]
We investigate graph self-supervised training to improve structure awareness of PLMs over AMR graphs.
We introduce two graph auto-encoding strategies for graph-to-graph pre-training and four tasks to integrate text and graph information during pre-training.
arXiv Detail & Related papers (2022-03-15T12:47:00Z) - Levi Graph AMR Parser using Heterogeneous Attention [17.74208462902158]
This paper presents a novel approach to AMR parsing by combining heterogeneous data (tokens, concepts, labels) as one input to a transformer to learn attention.
Although our models use significantly fewer parameters than the previous state-of-the-art graph-based parser, they show similar or better accuracy on AMR 2.0 and 3.0.
arXiv Detail & Related papers (2021-07-09T00:06:17Z) - Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation [56.73834525802723]
Lightweight Dynamic Graph Convolutional Networks (LDGCNs) are proposed.
LDGCNs capture richer non-local interactions by synthesizing higher order information from the input graphs.
We develop two novel parameter-saving strategies based on group graph convolutions and weight-tied convolutions to reduce memory usage and model complexity.
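Below is an illustrative sketch (not the LDGCN release) of those two strategies: a group graph convolution that splits channels into groups with small per-group weights, and weight tying that reuses a single layer across depth.
```python
import torch
import torch.nn as nn


class GroupGraphConv(nn.Module):
    def __init__(self, dim, groups=4):
        super().__init__()
        assert dim % groups == 0
        self.groups = groups
        # one small weight per channel group instead of a full dim x dim matrix
        self.weights = nn.ModuleList(
            [nn.Linear(dim // groups, dim // groups, bias=False) for _ in range(groups)]
        )

    def forward(self, x, adj):
        # x: [batch, n_nodes, dim]; adj: [batch, n_nodes, n_nodes], row-normalized
        chunks = x.chunk(self.groups, dim=-1)
        out = torch.cat([w(c) for w, c in zip(self.weights, chunks)], dim=-1)
        return torch.relu(adj @ out)  # aggregate transformed features over neighbours


def weight_tied_stack(layer, x, adj, depth=4):
    """Weight tying: reuse the same convolution parameters at every layer."""
    for _ in range(depth):
        x = layer(x, adj)
    return x
```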
arXiv Detail & Related papers (2020-10-09T06:03:46Z) - Streaming automatic speech recognition with the transformer model [59.58318952000571]
We propose a transformer based end-to-end ASR system for streaming ASR.
We apply time-restricted self-attention for the encoder and triggered attention for the encoder-decoder attention mechanism.
Our proposed streaming transformer architecture achieves 2.8% and 7.2% WER for the "clean" and "other" test data of LibriSpeech.
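For intuition, a streaming encoder typically restricts self-attention to a bounded left/right context per frame; the sketch below builds such a time-restricted attention mask (the context sizes are assumptions, and the triggered-attention decoding logic is omitted).
```python
import torch


def time_restricted_mask(n_frames, left_context=40, right_context=2):
    """Boolean [n_frames, n_frames] mask; True marks frames a position may attend to."""
    idx = torch.arange(n_frames)
    rel = idx.unsqueeze(0) - idx.unsqueeze(1)  # rel[i, j] = j - i
    return (rel >= -left_context) & (rel <= right_context)


# Example: torch.nn.MultiheadAttention expects True where attention is *disallowed*,
# so pass the inverted mask as attn_mask.
disallowed = ~time_restricted_mask(100)
```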
arXiv Detail & Related papers (2020-01-08T18:58:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.