Fast Sequence Generation with Multi-Agent Reinforcement Learning
- URL: http://arxiv.org/abs/2101.09698v1
- Date: Sun, 24 Jan 2021 12:16:45 GMT
- Title: Fast Sequence Generation with Multi-Agent Reinforcement Learning
- Authors: Longteng Guo, Jing Liu, Xinxin Zhu, Hanqing Lu
- Abstract summary: Non-autoregressive decoding has been proposed in machine translation to speed up the inference time by generating all words in parallel.
We propose a simple and efficient model for Non-Autoregressive sequence Generation (NAG) with a novel training paradigm: Counterfactuals-critical Multi-Agent Learning (CMAL).
On the MSCOCO image captioning benchmark, our NAG method achieves performance comparable to state-of-the-art autoregressive models while bringing a 13.9x decoding speedup.
- Score: 40.75211414663022
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autoregressive sequence generation models have achieved state-of-the-art
performance in areas like machine translation and image captioning. These
models are autoregressive in that they generate each word by conditioning on
previously generated words, which leads to heavy latency during inference.
Recently, non-autoregressive decoding has been proposed in machine translation
to speed up the inference time by generating all words in parallel. Typically,
these models use the word-level cross-entropy loss to optimize each word
independently. However, such a learning process fails to consider
sentence-level consistency, thus resulting in inferior generation quality for
these non-autoregressive models. In this paper, we propose a simple and
efficient model for Non-Autoregressive sequence Generation (NAG) with a novel
training paradigm: Counterfactuals-critical Multi-Agent Learning (CMAL). CMAL
formulates NAG as a multi-agent reinforcement learning system where element
positions in the target sequence are viewed as agents that learn to
cooperatively maximize a sentence-level reward. On the MSCOCO image captioning
benchmark, our NAG method achieves performance comparable to state-of-the-art
autoregressive models while bringing a 13.9x decoding speedup. On the WMT14 EN-DE
machine translation dataset, our method outperforms the cross-entropy-trained
baseline by 6.0 BLEU points while achieving the greatest decoding speedup of
17.46x.
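As a rough illustration of the CMAL idea described in the abstract, the PyTorch sketch below treats each target position as an agent that samples its word in parallel with the others, shares a single sentence-level reward, and is credited through a counterfactual baseline obtained by swapping only its own word. The decoder architecture, the toy reward, and the greedy-word counterfactual approximation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of CMAL-style training for a non-autoregressive decoder.
# PositionwiseDecoder, sentence_reward, and the greedy-word counterfactual
# are hypothetical stand-ins for the components described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionwiseDecoder(nn.Module):
    """Toy non-autoregressive decoder: predicts all target words in parallel
    from a pooled source representation (hypothetical architecture)."""
    def __init__(self, vocab_size, d_model=256, max_len=20):
        super().__init__()
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.proj = nn.Linear(d_model, vocab_size)

    def forward(self, src_repr):
        # src_repr: (batch, d_model)
        positions = torch.arange(self.pos_emb.num_embeddings, device=src_repr.device)
        h = src_repr.unsqueeze(1) + self.pos_emb(positions).unsqueeze(0)
        return self.proj(h)  # (batch, max_len, vocab) -- one "agent" per position

def sentence_reward(tokens, reference):
    """Placeholder for a sentence-level reward (e.g. BLEU or CIDEr in the paper).
    Here: fraction of positions matching the reference, purely for illustration."""
    return (tokens == reference).float().mean(dim=-1)  # (batch,)

def cmal_style_loss(logits, reference):
    """All positions (agents) sample words in parallel and share a sentence-level
    reward; each agent's advantage is that shared reward minus a counterfactual
    baseline obtained by changing only its own word (approximated here by the
    greedy word at that position)."""
    probs = F.softmax(logits, dim=-1)                  # (batch, T, vocab)
    dist = torch.distributions.Categorical(probs=probs)
    sampled = dist.sample()                            # (batch, T)
    greedy = probs.argmax(dim=-1)                      # (batch, T)

    r = sentence_reward(sampled, reference)            # shared reward, (batch,)

    advantages = []
    for i in range(sampled.size(1)):
        # Counterfactual: keep every other agent's word, change only position i.
        counterfactual = sampled.clone()
        counterfactual[:, i] = greedy[:, i]
        baseline_i = sentence_reward(counterfactual, reference)
        advantages.append(r - baseline_i)              # agent i's credit
    advantage = torch.stack(advantages, dim=1)         # (batch, T)

    log_p = dist.log_prob(sampled)                     # (batch, T)
    return -(advantage.detach() * log_p).mean()

# Usage sketch: one parallel decoding + policy-gradient step on fake data.
vocab, T = 1000, 20
decoder = PositionwiseDecoder(vocab, max_len=T)
src = torch.randn(4, 256)                              # fake source features
ref = torch.randint(0, vocab, (4, T))                  # fake references
loss = cmal_style_loss(decoder(src), ref)
loss.backward()
```

Replacing position i's sampled word with its greedy word while freezing the other positions is only a crude stand-in for marginalizing over that agent's action; the point of the sketch is that each agent receives an advantage reflecting its own contribution to the shared sentence-level reward rather than a word-level cross-entropy signal.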
Related papers
- Non-autoregressive Sequence-to-Sequence Vision-Language Models [63.77614880533488]
We propose a parallel decoding sequence-to-sequence vision-language model that marginalizes over multiple inference paths in the decoder.
The model achieves performance on par with its state-of-the-art autoregressive counterpart but is faster at inference time.
arXiv Detail & Related papers (2024-03-04T17:34:59Z)
- Improving Non-autoregressive Generation with Mixup Training [51.61038444990301]
We present a non-autoregressive generation model based on pre-trained transformer models.
We propose a simple and effective iterative training method called MIx Source and pseudo Target.
Our experiments on three generation benchmarks, including question generation, summarization, and paraphrase generation, show that the proposed framework achieves new state-of-the-art results.
arXiv Detail & Related papers (2021-10-21T13:04:21Z)
- Semi-Autoregressive Image Captioning [153.9658053662605]
Current state-of-the-art approaches for image captioning typically generate captions in an autoregressive manner.
Non-autoregressive image captioning with continuous iterative refinement can achieve comparable performance to the autoregressive counterparts with a considerable acceleration.
We propose a novel two-stage framework, referred to as Semi-Autoregressive Image Captioning (SAIC), to make a better trade-off between performance and speed.
arXiv Detail & Related papers (2021-10-11T15:11:54Z)
- Non-Autoregressive Image Captioning with Counterfactuals-Critical Multi-Agent Learning [46.060954649681385]
We propose a Non-Autoregressive Image Captioning (NAIC) model with a novel training paradigm: Counterfactuals-critical Multi-Agent Learning (CMAL).
Our NAIC model achieves performance comparable to state-of-the-art autoregressive models while bringing a 13.9x decoding speedup.
arXiv Detail & Related papers (2020-05-10T15:09:44Z)
- Aligned Cross Entropy for Non-Autoregressive Machine Translation [120.15069387374717]
We propose aligned cross entropy (AXE) as an alternative loss function for training non-autoregressive models.
AXE-based training of conditional masked language models (CMLMs) substantially improves performance on major WMT benchmarks.
arXiv Detail & Related papers (2020-04-03T16:24:47Z)