On the Learning of Non-Autoregressive Transformers
- URL: http://arxiv.org/abs/2206.05975v1
- Date: Mon, 13 Jun 2022 08:42:09 GMT
- Title: On the Learning of Non-Autoregressive Transformers
- Authors: Fei Huang, Tianhua Tao, Hao Zhou, Lei Li, Minlie Huang
- Abstract summary: Non-autoregressive Transformer (NAT) is a family of text generation models.
We present theoretical and empirical analyses to reveal the challenges of NAT learning.
- Score: 91.34196047466904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Non-autoregressive Transformer (NAT) is a family of text
generation models that reduces decoding latency by predicting whole sentences
in parallel. However, this latency reduction sacrifices the ability to capture
left-to-right dependencies, which makes NAT learning very challenging. In this
paper, we present theoretical and empirical analyses that reveal the challenges
of NAT learning and propose a unified perspective to understand existing
successes. First, we show that simply training NAT by maximizing the likelihood
leads to an approximation of the marginal token distributions while dropping
all dependencies between tokens, and that the dropped information can be
measured by the dataset's conditional total correlation. Second, we formalize
many previous objectives in a unified framework and show that their success can
be attributed to maximizing the likelihood on a proxy distribution, which
reduces the information loss. Empirical studies show that our perspective
explains the phenomena in NAT learning and guides the design of new training
methods.
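To make the first claim concrete, below is a minimal sketch of the dependency information dropped by independent per-token prediction, computed on a toy two-token example. It assumes the standard information-theoretic definition of conditional total correlation, C(Y_1, ..., Y_T | X) = sum_t H(Y_t | X) - H(Y_1, ..., Y_T | X); the toy targets and every name in the code are illustrative and not taken from the paper.

```python
import math
from itertools import product

# Toy conditional distribution P(y1, y2 | x) for one source sentence x:
# the reference is either "thank you" or "hello world", each with probability 0.5.
# (The example and all names here are illustrative, not taken from the paper.)
joint = {
    ("thank", "you"): 0.5,
    ("hello", "world"): 0.5,
}

def entropy(dist):
    """Shannon entropy (in bits) of a {outcome: probability} dict."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Per-position marginals P(y_t | x): the best a model can do when trained by
# plain position-wise maximum likelihood, i.e. with independent predictions.
marginals = []
for t in range(2):
    m = {}
    for y, p in joint.items():
        m[y[t]] = m.get(y[t], 0.0) + p
    marginals.append(m)

# Conditional total correlation:
#   C(Y1, Y2 | X) = H(Y1 | X) + H(Y2 | X) - H(Y1, Y2 | X)
# It measures the dependency information lost by the factorization.
total_corr = sum(entropy(m) for m in marginals) - entropy(joint)
print(f"conditional total correlation: {total_corr:.2f} bits")  # -> 1.00 bits

# The factorized model leaks probability onto inconsistent mixes such as
# "thank world", which never occur in the data.
factorized = {
    (y1, y2): marginals[0][y1] * marginals[1][y2]
    for y1, y2 in product(marginals[0], marginals[1])
}
print(factorized[("thank", "world")])  # -> 0.25
```

In this toy case the position-wise marginals are modeled exactly, yet one bit of dependency information per example is lost, which is why the factorized model assigns probability to inconsistent outputs such as "thank world".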
Related papers
- Learning Latent Graph Structures and their Uncertainty [63.95971478893842]
Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy.
As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task.
arXiv Detail & Related papers (2024-05-30T10:49:22Z)
- Probabilistically Rewired Message-Passing Neural Networks [41.554499944141654]
Message-passing graph neural networks (MPNNs) emerged as powerful tools for processing graph-structured input.
MPNNs operate on a fixed input graph structure, ignoring potential noise and missing information.
We devise probabilistically rewired MPNNs (PR-MPNNs) which learn to add relevant edges while omitting less beneficial ones.
arXiv Detail & Related papers (2023-10-03T15:43:59Z)
- Optimizing Non-Autoregressive Transformers with Contrastive Learning [74.46714706658517]
Non-autoregressive Transformers (NATs) reduce the inference latency of Autoregressive Transformers (ATs) by predicting words all at once rather than in sequential order.
In this paper, we propose to ease the difficulty of modality learning via sampling from the model distribution instead of the data distribution.
arXiv Detail & Related papers (2023-05-23T04:20:13Z)
- Selective Knowledge Distillation for Non-Autoregressive Neural Machine Translation [34.22251326493591]
The Non-Autoregressive Transformer (NAT) achieves great success in neural machine translation tasks.
Existing knowledge distillation has side effects, such as propagating errors from the teacher to NAT students.
We introduce selective knowledge distillation, in which an NAT selects NAT-friendly targets that are high-quality and easy to learn.
arXiv Detail & Related papers (2023-03-31T09:16:13Z)
- Less is More: Rethinking Few-Shot Learning and Recurrent Neural Nets [2.824895388993495]
We provide theoretical guarantees for reliable learning under the information-theoretic asymptotic equipartition property (AEP).
We then focus on a highly efficient recurrent neural net (RNN) framework and propose a reduced-entropy algorithm for few-shot learning.
Our experimental results demonstrate significant potential for improving learning models' sample efficiency, generalization, and time complexity.
arXiv Detail & Related papers (2022-09-28T17:33:11Z)
- Sequence-Level Training for Non-Autoregressive Neural Machine Translation [33.17341980163439]
Non-Autoregressive Neural Machine Translation (NAT) removes the autoregressive mechanism and achieves significant decoding speedup.
We propose training NAT models with sequence-level objectives, which evaluate the NAT outputs as a whole and correlate well with real translation quality (a toy reward-weighted sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-06-15T13:30:09Z)
- Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade [47.97977478431973]
Fully non-autoregressive neural machine translation (NAT) predicts all tokens simultaneously with a single forward pass of the network.
In this work, we aim to close the performance gap while maintaining the latency advantage.
arXiv Detail & Related papers (2020-12-31T18:52:59Z)
- Understanding and Improving Lexical Choice in Non-Autoregressive Translation [98.11249019844281]
We propose to expose the raw data to NAT models to restore the useful information carried by low-frequency words.
Our approach pushes the SOTA NAT performance on the WMT14 English-German and WMT16 Romanian-English datasets up to 27.8 and 33.8 BLEU points, respectively.
arXiv Detail & Related papers (2020-12-29T03:18:50Z)
- A Simple but Tough-to-Beat Data Augmentation Approach for Natural Language Understanding and Generation [53.8171136907856]
We introduce a set of simple yet effective data augmentation strategies dubbed cutoff (a simplified token-level sketch of the idea appears after this list).
cutoff relies on sampling consistency and thus adds little computational overhead.
cutoff consistently outperforms adversarial training and achieves state-of-the-art results on the IWSLT2014 German-English dataset.
arXiv Detail & Related papers (2020-09-29T07:08:35Z)
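As a rough illustration of the entry "Sequence-Level Training for Non-Autoregressive Neural Machine Translation" above, here is a toy REINFORCE-style sketch in which sampled NAT outputs are scored as whole sentences rather than token by token. The distributions, the reward function, and all names are illustrative assumptions; the paper's actual objectives differ in the metric and estimator used.

```python
import math
import random

# Illustrative per-position output distributions from one parallel NAT forward
# pass (the numbers are made up for this sketch).
probs = [
    {"thank": 0.6, "hello": 0.4},   # P(y_1 | x)
    {"you": 0.5, "world": 0.5},     # P(y_2 | x)
]
reference = ["thank", "you"]

def sample_sentence(rng):
    """Sample every position independently, as a NAT decoder does."""
    return [rng.choices(list(p), weights=list(p.values()))[0] for p in probs]

def reward(hyp, ref):
    """Whole-sentence quality proxy (token accuracy here; BLEU-like in practice)."""
    return sum(h == r for h, r in zip(hyp, ref)) / len(ref)

def log_prob(hyp):
    return sum(math.log(p[tok]) for p, tok in zip(probs, hyp))

# REINFORCE-style surrogate: weight each sampled sentence's log-likelihood by a
# sequence-level reward, so inconsistent mixes like "thank world" are penalized
# as a whole instead of being credited token by token.
rng = random.Random(0)
samples = [sample_sentence(rng) for _ in range(1000)]
loss = -sum(reward(s, reference) * log_prob(s) for s in samples) / len(samples)
print(f"sequence-level surrogate loss: {loss:.3f}")
```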
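Similarly, here is a simplified, token-level sketch of the cutoff augmentation referenced above. The real method reportedly removes spans of the input's embedding representation and adds a divergence-based consistency term; the helper below is only a hypothetical illustration of the general recipe.

```python
import random

def cutoff_view(tokens, ratio=0.2, rng=random):
    """Return a corrupted view of the input with one contiguous span removed.
    The actual method operates on embedding/feature spans; this token-level
    variant only illustrates the idea."""
    span = max(1, int(len(tokens) * ratio))
    start = rng.randrange(0, len(tokens) - span + 1)
    return tokens[:start] + ["<mask>"] * span + tokens[start + span:]

source = "wir sehen uns morgen im kino".split()
print(cutoff_view(source))
# Training would pair the model's predictions on `source` and on the cutoff
# view with a consistency term (e.g. a symmetric divergence), in addition to
# the usual supervised loss.
```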
This list is automatically generated from the titles and abstracts of the papers on this site.