Residual Tree Aggregation of Layers for Neural Machine Translation
- URL: http://arxiv.org/abs/2107.14590v1
- Date: Mon, 19 Jul 2021 09:32:10 GMT
- Title: Residual Tree Aggregation of Layers for Neural Machine Translation
- Authors: GuoLiang Li and Yiyang Li
- Abstract summary: We propose a residual tree aggregation of layers for the Transformer (RTAL), which helps to fuse information across layers.
Specifically, we try to fuse the information across layers by constructing a post-order binary tree.
Our model is based on the Neural Machine Translation model Transformer, and we conduct experiments on the WMT14 English-to-German and WMT17 English-to-French translation tasks.
- Score: 11.660776324473645
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Although attention-based Neural Machine Translation has achieved
remarkable progress in recent years, it still suffers from the issue of making
insufficient use of the output of each layer. The Transformer only uses the top
layer of the encoder and decoder in the subsequent process, which makes it
impossible to take advantage of the useful information in other layers. To
address this issue, we propose a residual tree aggregation of layers for the
Transformer (RTAL), which helps to fuse information across layers.
Specifically, we fuse the information across layers by constructing a
post-order binary tree. Apart from the last node, we add a residual connection
to the process of generating child nodes. Our model is based on the Neural
Machine Translation model Transformer, and we conduct our experiments on the
WMT14 English-to-German and WMT17 English-to-French translation tasks.
Experimental results across language pairs show that the proposed approach
significantly outperforms the strong baseline model.
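The abstract does not spell out the tree construction in detail, so the following is only a minimal sketch of the idea as read from the summary above: per-layer outputs are merged pairwise, bottom up, like the internal nodes of a binary tree visited in post-order, with a residual connection added whenever a parent node is generated. The module names, the fusion function, and the tree shape are illustrative assumptions, not taken from the paper, and the paper's special treatment of the last node is not reproduced here.

```python
import torch
import torch.nn as nn


class TreeFusionNode(nn.Module):
    """Fuses two child representations into one parent representation."""

    def __init__(self, d_model):
        super().__init__()
        self.proj = nn.Linear(2 * d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, left, right):
        fused = torch.tanh(self.proj(torch.cat([left, right], dim=-1)))
        # Residual connection while generating the node (assumption: the
        # residual is the sum of the two children added to the fused feature).
        return self.norm(fused + left + right)


class ResidualTreeAggregation(nn.Module):
    """Aggregates all layer outputs instead of using only the top layer."""

    def __init__(self, d_model, num_layers):
        super().__init__()
        # A binary tree over num_layers leaves has num_layers - 1 internal nodes.
        self.fusers = nn.ModuleList(
            [TreeFusionNode(d_model) for _ in range(num_layers - 1)]
        )

    def forward(self, layer_outputs):
        # layer_outputs: list of (batch, seq_len, d_model) tensors, one per layer.
        nodes, k = list(layer_outputs), 0
        while len(nodes) > 1:
            merged = []
            for i in range(0, len(nodes) - 1, 2):
                merged.append(self.fusers[k](nodes[i], nodes[i + 1]))
                k += 1
            if len(nodes) % 2 == 1:        # carry an unpaired node upward
                merged.append(nodes[-1])
            nodes = merged
        return nodes[0]                    # root node: the fused representation
```

As a hypothetical usage, `ResidualTreeAggregation(512, 6)` applied to the list of six encoder layer outputs would yield a single fused representation that replaces the top-layer output fed to the decoder.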
Related papers
- GTrans: Grouping and Fusing Transformer Layers for Neural Machine Translation [107.2752114891855]
The Transformer, built by stacking a sequence of encoder and decoder layers, has achieved significant progress in neural machine translation.
We propose the Group-Transformer model (GTrans) that flexibly divides multi-layer representations of both encoder and decoder into different groups and then fuses these group features to generate target words.
arXiv Detail & Related papers (2022-07-29T04:10:36Z)
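As a rough illustration of the grouping-and-fusing idea summarized above (assumed details, not taken from GTrans itself): layer outputs are partitioned into fixed-size groups, each group is fused into one feature, and the group features are then combined.

```python
import torch
import torch.nn as nn


class GroupedLayerFusion(nn.Module):
    """Hypothetical sketch: fuse layer outputs group-wise, then across groups."""

    def __init__(self, d_model, num_layers, group_size=2):
        super().__init__()
        assert num_layers % group_size == 0
        num_groups = num_layers // group_size
        self.group_size = group_size
        self.group_proj = nn.Linear(group_size * d_model, d_model)
        self.final_proj = nn.Linear(num_groups * d_model, d_model)

    def forward(self, layer_outputs):
        # layer_outputs: list of (batch, seq_len, d_model) tensors, one per layer.
        groups = []
        for i in range(0, len(layer_outputs), self.group_size):
            chunk = torch.cat(layer_outputs[i:i + self.group_size], dim=-1)
            groups.append(torch.relu(self.group_proj(chunk)))
        return self.final_proj(torch.cat(groups, dim=-1))
```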
- Recurrent Stacking of Layers in Neural Networks: An Application to Neural Machine Translation [18.782750537161615]
We propose to share parameters across all layers, thereby leading to a recurrently stacked neural network model.
We empirically demonstrate that the translation quality of a model that recurrently stacks a single layer 6 times, despite having significantly fewer parameters, approaches that of a model that stacks 6 layers where each layer has different parameters.
arXiv Detail & Related papers (2021-06-18T08:48:01Z)
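A minimal sketch of the recurrent-stacking idea above, assuming a single standard Transformer encoder layer whose parameters are simply reused at every depth (the hyperparameters here are illustrative):

```python
import torch
import torch.nn as nn


class RecurrentlyStackedEncoder(nn.Module):
    """One set of layer parameters applied repeatedly instead of 6 distinct layers."""

    def __init__(self, d_model=512, nhead=8, num_steps=6):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.num_steps = num_steps

    def forward(self, x, src_key_padding_mask=None):
        for _ in range(self.num_steps):
            # The same weights are reused at every "layer" of the stack.
            x = self.layer(x, src_key_padding_mask=src_key_padding_mask)
        return x
```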
- Exploring Unsupervised Pretraining Objectives for Machine Translation [99.5441395624651]
Unsupervised cross-lingual pretraining has achieved strong results in neural machine translation (NMT).
Most approaches adapt masked-language modeling (MLM) to sequence-to-sequence architectures, by masking parts of the input and reconstructing them in the decoder.
We compare masking with alternative objectives that produce inputs resembling real (full) sentences, by reordering and replacing words based on their context.
arXiv Detail & Related papers (2021-06-10T10:18:23Z)
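As a rough, assumed illustration of a noising function that keeps the input looking like a full sentence (the paper's context-based replacement is simplified here to random replacement purely for the sketch):

```python
import random


def sentence_like_noise(tokens, max_shift=3, replace_prob=0.1, vocab=None):
    """Illustrative noising for denoising pretraining: locally reorder tokens
    and randomly replace a few, so the input still resembles a real sentence."""
    # Local reordering: each token may drift by up to max_shift positions.
    keys = [i + random.uniform(0, max_shift) for i in range(len(tokens))]
    noisy = [tok for _, tok in sorted(zip(keys, tokens), key=lambda p: p[0])]
    # Random replacement stands in for the paper's context-based replacement.
    if vocab is not None:
        noisy = [random.choice(vocab) if random.random() < replace_prob else tok
                 for tok in noisy]
    return noisy


# Example: the model would be trained to reconstruct the original sentence.
print(sentence_like_noise("the cat sat on the mat".split(), vocab=["dog", "rug"]))
```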
- IOT: Instance-wise Layer Reordering for Transformer Structures [173.39918590438245]
We break the assumption of the fixed layer order in the Transformer and introduce instance-wise layer reordering into the model structure.
Our method can also be applied to other architectures beyond Transformer.
arXiv Detail & Related papers (2021-03-05T03:44:42Z)
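A minimal, assumed sketch of instance-wise layer reordering: a small scorer picks one of a few candidate layer orders for the input, and the shared layers are then applied in that order. The candidate orders, the mean-pooling, and the single-sequence assumption are all illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn


class ReorderableEncoder(nn.Module):
    """Shared layers whose application order is chosen per input instance."""

    def __init__(self, d_model=512, nhead=8, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
             for _ in range(num_layers)]
        )
        self.orders = [(0, 1, 2), (2, 1, 0), (1, 0, 2)]   # candidate permutations
        self.scorer = nn.Linear(d_model, len(self.orders))

    def forward(self, x):
        # x: (1, seq_len, d_model); one sequence per call for simplicity.
        scores = self.scorer(x.mean(dim=1))                # (1, num_orders)
        # A hard argmax like this is not differentiable; a real method would
        # need a relaxation or search during training.
        order = self.orders[int(scores.argmax(dim=-1))]
        for i in order:
            x = self.layers[i](x)
        return x
```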
- Meta Back-translation [111.87397401837286]
We propose a novel method to generate pseudo-parallel data from a pre-trained back-translation model.
Our method is a meta-learning algorithm which adapts a pre-trained back-translation model so that the pseudo-parallel data it generates would train a forward-translation model to do well on a validation set.
arXiv Detail & Related papers (2021-02-15T20:58:32Z)
- Transition based Graph Decoder for Neural Machine Translation [41.7284715234202]
We propose a general Transformer-based approach for tree and graph decoding based on generating a sequence of transitions.
We show improved performance over the standard Transformer decoder, as well as over ablated versions of the model.
arXiv Detail & Related papers (2021-01-29T15:20:45Z)
- Long-Short Term Masking Transformer: A Simple but Effective Baseline for Document-level Neural Machine Translation [28.94748226472447]
We study the pros and cons of the standard transformer in document-level translation.
We propose a surprisingly simple long-short term masking self-attention on top of the standard transformer.
We can achieve a strong result in BLEU and capture discourse phenomena.
arXiv Detail & Related papers (2020-09-19T00:29:51Z)
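A minimal sketch of one way such long-/short-term masking could be built (an assumption about the mechanism, not the paper's exact formulation): "short-term" attention is restricted to tokens of the same sentence via a block-diagonal mask, while "long-term" attention sees the whole document.

```python
import torch


def short_term_mask(sentence_ids):
    """sentence_ids: (seq_len,) tensor giving the sentence index of each token.
    Returns a boolean (seq_len, seq_len) mask; True marks allowed attention."""
    return sentence_ids.unsqueeze(0) == sentence_ids.unsqueeze(1)


# Example: a two-sentence document with 3 + 2 tokens.
ids = torch.tensor([0, 0, 0, 1, 1])
short = short_term_mask(ids)      # block-diagonal blocks of True
long = torch.ones_like(short)     # long-term heads: unrestricted attention
# These masks could then be passed (suitably inverted or expanded per head) to
# an attention implementation that accepts boolean attention masks.
```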
- Glancing Transformer for Non-Autoregressive Neural Machine Translation [58.87258329683682]
We propose a method to learn word interdependency for single-pass parallel generation models.
With only single-pass parallel decoding, GLAT is able to generate high-quality translations with an 8-15 times speedup.
arXiv Detail & Related papers (2020-08-18T13:04:03Z)
- Character-level Transformer-based Neural Machine Translation [5.699756532377753]
We discuss a novel Transformer-based approach, which we compare in both speed and quality to the Transformer at the subword and character levels.
We evaluate our models on 4 language pairs from WMT'15: DE-EN, CS-EN, FI-EN and RU-EN.
The proposed novel architecture can be trained on a single GPU and is 34% faster than the character-level Transformer.
arXiv Detail & Related papers (2020-05-22T15:40:43Z)
- Rethinking and Improving Natural Language Generation with Layer-Wise Multi-View Decoding [59.48857453699463]
In sequence-to-sequence learning, the decoder relies on the attention mechanism to efficiently extract information from the encoder.
Recent work has proposed to use representations from different encoder layers for diversified levels of information.
We propose layer-wise multi-view decoding, where for each decoder layer, together with the representations from the last encoder layer, which serve as a global view, those from other encoder layers are supplemented for a stereoscopic view of the source sequences.
arXiv Detail & Related papers (2020-05-16T20:00:39Z)
- Multi-layer Representation Fusion for Neural Machine Translation [38.12309528346962]
We propose a multi-layer representation fusion (MLRF) approach to fusing stacked layers.
In particular, we design three fusion functions to learn a better representation from the stack.
The result is a new state of the art in German-English translation.
arXiv Detail & Related papers (2020-02-16T23:53:07Z)
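The MLRF summary above names three fusion functions without detailing them; as an illustrative stand-in only (not one of the paper's functions), the sketch below shows one common way to fuse a stack of layer outputs, a learned softmax-weighted sum.

```python
import torch
import torch.nn as nn


class WeightedSumFusion(nn.Module):
    """Illustrative fusion function: a learned weighted sum over layer outputs."""

    def __init__(self, num_layers):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_layers))

    def forward(self, layer_outputs):
        # layer_outputs: list of (batch, seq_len, d_model) tensors, one per layer.
        stacked = torch.stack(layer_outputs, dim=0)        # (L, B, S, D)
        weights = torch.softmax(self.logits, dim=0)        # (L,)
        return (weights.view(-1, 1, 1, 1) * stacked).sum(dim=0)
```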