GTrans: Grouping and Fusing Transformer Layers for Neural Machine
Translation
- URL: http://arxiv.org/abs/2207.14467v1
- Date: Fri, 29 Jul 2022 04:10:36 GMT
- Title: GTrans: Grouping and Fusing Transformer Layers for Neural Machine
Translation
- Authors: Jian Yang, Yuwei Yin, Shuming Ma, Haoyang Huang, Dongdong Zhang, Furu
Wei and Zhoujun Li
- Abstract summary: The Transformer architecture, built by stacking encoder and decoder layers, has driven significant progress in neural machine translation.
We propose the Group-Transformer model (GTrans) that flexibly divides multi-layer representations of both encoder and decoder into different groups and then fuses these group features to generate target words.
- Score: 107.2752114891855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Transformer architecture, built by stacking encoder and decoder
layers, has driven significant progress in neural machine translation.
However, the vanilla Transformer mainly exploits the top-layer representation,
assuming that the lower layers provide trivial or redundant information, and
thus ignores bottom-layer features that are potentially valuable. In this work,
we propose the Group-Transformer model (GTrans) that flexibly divides
multi-layer representations of both encoder and decoder into different groups
and then fuses these group features to generate target words. To corroborate
the effectiveness of the proposed method, extensive experiments and analyses
are conducted on three bilingual translation benchmarks and two multilingual
translation tasks, namely the IWSLT-14, IWSLT-17, LDC, WMT-14 and OPUS-100
benchmarks. Experimental and analytical results demonstrate that our model
consistently outperforms its Transformer counterparts.
Furthermore, it can be successfully scaled up to 60 encoder layers and 36
decoder layers.
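The grouping-and-fusing idea from the abstract can be illustrated with a short sketch. The snippet below is a minimal PyTorch illustration, not the authors' implementation: the within-group mean pooling, the softmax-weighted combination of group features, and all names (GroupFusion, GroupedEncoder, num_groups) are assumptions made for exposition; GTrans applies the same grouping on the decoder side as well.

```python
import torch
import torch.nn as nn


class GroupFusion(nn.Module):
    """Partition a stack of layer outputs into groups and fuse them.

    Hypothetical sketch: mean pooling within each group, followed by a
    softmax-weighted sum over groups; the actual GTrans fusion may differ.
    """

    def __init__(self, num_layers: int, num_groups: int):
        super().__init__()
        assert num_layers % num_groups == 0
        self.group_size = num_layers // num_groups
        # One learnable fusion weight per group.
        self.group_weights = nn.Parameter(torch.zeros(num_groups))

    def forward(self, layer_outputs: list) -> torch.Tensor:
        # layer_outputs: list of [batch, seq_len, d_model] tensors, one per layer.
        stacked = torch.stack(layer_outputs, dim=0)            # [L, B, T, D]
        grouped = stacked.view(-1, self.group_size, *stacked.shape[1:])
        group_feats = grouped.mean(dim=1)                      # [G, B, T, D]
        weights = torch.softmax(self.group_weights, dim=0)     # [G]
        fused = (weights.view(-1, 1, 1, 1) * group_feats).sum(dim=0)
        return fused                                           # [B, T, D]


class GroupedEncoder(nn.Module):
    """Transformer encoder that keeps every layer's output and fuses
    the grouped outputs instead of returning only the top layer."""

    def __init__(self, num_layers=12, num_groups=3, d_model=512, nhead=8):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
             for _ in range(num_layers)])
        self.fusion = GroupFusion(num_layers, num_groups)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outputs = []
        for layer in self.layers:
            x = layer(x)
            outputs.append(x)
        return self.fusion(outputs)
```

For example, a 12-layer encoder with num_groups=3 would pool layers 1-4, 5-8, and 9-12 into three group features and learn how much each group contributes to the representation used for generating target words.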
Related papers
- Quick Back-Translation for Unsupervised Machine Translation [9.51657235413336]
We propose a two-for-one improvement to Transformer back-translation: Quick Back-Translation (QBT).
QBT re-purposes the encoder as a generative model, and uses encoder-generated sequences to train the decoder.
Experiments on various WMT benchmarks demonstrate that QBT dramatically outperforms the standard back-translation-only method in terms of training efficiency.
arXiv Detail & Related papers (2023-12-01T20:27:42Z)
- Transformer over Pre-trained Transformer for Neural Text Segmentation with Enhanced Topic Coherence [6.73258176462356]
It consists of two components: bottom-level sentence encoders using pre-trained transformers, and an upper-level transformer-based segmentation model based on the sentence embeddings.
Our experiments show that Transformer$^2$ manages to surpass state-of-the-art text segmentation models in terms of a commonly used semantic coherence measure.
arXiv Detail & Related papers (2021-10-14T05:26:39Z)
- Multi-Unit Transformers for Neural Machine Translation [51.418245676894465]
We propose the Multi-Unit Transformers (MUTE) to promote the expressiveness of the Transformer.
Specifically, we use several parallel units and show that modeling with multiple units improves model performance and introduces diversity.
arXiv Detail & Related papers (2020-10-21T03:41:49Z)
- On the Sub-Layer Functionalities of Transformer Decoder [74.83087937309266]
We study how Transformer-based decoders leverage information from the source and target languages.
Based on these insights, we demonstrate that the residual feed-forward module in each Transformer decoder layer can be dropped with minimal loss of performance.
arXiv Detail & Related papers (2020-10-06T11:50:54Z)
- Rewiring the Transformer with Depth-Wise LSTMs [55.50278212605607]
We present a Transformer with depth-wise LSTMs connecting cascading Transformer layers and sub-layers.
Experiments with the 6-layer Transformer show significant BLEU improvements on both the WMT-14 English-German and English-French tasks and the OPUS-100 many-to-many multilingual NMT task.
arXiv Detail & Related papers (2020-07-13T09:19:34Z)
- Segatron: Segment-Aware Transformer for Language Modeling and Understanding [79.84562707201323]
We propose a segment-aware Transformer (Segatron) to generate better contextual representations from sequential tokens.
We first introduce the segment-aware mechanism to Transformer-XL, which is a popular Transformer-based language model.
We find that our method can further improve the Transformer-XL base model and large model, achieving 17.1 perplexity on the WikiText-103 dataset.
arXiv Detail & Related papers (2020-04-30T17:38:27Z)
- Probing Word Translations in the Transformer and Trading Decoder for Encoder Layers [69.40942736249397]
The way word translation evolves in Transformer layers has not yet been investigated.
We show that translation already happens progressively in encoder layers and even in the input embeddings.
Our experiments show that we can increase speed by up to a factor of 2.3 with small gains in translation quality, while an 18-4 deep-encoder configuration boosts translation quality by +1.42 BLEU (En-De) at a speed-up of 1.4.
arXiv Detail & Related papers (2020-03-21T06:12:14Z)
- Hierarchical Transformer Network for Utterance-level Emotion Recognition [0.0]
We address some challenges in utterance-level emotion recognition (ULER).
Unlike traditional text classification problems, this task is supported by only a limited number of datasets.
We use a pretrained language model, bidirectional encoder representations from transformers (BERT), as the lower-level transformer.
In addition, we add speaker embeddings to the model for the first time, which enables our model to capture the interaction between speakers.
arXiv Detail & Related papers (2020-02-18T13:44:49Z)
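To make the hierarchy in the entry above concrete, here is a minimal sketch, assuming a Hugging Face bert-base-uncased lower level, [CLS] utterance vectors, a learned speaker embedding added to each utterance vector, and a small upper-level Transformer encoder; the class name, pooling choice, and hyperparameters are illustrative assumptions, not details taken from that paper.

```python
import torch
import torch.nn as nn
from transformers import AutoModel


class HierarchicalEmotionClassifier(nn.Module):
    """Lower level: a pretrained BERT encodes each utterance.
    Upper level: a Transformer encoder over the utterance vectors,
    with a learned speaker embedding added to each one.
    Hypothetical sketch of the setup described in the entry above."""

    def __init__(self, num_speakers: int, num_emotions: int,
                 bert_name: str = "bert-base-uncased", d_model: int = 768):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        self.speaker_emb = nn.Embedding(num_speakers, d_model)
        upper_layer = nn.TransformerEncoderLayer(d_model, nhead=8,
                                                 batch_first=True)
        self.upper = nn.TransformerEncoder(upper_layer, num_layers=2)
        self.classifier = nn.Linear(d_model, num_emotions)

    def forward(self, input_ids, attention_mask, speaker_ids):
        # input_ids, attention_mask: [num_utterances, max_tokens] for one dialogue
        # speaker_ids: [num_utterances], speaker index of each utterance
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        utt_vecs = out.last_hidden_state[:, 0]          # [CLS] vector per utterance
        utt_vecs = utt_vecs + self.speaker_emb(speaker_ids)
        # Treat the dialogue as a sequence of utterance vectors.
        context = self.upper(utt_vecs.unsqueeze(0)).squeeze(0)
        return self.classifier(context)                 # [num_utterances, num_emotions]
```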