Language Models are Good Translators
- URL: http://arxiv.org/abs/2106.13627v1
- Date: Fri, 25 Jun 2021 13:30:29 GMT
- Title: Language Models are Good Translators
- Authors: Shuo Wang, Zhaopeng Tu, Zhixing Tan, Wenxuan Wang, Maosong Sun, Yang Liu
- Abstract summary: We show that a single language model (LM4MT) can achieve performance comparable to strong encoder-decoder NMT models.
Experiments on pivot-based and zero-shot translation tasks show that LM4MT can outperform the encoder-decoder NMT model by a large margin.
- Score: 63.528370845657896
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent years have witnessed rapid advances in neural machine translation (NMT), the core of which lies in the encoder-decoder architecture. Inspired by the recent progress of large-scale pre-trained language models on machine translation in limited scenarios, we first demonstrate that a single language model (LM4MT) can achieve performance comparable to strong encoder-decoder NMT models on standard machine translation benchmarks, using the same training data and a similar number of model parameters. LM4MT can also easily utilize source-side texts as additional supervision. By modeling the source- and target-language texts with the same mechanism, LM4MT provides unified representations for both source and target sentences, which can better transfer knowledge across languages. Extensive experiments on pivot-based and zero-shot translation tasks show that LM4MT can outperform the encoder-decoder NMT model by a large margin.
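As an illustration of the setup the abstract describes, the sketch below trains a single causal Transformer on the concatenation of a source and a target sentence, so that source-side tokens also contribute to the language-modeling loss as extra supervision. The tiny model size, the separator token, and the helper names are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of an LM4MT-style objective (illustrative, not the paper's code):
# one causal LM models the concatenation [source; <sep>; target], so the
# cross-entropy loss covers source tokens as well as target tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalLM(nn.Module):
    """A small decoder-only Transformer used as a stand-in for LM4MT."""
    def __init__(self, vocab_size=32000, d_model=512, n_layers=6, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # Causal mask so each position only attends to earlier positions.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.lm_head(self.blocks(self.embed(tokens), mask=mask))

def lm4mt_loss(model, src_ids, tgt_ids, sep_id=1):
    """LM loss over [source; <sep>; target]; source tokens act as extra supervision."""
    sep = torch.full((src_ids.size(0), 1), sep_id, dtype=src_ids.dtype)
    seq = torch.cat([src_ids, sep, tgt_ids], dim=1)
    logits = model(seq[:, :-1])  # predict each next token in the joint sequence
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), seq[:, 1:].reshape(-1))

# Toy usage with random token ids (batch of 2 sentence pairs).
model = CausalLM()
src = torch.randint(2, 32000, (2, 7))
tgt = torch.randint(2, 32000, (2, 9))
print(lm4mt_loss(model, src, tgt).item())
```

The point of the sketch is that source and target share one set of parameters and one representation space, which is what the abstract credits for the gains on pivot-based and zero-shot translation.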
Related papers
- TransLLaMa: LLM-based Simultaneous Translation System [18.27477980076409]
We show that a decoder-only large language model (LLM) can control input segmentation directly by generating a special "wait" token (a minimal sketch of this read/write loop appears after the related-papers list below).
This obviates the need for a separate policy and enables the LLM to perform English-German and English-Russian simultaneous machine translation (SiMT) tasks.
We also evaluated closed-source models such as GPT-4, which displayed encouraging results in performing the SiMT task without prior training.
arXiv Detail & Related papers (2024-02-07T07:39:27Z)
- Unified Model Learning for Various Neural Machine Translation [63.320005222549646]
Existing neural machine translation (NMT) studies mainly focus on developing dataset-specific models.
We propose a "versatile" model, i.e., Unified Model Learning for NMT (UMLNMT), which works with data from different tasks.
UMLNMT results in substantial improvements over dataset-specific models with significantly reduced model deployment costs.
arXiv Detail & Related papers (2023-05-04T12:21:52Z)
- Towards Making the Most of Multilingual Pretraining for Zero-Shot Neural Machine Translation [74.158365847236]
SixT++ is a strong many-to-English NMT model that supports 100 source languages but is trained once with a parallel dataset from only six source languages.
It significantly outperforms CRISS and m2m-100, two strong multilingual NMT systems, with average gains of 7.2 and 5.0 BLEU, respectively.
arXiv Detail & Related papers (2021-10-16T10:59:39Z)
- Exploring Unsupervised Pretraining Objectives for Machine Translation [99.5441395624651]
Unsupervised cross-lingual pretraining has achieved strong results in neural machine translation (NMT).
Most approaches adapt masked-language modeling (MLM) to sequence-to-sequence architectures, by masking parts of the input and reconstructing them in the decoder.
We compare masking with alternative objectives that produce inputs resembling real (full) sentences, by reordering and replacing words based on their context.
arXiv Detail & Related papers (2021-06-10T10:18:23Z)
- Zero-shot Cross-lingual Transfer of Neural Machine Translation with Multilingual Pretrained Encoders [74.89326277221072]
How to improve the cross-lingual transfer of NMT models with a multilingual pretrained encoder is under-explored.
We propose SixT, a simple yet effective model for this task.
Our model achieves better performance on many-to-English test sets than CRISS and m2m-100.
arXiv Detail & Related papers (2021-04-18T07:42:45Z)
- Improving Zero-shot Neural Machine Translation on Language-specific Encoders-Decoders [19.44855809470709]
Recently, universal neural machine translation (NMT) with a shared encoder-decoder has achieved good performance on zero-shot translation.
Unlike universal NMT, jointly trained language-specific encoders-decoders aim to achieve universal representation across non-shared modules.
We study zero-shot translation using language-specific encoders-decoders.
arXiv Detail & Related papers (2021-02-12T15:36:33Z)
- Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information [72.2412707779571]
mRASP is an approach to pre-train a universal multilingual neural machine translation model.
We carry out experiments on 42 translation directions across diverse settings, including low-, medium-, and rich-resource scenarios, as well as transfer to exotic language pairs.
arXiv Detail & Related papers (2020-10-07T03:57:54Z)
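The TransLLaMa entry above describes a simultaneous-translation setup in which the LLM itself decides when to read more source text by emitting a special "wait" token. The loop below is a minimal sketch of that read/write policy under stated assumptions: the WAIT and EOS strings and the generate_next_token wrapper are hypothetical placeholders, not the paper's interface.

```python
# Illustrative sketch (not the TransLLaMa implementation): a decoder-only LM drives
# simultaneous translation by emitting either translation tokens or a special
# "wait" token that requests the next source segment before continuing.
from typing import Callable, Iterator, List

WAIT = "<wait>"  # placeholder: the real system defines its own special token
EOS = "</s>"     # placeholder end-of-translation marker

def simultaneous_translate(
    source_segments: Iterator[str],
    generate_next_token: Callable[[str], str],  # assumed wrapper: prompt -> next token
    max_steps: int = 256,
) -> List[str]:
    """Interleave READ and WRITE actions driven entirely by the LM's own output."""
    prompt, output = "", []
    for _ in range(max_steps):
        token = generate_next_token(prompt + " " + " ".join(output))
        if token == WAIT:
            segment = next(source_segments, None)
            if segment is None:        # source exhausted: nothing left to wait for
                break
            prompt += segment + " "    # READ: reveal one more source segment
        elif token == EOS:
            break                      # WRITE phase finished
        else:
            output.append(token)       # WRITE: commit one translation token
    return output
```

A toy generate_next_token can be any function that maps the current prompt plus partial output to the next token, for example a wrapper around a local decoder-only model.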
This list is automatically generated from the titles and abstracts of the papers on this site.