Character-Level Translation with Self-attention
- URL: http://arxiv.org/abs/2004.14788v1
- Date: Thu, 30 Apr 2020 14:05:26 GMT
- Title: Character-Level Translation with Self-attention
- Authors: Yingqiang Gao, Nikola I. Nikolov, Yuhuang Hu, Richard H.R. Hahnloser
- Abstract summary: We explore the suitability of self-attention models for character-level neural machine translation.
We test the standard transformer model and a novel variant in which the encoder block combines information from nearby characters using convolutions.
Our transformer variant consistently outperforms the standard transformer at the character level and converges faster while learning more robust character-level alignments.
- Score: 9.864260997723974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore the suitability of self-attention models for character-level neural machine translation. We test the standard transformer model, as well as a novel variant in which the encoder block combines information from nearby characters using convolutions. We perform extensive experiments on WMT and UN datasets, testing both bilingual and multilingual translation to English using up to three input languages (French, Spanish, and Chinese). Our transformer variant consistently outperforms the standard transformer at the character level and converges faster while learning more robust character-level alignments.
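The variant described in the abstract adds convolutions over neighbouring characters inside the encoder block, so that each character representation is locally mixed before global self-attention is applied. Below is a minimal PyTorch-style sketch of that idea; the layer sizes, kernel width, and the exact placement of the convolution within the block are illustrative assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn


class ConvSelfAttentionEncoderBlock(nn.Module):
    """Illustrative encoder block: a 1D convolution mixes information from
    nearby characters, followed by standard multi-head self-attention and a
    position-wise feed-forward layer (hypothetical sizes and layout)."""

    def __init__(self, d_model=256, n_heads=4, kernel_size=5, d_ff=1024, dropout=0.1):
        super().__init__()
        # Convolution over `kernel_size` neighbouring character embeddings.
        self.conv = nn.Conv1d(d_model, d_model, kernel_size, padding=kernel_size // 2)
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, key_padding_mask=None):
        # x: (batch, seq_len, d_model) character embeddings
        conv_out = self.conv(x.transpose(1, 2)).transpose(1, 2)
        x = self.norm1(x + self.dropout(conv_out))        # local character context
        attn_out, _ = self.attn(x, x, x, key_padding_mask=key_padding_mask)
        x = self.norm2(x + self.dropout(attn_out))        # global self-attention
        return self.norm3(x + self.dropout(self.ff(x)))   # position-wise feed-forward


if __name__ == "__main__":
    block = ConvSelfAttentionEncoderBlock()
    chars = torch.randn(2, 50, 256)  # a batch of 2 sequences of 50 characters
    print(block(chars).shape)        # torch.Size([2, 50, 256])
```

In this sketch the convolution gives every character representation information about its neighbours before self-attention operates over the whole sequence, which is one plausible reading of why the variant converges faster on character-level input.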
Related papers
- Character-level NMT and language similarity [1.90365714903665]
We explore the effectiveness of character-level neural machine translation across different levels of language similarity and training-dataset sizes, for translation between Czech and Croatian, German, Hungarian, Slovak, and Spanish.
We evaluate the models using automatic MT metrics and show that translation between similar languages benefits from character-level input segmentation.
We confirm previous findings that the gap can be closed by fine-tuning already trained subword-level models at the character level.
arXiv Detail & Related papers (2023-08-08T17:01:42Z)
- Sentence Bottleneck Autoencoders from Transformer Language Models [53.350633961266375]
We build a sentence-level autoencoder from a pretrained, frozen transformer language model.
We adapt the masked language modeling objective as a generative, denoising one, while only training a sentence bottleneck and a single-layer modified transformer decoder.
We demonstrate that the sentence representations discovered by our model achieve better quality than previous methods that extract representations from pretrained transformers on text similarity tasks, style transfer, and single-sentence classification tasks in the GLUE benchmark, while using fewer parameters than large pretrained models.
arXiv Detail & Related papers (2021-08-31T19:39:55Z)
- Rethinking Document-level Neural Machine Translation [73.42052953710605]
We try to answer the question: Is the capacity of current models strong enough for document-level translation?
We observe that the original Transformer with appropriate training techniques can achieve strong results for document translation, even for documents of up to 2,000 words.
arXiv Detail & Related papers (2020-10-18T11:18:29Z)
- Improving Sentiment Analysis over non-English Tweets using Multilingual Transformers and Automatic Translation for Data-Augmentation [77.69102711230248]
We propose the use of a multilingual transformer model, that we pre-train over English tweets and apply data-augmentation using automatic translation to adapt the model to non-English languages.
Our experiments in French, Spanish, German and Italian suggest that the proposed technique is an efficient way to improve the results of the transformers over small corpora of tweets in a non-English language.
arXiv Detail & Related papers (2020-10-07T15:44:55Z)
- Character-level Transformer-based Neural Machine Translation [5.699756532377753]
We discuss a novel Transformer-based approach, which we compare, in both speed and quality, to the Transformer at the subword and character levels.
We evaluate our models on 4 language pairs from WMT'15: DE-EN, CS-EN, FI-EN and RU-EN.
The proposed novel architecture can be trained on a single GPU and is 34% faster than the character-level Transformer.
arXiv Detail & Related papers (2020-05-22T15:40:43Z)
- Applying the Transformer to Character-level Transduction [68.91664610425114]
The transformer has been shown to outperform recurrent neural network-based sequence-to-sequence models in various word-level NLP tasks.
We show that with a large enough batch size, the transformer does indeed outperform recurrent models for character-level tasks.
arXiv Detail & Related papers (2020-05-20T17:25:43Z)
- Segatron: Segment-Aware Transformer for Language Modeling and Understanding [79.84562707201323]
We propose a segment-aware Transformer (Segatron) to generate better contextual representations from sequential tokens.
We first introduce the segment-aware mechanism to Transformer-XL, which is a popular Transformer-based language model.
We find that our method can further improve the Transformer-XL base model and large model, achieving 17.1 perplexity on the WikiText-103 dataset.
arXiv Detail & Related papers (2020-04-30T17:38:27Z)
- Enriching the Transformer with Linguistic Factors for Low-Resource Machine Translation [2.2344764434954256]
This study proposes enhancing the current state-of-the-art neural machine translation architecture, the Transformer.
In particular, our proposed modification, the Factored Transformer, uses linguistic factors that insert additional knowledge into the machine translation system.
We show improvements of 0.8 BLEU over the baseline Transformer in the IWSLT German-to-English task.
arXiv Detail & Related papers (2020-04-17T03:40:13Z)
- Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation [73.11214377092121]
We propose to replace all but one attention head of each encoder layer with simple fixed -- non-learnable -- attentive patterns.
Our experiments with different data sizes and multiple language pairs show that fixing the attention heads on the encoder side of the Transformer at training time does not impact the translation quality. A minimal sketch of one such fixed pattern is given after this list.
arXiv Detail & Related papers (2020-02-24T13:53:06Z)
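The last entry above proposes replacing all but one attention head per encoder layer with fixed, non-learnable attentive patterns. As a minimal illustration of what one such fixed pattern can look like, the sketch below builds a hard, parameter-free attention matrix in which every position attends to the position immediately before it; the specific pattern and the standalone formulation are illustrative assumptions rather than that paper's exact design.

```python
import torch


def previous_token_attention(values: torch.Tensor) -> torch.Tensor:
    """A fixed, non-learnable 'attention head': every position attends fully
    to the position immediately before it (the first position attends to
    itself). values: (batch, seq_len, d_model)."""
    _, seq_len, _ = values.shape
    # Hard attention matrix with a single 1 per row and no parameters to train.
    attn = torch.zeros(seq_len, seq_len)
    attn[0, 0] = 1.0
    for i in range(1, seq_len):
        attn[i, i - 1] = 1.0
    return attn @ values  # broadcasts over the batch dimension


if __name__ == "__main__":
    v = torch.randn(2, 6, 8)
    out = previous_token_attention(v)
    assert torch.allclose(out[:, 1], v[:, 0])  # position 1 copies position 0
    print(out.shape)  # torch.Size([2, 6, 8])
```

Because such a pattern has no trainable parameters, it can simply stand in for a learned head's attention weights, which is consistent with the summary's claim that fixing most encoder heads does not hurt translation quality.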
This list is automatically generated from the titles and abstracts of the papers in this site.