Active Learning for Neural Machine Translation
- URL: http://arxiv.org/abs/2301.00688v1
- Date: Fri, 30 Dec 2022 17:04:01 GMT
- Title: Active Learning for Neural Machine Translation
- Authors: Neeraj Vashistha, Kriti Singh, Ramakant Shakya
- Abstract summary: We incorporated a technique known as Active Learning with the NMT toolkit Joey NMT to reach sufficient accuracy and robust predictions for low-resource language translation.
This work uses transformer-based NMT systems for English-to-Hindi translation: a baseline model (BM), a fully trained model (FTM), an active learning least-confidence-based model (ALLCM), and an active learning margin-sampling-based model (ALMSM).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The machine translation mechanism translates texts automatically between
different natural languages, and Neural Machine Translation (NMT) has gained
attention for its rational context analysis and fluent translation accuracy.
However, processing low-resource languages that lack relevant training
attributes like supervised data is a current challenge for Natural Language
Processing (NLP). We incorporated a technique known as Active Learning with the
NMT toolkit Joey NMT to reach sufficient accuracy and robust predictions of
low-resource language translation. With active learning, a semi-supervised
machine learning strategy, the training algorithm determines which unlabeled
data would be the most beneficial for obtaining labels using selected query
techniques. We implemented two model-driven acquisition functions for selecting
the samples to be validated. This work uses transformer-based NMT systems: a
baseline model (BM), a fully trained model (FTM), an active learning
least-confidence-based model (ALLCM), and an active learning margin-sampling-based
model (ALMSM) when translating English to Hindi. The Bilingual Evaluation Understudy
(BLEU) metric has been used to evaluate system results. The BLEU scores of BM,
FTM, ALLCM, and ALMSM systems are 16.26, 22.56, 24.54, and 24.20, respectively.
The findings in this paper demonstrate that active learning techniques help
the model converge earlier and improve the overall quality of the translation
system.
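
To make the two acquisition functions named in the abstract concrete, here is a minimal sketch, not the authors' implementation. It assumes a hypothetical helper `translate_nbest` that returns the current model's top-n hypotheses for a source sentence together with sentence-level (e.g. length-normalized) log-probabilities, as an NMT toolkit's beam search could provide; the helper name and the sentence-level aggregation are illustrative assumptions.

```python
# Minimal sketch of pool-based active learning with least-confidence and
# margin-sampling acquisition. `translate_nbest` is a hypothetical stand-in
# for the current NMT model's beam search; it is not taken from Joey NMT's API.

import math
from typing import Callable, List, Sequence, Tuple

# (source sentence) -> [(hypothesis, log_prob), ...], sorted best-first
NBestFn = Callable[[str], List[Tuple[str, float]]]


def least_confidence(src: str, translate_nbest: NBestFn) -> float:
    """Higher score = less confident: 1 - p(best hypothesis)."""
    best_logprob = translate_nbest(src)[0][1]
    return 1.0 - math.exp(best_logprob)


def margin_sampling(src: str, translate_nbest: NBestFn) -> float:
    """Higher score = smaller margin between the top two hypotheses."""
    nbest = translate_nbest(src)
    p1 = math.exp(nbest[0][1])
    p2 = math.exp(nbest[1][1]) if len(nbest) > 1 else 0.0
    return -(p1 - p2)


def select_batch(pool: Sequence[str],
                 translate_nbest: NBestFn,
                 scorer: Callable[[str, NBestFn], float],
                 k: int) -> List[str]:
    """Return the k most uncertain source sentences to send for labeling."""
    return sorted(pool, key=lambda s: scorer(s, translate_nbest), reverse=True)[:k]
```

In an active learning loop, the selected sentences would be translated by annotators, appended to the parallel data, and the retrained model scored with BLEU (for example with sacrebleu's corpus_bleu).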
Related papers
- Towards Zero-Shot Multimodal Machine Translation [64.9141931372384]
We propose a method to bypass the need for fully supervised data to train multimodal machine translation systems.
Our method, called ZeroMMT, consists of adapting a strong text-only machine translation (MT) model by training it on a mixture of two objectives.
To prove that our method generalizes to languages with no fully supervised training data available, we extend the CoMMuTE evaluation dataset to three new languages: Arabic, Russian and Chinese.
arXiv Detail & Related papers (2024-07-18T15:20:31Z) - TasTe: Teaching Large Language Models to Translate through Self-Reflection [82.83958470745381]
Large language models (LLMs) have exhibited remarkable performance in various natural language processing tasks.
We propose the TasTe framework, which stands for translating through self-reflection.
The evaluation results in four language directions on the WMT22 benchmark reveal the effectiveness of our approach compared to existing methods.
arXiv Detail & Related papers (2024-06-12T17:21:21Z) - Beyond MLE: Investigating SEARNN for Low-Resourced Neural Machine Translation [0.09459165957946088]
This project explored the potential of SEARNN to improve machine translation for low-resourced African languages.
Experiments were conducted on the English-to-Igbo, French-to-Ewe, and French-to-Ghomala translation directions.
We proved that SEARNN is indeed a viable algorithm to effectively train RNNs on machine translation for low-resourced languages.
arXiv Detail & Related papers (2024-05-20T06:28:43Z) - MT-PATCHER: Selective and Extendable Knowledge Distillation from Large Language Models for Machine Translation [61.65537912700187]
Large Language Models (LLMs) have demonstrated their strong ability in the field of machine translation (MT).
We propose a framework called MT-Patcher, which transfers knowledge from LLMs to existing MT models in a selective, comprehensive and proactive manner.
arXiv Detail & Related papers (2024-03-14T16:07:39Z) - Statistical Machine Translation for Indic Languages [1.8899300124593648]
This paper describes the development of bilingual Statistical Machine Translation models.
To create the system, the MOSES open-source SMT toolkit is explored.
In our experiment, the quality of the translation is evaluated using standard metrics such as BLEU, METEOR, and RIBES.
arXiv Detail & Related papers (2023-01-02T06:23:12Z) - Confidence Based Bidirectional Global Context Aware Training Framework for Neural Machine Translation [74.99653288574892]
We propose a Confidence Based Bidirectional Global Context Aware (CBBGCA) training framework for neural machine translation (NMT).
Our proposed CBBGCA training framework significantly improves the NMT model by +1.02, +1.30 and +0.57 BLEU scores on three large-scale translation datasets.
arXiv Detail & Related papers (2022-02-28T10:24:22Z) - Learning Domain Specific Language Models for Automatic Speech Recognition through Machine Translation [0.0]
We use Neural Machine Translation as an intermediate step to first obtain translations of task-specific text data.
We develop a procedure to derive word confusion networks from NMT beam search graphs.
We demonstrate that NMT confusion networks can help to reduce the perplexity of both n-gram and recurrent neural network LMs.
arXiv Detail & Related papers (2021-09-21T10:29:20Z) - Self-supervised and Supervised Joint Training for Resource-rich Machine Translation [30.502625878505732]
Self-supervised pre-training of text representations has been successfully applied to low-resource Neural Machine Translation (NMT).
We propose a joint training approach, $F$-XEnDec, to combine self-supervised and supervised learning to optimize NMT models.
arXiv Detail & Related papers (2021-06-08T02:35:40Z) - Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information [72.2412707779571]
mRASP is an approach to pre-train a universal multilingual neural machine translation model.
We carry out experiments on 42 translation directions across diverse settings, including low-, medium-, and rich-resource pairs, as well as transfer to exotic language pairs.
arXiv Detail & Related papers (2020-10-07T03:57:54Z) - Multi-task Learning for Multilingual Neural Machine Translation [32.81785430242313]
We propose a multi-task learning framework that jointly trains the model with the translation task on bitext data and two denoising tasks on the monolingual data.
We show that the proposed approach can effectively improve the translation quality for both high-resource and low-resource languages.
arXiv Detail & Related papers (2020-10-06T06:54:12Z) - Language Model Prior for Low-Resource Neural Machine Translation [85.55729693003829]
We propose a novel approach to incorporate an LM as a prior in a neural translation model (TM).
We add a regularization term, which pushes the output distributions of the TM to be probable under the LM prior.
Results on two low-resource machine translation datasets show clear improvements even with limited monolingual data.
arXiv Detail & Related papers (2020-04-30T16:29:56Z)
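
As a side note on the last entry, the LM-as-prior regularization it describes can be read as a standard cross-entropy loss plus a KL term that keeps the translation model's output distribution close to a frozen language model. The sketch below is only this assumed reading of the one-line summary, not the paper's exact objective; the weight `lam`, the shared-vocabulary assumption, and the tensor shapes are illustrative choices.

```python
# Hedged sketch of training with an LM prior: cross-entropy on the bitext
# plus a KL regularizer toward a frozen LM over the same target vocabulary.

import torch
import torch.nn.functional as F


def lm_prior_loss(tm_logits: torch.Tensor,   # (batch, seq, vocab) translation model scores
                  lm_logits: torch.Tensor,   # (batch, seq, vocab) frozen LM scores, same vocab
                  targets: torch.Tensor,     # (batch, seq) gold target token ids
                  pad_id: int,
                  lam: float = 0.5) -> torch.Tensor:
    vocab = tm_logits.size(-1)

    # Usual supervised translation objective on the parallel data.
    ce = F.cross_entropy(tm_logits.reshape(-1, vocab), targets.reshape(-1),
                         ignore_index=pad_id)

    # Regularizer: token-level KL(P_TM || P_LM), penalizing the TM for putting
    # mass where the frozen LM prior considers tokens improbable.
    tm_log_probs = F.log_softmax(tm_logits, dim=-1)
    lm_log_probs = F.log_softmax(lm_logits, dim=-1).detach()
    kl = (tm_log_probs.exp() * (tm_log_probs - lm_log_probs)).sum(-1)

    # Average the KL over non-padding positions only.
    mask = targets.ne(pad_id).float()
    kl = (kl * mask).sum() / mask.sum().clamp(min=1.0)

    return ce + lam * kl
```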
This list is automatically generated from the titles and abstracts of the papers on this site.