Findings of the Covid-19 MLIA Machine Translation Task
- URL: http://arxiv.org/abs/2211.07465v1
- Date: Mon, 14 Nov 2022 15:47:53 GMT
- Title: Findings of the Covid-19 MLIA Machine Translation Task
- Authors: Francisco Casacuberta, Alexandru Ceausu, Khalid Choukri, Miltos
Deligiannis, Miguel Domingo, Mercedes García-Martínez, Manuel Herranz,
Guillaume Jacquet, Vassilis Papavassiliou, Stelios Piperidis, Prokopis
Prokopidis, Dimitris Roussis, and Marwa Hadj Salah
- Abstract summary: This work presents the results of the machine translation (MT) task from the Covid-19 MLIA @ Eval initiative.
Nine teams took part in this event, which was divided into two rounds and involved seven different language pairs.
Overall, the best approaches were based on multilingual models and transfer learning, with an emphasis on the importance of applying a cleaning process to the training data.
- Score: 45.8251505347275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents the results of the machine translation (MT) task from the
Covid-19 MLIA @ Eval initiative, a community effort to improve the generation
of MT systems focused on the current Covid-19 crisis. Nine teams took part in
this event, which was divided into two rounds and involved seven different
language pairs. Two different scenarios were considered: one in which only the
provided data was allowed, and a second one in which the use of external
resources was allowed. Overall, the best approaches were based on multilingual
models and transfer learning, with an emphasis on the importance of applying a
cleaning process to the training data.
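The cleaning process the abstract emphasizes typically means filtering noisy sentence pairs out of the parallel training corpus. As a minimal illustrative sketch (not any participant's actual pipeline; the length bounds and ratio threshold are assumptions), such a filter might drop empty or overlong segments, pairs with implausible source/target length ratios, and exact duplicates:

```python
def clean_parallel_corpus(pairs, max_ratio=2.0, min_len=1, max_len=250):
    """Filter (source, target) sentence pairs with simple heuristics."""
    seen = set()
    kept = []
    for src, tgt in pairs:
        src, tgt = src.strip(), tgt.strip()
        ns, nt = len(src.split()), len(tgt.split())
        if not (min_len <= ns <= max_len and min_len <= nt <= max_len):
            continue  # drop empty or overly long segments
        if max(ns, nt) / max(min(ns, nt), 1) > max_ratio:
            continue  # drop pairs with implausible length ratios
        if (src, tgt) in seen:
            continue  # drop exact duplicates
        seen.add((src, tgt))
        kept.append((src, tgt))
    return kept

pairs = [
    ("Covid-19 symptoms include fever .", "Los síntomas de la Covid-19 incluyen fiebre ."),
    ("Covid-19 symptoms include fever .", "Los síntomas de la Covid-19 incluyen fiebre ."),
    ("Hello", "Una frase mucho mas larga que no corresponde a la fuente en absoluto"),
]
print(len(clean_parallel_corpus(pairs)))  # → 1
```

Real systems often add language identification and punctuation-balance checks on top of these heuristics, but the length-ratio and deduplication steps above are the common core.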
Related papers
- TasTe: Teaching Large Language Models to Translate through Self-Reflection [82.83958470745381]
Large language models (LLMs) have exhibited remarkable performance in various natural language processing tasks.
We propose the TasTe framework, which stands for translating through self-reflection.
The evaluation results in four language directions on the WMT22 benchmark reveal the effectiveness of our approach compared to existing methods.
arXiv Detail & Related papers (2024-06-12T17:21:21Z)
- AAdaM at SemEval-2024 Task 1: Augmentation and Adaptation for Multilingual Semantic Textual Relatedness [16.896143197472114]
This paper presents our system developed for the SemEval-2024 Task 1: Semantic Textual Relatedness for African and Asian languages.
We propose using machine translation for data augmentation to address the low-resource challenge of limited training data.
We achieve competitive results in the shared task: our system performs the best among all ranked teams in both subtask A (supervised learning) and subtask C (cross-lingual transfer).
arXiv Detail & Related papers (2024-04-01T21:21:15Z)
- UvA-MT's Participation in the WMT23 General Translation Shared Task [7.4336950563281174]
This paper describes the UvA-MT's submission to the WMT 2023 shared task on general machine translation.
We show that by using one model to handle bidirectional tasks, it is possible to achieve results comparable to those of traditional bilingual translation for both directions.
arXiv Detail & Related papers (2023-10-15T20:49:31Z)
- Building Multilingual Machine Translation Systems That Serve Arbitrary X-Y Translations [75.73028056136778]
We show how to practically build MNMT systems that serve arbitrary X-Y translation directions.
We also examine our proposed approach in an extremely large-scale data setting to accommodate practical deployment scenarios.
arXiv Detail & Related papers (2022-06-30T02:18:15Z)
- Multilingual Machine Translation Systems from Microsoft for WMT21 Shared Task [95.06453182273027]
This report describes Microsoft's machine translation systems for the WMT21 shared task on large-scale multilingual machine translation.
Our model submissions to the shared task were based on DeltaLM (https://aka.ms/deltalm), a generic pre-trained multilingual encoder-decoder model.
Our final submissions ranked first on three tracks in terms of the automatic evaluation metric.
arXiv Detail & Related papers (2021-11-03T09:16:17Z)
- Netmarble AI Center's WMT21 Automatic Post-Editing Shared Task Submission [6.043109546012043]
This paper describes Netmarble's submission to WMT21 Automatic Post-Editing (APE) Shared Task for the English-German language pair.
Facebook FAIR's WMT19 news translation model was chosen to take advantage of its large and powerful pre-trained neural networks.
For better performance, we leverage external translations as augmented machine translation (MT) data during post-training and fine-tuning.
arXiv Detail & Related papers (2021-09-14T08:21:18Z)
- Zero-shot Cross-lingual Transfer of Neural Machine Translation with Multilingual Pretrained Encoders [74.89326277221072]
How to improve the cross-lingual transfer of NMT models with a multilingual pretrained encoder is under-explored.
We propose SixT, a simple yet effective model for this task.
Our model achieves better performance on many-to-English test sets than CRISS and m2m-100.
arXiv Detail & Related papers (2021-04-18T07:42:45Z)
- SJTU-NICT's Supervised and Unsupervised Neural Machine Translation Systems for the WMT20 News Translation Task [111.91077204077817]
We participated in four translation directions of three language pairs: English-Chinese, English-Polish, and German-Upper Sorbian.
Based on different conditions of language pairs, we have experimented with diverse neural machine translation (NMT) techniques.
In our submissions, the primary systems won first place in the English-to-Chinese, Polish-to-English, and German-to-Upper-Sorbian translation directions.
arXiv Detail & Related papers (2020-10-11T00:40:05Z)
- UPB at SemEval-2020 Task 9: Identifying Sentiment in Code-Mixed Social Media Texts using Transformers and Multi-Task Learning [1.7196613099537055]
We describe the systems developed by our team for SemEval-2020 Task 9.
We aim to cover two well-known code-mixed languages: Hindi-English and Spanish-English.
Our approach achieves promising performance on the Hindi-English task, with an average F1-score of 0.6850.
For the Spanish-English task, we obtained an average F1-score of 0.7064 ranking our team 17th out of 29 participants.
arXiv Detail & Related papers (2020-09-06T17:19:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.