FFR V1.0: Fon-French Neural Machine Translation
- URL: http://arxiv.org/abs/2003.12111v1
- Date: Thu, 26 Mar 2020 19:01:31 GMT
- Title: FFR V1.0: Fon-French Neural Machine Translation
- Authors: Bonaventure F. P. Dossou and Chris C. Emezue
- Abstract summary: Africa has the highest linguistic diversity in the world.
The low-resource status and the diacritical and tonal complexities of African languages are major challenges facing African NLP today.
This paper describes our pilot project: the creation of a large, growing corpus of Fon-to-French translations and our FFR v1.0 model, trained on this dataset.
- Score: 0.012691047660244334
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Africa has the highest linguistic diversity in the world. Given the
importance of language to communication, and the importance of reliable,
powerful and accurate machine translation models to modern inter-cultural
communication, there have been (and still are) efforts to create
state-of-the-art translation models for the many African languages. However,
the low-resource status and the diacritical and tonal complexities of African
languages remain major challenges facing African NLP today. The FFR project is a
major step towards creating a robust translation model from Fon, a very
low-resource and tonal language, to French, for research and public use. In this
paper, we describe our pilot project: the creation of a large, growing corpus
of Fon-to-French translations and our FFR v1.0 model, trained on this dataset.
The dataset and model are made publicly available.
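The abstract highlights the diacritical and tonal complexity of Fon. One practical consequence for corpus building is that visually identical accented strings can be encoded as different Unicode code-point sequences. The sketch below is an illustrative preprocessing step (not the paper's actual pipeline) that normalizes such variants before comparison or training:

```python
import unicodedata

def normalize_fon(text: str) -> str:
    """Normalize composed/decomposed diacritics to NFC so that visually
    identical tonal strings compare equal during corpus cleaning."""
    return unicodedata.normalize("NFC", text)

# "e with grave tone mark" written two different ways:
composed = "\u00e8"        # è as a single precomposed code point
decomposed = "e\u0300"     # e followed by COMBINING GRAVE ACCENT

assert composed != decomposed                            # raw strings differ
assert normalize_fon(composed) == normalize_fon(decomposed)  # normalized forms match
```

Without such normalization, duplicate sentence pairs that differ only in encoding would inflate a parallel corpus and fragment the model's vocabulary.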
Related papers
- Lugha-Llama: Adapting Large Language Models for African Languages [48.97516583523523]
Large language models (LLMs) have achieved impressive results in a wide range of natural language applications.
We consider how to adapt LLMs to low-resource African languages.
We find that combining curated data from African languages with high-quality English educational texts results in a training mix that substantially improves the model's performance on these languages.
arXiv Detail & Related papers (2025-04-09T02:25:53Z)
- AfroBench: How Good are Large Language Models on African Languages? [55.35674466745322]
AfroBench is a benchmark for evaluating the performance of LLMs across 64 African languages.
AfroBench consists of nine natural language understanding datasets, six text generation datasets, six knowledge and question answering tasks, and one mathematical reasoning task.
arXiv Detail & Related papers (2023-11-14T08:10:14Z)
- Ngambay-French Neural Machine Translation (sba-Fr) [16.55378462843573]
In Africa, and the world at large, there is an increasing focus on developing Neural Machine Translation (NMT) systems to overcome language barriers.
In this project, we created the first sba-Fr dataset, which is a corpus of Ngambay-to-French translations.
Our experiments show that the M2M100 model outperforms other models with high BLEU scores on both original and original+synthetic data.
arXiv Detail & Related papers (2023-08-25T17:13:20Z)
- Neural Machine Translation for the Indigenous Languages of the Americas: An Introduction [102.13536517783837]
Most Indigenous languages of the Americas are low-resource, with a limited amount of parallel and monolingual data, if any.
We discuss recent advances, findings, and open questions, the product of increased interest from the NLP community in these languages.
arXiv Detail & Related papers (2023-06-11T23:27:47Z)
- How Good are Commercial Large Language Models on African Languages? [0.012691047660244334]
We present a preliminary analysis of commercial large language models on two tasks (machine translation and text classification) across eight African languages.
Our results suggest that commercial language models produce below-par performance on African languages.
In general, our findings present a call-to-action to ensure African languages are well represented in commercial large language models.
arXiv Detail & Related papers (2023-05-11T02:29:53Z)
- Transfer to a Low-Resource Language via Close Relatives: The Case Study on Faroese [54.00582760714034]
Cross-lingual NLP transfer can be improved by exploiting data and models of high-resource languages.
We release a new web corpus of Faroese and Faroese datasets for named entity recognition (NER), semantic text similarity (STS) and new language models trained on all Scandinavian languages.
arXiv Detail & Related papers (2023-04-18T08:42:38Z)
- AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages [0.021987601456703476]
We present AfroLM, a multilingual language model pretrained from scratch on 23 African languages.
AfroLM is pretrained on a dataset 14x smaller than existing baselines.
It is able to generalize well across various domains.
arXiv Detail & Related papers (2022-11-07T02:15:25Z)
- MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition [55.95128479289923]
African languages are spoken by over a billion people, but are underrepresented in NLP research and development.
We create the largest human-annotated NER dataset for 20 African languages.
We show that choosing the best transfer language improves zero-shot F1 scores by an average of 14 points.
arXiv Detail & Related papers (2022-10-22T08:53:14Z)
- English2Gbe: A multilingual machine translation model for {Fon/Ewe}Gbe [0.0]
This paper introduces English2Gbe, a multilingual neural machine translation model capable of translating from English to Ewe or Fon.
We show that English2Gbe outperforms bilingual models (English-to-Ewe and English-to-Fon) and gives state-of-the-art results on the JW300 benchmark for Fon.
arXiv Detail & Related papers (2021-12-13T10:35:09Z)
- AfroMT: Pretraining Strategies and Reproducible Benchmarks for Translation of 8 African Languages [94.75849612191546]
AfroMT is a standardized, clean, and reproducible machine translation benchmark for eight widely spoken African languages.
We develop a suite of analysis tools for system diagnosis taking into account the unique properties of these languages.
We demonstrate significant improvements when pretraining on 11 languages, with gains of up to 2 BLEU points over strong baselines.
arXiv Detail & Related papers (2021-09-10T07:45:21Z)
- MasakhaNER: Named Entity Recognition for African Languages [48.34339599387944]
We create the first large publicly available high-quality dataset for named entity recognition in ten African languages.
We detail characteristics of the languages to help researchers understand the challenges that these languages pose for NER.
arXiv Detail & Related papers (2021-03-22T13:12:44Z)
- Beyond English-Centric Multilingual Machine Translation [74.21727842163068]
We create a true Many-to-Many multilingual translation model that can translate directly between any pair of 100 languages.
We build and open source a training dataset that covers thousands of language directions with supervised data, created through large-scale mining.
Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly translating between non-English directions while performing competitively to the best single systems of WMT.
arXiv Detail & Related papers (2020-10-21T17:01:23Z)
- FFR v1.1: Fon-French Neural Machine Translation [0.012691047660244334]
FFR project is a major step towards creating a robust translation model from Fon, a very low-resource and tonal language, to French.
In this paper, we introduce FFR dataset, a corpus of Fon-to-French translations, describe the diacritical encoding process, and introduce our FFR v1.1 model.
arXiv Detail & Related papers (2020-06-14T04:27:12Z)
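Several of the papers above (Ngambay-French, AfroMT, Beyond English-Centric) report translation quality in BLEU. As a refresher, the following is a minimal illustrative BLEU with uniform n-gram weights and a brevity penalty; published evaluations use standardized scorers such as sacreBLEU rather than a hand-rolled implementation like this:

```python
import math
from collections import Counter

def bleu(candidate: list[str], reference: list[str], max_n: int = 4) -> float:
    """Illustrative single-reference BLEU: geometric mean of clipped
    n-gram precisions (n = 1..max_n) times a brevity penalty."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    log_precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())   # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        if overlap == 0:
            return 0.0                         # any zero precision zeroes this sketch
        log_precisions.append(math.log(overlap / total))

    # Brevity penalty: punish candidates shorter than the reference
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(log_precisions) / max_n)

hyp = "le mod\u00e8le traduit la phrase".split()
ref = "le mod\u00e8le traduit la phrase".split()
print(round(bleu(hyp, ref), 2))  # prints 1.0 for identical sentences
```

A gain of "2 BLEU points", as reported for AfroMT, corresponds to a 0.02 increase in this score when expressed on the usual 0-100 scale divided by 100.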
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.