Romanization-based Large-scale Adaptation of Multilingual Language
Models
- URL: http://arxiv.org/abs/2304.08865v1
- Date: Tue, 18 Apr 2023 09:58:34 GMT
- Title: Romanization-based Large-scale Adaptation of Multilingual Language
Models
- Authors: Sukannya Purkayastha, Sebastian Ruder, Jonas Pfeiffer, Iryna Gurevych,
Ivan Vulić
- Abstract summary: Large multilingual pretrained language models (mPLMs) have become the de facto state of the art for cross-lingual transfer in NLP.
We study and compare a plethora of data- and parameter-efficient strategies for adapting the mPLMs to romanized and non-romanized corpora of 14 diverse low-resource languages.
Our results reveal that UROMAN-based transliteration can offer strong performance for many languages, with particular gains achieved in the most challenging setups.
- Score: 124.57923286144515
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large multilingual pretrained language models (mPLMs) have become the de
facto state of the art for cross-lingual transfer in NLP. However, their
large-scale deployment to many languages, besides pretraining data scarcity, is
also hindered by the increase in vocabulary size and limitations in their
parameter budget. In order to boost the capacity of mPLMs to deal with
low-resource and unseen languages, we explore the potential of leveraging
transliteration on a massive scale. In particular, we explore the UROMAN
transliteration tool, which provides mappings from UTF-8 to Latin characters
for all the writing systems, enabling inexpensive romanization for virtually
any language. We first focus on establishing how UROMAN compares against other
language-specific and manually curated transliterators for adapting
multilingual PLMs. We then study and compare a plethora of data- and
parameter-efficient strategies for adapting the mPLMs to romanized and
non-romanized corpora of 14 diverse low-resource languages. Our results reveal
that UROMAN-based transliteration can offer strong performance for many
languages, with particular gains achieved in the most challenging setups: on
languages with unseen scripts and with limited training data without any
vocabulary augmentation. Further analyses reveal that an improved tokenizer
based on romanized data can even outperform non-transliteration-based methods
in the majority of languages.
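To make the adaptation pipeline more concrete, here is a minimal sketch, not the authors' released code, of the two steps the abstract highlights: romanizing a corpus with UROMAN and retraining an mPLM tokenizer on the romanized text before continued masked-language-model training. It assumes the uroman.pl script from https://github.com/isi-nlp/uroman is on the PATH and reads UTF-8 text from stdin; the corpus path, language code, and vocabulary size are illustrative placeholders.
```python
# Sketch only: romanize a low-resource corpus with UROMAN, then adapt an
# mPLM tokenizer to the romanized text (file names and settings are hypothetical).
import subprocess
from transformers import AutoTokenizer, AutoModelForMaskedLM

def romanize(lines, lang_code=None):
    """Pipe sentences (one per line) through the uroman.pl CLI."""
    cmd = ["uroman.pl"] + (["-l", lang_code] if lang_code else [])
    result = subprocess.run(cmd, input="\n".join(lines),
                            capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

# 1) Romanize a low-resource-language corpus (hypothetical file and language code).
with open("corpus.am.txt", encoding="utf-8") as f:
    raw_lines = [line.strip() for line in f if line.strip()]
romanized = romanize(raw_lines, lang_code="amh")

# 2) Train a new subword tokenizer on the romanized text, reusing the pipeline
#    and special tokens of the original XLM-R tokenizer (vocab size illustrative).
base_tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
roman_tokenizer = base_tokenizer.train_new_from_iterator(iter(romanized),
                                                         vocab_size=8000)

# 3) Resize the mPLM's input embeddings to the new vocabulary before continued
#    masked-language-model training on the romanized corpus (training loop omitted).
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")
model.resize_token_embeddings(len(roman_tokenizer))
```
In the paper's experiments, continued pretraining and downstream fine-tuning (including parameter-efficient variants) would follow; those steps are omitted here.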
Related papers
- Trans-Tokenization and Cross-lingual Vocabulary Transfers: Language Adaptation of LLMs for Low-Resource NLP [13.662528492286528]
We present a novel cross-lingual vocabulary transfer strategy, trans-tokenization, designed to tackle the challenge of adapting LLMs to low-resource languages and enable more efficient language adaptation.
Our approach focuses on adapting a high-resource monolingual LLM to an unseen target language by initializing the token embeddings of the target language using a weighted average of semantically similar token embeddings from the source language; a rough sketch of this initialization follows this entry.
We introduce Hydra LLMs, models with multiple swappable language modeling heads and embedding tables, which further extend the capabilities of our trans-tokenization strategy.
arXiv Detail & Related papers (2024-08-08T08:37:28Z)
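As noted in the entry above, trans-tokenization initializes target-language token embeddings from a weighted average of semantically similar source-language embeddings. The sketch below illustrates only that initialization step under assumptions: the token-to-token mapping and similarity weights, which in practice would come from translation or alignment resources, are taken as given, and all names are hypothetical.
```python
# Illustrative sketch (not the paper's implementation): build target-language
# token embeddings as weighted averages of source-language token embeddings.
import torch

def init_target_embeddings(src_emb: torch.Tensor,
                           mapping: dict[int, list[tuple[int, float]]],
                           target_vocab_size: int) -> torch.Tensor:
    """src_emb: (src_vocab, dim) embedding matrix of the source LLM.
    mapping: target token id -> list of (source token id, similarity weight)."""
    dim = src_emb.size(1)
    # Fallback: random init for target tokens without any mapped source tokens.
    tgt_emb = torch.empty(target_vocab_size, dim).normal_(std=0.02)
    for tgt_id, pairs in mapping.items():
        ids = torch.tensor([i for i, _ in pairs])
        weights = torch.tensor([w for _, w in pairs])
        weights = weights / weights.sum()  # normalize similarity weights
        tgt_emb[tgt_id] = (weights.unsqueeze(1) * src_emb[ids]).sum(dim=0)
    return tgt_emb

# Toy usage: 3 target tokens mapped onto a 5-token source vocabulary.
src = torch.randn(5, 16)
mapping = {0: [(1, 0.7), (3, 0.3)], 1: [(0, 1.0)], 2: [(2, 0.5), (4, 0.5)]}
print(init_target_embeddings(src, mapping, target_vocab_size=3).shape)  # torch.Size([3, 16])
```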
- Breaking the Script Barrier in Multilingual Pre-Trained Language Models with Transliteration-Based Post-Training Alignment [50.27950279695363]
The transfer performance is often hindered when a low-resource target language is written in a different script than the high-resource source language.
Inspired by recent work that uses transliteration to address this problem, our paper proposes a transliteration-based post-pretraining alignment (PPA) method.
arXiv Detail & Related papers (2024-06-28T08:59:24Z)
- Towards a More Inclusive AI: Progress and Perspectives in Large Language Model Training for the Sámi Language [7.289015788793582]
This work focuses on increasing technological participation for the Sámi language.
We draw the attention of the ML community towards the language modeling problem of Ultra Low Resource (ULR) languages.
We have compiled the available Sámi language resources from the web to create a clean dataset for training language models.
arXiv Detail & Related papers (2024-05-09T13:54:22Z)
- MYTE: Morphology-Driven Byte Encoding for Better and Fairer Multilingual Language Modeling [70.34758460372629]
We introduce a new paradigm that encodes the same information with segments of consistent size across diverse languages.
MYTE produces shorter encodings for all 99 analyzed languages.
This, in turn, improves multilingual LM performance and diminishes the perplexity gap throughout diverse languages.
arXiv Detail & Related papers (2024-03-15T21:21:11Z)
- Soft Language Clustering for Multilingual Model Pre-training [57.18058739931463]
We propose XLM-P, which contextually retrieves prompts as flexible guidance for encoding instances conditionally.
Our XLM-P enables (1) lightweight modeling of language-invariant and language-specific knowledge across languages, and (2) easy integration with other multilingual pre-training methods.
arXiv Detail & Related papers (2023-06-13T08:08:08Z)
- UNKs Everywhere: Adapting Multilingual Language Models to New Scripts [103.79021395138423]
Massively multilingual language models such as multilingual BERT (mBERT) and XLM-R offer state-of-the-art cross-lingual transfer performance on a range of NLP tasks.
Due to their limited capacity and large differences in pretraining data, there is a profound performance gap between resource-rich and resource-poor target languages.
We propose novel data-efficient methods that enable quick and effective adaptation of pretrained multilingual models to such low-resource languages and unseen scripts.
arXiv Detail & Related papers (2020-12-31T11:37:28Z)
- Cross-lingual Machine Reading Comprehension with Language Branch Knowledge Distillation [105.41167108465085]
Cross-lingual Machine Reading Comprehension (CLMRC) remains a challenging problem due to the lack of large-scale datasets in low-resource languages.
We propose a novel augmentation approach named Language Branch Machine Reading Comprehension (LBMRC).
LBMRC trains multiple machine reading comprehension (MRC) models, each proficient in an individual language.
We devise a multilingual distillation approach to amalgamate knowledge from the multiple language-branch models into a single model for all target languages; a generic distillation sketch follows this entry.
arXiv Detail & Related papers (2020-10-27T13:12:17Z)
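The LBMRC entry above distills several language-branch teachers into a single student. Below is a generic multi-teacher distillation sketch, not the paper's exact objective: the teachers' temperature-softened output distributions are averaged and the student matches them with a KL-divergence loss; shapes and the temperature value are illustrative.
```python
# Generic multi-teacher distillation sketch (not the exact LBMRC objective).
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits: torch.Tensor,
                          teacher_logits: list[torch.Tensor],
                          temperature: float = 2.0) -> torch.Tensor:
    """student_logits: (batch, classes); teacher_logits: list of (batch, classes)."""
    # Average the teachers' temperature-softened distributions.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits]).mean(dim=0)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student), scaled by T^2 as is conventional in distillation.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy usage: two teachers, a batch of 4 examples, 3 answer classes.
teachers = [torch.randn(4, 3), torch.randn(4, 3)]
student = torch.randn(4, 3, requires_grad=True)
loss = multi_teacher_kd_loss(student, teachers)
loss.backward()
```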