GreenPLM: Cross-Lingual Transfer of Monolingual Pre-Trained Language
Models at Almost No Cost
- URL: http://arxiv.org/abs/2211.06993v3
- Date: Fri, 26 May 2023 13:28:36 GMT
- Title: GreenPLM: Cross-Lingual Transfer of Monolingual Pre-Trained Language
Models at Almost No Cost
- Authors: Qingcheng Zeng, Lucas Garay, Peilin Zhou, Dading Chong, Yining Hua,
Jiageng Wu, Yikang Pan, Han Zhou, Rob Voigt, Jie Yang
- Abstract summary: This study proposes a framework called GreenPLM that uses bilingual lexicons to directly "translate" pre-trained language models into another language.
We validate this approach on BERT models in 18 languages and show that this framework is comparable to, if not better than, other frameworks with high training costs.
In six out of seven tested languages, this framework outperforms the original monolingual language models with up to 200x less pre-training effort.
- Score: 7.510253441699812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large pre-trained models have revolutionized natural language processing
(NLP) research and applications, but high training costs and limited data
resources have prevented their benefits from being shared equally amongst
speakers of all the world's languages. To address issues of cross-linguistic
access to such models and reduce energy consumption for sustainability during
large-scale model training, this study proposes an effective and
energy-efficient framework called GreenPLM that uses bilingual lexicons to
directly "translate" pre-trained language models of one language into another
at almost no additional cost. We validate this approach on BERT models in 18
languages and show that this framework is comparable to, if not better than, other
heuristics with high training costs. In addition, given lightweight continued
pre-training on limited data where available, this framework outperforms the
original monolingual language models in six out of seven tested languages with
up to 200x less pre-training effort. Aiming at the Leave No One Behind
Principle (LNOB), our approach greatly reduces both inequalities between languages
and energy consumption. We make our code and models publicly available
here: https://github.com/qcznlp/GreenPLMs
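To make the lexicon-based "translation" concrete, the sketch below shows one plausible way to reuse an English BERT's embedding rows for target-language words via a bilingual lexicon. It is a minimal illustration under assumptions (the word-level `lexicon` dict, the subword-averaging choice, and the function name `translate_embeddings` are ours), not the authors' released implementation:

```python
# Minimal sketch of lexicon-driven embedding "translation" (assumption-laden,
# not the official GreenPLM code): target-language words inherit the English
# PLM's pretrained embeddings through a bilingual lexicon.
import torch
from transformers import AutoModel, AutoTokenizer

def translate_embeddings(src_model_name: str, lexicon: dict[str, str]):
    tokenizer = AutoTokenizer.from_pretrained(src_model_name)
    model = AutoModel.from_pretrained(src_model_name)
    src_emb = model.get_input_embeddings().weight.data  # (src_vocab, hidden)

    tgt_vocab, rows = [], []
    for en_word, tgt_word in lexicon.items():
        ids = tokenizer(en_word, add_special_tokens=False)["input_ids"]
        if not ids:
            continue
        # Average the English word's subword embeddings and hand the result
        # to its target-language translation (one of several plausible choices).
        rows.append(src_emb[ids].mean(dim=0))
        tgt_vocab.append(tgt_word)

    tgt_emb = torch.stack(rows)  # embedding matrix for the new target vocabulary
    return tgt_vocab, tgt_emb

# Example: tgt_vocab, tgt_emb = translate_embeddings("bert-base-uncased",
#                                                    {"house": "casa", "dog": "perro"})
```

In the framework described above, such translated embeddings would replace the original embedding layer while the Transformer body is kept as-is and, where data permit, refined by lightweight continued pre-training.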
Related papers
- Open Generative Large Language Models for Galician [1.3049334790726996]
Large language models (LLMs) have transformed natural language processing.
Yet, their predominantly English-centric training has led to biases and performance disparities across languages.
This imbalance marginalizes minoritized languages, making equitable access to NLP technologies more difficult for languages with lower resources, such as Galician.
We present the first two generative LLMs focused on Galician to bridge this gap.
arXiv Detail & Related papers (2024-06-19T23:49:56Z) - MoSECroT: Model Stitching with Static Word Embeddings for Crosslingual Zero-shot Transfer [50.40191599304911]
We introduce MoSECroT (Model Stitching with Static Word Embeddings for Crosslingual Zero-shot Transfer).
In this paper, we present the first framework that leverages relative representations to construct a common space for the embeddings of a source language PLM and the static word embeddings of a target language.
We show that although our proposed framework is competitive with weak baselines when addressing MoSECroT, it fails to achieve competitive results compared with some strong baselines.
arXiv Detail & Related papers (2024-01-09T21:09:07Z) - Soft Language Clustering for Multilingual Model Pre-training [57.18058739931463]
- Soft Language Clustering for Multilingual Model Pre-training [57.18058739931463]
We propose XLM-P, which contextually retrieves prompts as flexible guidance for encoding instances conditionally.
Our XLM-P enables (1) lightweight modeling of language-invariant and language-specific knowledge across languages, and (2) easy integration with other multilingual pre-training methods.
arXiv Detail & Related papers (2023-06-13T08:08:08Z) - Cross-Lingual Transfer Learning for Phrase Break Prediction with
Multilingual Language Model [13.730152819942445]
Cross-lingual transfer learning can be particularly effective for improving performance in low-resource languages.
This suggests that cross-lingual transfer can be an inexpensive and effective way to develop a TTS front-end for resource-poor languages.
arXiv Detail & Related papers (2023-06-05T04:10:04Z) - Efficient Language Model Training through Cross-Lingual and Progressive
Transfer Learning [0.7612676127275795]
Most Transformer language models are pretrained on English text.
As model sizes grow, the performance gap between English and other languages increases even further.
We introduce a cross-lingual and progressive transfer learning approach, called CLP-Transfer.
arXiv Detail & Related papers (2023-01-23T18:56:12Z) - Generalizing Multimodal Pre-training into Multilingual via Language
Acquisition [54.69707237195554]
English-based Vision-Language Pre-training has achieved great success in various downstream tasks.
Some efforts have been made to generalize this success to non-English languages through Multilingual Vision-Language Pre-training.
We propose a MultiLingual Acquisition (MLA) framework that can easily generalize a monolingual Vision-Language Pre-training model to the multilingual setting.
arXiv Detail & Related papers (2022-05-29T08:53:22Z) - UNKs Everywhere: Adapting Multilingual Language Models to New Scripts [103.79021395138423]
Massively multilingual language models such as multilingual BERT (mBERT) and XLM-R offer state-of-the-art cross-lingual transfer performance on a range of NLP tasks.
Due to their limited capacity and large differences in pretraining data, there is a profound performance gap between resource-rich and resource-poor target languages.
We propose novel data-efficient methods that enable quick and effective adaptation of pretrained multilingual models to such low-resource languages and unseen scripts.
arXiv Detail & Related papers (2020-12-31T11:37:28Z) - Cross-lingual Machine Reading Comprehension with Language Branch
Knowledge Distillation [105.41167108465085]
Cross-lingual Machine Reading Comprehension (CLMRC) remains a challenging problem due to the lack of large-scale datasets in low-resource languages.
We propose a novel augmentation approach named Language Branch Machine Reading Comprehension (LBMRC).
LBMRC trains multiple machine reading comprehension (MRC) models, each proficient in an individual language.
We devise a multilingual distillation approach to amalgamate knowledge from multiple language branch models to a single model for all target languages.
arXiv Detail & Related papers (2020-10-27T13:12:17Z) - Parsing with Multilingual BERT, a Small Corpus, and a Small Treebank [46.626315158735615]
- Parsing with Multilingual BERT, a Small Corpus, and a Small Treebank [46.626315158735615]
Pretrained multilingual contextual representations have shown great success, but due to the limits of their pretraining data, their benefits do not apply equally to all language varieties.
This presents a challenge for language varieties unfamiliar to these models, whose labeled and unlabeled data is too limited to train a monolingual model effectively.
We propose the use of additional language-specific pretraining and vocabulary augmentation to adapt multilingual models to low-resource settings.
arXiv Detail & Related papers (2020-09-29T16:12:52Z) - From English To Foreign Languages: Transferring Pre-trained Language
Models [0.12691047660244334]
Pre-trained models have demonstrated their effectiveness in many downstream natural language processing (NLP) tasks.
The availability of multilingual pre-trained models enables zero-shot transfer of NLP tasks from high resource languages to low resource ones.
We tackle the problem of transferring an existing pre-trained model from English to other languages under a limited computational budget.
arXiv Detail & Related papers (2020-02-18T00:22:54Z)