PEFTT: Parameter-Efficient Fine-Tuning for low-resource Tibetan
pre-trained language models
- URL: http://arxiv.org/abs/2309.12109v1
- Date: Thu, 21 Sep 2023 14:29:23 GMT
- Title: PEFTT: Parameter-Efficient Fine-Tuning for low-resource Tibetan
pre-trained language models
- Authors: Zhou Mingjun, Daiqing Zhuoma, Qun Nuo, Nyima Tashi
- Abstract summary: There is currently no large language model for Tibetan due to its low-resource nature.
We conducted three types of efficient fine-tuning experiments on the publicly available TNCC-title dataset.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this era of large language models (LLMs), training models from
scratch has become increasingly impractical for ordinary users and
institutions. Exploring efficient fine-tuning of these models for
high-resource languages is a clear and growing trend. However, there has been
very little exploration for low-resource languages such as Tibetan. Research
in Tibetan NLP is inherently scarce and limited, and although no large
language model for Tibetan exists yet because of its low-resource nature, one
will undoubtedly emerge. Research on efficient fine-tuning for low-resource
language models like Tibetan is therefore highly necessary, and our work can
serve as a reference to help fill this crucial gap. Efficient fine-tuning
strategies for Tibetan pre-trained language models (PLMs) have seen minimal
exploration. We conducted three types of efficient fine-tuning experiments on
the publicly available TNCC-title dataset: "prompt-tuning," "Adapter
lightweight fine-tuning," and "prompt-tuning + Adapter fine-tuning." The
experimental results demonstrate significant improvements from these methods,
providing valuable insights for advancing Tibetan language applications in
the context of pre-trained models.
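The paper itself does not include code in this listing; the following is a minimal sketch of the kind of prompt-tuning setup the abstract describes, assuming the HuggingFace `peft` library, a multilingual stand-in checkpoint, and illustrative hyperparameters rather than the authors' actual configuration.

```python
# Minimal sketch, not the authors' code: prompt-tuning a pre-trained encoder
# for TNCC-title classification with the HuggingFace `peft` library.
# The checkpoint, number of labels, and hyperparameters are illustrative.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PromptTuningConfig, TaskType, get_peft_model

base = "bert-base-multilingual-cased"  # stand-in for a Tibetan PLM such as TiBERT
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(
    base, num_labels=12  # illustrative; set to the number of TNCC-title classes
)

# Prompt-tuning: learn a small set of virtual prompt tokens while the backbone
# stays frozen, so only a tiny fraction of the parameters is updated.
peft_config = PromptTuningConfig(task_type=TaskType.SEQ_CLS, num_virtual_tokens=20)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # prints the trainable/total parameter ratio
```

The Adapter experiments described in the abstract would analogously inject small bottleneck modules into each Transformer layer and train only those (for example via the AdapterHub `adapters` library), and the combined setting would enable both the virtual prompt and the adapters.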
Related papers
- Investigating Recent Large Language Models for Vietnamese Machine Reading Comprehension [1.456352735394398]
We fine-tune and evaluate two state-of-the-art Large Language Models (LLMs) on ViMMRC, a Vietnamese MRC dataset.
Although our fine-tuned models are smaller than GPT-3 and GPT-3.5, they outperform both traditional BERT-based approaches and these larger models.
arXiv Detail & Related papers (2025-03-23T13:08:11Z)
- Enhancing Code Generation for Low-Resource Languages: No Silver Bullet [55.39571645315926]
Large Language Models (LLMs) rely on large and diverse datasets to learn syntax, semantics, and usage patterns of programming languages.
For low-resource languages, the limited availability of such data hampers the models' ability to generalize effectively.
We present an empirical study investigating the effectiveness of several approaches for boosting LLMs' performance on low-resource languages.
arXiv Detail & Related papers (2025-01-31T12:23:28Z)
- Leveraging Parameter Efficient Training Methods for Low Resource Text Classification: A Case Study in Marathi [0.4194295877935868]
We present a study of PEFT methods for the Indic low-resource language Marathi.
These approaches are evaluated on prominent text classification datasets like MahaSent, MahaHate, and MahaNews.
We show that these methods are competitive with full fine-tuning and can be used without loss of accuracy (a representative LoRA sketch follows this entry).
arXiv Detail & Related papers (2024-08-06T13:16:16Z)
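The Marathi study above does not list its specific PEFT methods in this summary; as one representative method from that family, the following is a hedged LoRA sketch using the HuggingFace `peft` library, with an illustrative checkpoint, label count, rank, and target modules.

```python
# Minimal sketch of one representative PEFT method (LoRA), not the paper's code.
# Checkpoint, label count, rank, and target modules are illustrative.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3  # illustrative
)
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                     # low-rank update dimension
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # BERT attention projection layers
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices and the head are trainable
```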
- Transferring BERT Capabilities from High-Resource to Low-Resource Languages Using Vocabulary Matching [1.746529892290768]
This work presents a novel approach to transfer BERT capabilities from high-resource to low-resource languages using vocabulary matching.
We conduct experiments on the Silesian and Kashubian languages and demonstrate the effectiveness of our approach to improve the performance of BERT models even when the target language has minimal training data.
arXiv Detail & Related papers (2024-02-22T09:49:26Z)
- Language Model Pre-Training with Sparse Latent Typing [66.75786739499604]
We propose a new pre-training objective, Sparse Latent Typing, which enables the model to sparsely extract sentence-level keywords with diverse latent types.
Experimental results show that our model is able to learn interpretable latent type categories in a self-supervised manner without using any external knowledge.
arXiv Detail & Related papers (2022-10-23T00:37:08Z)
- TiBERT: Tibetan Pre-trained Language Model [2.9554549423413303]
This paper collects large-scale training data from Tibetan websites and constructs a vocabulary that covers 99.95% of the words in the corpus using SentencePiece (a vocabulary-construction sketch follows this entry).
We apply TiBERT to the downstream tasks of text classification and question generation, and compare it with classic models and multilingual pre-trained models.
arXiv Detail & Related papers (2022-05-15T14:45:08Z)
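The TiBERT entry above describes constructing a subword vocabulary with SentencePiece. The sketch below shows that kind of vocabulary construction in general terms; the corpus path, vocabulary size, and coverage value are assumptions, not TiBERT's actual configuration.

```python
# Minimal sketch, not TiBERT's actual configuration: training a SentencePiece
# subword vocabulary on a Tibetan corpus. Paths and sizes are illustrative.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="tibetan_corpus.txt",   # hypothetical corpus, one sentence per line
    model_prefix="tibetan_sp",    # writes tibetan_sp.model and tibetan_sp.vocab
    vocab_size=32000,             # illustrative; TiBERT's real size may differ
    character_coverage=0.9995,    # keep rare Tibetan characters in the vocabulary
    model_type="unigram",
)

sp = spm.SentencePieceProcessor(model_file="tibetan_sp.model")
print(sp.encode("བོད་ཡིག", out_type=str))  # subword pieces for a Tibetan string
```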
- The Importance of Context in Very Low Resource Language Modeling [3.734153902687548]
In very low resource scenarios, statistical n-gram language models outperform state-of-the-art neural models.
We introduce three methods to improve a neural model's performance in the low-resource setting.
arXiv Detail & Related papers (2022-05-10T11:19:56Z)
- Fine-Tuning Large Neural Language Models for Biomedical Natural Language Processing [55.52858954615655]
We conduct a systematic study on fine-tuning stability in biomedical NLP.
We show that fine-tuning performance may be sensitive to pre-training settings, especially in low-resource domains.
We show that these techniques can substantially improve fine-tuning performance for low-resource biomedical NLP applications.
arXiv Detail & Related papers (2021-12-15T04:20:35Z)
- bert2BERT: Towards Reusable Pretrained Language Models [51.078081486422896]
We propose bert2BERT, which can effectively transfer the knowledge of an existing smaller pre-trained model to a large model.
bert2BERT saves about 45% and 47% of the computational cost of pre-training BERT_BASE and GPT_BASE, respectively, by reusing models of roughly half their size.
arXiv Detail & Related papers (2021-10-14T04:05:25Z)
- BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models [51.53936551681613]
We show that fine-tuning only the bias terms (or a subset of the bias terms) of pre-trained BERT models is competitive with (and sometimes better than) fine-tuning the entire model.
These results support the hypothesis that fine-tuning mainly exposes knowledge induced by language-modeling training, rather than learning new task-specific linguistic knowledge (a minimal bias-only fine-tuning sketch follows this entry).
arXiv Detail & Related papers (2021-06-18T16:09:21Z)
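As a rough illustration of the BitFit idea summarized above (updating only the bias terms), the following sketch freezes everything else; the checkpoint, label count, and the decision to keep the task head trainable are assumptions rather than the authors' exact setup.

```python
# Minimal sketch of the BitFit idea, not the authors' code: freeze everything
# in a pre-trained BERT classifier except the bias terms (and the task head).
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # checkpoint and label count are illustrative
)

for name, param in model.named_parameters():
    # Bias terms and the newly initialized classifier stay trainable; all other
    # weights are frozen, so only a small fraction of parameters is updated.
    param.requires_grad = name.endswith("bias") or name.startswith("classifier")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable}/{total} ({100 * trainable / total:.2f}%)")
```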
- Fine-tuning BERT for Low-Resource Natural Language Understanding via Active Learning [30.5853328612593]
In this work, we explore fine-tuning methods for BERT, a pre-trained Transformer-based language model.
Our experimental results show improved model performance when maximizing the approximate knowledge gain of the model.
We analyze the benefits of freezing layers of the language model during fine-tuning to reduce the number of trainable parameters (a layer-freezing sketch follows this entry).
arXiv Detail & Related papers (2020-12-04T08:34:39Z)
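As a rough illustration of the layer-freezing strategy mentioned in the entry above, the following sketch freezes the embeddings and the lower encoder layers of BERT; the checkpoint and the number of frozen layers are assumptions, not the paper's exact recipe.

```python
# Minimal sketch, not the paper's exact recipe: freeze the embeddings and the
# lower encoder layers of BERT so that only the upper layers and the task head
# are fine-tuned. The checkpoint and the cut-off (8 of 12 layers) are illustrative.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # illustrative
)

for param in model.bert.embeddings.parameters():
    param.requires_grad = False           # freeze token/position embeddings
for layer in model.bert.encoder.layer[:8]:
    for param in layer.parameters():
        param.requires_grad = False       # freeze the bottom 8 encoder layers

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```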
- Comparison of Interactive Knowledge Base Spelling Correction Models for Low-Resource Languages [81.90356787324481]
Spelling normalization for low resource languages is a challenging task because the patterns are hard to predict.
This work compares a neural model and character language models with varying amounts of target language data.
Our usage scenario is interactive correction with nearly zero training examples, improving the models as more data is collected.
arXiv Detail & Related papers (2020-10-20T17:31:07Z)
- Exploring Fine-tuning Techniques for Pre-trained Cross-lingual Models via Continual Learning [74.25168207651376]
Fine-tuning pre-trained language models on downstream cross-lingual tasks has shown promising results.
We leverage continual learning to preserve the cross-lingual ability of the pre-trained model when fine-tuning it on downstream tasks.
Our methods achieve better performance than other fine-tuning baselines on the zero-shot cross-lingual part-of-speech tagging and named entity recognition tasks.
arXiv Detail & Related papers (2020-04-29T14:07:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.