Data-Efficient French Language Modeling with CamemBERTa
- URL: http://arxiv.org/abs/2306.01497v1
- Date: Fri, 2 Jun 2023 12:45:34 GMT
- Title: Data-Efficient French Language Modeling with CamemBERTa
- Authors: Wissam Antoun, Benoît Sagot, Djamé Seddah
- Abstract summary: We introduce CamemBERTa, a French DeBERTa model that builds upon the DeBERTaV3 architecture and training objective.
We evaluate our model's performance on a variety of French downstream tasks and datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent advances in NLP have significantly improved the performance of
language models on a variety of tasks. While these advances are largely driven
by the availability of large amounts of data and computational power, they also
benefit from the development of better training methods and architectures. In
this paper, we introduce CamemBERTa, a French DeBERTa model that builds upon
the DeBERTaV3 architecture and training objective. We evaluate our model's
performance on a variety of French downstream tasks and datasets, including
question answering, part-of-speech tagging, dependency parsing, named entity
recognition, and the FLUE benchmark, and compare against CamemBERT, the
state-of-the-art monolingual model for French. Our results show that, given the
same number of training tokens, our model outperforms BERT-based models trained
with MLM on most tasks. Furthermore, our new model reaches similar or superior
performance on downstream tasks compared to CamemBERT, despite being trained on
only 30% of its total number of input tokens. In addition to our experimental
results, we also publicly release the weights and code implementation of
CamemBERTa, making it the first publicly available DeBERTaV3 model outside of
the original paper and the first openly available implementation of a DeBERTaV3
training objective. https://gitlab.inria.fr/almanach/CamemBERTa
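As a quick orientation for using the released model, here is a minimal sketch in Python with the Hugging Face transformers library. It assumes the checkpoint is also mirrored on the Hugging Face Hub under a name such as almanach/camemberta-base; that identifier is an assumption, and the GitLab repository above remains the authoritative release.

```python
# Minimal sketch (not from the paper): load the released CamemBERTa encoder and
# extract contextual embeddings for a French sentence with Hugging Face transformers.
# The Hub identifier below is an assumption; the GitLab repository is the official release.
from transformers import AutoModel, AutoTokenizer

model_name = "almanach/camemberta-base"  # hypothetical Hub mirror of the GitLab checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Tokenize a French sentence and run the encoder.
inputs = tokenizer("Le camembert est un fromage de Normandie.", return_tensors="pt")
outputs = model(**inputs)

# One contextual vector per token: (batch_size, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```

Fine-tuning for the tasks mentioned in the abstract (POS tagging, NER, dependency parsing, question answering) would then follow the usual transformers token- or span-classification recipes.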
Related papers
- CamemBERT 2.0: A Smarter French Language Model Aged to Perfection [14.265650708194789]
We introduce two new versions of the CamemBERT base model, CamemBERTav2 and CamemBERTv2, designed to address these challenges.
Both models are trained on a significantly larger and more recent dataset with longer context length and an updated tokenizer.
Our results show that these updated models vastly outperform their predecessors, making them valuable tools for modern NLP systems.
arXiv Detail & Related papers (2024-11-13T18:49:35Z)
- MTEB-French: Resources for French Sentence Embedding Evaluation and Analysis [1.5761916307614148]
We propose the first benchmark of sentence embeddings for French.
We compare 51 carefully selected embedding models on a large scale.
We find that, although no single model is best on all tasks, large multilingual models pre-trained on sentence similarity perform exceptionally well.
arXiv Detail & Related papers (2024-05-30T20:34:37Z)
- Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese [4.4681678689625715]
We analyse the effect of pre-training with monolingual data for a low-resource language.
We present a newly created corpus for Maltese, and determine the effect that the pre-training data size and domain have on the downstream performance.
We compare two models on the new corpus: a monolingual BERT model trained from scratch (BERTu), and a further pre-trained multilingual BERT (mBERTu).
arXiv Detail & Related papers (2022-05-21T06:44:59Z)
- MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation [68.30497162547768]
We propose MoEBERT, which uses a Mixture-of-Experts structure to increase model capacity and inference speed.
We validate the efficiency and effectiveness of MoEBERT on natural language understanding and question answering tasks.
arXiv Detail & Related papers (2022-04-15T23:19:37Z)
- DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing [117.41016786835452]
This paper presents a new pre-trained language model, DeBERTaV3, which improves the original DeBERTa model.
We show that vanilla embedding sharing in ELECTRA hurts training efficiency and model performance.
We propose a new gradient-disentangled embedding sharing method that avoids the tug-of-war dynamics; a toy sketch of the idea appears after this list.
arXiv Detail & Related papers (2021-11-18T06:48:00Z)
- PAGnol: An Extra-Large French Generative Model [53.40189314359048]
We introduce PAGnol, a collection of French GPT models.
Using scaling laws, we efficiently train PAGnol-XL with the same computational budget as CamemBERT.
arXiv Detail & Related papers (2021-10-16T11:44:23Z)
- bert2BERT: Towards Reusable Pretrained Language Models [51.078081486422896]
We propose bert2BERT, which can effectively transfer the knowledge of an existing smaller pre-trained model to a large model.
bert2BERT saves about 45% and 47% of the computational cost of pre-training BERT_BASE and GPT_BASE, respectively, by reusing models of almost half their size.
arXiv Detail & Related papers (2021-10-14T04:05:25Z)
- Towards Efficient NLP: A Standard Evaluation and A Strong Baseline [55.29756535335831]
This work presents ELUE (Efficient Language Understanding Evaluation), a standard evaluation, and a public leaderboard for efficient NLP models.
Along with the benchmark, we also pre-train and release a strong baseline, ElasticBERT, whose elasticity is both static and dynamic.
arXiv Detail & Related papers (2021-10-13T21:17:15Z)
- ParsBERT: Transformer-based Model for Persian Language Understanding [0.7646713951724012]
This paper proposes a monolingual BERT for the Persian language (ParsBERT).
It achieves state-of-the-art performance compared to other architectures and multilingual models.
ParsBERT obtains higher scores on all datasets, including existing ones as well as newly composed ones.
arXiv Detail & Related papers (2020-05-26T05:05:32Z)
- Revisiting Pre-Trained Models for Chinese Natural Language Processing [73.65780892128389]
We revisit Chinese pre-trained language models to examine their effectiveness in a non-English language.
We also propose a model called MacBERT, which improves upon RoBERTa in several ways.
arXiv Detail & Related papers (2020-04-29T02:08:30Z)
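The DeBERTaV3 entry above mentions gradient-disentangled embedding sharing (GDES) between the ELECTRA-style generator and discriminator. The following PyTorch sketch illustrates one plausible reading of that idea; the class and method names are hypothetical and simplified, not the authors' implementation. The discriminator reuses the generator's token embeddings through a stop-gradient plus a residual delta table, so the replaced-token-detection loss never pulls on the shared embeddings.

```python
# Toy sketch (an assumption, not the official DeBERTaV3 code) of gradient-disentangled
# embedding sharing: the generator (MLM) owns the shared embedding table, while the
# discriminator (replaced-token detection) only trains a residual "delta" table on top
# of a detached copy, avoiding the MLM-vs-RTD tug of war on the shared embeddings.
import torch
import torch.nn as nn

class GDESEmbeddings(nn.Module):
    def __init__(self, vocab_size: int, hidden_size: int):
        super().__init__()
        self.shared = nn.Embedding(vocab_size, hidden_size)  # updated only by the generator loss
        self.delta = nn.Embedding(vocab_size, hidden_size)   # updated only by the discriminator loss
        nn.init.zeros_(self.delta.weight)                     # discriminator starts from the shared table exactly

    def for_generator(self, input_ids: torch.Tensor) -> torch.Tensor:
        return self.shared(input_ids)

    def for_discriminator(self, input_ids: torch.Tensor) -> torch.Tensor:
        # detach() blocks RTD gradients from reaching the shared table;
        # they flow into the delta table instead.
        return self.shared(input_ids).detach() + self.delta(input_ids)

# Quick check that the gradient routing behaves as described.
emb = GDESEmbeddings(vocab_size=100, hidden_size=8)
ids = torch.randint(0, 100, (2, 5))
emb.for_discriminator(ids).sum().backward()
print(emb.shared.weight.grad is None)      # True: shared table untouched by the RTD path
print(emb.delta.weight.grad is not None)   # True: delta table receives the gradient
```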
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.