BIOptimus: Pre-training an Optimal Biomedical Language Model with
Curriculum Learning for Named Entity Recognition
- URL: http://arxiv.org/abs/2308.08625v1
- Date: Wed, 16 Aug 2023 18:48:01 GMT
- Title: BIOptimus: Pre-training an Optimal Biomedical Language Model with
Curriculum Learning for Named Entity Recognition
- Authors: Vera Pavlova and Mohammed Makhlouf
- Abstract summary: Using language models (LMs) pre-trained in a self-supervised setting on large corpora has helped to deal with the problem of limited labeled data.
Recent research in biomedical language processing has offered a number of pre-trained biomedical LMs.
This paper aims to investigate different pre-training methods, such as pre-training the biomedical LM from scratch and pre-training it in a continued fashion.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Using language models (LMs) pre-trained in a self-supervised setting on large
corpora and then fine-tuning for a downstream task has helped to deal with the
problem of limited labeled data for supervised learning tasks such as Named
Entity Recognition (NER). Recent research in biomedical language processing has
offered a number of biomedical LMs pre-trained using different methods and
techniques that advance results on many BioNLP tasks, including NER. However,
there is still a lack of a comprehensive comparison of pre-training approaches
that would work more optimally in the biomedical domain. This paper aims to
investigate different pre-training methods, such as pre-training the biomedical
LM from scratch and pre-training it in a continued fashion. We compare existing
methods with our proposed pre-training method of initializing weights for new
tokens by distilling existing weights from the BERT model inside the context
where the tokens were found. The method helps to speed up the pre-training
stage and improve performance on NER. In addition, we compare how masking rate,
corruption strategy, and masking strategies impact the performance of the
biomedical LM. Finally, using the insights from our experiments, we introduce a
new biomedical LM (BIOptimus), which is pre-trained using Curriculum Learning
(CL) and our contextualized weight distillation method. Our model sets a new
state of the art on several biomedical NER tasks. We release our code and all
pre-trained models.
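As a rough illustration of what contextualized weight distillation could look like in code, the Python sketch below initializes the embedding of a newly added biomedical token from the base BERT model's contextual representations of the token's original subword pieces in sample contexts. The helper name, the mean-pooling scheme, and the usage example are assumptions made for illustration, not the authors' released implementation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative only: the helper name and the mean-pooling scheme are
# assumptions about "contextualized weight distillation"; see the authors'
# released code for the actual procedure.
def init_new_token_embedding(new_token, contexts, base_model, base_tokenizer):
    piece_ids = base_tokenizer.convert_tokens_to_ids(
        base_tokenizer.tokenize(new_token))
    reps = []
    for text in contexts:
        enc = base_tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = base_model(**enc).last_hidden_state[0]  # (seq_len, dim)
        ids = enc["input_ids"][0].tolist()
        # Average the contextual states of the subword pieces that spell
        # out the new token, wherever they occur in this context.
        n = len(piece_ids)
        for i in range(len(ids) - n + 1):
            if ids[i:i + n] == piece_ids:
                reps.append(hidden[i:i + n].mean(dim=0))
    return torch.stack(reps).mean(dim=0) if reps else None

base_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
base_model = AutoModel.from_pretrained("bert-base-uncased")
vec = init_new_token_embedding(
    "angiogenesis",
    ["VEGF promotes angiogenesis in tumour tissue."],
    base_model, base_tokenizer)
```

In practice such a vector would overwrite the corresponding row of the resized embedding matrix (for example after calling model.resize_token_embeddings), so pre-training starts from informed rather than random new-token weights, which is the speed-up the abstract refers to.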
Related papers
- How Important is Domain Specificity in Language Models and Instruction
Finetuning for Biomedical Relation Extraction? [1.7555695340815782]
General-domain models typically outperformed biomedical-domain models.
Biomedical instruction finetuning improved performance to a similar degree as general instruction finetuning.
Our findings suggest it may be more fruitful to focus research effort on larger-scale biomedical instruction finetuning of general LMs.
arXiv Detail & Related papers (2024-02-21T01:57:58Z)
- Multi-level biomedical NER through multi-granularity embeddings and enhanced labeling [3.8599767910528917]
This paper proposes a hybrid approach that integrates the strengths of multiple models, as in the sketch below.
BERT provides contextualized word embeddings, a pre-trained multi-channel CNN captures character-level information, and a BiLSTM + CRF performs sequence labelling and models dependencies between the words in the text.
We evaluate our model on the benchmark i2b2/2010 dataset, achieving an F1-score of 90.11.
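A compact PyTorch sketch of such a hybrid tagger follows. The layer sizes, the single-width character CNN (the paper describes a multi-channel CNN), and the plain linear emission layer standing in for the CRF decoder are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class HybridNERTagger(nn.Module):
    """BERT word embeddings + character-level CNN features + BiLSTM.
    A CRF decoder (omitted here for brevity) would sit on the emissions."""
    def __init__(self, num_tags, char_vocab=128, char_dim=32, char_filters=64):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-cased")
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        lstm_in = self.bert.config.hidden_size + char_filters
        self.bilstm = nn.LSTM(lstm_in, 256, batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * 256, num_tags)

    def forward(self, input_ids, attention_mask, char_ids):
        # char_ids: (batch, seq_len, max_word_len) character indices per token
        tok = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        b, s, w = char_ids.shape
        chars = self.char_emb(char_ids.view(b * s, w)).transpose(1, 2)
        chars = torch.relu(self.char_cnn(chars)).max(dim=2).values.view(b, s, -1)
        feats, _ = self.bilstm(torch.cat([tok, chars], dim=-1))
        return self.emissions(feats)  # per-token tag scores for the CRF/decoder
```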
arXiv Detail & Related papers (2023-12-24T21:45:36Z)
- Diversifying Knowledge Enhancement of Biomedical Language Models using Adapter Modules and Knowledge Graphs [54.223394825528665]
We develop an approach that uses lightweight adapter modules to inject structured biomedical knowledge into pre-trained language models.
We use two large KGs, the biomedical knowledge system UMLS and the novel biochemical OntoChem, with two prominent biomedical PLMs, PubMedBERT and BioLinkBERT.
We show that our methodology leads to performance improvements in several instances while keeping requirements in computing power low.
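The kind of lightweight adapter involved can be sketched in a few lines of PyTorch; the bottleneck size, activation, and placement are illustrative assumptions rather than the paper's configuration.

```python
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small residual bottleneck module inserted into a frozen PLM layer and
    trained on knowledge-graph-derived data (sizes are illustrative)."""
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        # The residual connection keeps the frozen backbone's representation intact.
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```

Only the adapter parameters, a small fraction of the frozen backbone, are updated during knowledge injection, which is what keeps the computing requirements low.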
arXiv Detail & Related papers (2023-12-21T14:26:57Z)
- Improving Biomedical Entity Linking with Retrieval-enhanced Learning [53.24726622142558]
$k$NN-BioEL provides a BioEL model with the ability to reference similar instances from the entire training corpus as clues for prediction.
We show that $k$NN-BioEL outperforms state-of-the-art baselines on several datasets.
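The retrieval idea can be illustrated with a short sketch in which the k nearest training mentions vote on the entity for a new mention; the encoder, the softmax temperature, and how the scores are interpolated with the base model are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def knn_entity_scores(query_vec, train_vecs, train_entity_ids, num_entities,
                      k=8, temperature=0.1):
    """Score candidate entities for a mention by letting its k nearest
    training mentions vote; typically interpolated with the base model."""
    sims = F.cosine_similarity(query_vec.unsqueeze(0), train_vecs)  # (N,)
    top_sim, top_idx = sims.topk(k)
    weights = torch.softmax(top_sim / temperature, dim=0)
    scores = torch.zeros(num_entities)
    scores.index_add_(0, train_entity_ids[top_idx], weights)
    return scores
```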
arXiv Detail & Related papers (2023-12-15T14:04:23Z)
- UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition [4.865221751784403]
This work contributes a data-centric paradigm for enriching the language representations of biomedical transformer-encoder LMs by extracting text sequences from the UMLS.
Preliminary results from experiments in the extension of pre-trained LMs as well as training from scratch show that this framework improves downstream performance on multiple biomedical and clinical Named Entity Recognition (NER) tasks.
arXiv Detail & Related papers (2023-07-20T18:08:34Z)
- BERT WEAVER: Using WEight AVERaging to enable lifelong learning for transformer-based models in biomedical semantic search engines [49.75878234192369]
We present WEAVER, a simple, yet efficient post-processing method that infuses old knowledge into the new model.
We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once.
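A minimal sketch of that post-processing step, assuming a simple fixed mixing coefficient (the paper's actual weighting scheme may differ):

```python
import torch

def average_weights(old_state, new_state, alpha=0.5):
    """Blend the previous model's parameters into the newly trained model.
    alpha=0.5 is an illustrative default, not the paper's scheme; both
    state dicts are assumed to come from the same architecture."""
    return {name: alpha * old_state[name] + (1.0 - alpha) * new_state[name]
            for name in new_state}

# Usage: new_model.load_state_dict(
#     average_weights(old_model.state_dict(), new_model.state_dict()))
```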
arXiv Detail & Related papers (2022-02-21T10:34:41Z)
- Fine-Tuning Large Neural Language Models for Biomedical Natural Language Processing [55.52858954615655]
We conduct a systematic study on fine-tuning stability in biomedical NLP.
We show that fine-tuning performance may be sensitive to pretraining settings, especially in low-resource domains.
We show that techniques for addressing fine-tuning instability can substantially improve fine-tuning performance for low-resource biomedical NLP applications.
arXiv Detail & Related papers (2021-12-15T04:20:35Z)
- Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing [73.37262264915739]
We show that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains.
Our experiments show that domain-specific pretraining serves as a solid foundation for a wide range of biomedical NLP tasks.
arXiv Detail & Related papers (2020-07-31T00:04:15Z)
- Pre-training technique to localize medical BERT and enhance biomedical BERT [0.0]
It is difficult to train specific BERT models that perform well for domains in which there are few publicly available databases of high quality and large size.
We propose a single intervention: simultaneous pre-training after up-sampling the small domain corpus, combined with an amplified vocabulary.
Our Japanese medical BERT outperformed conventional baselines and the other BERT models on the medical document classification task.
arXiv Detail & Related papers (2020-05-14T18:00:01Z)
- Pre-training Text Representations as Meta Learning [113.3361289756749]
We introduce a learning algorithm which directly optimizes the model's ability to learn text representations for effective learning of downstream tasks.
We show that there is an intrinsic connection between multi-task pre-training and model-agnostic meta-learning with a sequence of meta-train steps.
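A first-order meta-train step of the kind the paper relates to multi-task pre-training might look like the sketch below; the task batching, loss function, and learning rates are illustrative assumptions.

```python
import copy
import torch

def meta_train_step(model, tasks, loss_fn, inner_lr=1e-3, outer_lr=1e-4):
    """One first-order meta-train step. `tasks` is a list of
    (support_batch, query_batch) pairs and `loss_fn(model, batch)` returns a
    scalar loss; all of these are illustrative assumptions."""
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for support_batch, query_batch in tasks:
        fast = copy.deepcopy(model)          # task-specific copy to adapt
        params = list(fast.parameters())
        # Inner step: adapt the copy on the task's support set.
        grads = torch.autograd.grad(loss_fn(fast, support_batch), params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= inner_lr * g
        # Outer step: evaluate the adapted copy on the query set and
        # accumulate first-order gradients for the original parameters.
        grads = torch.autograd.grad(loss_fn(fast, query_batch), params)
        for mg, g in zip(meta_grads, grads):
            mg += g
    with torch.no_grad():
        for p, mg in zip(model.parameters(), meta_grads):
            p -= outer_lr * mg / len(tasks)
```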
arXiv Detail & Related papers (2020-04-12T09:05:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.