BioALBERT: A Simple and Effective Pre-trained Language Model for
Biomedical Named Entity Recognition
- URL: http://arxiv.org/abs/2009.09223v1
- Date: Sat, 19 Sep 2020 12:58:47 GMT
- Title: BioALBERT: A Simple and Effective Pre-trained Language Model for
Biomedical Named Entity Recognition
- Authors: Usman Naseem, Matloob Khushi, Vinay Reddy, Sakthivel Rajendran, Imran
Razzak, Jinman Kim
- Abstract summary: Existing BioNER approaches often neglect these issues and directly adopt the state-of-the-art (SOTA) models.
We propose biomedical ALBERT, an effective domain-specific language model trained on large-scale biomedical corpora.
- Score: 9.05154470433578
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, with the growing amount of biomedical documents, coupled
with advancement in natural language processing algorithms, the research on
biomedical named entity recognition (BioNER) has increased exponentially.
However, BioNER research is challenging because NER in the biomedical domain is:
(i) often restricted by the limited amount of training data, (ii) complicated by
entities that can refer to multiple types and concepts depending on their context,
and (iii) heavily reliant on acronyms that are sub-domain specific. Existing BioNER
approaches often neglect these issues and directly adopt state-of-the-art (SOTA)
models trained on general corpora, which often yields unsatisfactory results. We
propose BioALBERT (A Lite Bidirectional Encoder Representations from Transformers
for Biomedical Text Mining), an effective domain-specific language model trained
on large-scale biomedical corpora and designed to capture biomedical
context-dependent NER. We adopted a
self-supervised loss used in ALBERT that focuses on modelling inter-sentence
coherence to better learn context-dependent representations and incorporated
parameter reduction techniques to lower memory consumption and increase the
training speed in BioNER. In our experiments, BioALBERT outperformed
comparative SOTA BioNER models on eight biomedical NER benchmark datasets with
four different entity types. We trained four different variants of BioALBERT,
which are available to the research community for use in future research.
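As a concrete illustration of the fine-tuning setup described above, the following is a minimal sketch of token-classification fine-tuning with an ALBERT-style encoder using HuggingFace Transformers. The checkpoint name, label set, and example sentence are illustrative stand-ins (the released BioALBERT variants are not identified by name here), so treat this as a sketch under those assumptions rather than the paper's exact pipeline.

```python
# Minimal sketch: fine-tuning an ALBERT-style encoder for BioNER as token
# classification with HuggingFace Transformers. "albert-base-v2" is a generic
# ALBERT stand-in, not one of the released BioALBERT checkpoints.
import torch
from transformers import AlbertTokenizerFast, AlbertForTokenClassification

labels = ["O", "B-Chemical", "I-Chemical", "B-Disease", "I-Disease"]  # example BIO tag set
tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
model = AlbertForTokenClassification.from_pretrained(
    "albert-base-v2", num_labels=len(labels)
)

tokens = ["Aspirin", "reduces", "fever", "."]
tags = ["B-Chemical", "O", "O", "O"]

# Align word-level BIO tags with ALBERT's subword pieces; label only the first
# subword of each word and ignore the rest (-100 is skipped by the loss).
enc = tokenizer(tokens, is_split_into_words=True, return_tensors="pt")
aligned, prev_word = [], None
for word_id in enc.word_ids():
    if word_id is None or word_id == prev_word:
        aligned.append(-100)
    else:
        aligned.append(labels.index(tags[word_id]))
    prev_word = word_id

out = model(**enc, labels=torch.tensor([aligned]))
out.loss.backward()          # one optimizer step would follow in a training loop
print(out.logits.shape)      # (1, sequence_length, num_labels)
```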
Related papers
- Augmenting Biomedical Named Entity Recognition with General-domain Resources [47.24727904076347]
Training a neural network-based biomedical named entity recognition (BioNER) model usually requires extensive and costly human annotations.
We propose GERBERA, a simple-yet-effective method that utilizes a general-domain NER dataset for training.
We systematically evaluated GERBERA on five datasets of eight entity types, collectively consisting of 81,410 instances.
arXiv Detail & Related papers (2024-06-15T15:28:02Z)
- Multi-level biomedical NER through multi-granularity embeddings and enhanced labeling [3.8599767910528917]
This paper proposes a hybrid approach that integrates the strengths of multiple models.
BERT provides contextualized word embeddings, a pre-trained multi-channel CNN captures character-level information, and a BiLSTM + CRF layer performs sequence labelling, modelling dependencies between the words in the text.
We evaluate our model on the benchmark i2b2/2010 dataset, achieving an F1-score of 90.11.
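A compact sketch of such a hybrid tagger is shown below: contextual word embeddings concatenated with character-level CNN features, fed to a BiLSTM, and decoded with a CRF. It assumes the `pytorch-crf` package for the CRF layer, and the encoder name, dimensions, and alignment of character inputs are illustrative assumptions rather than the paper's configuration.

```python
# Sketch of a hybrid BERT + char-CNN + BiLSTM + CRF tagger. Assumes the
# `pytorch-crf` package; sizes and names are illustrative.
import torch
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF

class HybridBiLSTMCRF(nn.Module):
    def __init__(self, num_tags, char_vocab=100, char_dim=30, char_filters=50,
                 lstm_hidden=256, encoder_name="bert-base-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)   # contextual embeddings
        self.char_emb = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        # Multi-channel character CNN: several kernel widths over each word's characters.
        self.char_convs = nn.ModuleList(
            nn.Conv1d(char_dim, char_filters, kernel_size=k, padding=k // 2)
            for k in (2, 3, 4)
        )
        feat_dim = self.encoder.config.hidden_size + char_filters * 3
        self.bilstm = nn.LSTM(feat_dim, lstm_hidden, batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * lstm_hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, char_ids, tags=None):
        # char_ids: (batch, seq_len, max_word_len), assumed aligned with input_ids.
        word_repr = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        b, s, w = char_ids.shape
        chars = self.char_emb(char_ids).view(b * s, w, -1).transpose(1, 2)
        char_repr = torch.cat(
            [conv(chars).max(dim=2).values for conv in self.char_convs], dim=1
        ).view(b, s, -1)
        features, _ = self.bilstm(torch.cat([word_repr, char_repr], dim=-1))
        scores = self.emissions(features)
        mask = attention_mask.bool()
        if tags is not None:   # training: negative log-likelihood under the CRF
            return -self.crf(scores, tags, mask=mask, reduction="mean")
        return self.crf.decode(scores, mask=mask)   # inference: best tag sequences
```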
arXiv Detail & Related papers (2023-12-24T21:45:36Z)
- Diversifying Knowledge Enhancement of Biomedical Language Models using Adapter Modules and Knowledge Graphs [54.223394825528665]
We develop an approach that uses lightweight adapter modules to inject structured biomedical knowledge into pre-trained language models.
We use two large KGs, the biomedical knowledge system UMLS and the novel biochemical OntoChem, with two prominent biomedical PLMs, PubMedBERT and BioLinkBERT.
We show that our methodology leads to performance improvements in several instances while keeping requirements in computing power low.
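The general idea behind such lightweight adapters can be sketched as a small bottleneck module with a residual connection that is attached to a frozen pre-trained encoder; only the adapter weights are trained. The dimensions and attachment point below are common defaults, not the paper's exact configuration.

```python
# Minimal sketch of a bottleneck adapter: a small trainable module with a
# residual connection that can be attached to a frozen pre-trained encoder.
# Dimensions and attachment point are illustrative assumptions.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)   # project down
        self.up = nn.Linear(bottleneck, hidden_size)     # project back up
        self.act = nn.GELU()

    def forward(self, hidden_states):
        # The residual keeps the frozen model's representation intact; only the
        # small adapter weights are updated when injecting external knowledge.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

adapter = BottleneckAdapter()
states = torch.randn(2, 16, 768)   # (batch, seq_len, hidden)
print(adapter(states).shape)       # torch.Size([2, 16, 768])
```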
arXiv Detail & Related papers (2023-12-21T14:26:57Z) - Biomedical Language Models are Robust to Sub-optimal Tokenization [30.175714262031253]
Most modern biomedical language models (LMs) are pre-trained using standard domain-specific tokenizers.
We find that pre-training a biomedical LM using a more accurate biomedical tokenizer does not improve the entity representation quality of a language model.
arXiv Detail & Related papers (2023-06-30T13:35:24Z) - BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks [68.39821375903591]
Generalist AI holds the potential to address limitations due to its versatility in interpreting different data types.
Here, we propose BiomedGPT, the first open-source and lightweight vision-language foundation model.
arXiv Detail & Related papers (2023-05-26T17:14:43Z) - BioAug: Conditional Generation based Data Augmentation for Low-Resource
Biomedical NER [52.79573512427998]
We present BioAug, a novel data augmentation framework for low-resource BioNER.
BioAug is trained to solve a novel text reconstruction task based on selective masking and knowledge augmentation.
We demonstrate the effectiveness of BioAug on 5 benchmark BioNER datasets.
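A small illustration of the selective-masking idea is given below: entity spans are hidden so that a generator must reconstruct them. The helper function, masking probability, and mask token are illustrative assumptions; BioAug's actual masking and knowledge-augmentation procedure is described in the paper.

```python
# Illustration of selective masking for a text-reconstruction style
# augmentation objective: tokens inside entity spans are hidden so a
# generator must reconstruct them. Helper and mask token are illustrative.
import random

def selectively_mask(tokens, bio_tags, mask_token="[MASK]", p=0.7):
    """Mask tokens inside entity spans with probability p; keep other tokens."""
    masked = []
    for tok, tag in zip(tokens, bio_tags):
        inside_entity = tag != "O"
        masked.append(mask_token if inside_entity and random.random() < p else tok)
    return masked

tokens = ["Aspirin", "inhibits", "platelet", "aggregation"]
tags = ["B-Chemical", "O", "O", "O"]
print(selectively_mask(tokens, tags))  # e.g. ['[MASK]', 'inhibits', 'platelet', 'aggregation']
```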
arXiv Detail & Related papers (2023-05-18T02:04:38Z)
- AIONER: All-in-one scheme-based biomedical named entity recognition using deep learning [7.427654811697884]
We present AIONER, a general-purpose BioNER tool based on cutting-edge deep learning and our AIO schema.
AIONER is effective, robust, and compares favorably to other state-of-the-art approaches such as multi-task learning.
arXiv Detail & Related papers (2022-11-30T12:35:00Z)
- BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining [140.61707108174247]
We propose BioGPT, a domain-specific generative Transformer language model pre-trained on large scale biomedical literature.
We get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks respectively, and 78.2% accuracy on PubMedQA.
arXiv Detail & Related papers (2022-10-19T07:17:39Z)
- On the Effectiveness of Compact Biomedical Transformers [12.432191400869002]
Language models pre-trained on biomedical corpora have recently shown promising results on downstream biomedical tasks.
Many existing pre-trained models are resource-intensive and computationally heavy owing to factors such as embedding size, hidden dimension, and number of layers.
We introduce six lightweight models, namely, BioDistilBERT, BioTinyBERT, BioMobileBERT, DistilBioBERT, TinyBioBERT, and CompactBioBERT.
We evaluate all of our models on three biomedical tasks and compare them with BioBERT-v1.1, showing that efficient lightweight models can perform on par with their larger counterparts.
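Compact models of this kind are typically obtained with a knowledge-distillation objective; a brief, generic sketch follows. The student matches the teacher's temperature-softened output distribution while also fitting the hard labels. The temperature and weighting below are illustrative defaults, not the settings used for the models above.

```python
# Sketch of a standard knowledge-distillation loss: soft targets from a large
# teacher combined with the usual hard-label cross-entropy. Temperature and
# weighting are illustrative defaults.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale gradients by T^2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(4, 5)   # (batch, num_classes) logits from the small model
teacher = torch.randn(4, 5)   # logits from the large teacher
labels = torch.tensor([0, 2, 1, 4])
print(distillation_loss(student, teacher, labels).item())
```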
arXiv Detail & Related papers (2022-09-07T14:24:04Z)
- Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing [73.37262264915739]
We show that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains.
Our experiments show that domain-specific pretraining serves as a solid foundation for a wide range of biomedical NLP tasks.
arXiv Detail & Related papers (2020-07-31T00:04:15Z)