BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining
- URL: http://arxiv.org/abs/2210.10341v3
- Date: Mon, 3 Apr 2023 06:49:33 GMT
- Title: BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining
- Authors: Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, Tie-Yan Liu
- Abstract summary: We propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature.
We achieve 44.98%, 38.42%, and 40.76% F1 on the BC5CDR, KD-DTI, and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA.
- Score: 140.61707108174247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Of the two main branches of pre-trained language models in the general language domain, i.e., BERT (and its variants) and GPT (and its variants), the first has been extensively studied in the biomedical domain, with models such as BioBERT and PubMedBERT. While these have achieved great success on a variety of discriminative downstream biomedical tasks, their lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical NLP tasks and demonstrate that our model outperforms previous models on most of them. In particular, we achieve 44.98%, 38.42%, and 40.76% F1 on the BC5CDR, KD-DTI, and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, setting a new record. Our case study on text generation further demonstrates BioGPT's ability to generate fluent descriptions for biomedical terms. Code is available at https://github.com/microsoft/BioGPT.
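Since the code and pre-trained weights are public, the model is easy to try. Below is a minimal generation sketch using the Hugging Face Transformers port of BioGPT; the microsoft/biogpt checkpoint name is real, but the prompt and decoding settings (beam search, 50 new tokens) are illustrative assumptions, not the paper's exact evaluation setup.

    # Minimal sketch: prompting BioGPT to describe a biomedical term.
    # Assumes transformers >= 4.25 (which ships BioGptForCausalLM) and the
    # microsoft/biogpt checkpoint on the Hugging Face Hub.
    import torch
    from transformers import BioGptForCausalLM, BioGptTokenizer

    tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
    model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
    model.eval()

    inputs = tokenizer("Bicalutamide is", return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=50,
            num_beams=5,        # beam search; an illustrative choice, not the paper's setting
            early_stopping=True,
        )
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

The prompt mirrors the kind of term-description generation shown in the paper's case study, which asks the model to continue a sentence that opens with a biomedical term.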
Related papers
- BioMedLM: A 2.7B Parameter Language Model Trained On Biomedical Text [82.7001841679981]
BioMedLM is a 2.7 billion parameter GPT-style autoregressive model trained exclusively on PubMed abstracts and full articles.
When fine-tuned, BioMedLM can produce strong multiple-choice biomedical question-answering results competitive with larger models.
BioMedLM can also be fine-tuned to produce useful answers to patient questions on medical topics.
arXiv Detail & Related papers (2024-03-27T10:18:21Z)
- BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning [77.90250740041411]
This paper introduces BioT5+, an extension of the BioT5 framework, tailored to enhance biological research and drug discovery.
BioT5+ incorporates several novel features: integration of IUPAC names for molecular understanding, inclusion of extensive bio-text and molecule data from sources such as bioRxiv and PubChem, multi-task instruction tuning for generality across tasks, and a numerical tokenization technique for improved processing of numerical data.
arXiv Detail & Related papers (2024-02-27T12:43:09Z)
- Diversifying Knowledge Enhancement of Biomedical Language Models using Adapter Modules and Knowledge Graphs [54.223394825528665]
We develop an approach that uses lightweight adapter modules to inject structured biomedical knowledge into pre-trained language models.
We use two large KGs, the biomedical knowledge system UMLS and the novel biochemical OntoChem, with two prominent biomedical PLMs, PubMedBERT and BioLinkBERT.
We show that our methodology leads to performance improvements in several instances while keeping compute requirements low (a generic sketch of such an adapter appears after this list).
arXiv Detail & Related papers (2023-12-21T14:26:57Z)
- Evaluation of ChatGPT on Biomedical Tasks: A Zero-Shot Comparison with Fine-Tuned Generative Transformers [2.5027382653219155]
ChatGPT is a large language model developed by OpenAI.
This paper aims to evaluate the performance of ChatGPT on various benchmark biomedical tasks.
arXiv Detail & Related papers (2023-06-07T15:11:26Z)
- BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks [68.39821375903591]
Generalist AI holds the potential to address the limitations of task-specific models, owing to its versatility in interpreting different data types.
Here, we propose BiomedGPT, the first open-source and lightweight vision-language foundation model.
arXiv Detail & Related papers (2023-05-26T17:14:43Z)
- BioAug: Conditional Generation based Data Augmentation for Low-Resource Biomedical NER [52.79573512427998]
We present BioAug, a novel data augmentation framework for low-resource BioNER.
BioAug is trained to solve a novel text reconstruction task based on selective masking and knowledge augmentation (a toy illustration of selective masking appears after this list).
We demonstrate the effectiveness of BioAug on 5 benchmark BioNER datasets.
arXiv Detail & Related papers (2023-05-18T02:04:38Z)
- Bioformer: an efficient transformer language model for biomedical text mining [8.961510810015643]
We present Bioformer, a compact BERT model for biomedical text mining.
We pretrained two Bioformer models, which reduce the model size by 60% compared with BERT-Base.
With 60% fewer parameters, Bioformer16L is only 0.1% less accurate than PubMedBERT.
arXiv Detail & Related papers (2023-02-03T08:04:59Z)
- BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model [1.1764594853212893]
In this work, we introduce the generative language model BioBART that adapts BART to the biomedical domain.
We collate various biomedical language generation tasks including dialogue, summarization, entity linking, and named entity recognition.
BioBART, pretrained on PubMed abstracts, shows improved performance over BART and sets strong baselines on several tasks.
arXiv Detail & Related papers (2022-04-08T08:07:42Z)
- Pre-trained Language Models in Biomedical Domain: A Systematic Survey [33.572502204216256]
Pre-trained language models (PLMs) have been the de facto paradigm for most natural language processing (NLP) tasks.
This paper summarizes the recent progress of pre-trained language models in the biomedical domain and their applications in biomedical downstream tasks.
arXiv Detail & Related papers (2021-10-11T05:30:30Z)
- BioALBERT: A Simple and Effective Pre-trained Language Model for Biomedical Named Entity Recognition [9.05154470433578]
Existing BioNER approaches often neglect the domain-specific challenges of biomedical text and directly adopt state-of-the-art (SOTA) general-domain models.
We propose BioALBERT, an effective domain-specific language model trained on large-scale biomedical corpora.
arXiv Detail & Related papers (2020-09-19T12:58:47Z)
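The adapter-based knowledge-enhancement entry above ("Diversifying Knowledge Enhancement...") relies on lightweight adapter modules inserted into a frozen PLM. As a generic illustration (not the authors' exact architecture; dimensions here are illustrative), a Houlsby-style bottleneck adapter in PyTorch looks like this:

    # Generic bottleneck adapter: down-project, nonlinearity, up-project,
    # plus a residual connection. Only these small layers are trained while
    # the host PLM stays frozen, which keeps compute requirements low.
    import torch
    import torch.nn as nn

    class BottleneckAdapter(nn.Module):
        def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
            super().__init__()
            self.down = nn.Linear(hidden_size, bottleneck)
            self.act = nn.GELU()
            self.up = nn.Linear(bottleneck, hidden_size)

        def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
            # The residual connection preserves the frozen model's
            # representation; the adapter learns a small correction on top.
            return hidden_states + self.up(self.act(self.down(hidden_states)))

    adapter = BottleneckAdapter()
    x = torch.randn(2, 16, 768)   # (batch, sequence length, hidden size)
    assert adapter(x).shape == x.shape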
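The BioAug entry describes a text reconstruction objective based on selective masking. Below is a toy illustration of the masking step only; the actual BioAug pipeline also adds retrieved knowledge and is more involved, and the function name here is hypothetical:

    # Toy illustration of selective masking: only annotated entity spans are
    # masked, and a conditional generator would be trained to reconstruct the
    # original sentence. Simplified sketch, not BioAug's actual pipeline.
    MASK = "[MASK]"

    def selectively_mask(tokens, entity_spans):
        """Replace tokens inside [start, end) entity spans with MASK."""
        masked = list(tokens)
        for start, end in entity_spans:
            for i in range(start, end):
                masked[i] = MASK
        return masked

    tokens = ["Aspirin", "inhibits", "platelet", "aggregation", "."]
    print(selectively_mask(tokens, entity_spans=[(0, 1)]))
    # ['[MASK]', 'inhibits', 'platelet', 'aggregation', '.']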