An Evaluation of Large Language Models in Bioinformatics Research
- URL: http://arxiv.org/abs/2402.13714v1
- Date: Wed, 21 Feb 2024 11:27:31 GMT
- Title: An Evaluation of Large Language Models in Bioinformatics Research
- Authors: Hengchuang Yin, Zhonghui Gu, Fanhao Wang, Yiparemu Abuduhaibaier,
Yanqiao Zhu, Xinming Tu, Xian-Sheng Hua, Xiao Luo, Yizhou Sun
- Abstract summary: We study the performance of large language models (LLMs) on a wide spectrum of crucial bioinformatics tasks.
These tasks include the identification of potential coding regions, extraction of named entities for genes and proteins, detection of antimicrobial and anti-cancer peptides, molecular optimization, and resolution of educational bioinformatics problems.
Our findings indicate that, given appropriate prompts, LLMs like GPT variants can successfully handle most of these tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) such as ChatGPT have gained considerable
interest across diverse research communities. Their notable ability for text
completion and generation has inaugurated a novel paradigm for
language-interfaced problem solving. However, the potential and efficacy of
these models in bioinformatics remain incompletely explored. In this work, we
study the performance of LLMs on a wide spectrum of crucial bioinformatics tasks.
These tasks include the identification of potential coding regions, extraction
of named entities for genes and proteins, detection of antimicrobial and
anti-cancer peptides, molecular optimization, and resolution of educational
bioinformatics problems. Our findings indicate that, given appropriate prompts,
LLMs like GPT variants can successfully handle most of these tasks. In
addition, we provide a thorough analysis of their limitations in the context of
complicated bioinformatics tasks. In conclusion, we believe that this work can
provide new perspectives and motivate future research in the fields of LLM
applications, AI for Science, and bioinformatics.
Related papers
- BioDiscoveryAgent: An AI Agent for Designing Genetic Perturbation Experiments [116.43369600518163]
We develop BioDiscoveryAgent, an agent that designs new experiments, reasons about their outcomes, and efficiently navigates the hypothesis space to reach desired solutions.
BioDiscoveryAgent can uniquely design new experiments without the need to train a machine learning model or explicitly design an acquisition function.
It achieves an average of 18% improvement in detecting desired phenotypes across five datasets.
arXiv Detail & Related papers (2024-05-27T19:57:17Z)
- Leveraging Biomolecule and Natural Language through Multi-Modal Learning: A Survey [75.47055414002571]
The integration of biomolecular modeling with natural language (BL) has emerged as a promising interdisciplinary area at the intersection of artificial intelligence, chemistry and biology.
We provide an analysis of recent advancements achieved through cross-modeling of biomolecules and natural language.
arXiv Detail & Related papers (2024-03-03T14:59:47Z) - Progress and Opportunities of Foundation Models in Bioinformatics [77.74411726471439]
Foundations models (FMs) have ushered in a new era in computational biology, especially in the realm of deep learning.
Central to our focus is the application of FMs to specific biological problems, aiming to guide the research community in choosing appropriate FMs for their research needs.
The review analyzes challenges and limitations faced by FMs in biology, such as data noise, model explainability, and potential biases.
arXiv Detail & Related papers (2024-02-06T02:29:17Z)
- Large language models in bioinformatics: applications and perspectives [14.16418711188321]
Large language models (LLMs) are artificial intelligence models based on deep learning.
This review focuses on exploring the applications of large language models in genomics, transcriptomics, drug discovery and single cell analysis.
arXiv Detail & Related papers (2024-01-08T17:26:59Z)
- Exploring the Effectiveness of Instruction Tuning in Biomedical Language Processing [19.41164870575055]
This study investigates the potential of instruction tuning for biomedical language processing.
We present a comprehensive, instruction-based model trained on a dataset that consists of approximately 200,000 instruction-focused samples.
arXiv Detail & Related papers (2023-12-31T20:02:10Z)
- Diversifying Knowledge Enhancement of Biomedical Language Models using Adapter Modules and Knowledge Graphs [54.223394825528665]
We develop an approach that uses lightweight adapter modules to inject structured biomedical knowledge into pre-trained language models.
We use two large KGs, the biomedical knowledge system UMLS and the novel biochemical OntoChem, with two prominent biomedical PLMs, PubMedBERT and BioLinkBERT.
We show that our methodology leads to performance improvements in several instances while keeping requirements in computing power low.
arXiv Detail & Related papers (2023-12-21T14:26:57Z)
- ProBio: A Protocol-guided Multimodal Dataset for Molecular Biology Lab [67.24684071577211]
The challenge of replicating research results has posed a significant impediment to the field of molecular biology.
We first curate a comprehensive multimodal dataset, named ProBio, as an initial step towards this objective.
Next, we devise two challenging benchmarks, transparent solution tracking and multimodal action recognition, to emphasize the unique characteristics and difficulties associated with activity understanding in BioLab settings.
arXiv Detail & Related papers (2023-11-01T14:44:01Z)
- Bio-SIEVE: Exploring Instruction Tuning Large Language Models for Systematic Review Automation [6.452837513222072]
Large Language Models (LLMs) can support and be trained to perform literature screening for medical systematic reviews.
Our best model, Bio-SIEVE, outperforms both ChatGPT and trained traditional approaches.
We see Bio-SIEVE as an important step towards specialising LLMs for the biomedical systematic review process.
arXiv Detail & Related papers (2023-08-12T16:56:55Z)
- Comparative Performance Evaluation of Large Language Models for Extracting Molecular Interactions and Pathway Knowledge [6.244840529371179]
Understanding protein interactions and pathway knowledge is crucial for unraveling the complexities of living systems.
Existing databases provide curated biological data from literature and other sources, but their maintenance is labor-intensive.
We propose to harness the capabilities of large language models to address these issues by automatically extracting such knowledge from the relevant scientific literature.
arXiv Detail & Related papers (2023-07-17T20:01:11Z)