Survey on Reasoning Capabilities and Accessibility of Large Language Models Using Biology-related Questions
- URL: http://arxiv.org/abs/2406.16891v1
- Date: Sat, 11 May 2024 20:25:40 GMT
- Title: Survey on Reasoning Capabilities and Accessibility of Large Language Models Using Biology-related Questions
- Authors: Michael Ackerman
- Abstract summary: This research paper discusses the advances made in the past decade in biomedicine and Large Language Models.
To understand how the advances have been made hand-in-hand with one another, the paper also discusses the integration of Natural Language Processing techniques and tools into biomedicine.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This research paper discusses the advances made over the past decade in biomedicine and Large Language Models. To understand how these advances have been made hand-in-hand with one another, the paper also discusses the integration of Natural Language Processing techniques and tools into biomedicine. Finally, the goal of this paper is to expand on a survey conducted last year (2023) by introducing a new list of questions and prompts for the top two language models. Through this survey, the paper seeks to quantify the improvements made in the reasoning abilities of LLMs and the extent to which those improvements are felt by the average user. Additionally, the paper seeks to extend research on retrieval of biological literature by prompting the LLMs to answer open-ended questions in great depth.
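As a rough illustration of this setup (posing an identical question list to two language models and collecting their answers for side-by-side comparison), here is a minimal Python sketch; the question list, model names, and `query_model` helper are placeholders, not the paper's materials.

```python
# Hypothetical setup: send the same biology question list to two chat models
# and keep the answers side by side for manual grading. `query_model` is a
# stand-in for whatever chat-completion API is actually used.
import json

QUESTIONS = [
    "Explain how CRISPR-Cas9 introduces targeted double-strand breaks.",
    "Describe the role of the electron transport chain in ATP synthesis.",
]
MODELS = ["model_a", "model_b"]  # the two LLMs being compared


def query_model(model: str, prompt: str) -> str:
    # Placeholder: replace with a real chat-completion API call.
    return f"[{model} answer to: {prompt[:40]}...]"


def run_survey() -> dict:
    # Ask every model every question and collect the replies per question.
    return {q: {m: query_model(m, q) for m in MODELS} for q in QUESTIONS}


print(json.dumps(run_survey(), indent=2))
```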
Related papers
- RAG and RAU: A Survey on Retrieval-Augmented Language Model in Natural Language Processing [0.2302001830524133]
This survey paper addresses the absence of a comprehensive overview of Retrieval-Augmented Language Models (RALMs).
The paper discusses the essential components of RALMs, including Retrievers, Language Models, and Augmentations.
RALMs demonstrate utility in a spectrum of tasks, from translation and dialogue systems to knowledge-intensive applications.
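A minimal sketch of the retrieve-then-generate pattern behind RALMs, assuming a toy word-overlap retriever and a placeholder `generate` call; none of this is taken from the survey itself.

```python
# Retrieve-then-generate in miniature: retriever + language model + prompt
# augmentation. Corpus, scoring, and `generate` are illustrative stand-ins.
CORPUS = [
    "UMLS is a biomedical terminology system maintained by the NLM.",
    "PubMedBERT is a language model pre-trained on PubMed abstracts.",
    "Retrieval-augmented models condition generation on retrieved passages.",
]


def retrieve(query: str, k: int = 2) -> list[str]:
    # Toy retriever: rank passages by word overlap with the query.
    q = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda p: len(q & set(p.lower().split())), reverse=True)
    return scored[:k]


def generate(prompt: str) -> str:
    # Placeholder for the language model call.
    return f"[LLM answer grounded in a prompt of {len(prompt)} characters]"


def answer(query: str) -> str:
    # Augmentation: prepend the retrieved passages to the question.
    passages = retrieve(query)
    prompt = "Context:\n" + "\n".join(passages) + f"\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)


print(answer("What is PubMedBERT pre-trained on?"))
```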
arXiv Detail & Related papers (2024-04-30T13:14:51Z) - Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers [81.47046536073682]
We present a review and provide a unified perspective to summarize the recent progress as well as emerging trends in multilingual large language models (MLLMs) literature.
We hope our work can provide the community with quick access and spur breakthrough research in MLLMs.
arXiv Detail & Related papers (2024-04-07T11:52:44Z) - An Evaluation of Large Language Models in Bioinformatics Research [52.100233156012756]
We study the performance of large language models (LLMs) on a wide spectrum of crucial bioinformatics tasks.
These tasks include the identification of potential coding regions, extraction of named entities for genes and proteins, detection of antimicrobial and anti-cancer peptides, molecular optimization, and resolution of educational bioinformatics problems.
Our findings indicate that, given appropriate prompts, LLMs like GPT variants can successfully handle most of these tasks.
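For illustration only, a sketch of how one of these tasks (gene/protein named-entity extraction) might be posed as a prompt; the template and `query_model` stub are assumptions, not the prompts used in the evaluation.

```python
# Hypothetical prompt template for gene/protein named-entity extraction.
NER_PROMPT = (
    "Extract all gene and protein names from the sentence below.\n"
    "Return them as a comma-separated list.\n\n"
    "Sentence: {sentence}"
)


def query_model(prompt: str) -> str:
    # Placeholder: replace with a real chat-completion call.
    return "TP53, BRCA1"


sentence = "Mutations in TP53 and BRCA1 are associated with several cancers."
print(query_model(NER_PROMPT.format(sentence=sentence)))
```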
arXiv Detail & Related papers (2024-02-21T11:27:31Z) - Diversifying Knowledge Enhancement of Biomedical Language Models using Adapter Modules and Knowledge Graphs [54.223394825528665]
We develop an approach that uses lightweight adapter modules to inject structured biomedical knowledge into pre-trained language models.
We use two large KGs, the biomedical knowledge system UMLS and the novel biochemical knowledge graph OntoChem, with two prominent biomedical PLMs, PubMedBERT and BioLinkBERT.
We show that our methodology leads to performance improvements in several instances while keeping requirements in computing power low.
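A minimal sketch of a bottleneck adapter layer in PyTorch, assuming the standard down-project/up-project recipe with a residual connection; dimensions, naming, and placement inside the PLM are illustrative rather than the paper's released code.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""

    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Only these small adapter weights are trained; the PLM stays frozen,
        # which is what keeps the compute requirements low.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


x = torch.randn(2, 16, 768)   # (batch, sequence, hidden)
print(Adapter()(x).shape)     # torch.Size([2, 16, 768])
```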
arXiv Detail & Related papers (2023-12-21T14:26:57Z) - Exploring the In-context Learning Ability of Large Language Model for Biomedical Concept Linking [4.8882241537236455]
This research investigates a method that exploits the in-context learning capabilities of large models for biomedical concept linking.
The proposed approach adopts a two-stage retrieve-and-rank framework.
It achieved an accuracy of 90% in BC5CDR disease entity normalization and 94.7% in chemical entity normalization.
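A hedged sketch of the two-stage retrieve-and-rank idea: retrieve candidate concepts cheaply, then let an LLM pick the best match in context; the toy concept table and `rank_with_llm` placeholder are assumptions, not the paper's pipeline.

```python
# Stage 1: cheap candidate retrieval; Stage 2: LLM-based re-ranking (stubbed).
CONCEPTS = {
    "D009369": "Neoplasms",
    "D003920": "Diabetes Mellitus",
    "D006973": "Hypertension",
}


def retrieve_candidates(mention: str, k: int = 2) -> list[tuple[str, str]]:
    # Toy lexical retrieval by shared character trigrams.
    grams = lambda s: {s[i:i + 3] for i in range(len(s) - 2)}
    m = grams(mention.lower())
    ranked = sorted(CONCEPTS.items(),
                    key=lambda kv: len(m & grams(kv[1].lower())), reverse=True)
    return ranked[:k]


def rank_with_llm(mention: str, candidates: list[tuple[str, str]]) -> str:
    # Build the re-ranking prompt an LLM would receive; here we simply
    # return the top lexical hit instead of calling a real model.
    options = "\n".join(f"{cid}: {name}" for cid, name in candidates)
    prompt = f"Mention: {mention}\nCandidates:\n{options}\nBest concept ID:"
    print(prompt)
    return candidates[0][0]


print(rank_with_llm("diabetes", retrieve_candidates("diabetes")))
```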
arXiv Detail & Related papers (2023-07-03T16:19:50Z) - Improving Factuality and Reasoning in Language Models through Multiagent Debate [95.10641301155232]
We present a complementary approach to improve language responses where multiple language model instances propose and debate their individual responses and reasoning processes over multiple rounds to arrive at a common final answer.
Our findings indicate that this approach significantly enhances mathematical and strategic reasoning across a number of tasks.
Our approach may be applied directly to existing black-box models and uses the same procedure and prompts for all tasks we investigate.
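A hedged sketch of the debate loop this describes: several model instances answer, then each revises its answer after reading the others' over a fixed number of rounds; `call_llm` and the prompt wording are placeholders, not the authors' implementation.

```python
def call_llm(agent_id: int, prompt: str) -> str:
    # Placeholder: replace with a real chat-completion call.
    return f"agent-{agent_id} answer"


def debate(question: str, n_agents: int = 3, n_rounds: int = 2) -> list[str]:
    # Round 0: every agent answers independently.
    answers = [call_llm(i, question) for i in range(n_agents)]
    for _ in range(n_rounds):
        revised = []
        for i in range(n_agents):
            # Each agent sees the other agents' answers and may update its own.
            others = "\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (
                f"Question: {question}\n"
                f"Other agents answered:\n{others}\n"
                "Update your answer, using their reasoning where it is correct."
            )
            revised.append(call_llm(i, prompt))
        answers = revised
    return answers  # a final answer can then be taken by consensus or vote


print(debate("What is 17 * 24?"))
```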
arXiv Detail & Related papers (2023-05-23T17:55:11Z) - A Survey for Biomedical Text Summarization: From Pre-trained to Large Language Models [21.516351027053705]
We present a systematic review of recent advancements in biomedical text summarization.
We discuss existing challenges and promising future directions in the era of large language models.
To facilitate the research community, we compile open resources, including available datasets, recent approaches, code, evaluation metrics, and a leaderboard, in a public project.
arXiv Detail & Related papers (2023-04-18T06:38:40Z) - A Bibliometric Review of Large Language Models Research from 2017 to 2023 [1.4190701053683017]
Large language models (LLMs) are language models that have demonstrated outstanding performance across a range of natural language processing (NLP) tasks.
This paper serves as a roadmap for researchers, practitioners, and policymakers to navigate the current landscape of LLMs research.
arXiv Detail & Related papers (2023-04-03T21:46:41Z) - Algorithmic Ghost in the Research Shell: Large Language Models and Academic Knowledge Creation in Management Research [0.0]
The paper looks at the role of large language models in academic knowledge creation.
This includes writing, editing, reviewing, dataset creation and curation.
arXiv Detail & Related papers (2023-03-10T14:25:29Z) - Reasoning with Language Model Prompting: A Survey [86.96133788869092]
Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications.
This paper provides a comprehensive survey of cutting-edge research on reasoning with language model prompting.
arXiv Detail & Related papers (2022-12-19T16:32:42Z) - A Survey on Text Classification: From Shallow to Deep Learning [83.47804123133719]
The last decade has seen a surge of research in this area due to the unprecedented success of deep learning.
This paper fills the gap by reviewing the state-of-the-art approaches from 1961 to 2021.
We create a taxonomy for text classification according to the text involved and the models used for feature extraction and classification.
arXiv Detail & Related papers (2020-08-02T00:09:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.