The Evolving Landscape of Generative Large Language Models and Traditional Natural Language Processing in Medicine
- URL: http://arxiv.org/abs/2505.10261v1
- Date: Thu, 15 May 2025 13:11:14 GMT
- Title: The Evolving Landscape of Generative Large Language Models and Traditional Natural Language Processing in Medicine
- Authors: Rui Yang, Huitao Li, Matthew Yu Heng Wong, Yuhe Ke, Xin Li, Kunyu Yu, Jingchi Liao, Jonathan Chong Kai Liew, Sabarinath Vinod Nair, Jasmine Chiat Ling Ong, Irene Li, Douglas Teodoro, Chuan Hong, Daniel Shu Wei Ting, Nan Liu
- Abstract summary: Generative large language models (LLMs) have become prominent recently, but how they differ from traditional NLP across medical tasks remains underexplored. We analyzed 19,123 studies, finding that generative LLMs demonstrate advantages in open-ended tasks, while traditional NLP dominates in information extraction and analysis tasks.
- Score: 11.237277421599027
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Natural language processing (NLP) has been traditionally applied to medicine, and generative large language models (LLMs) have become prominent recently. However, the differences between them across different medical tasks remain underexplored. We analyzed 19,123 studies, finding that generative LLMs demonstrate advantages in open-ended tasks, while traditional NLP dominates in information extraction and analysis tasks. As these technologies advance, their ethical use is essential to realizing their potential in medical applications.
Related papers
- ImmunoFOMO: Are Language Models missing what oncologists see? [2.8544513613730205]
We investigate the medical conceptual grounding of various language models against expert clinicians for identifying hallmarks of immunotherapy in breast cancer abstracts. Our results show that pre-trained language models have the potential to outperform large language models in identifying very specific (low-level) concepts.
arXiv Detail & Related papers (2025-06-13T06:00:03Z)
- Large language models for mental health [10.592145325363266]
Digital technologies have long been explored as a complement to standard procedure in mental health research and practice.
The recent emergence of large language models (LLMs) represents a major new opportunity on that front.
Yet there is still a divide between the community developing LLMs and the one which may benefit from them.
arXiv Detail & Related papers (2024-11-04T14:02:00Z)
- Retrieve, Generate, Evaluate: A Case Study for Medical Paraphrases Generation with Small Language Models [2.4851820343103035]
We introduce pRAGe, a pipeline for retrieval-augmented generation and evaluation of medical paraphrase generation using Small Language Models (SLMs).
We study the effectiveness of SLMs and the impact of an external knowledge base on medical paraphrase generation in French.
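As a rough illustration of the retrieve-then-generate pattern, here is a minimal sketch pairing a toy TF-IDF retriever with a small seq2seq model; the model name, prompt format, and knowledge base are illustrative assumptions, not pRAGe's actual configuration.

```python
# Sketch of a retrieve-then-generate paraphrase pipeline (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

# Hypothetical stand-in for a medical knowledge base.
knowledge_base = [
    "Hypertension means abnormally high blood pressure.",
    "An anticoagulant is a medicine that prevents blood clots.",
]

# Retrieval: rank knowledge-base entries against the input term.
vectorizer = TfidfVectorizer()
kb_matrix = vectorizer.fit_transform(knowledge_base)

def retrieve(query: str, k: int = 1) -> list[str]:
    scores = cosine_similarity(vectorizer.transform([query]), kb_matrix)[0]
    return [knowledge_base[i] for i in scores.argsort()[::-1][:k]]

# Generation: condition a small seq2seq model on the retrieved context.
generator = pipeline("text2text-generation", model="google/flan-t5-small")

def paraphrase(term: str) -> str:
    prompt = f"Context: {' '.join(retrieve(term))}\nRewrite '{term}' in plain language:"
    return generator(prompt, max_new_tokens=40)[0]["generated_text"]

print(paraphrase("anticoagulant"))
```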
arXiv Detail & Related papers (2024-07-23T15:17:11Z)
- Can Large Language Models abstract Medical Coded Language? [0.0]
This study evaluates whether large language models (LLMs) are aware of medical codes and can accurately generate names from those codes.
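A minimal sketch of that evaluation idea: prompt a model with a code and score the generated name against the reference description. Here `ask_llm` is a hypothetical placeholder for whatever completion API is available, and the two ICD-10 entries are just examples.

```python
# Score LLM-generated names for medical codes against references (sketch).
from difflib import SequenceMatcher

ICD10_REFERENCE = {
    "E11.9": "Type 2 diabetes mellitus without complications",
    "I10": "Essential (primary) hypertension",
}

def ask_llm(prompt: str) -> str:
    # Hypothetical model call; plug in an actual LLM client here.
    raise NotImplementedError

def evaluate(threshold: float = 0.8) -> float:
    hits = 0
    for code, reference in ICD10_REFERENCE.items():
        answer = ask_llm(f"What diagnosis does ICD-10 code {code} denote?")
        # String-overlap ratio as a crude proxy for name correctness.
        hits += SequenceMatcher(None, answer.lower(), reference.lower()).ratio() >= threshold
    return hits / len(ICD10_REFERENCE)
```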
arXiv Detail & Related papers (2024-03-16T06:18:15Z)
- History, Development, and Principles of Large Language Models-An Introductory Survey [15.875687167037206]
Language models serve as a cornerstone of natural language processing (NLP).
Over decades of research, language modeling has progressed from early statistical language models (SLMs) to the contemporary landscape of large language models (LLMs).
arXiv Detail & Related papers (2024-02-10T01:18:15Z)
- Large Language Model Distilling Medication Recommendation Model [58.94186280631342]
We harness the powerful semantic comprehension and input-agnostic characteristics of Large Language Models (LLMs). Our research aims to transform existing medication recommendation methodologies using LLMs. To keep deployment practical, we have developed a feature-level knowledge distillation technique, which transfers the LLM's proficiency to a more compact model.
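A minimal sketch of what feature-level distillation can look like: align the student's hidden representation with pre-computed teacher (LLM) features through a learned projection and an MSE loss. Dimensions and architecture are illustrative assumptions, not the paper's design.

```python
# Feature-level knowledge distillation: student features chase teacher features.
import torch
import torch.nn as nn

teacher_dim, student_dim = 4096, 256

student = nn.Sequential(nn.Linear(128, student_dim), nn.ReLU())
projector = nn.Linear(student_dim, teacher_dim)  # maps student -> teacher space

def distill_loss(x: torch.Tensor, teacher_feats: torch.Tensor) -> torch.Tensor:
    return nn.functional.mse_loss(projector(student(x)), teacher_feats)

# One illustrative step with random stand-ins for a patient-record batch
# and cached teacher features.
x = torch.randn(8, 128)
teacher_feats = torch.randn(8, teacher_dim)
distill_loss(x, teacher_feats).backward()
```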
arXiv Detail & Related papers (2024-02-05T08:25:22Z)
- Diversifying Knowledge Enhancement of Biomedical Language Models using Adapter Modules and Knowledge Graphs [54.223394825528665]
We develop an approach that uses lightweight adapter modules to inject structured biomedical knowledge into pre-trained language models.
We use two large KGs, the biomedical knowledge system UMLS and the novel biochemical ontology OntoChem, with two prominent biomedical PLMs, PubMedBERT and BioLinkBERT.
We show that our methodology leads to performance improvements in several instances while keeping requirements in computing power low.
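A minimal sketch of the bottleneck-adapter idea used for this kind of knowledge injection: a small down/up projection with a residual connection, trained while the PLM's own weights stay frozen. Hidden sizes and placement are illustrative assumptions.

```python
# Bottleneck adapter inserted after a frozen PLM layer (sketch).
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.GELU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # The residual keeps the frozen PLM representation intact by default;
        # only the tiny adapter weights are trained on KG-derived data.
        return h + self.up(self.act(self.down(h)))

h = torch.randn(2, 16, 768)   # (batch, tokens, hidden) from a frozen PLM layer
print(Adapter()(h).shape)     # torch.Size([2, 16, 768])
```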
arXiv Detail & Related papers (2023-12-21T14:26:57Z)
- Redefining Digital Health Interfaces with Large Language Models [69.02059202720073]
Large Language Models (LLMs) have emerged as general-purpose models with the ability to process complex information.
We show how LLMs can provide a novel interface between clinicians and digital technologies.
We develop a new prognostic tool using automated machine learning.
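A minimal sketch of the interface idea, with `ask_llm` and `risk_model` as hypothetical placeholders: the LLM parses a clinician's free-text request into structured arguments, calls a prognostic model, and phrases the numeric result back.

```python
# LLM as a natural-language front end to a prognostic model (sketch).
import json

def ask_llm(prompt: str) -> str:
    # Hypothetical model call; plug in an actual LLM client here.
    raise NotImplementedError

def risk_model(age: int, systolic_bp: float) -> float:
    # Stand-in prognostic tool; a real one would come from AutoML.
    return min(1.0, 0.01 * age + 0.002 * systolic_bp)

def handle(query: str) -> str:
    # 1. LLM extracts structured arguments from the free-text query.
    args = json.loads(ask_llm(
        f'Return JSON {{"age": <int>, "systolic_bp": <float>}} for: {query}'
    ))
    # 2. The structured tool computes the prediction.
    risk = risk_model(**args)
    # 3. LLM phrases the output for the clinician.
    return ask_llm(f"Explain to a clinician: predicted risk = {risk:.2f}")
```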
arXiv Detail & Related papers (2023-10-05T14:18:40Z)
- Evaluating Large Language Models for Radiology Natural Language Processing [68.98847776913381]
The rise of large language models (LLMs) has marked a pivotal shift in the field of natural language processing (NLP).
This study critically evaluates thirty-two LLMs in interpreting radiology reports.
arXiv Detail & Related papers (2023-07-25T17:57:18Z)
- A Survey of Large Language Models [81.06947636926638]
Language modeling has been widely studied for language understanding and generation in the past two decades.
Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora.
To mark the difference in parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size.
arXiv Detail & Related papers (2023-03-31T17:28:46Z)
- Language Models sounds the Death Knell of Knowledge Graphs [0.0]
Deep-learning-based NLP, especially Large Language Models (LLMs), has found broad acceptance and is used extensively in many applications.
BioBERT and Med-BERT are language models pre-trained for the healthcare domain.
This paper argues that using Knowledge Graphs is not the best solution for solving problems in this domain.
arXiv Detail & Related papers (2022-10-22T19:09:18Z)
- LMPriors: Pre-Trained Language Models as Task-Specific Priors [78.97143833642971]
We develop principled techniques for augmenting our models with suitable priors, encouraging them to learn in ways that are compatible with our understanding of the world.
We draw inspiration from the recent successes of large-scale language models (LMs) to construct task-specific priors distilled from the rich knowledge of LMs.
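A minimal sketch of this idea applied to feature selection, assuming a hypothetical `lm_log_prob` scorer that returns the LM's log-probability of a completion: keep a feature when the model prefers "yes" over "no" as the answer to whether it predicts the target.

```python
# LM-derived prior for feature selection (sketch of the LMPriors idea).
def lm_log_prob(prompt: str, completion: str) -> float:
    # Hypothetical scorer: log p(completion | prompt) under a language model.
    raise NotImplementedError

def keep_feature(feature: str, target: str) -> bool:
    prompt = f"Is '{feature}' useful for predicting '{target}'? Answer:"
    # The prior is the LM's relative preference for "yes" over "no".
    return lm_log_prob(prompt, " yes") > lm_log_prob(prompt, " no")

features = ["blood pressure", "favorite color"]
selected = [f for f in features if keep_feature(f, "cardiovascular risk")]
```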
arXiv Detail & Related papers (2022-10-22T19:09:18Z)
- Pre-trained Language Models in Biomedical Domain: A Systematic Survey [33.572502204216256]
Pre-trained language models (PLMs) have been the de facto paradigm for most natural language processing (NLP) tasks.
This paper summarizes the recent progress of pre-trained language models in the biomedical domain and their applications in biomedical downstream tasks.
arXiv Detail & Related papers (2021-10-11T05:30:30Z)
- Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing [73.37262264915739]
We show that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains.
Our experiments show that domain-specific pretraining serves as a solid foundation for a wide range of biomedical NLP tasks.
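A minimal sketch of from-scratch masked-language-model pretraining with Hugging Face transformers; the one-sentence corpus, reused tokenizer, and hyperparameters are illustrative assumptions, not the paper's recipe (which also learns a domain vocabulary).

```python
# From-scratch MLM pretraining on domain text (illustrative sketch).
from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Reusing a general tokenizer for brevity; domain-specific pretraining
# would normally build its vocabulary from the biomedical corpus too.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM(BertConfig(vocab_size=tokenizer.vocab_size))  # random init

corpus = ["The patient presented with acute myocardial infarction."]
encodings = tokenizer(corpus, truncation=True, padding=True)
dataset = [{"input_ids": ids} for ids in encodings["input_ids"]]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```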
arXiv Detail & Related papers (2020-07-31T00:04:15Z)