Inspire the Large Language Model by External Knowledge on BioMedical
Named Entity Recognition
- URL: http://arxiv.org/abs/2309.12278v1
- Date: Thu, 21 Sep 2023 17:39:53 GMT
- Title: Inspire the Large Language Model by External Knowledge on BioMedical
Named Entity Recognition
- Authors: Junyi Bian, Jiaxuan Zheng, Yuyi Zhang, Shanfeng Zhu
- Abstract summary: Large language models (LLMs) have demonstrated dominant performance in many NLP tasks, especially on generative tasks.
We leverage the LLM to solve the Biomedical NER task by decomposing it into entity span extraction and entity type determination.
Experimental results show a significant improvement of our two-step BioNER approach over the previous few-shot LLM baseline.
- Score: 3.427366431933441
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have demonstrated dominant performance in many
NLP tasks, especially on generative tasks. However, they often fall short in
some information extraction tasks, particularly those requiring domain-specific
knowledge, such as Biomedical Named Entity Recognition (NER). In this paper,
inspired by Chain-of-Thought, we leverage the LLM to solve Biomedical NER
step by step: we break down the NER task into entity span extraction and entity
type determination. Additionally, for entity type determination, we inject
entity knowledge to address the LLM's lack of domain knowledge when predicting
entity categories. Experimental results show a significant improvement of our
two-step BioNER approach over the previous few-shot LLM baseline. Additionally,
the incorporation of external knowledge significantly enhances entity category
determination performance.
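The two-step decomposition reads naturally as a pair of prompts. Below is a minimal sketch of that pipeline, assuming a generic chat-completion function `ask_llm` and a toy `KNOWLEDGE` lookup; the prompt wording, entity types, and knowledge source are illustrative stand-ins, not the paper's exact setup:

```python
# Minimal sketch of the two-step BioNER decomposition described above.
# `ask_llm` stands in for any chat-completion API; the prompts and the
# KNOWLEDGE dictionary are illustrative, not the authors' exact setup.

KNOWLEDGE = {
    # external entity knowledge injected at type-determination time
    "BRCA1": "BRCA1 is a human gene encoding a tumor-suppressor protein.",
    "tamoxifen": "Tamoxifen is a selective estrogen receptor modulator (drug).",
}

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def extract_spans(sentence: str) -> list[str]:
    # Step 1: entity span extraction, with no type commitment yet.
    prompt = (
        "List every biomedical entity mention in the sentence, one per line.\n"
        f"Sentence: {sentence}"
    )
    return [line.strip() for line in ask_llm(prompt).splitlines() if line.strip()]

def determine_type(span: str, sentence: str) -> str:
    # Step 2: entity type determination, with external knowledge injected
    # to compensate for the LLM's missing domain knowledge.
    knowledge = KNOWLEDGE.get(span, "No external knowledge available.")
    prompt = (
        f"Knowledge: {knowledge}\n"
        f"Sentence: {sentence}\n"
        f'What is the entity type of "{span}"? '
        "Answer with one of: Gene, Drug, Disease, Species."
    )
    return ask_llm(prompt).strip()

def bioner(sentence: str) -> list[tuple[str, str]]:
    return [(s, determine_type(s, sentence)) for s in extract_spans(sentence)]
```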
Related papers
- Mitigating Hallucinations of Large Language Models in Medical Information Extraction via Contrastive Decoding [92.32881381717594]
We introduce ALternate Contrastive Decoding (ALCD) to solve hallucination issues in medical information extraction tasks.
ALCD demonstrates significant improvements in resolving hallucination issues compared to conventional decoding methods.
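ALCD's alternation schedule is specific to the paper, but the contrastive-decoding step underneath is standard: score each candidate token by the gap between an "expert" and an "amateur" distribution, restricted to tokens the expert finds plausible. A generic sketch (NumPy; the `alpha`/`beta` values are toy choices), not ALCD itself:

```python
import numpy as np

def contrastive_step(expert_logits: np.ndarray,
                     amateur_logits: np.ndarray,
                     alpha: float = 0.1,
                     beta: float = 1.0) -> int:
    # Normalize raw logits to log-probabilities.
    expert_logp = expert_logits - np.logaddexp.reduce(expert_logits)
    amateur_logp = amateur_logits - np.logaddexp.reduce(amateur_logits)
    # Plausibility constraint: keep only tokens whose expert probability
    # is within a factor alpha of the best token's probability.
    keep = expert_logp >= expert_logp.max() + np.log(alpha)
    # Contrastive score: down-weight tokens the amateur model also likes.
    score = np.where(keep, expert_logp - beta * amateur_logp, -np.inf)
    return int(score.argmax())
```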
arXiv Detail & Related papers (2024-10-21T07:19:19Z)
- LLM-DER: A Named Entity Recognition Method Based on Large Language Models for Chinese Coal Chemical Domain [4.639851504108679]
We propose LLM-DER, a Large Language Model (LLM)-based entity recognition framework for the domain-specific entity recognition problem in Chinese.
LLM-DER generates a list of relations containing entity types through LLMs, and designs a plausibility and consistency evaluation method to remove misrecognized entities.
Experimental results on the Resume dataset and the self-constructed coal chemical dataset Coal show that LLM-DER performs strongly in domain-specific entity recognition.
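The summary describes a generate-then-verify loop: the LLM proposes (entity, type) pairs, and a plausibility/consistency check discards misrecognitions. A hedged sketch of that pattern, with `ask_llm` and both prompt formats as hypothetical stand-ins:

```python
# Sketch of the generate-then-filter pattern described above; `ask_llm`
# and the prompt/reply formats are hypothetical stand-ins for LLM-DER's.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def generate_candidates(sentence: str) -> list[tuple[str, str]]:
    # Ask the LLM for candidates, expecting one "entity | type" per line.
    reply = ask_llm(f"List entities and their types in: {sentence}")
    pairs = []
    for line in reply.splitlines():
        if "|" in line:
            entity, etype = (part.strip() for part in line.split("|", 1))
            pairs.append((entity, etype))
    return pairs

def is_plausible(entity: str, etype: str, sentence: str) -> bool:
    # Consistency check: ask the LLM to verify its own prediction.
    verdict = ask_llm(
        f'In "{sentence}", is "{entity}" really a {etype}? Answer yes or no.'
    )
    return verdict.strip().lower().startswith("yes")

def recognize(sentence: str) -> list[tuple[str, str]]:
    return [(e, t) for e, t in generate_candidates(sentence)
            if is_plausible(e, t, sentence)]
```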
arXiv Detail & Related papers (2024-09-16T08:28:05Z)
- Diagnostic Reasoning in Natural Language: Computational Model and Application [68.47402386668846]
We investigate diagnostic abductive reasoning (DAR) in the context of language-grounded tasks (NL-DAR).
We propose a novel modeling framework for NL-DAR based on Pearl's structural causal models.
We use the resulting dataset to investigate the human decision-making process in NL-DAR.
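For readers unfamiliar with Pearl's formalism, a structural causal model ties each variable to a deterministic function of its parents plus exogenous noise, and an intervention replaces an equation outright. A toy example (the variables and equations are invented here; the paper defines its own domain-specific model):

```python
import random

# A toy structural causal model in Pearl's sense: each endogenous
# variable is a function of its parents plus exogenous noise.

def sample(do_treatment=None):
    u1, u2 = random.random(), random.random()   # exogenous noise terms
    severity = u1                               # root cause
    # do(treatment=...) replaces the structural equation for treatment.
    treatment = (severity > 0.5) if do_treatment is None else do_treatment
    recovery = (not treatment and severity < 0.3) or (treatment and u2 > 0.2)
    return {"severity": severity, "treatment": treatment, "recovery": recovery}

# Observational vs. interventional distribution of recovery:
obs = sum(sample()["recovery"] for _ in range(10_000)) / 10_000
do1 = sum(sample(do_treatment=True)["recovery"] for _ in range(10_000)) / 10_000
print(f"P(recovery) = {obs:.2f},  P(recovery | do(treatment)) = {do1:.2f}")
```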
arXiv Detail & Related papers (2024-09-09T06:55:37Z)
- LLMs are not Zero-Shot Reasoners for Biomedical Information Extraction [13.965777046473885]
Large Language Models (LLMs) are increasingly adopted for applications in healthcare.
It is unclear how well LLMs perform on tasks that are traditionally pursued in the biomedical domain.
arXiv Detail & Related papers (2024-08-22T09:37:40Z)
- FactorLLM: Factorizing Knowledge via Mixture of Experts for Large Language Models [50.331708897857574]
We introduce FactorLLM, a novel approach that decomposes well-trained dense FFNs into sparse sub-networks without requiring any further modifications.
FactorLLM achieves performance comparable to the source model, retaining up to 85% of its performance while obtaining over a 30% increase in inference speed.
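One plausible reading of "decomposing a trained dense FFN into sparse sub-networks" is slicing its projection matrices into experts and routing each token to one slice. The PyTorch sketch below illustrates that idea only; FactorLLM's actual decomposition and routing objective differ in detail:

```python
import torch
import torch.nn as nn

# Illustrative-only: factorize one dense FFN into a mixture of experts
# by slicing the trained up/down projections row-wise; a small router
# then picks one expert per token (top-1 routing).

class FactorizedFFN(nn.Module):
    def __init__(self, w_up: torch.Tensor, w_down: torch.Tensor, n_experts: int):
        super().__init__()
        d_model, d_ff = w_up.shape[1], w_up.shape[0]
        chunk = d_ff // n_experts
        # Each expert owns a contiguous slice of the trained FFN weights.
        self.up = nn.ParameterList(
            [nn.Parameter(w_up[i * chunk:(i + 1) * chunk].clone())
             for i in range(n_experts)])
        self.down = nn.ParameterList(
            [nn.Parameter(w_down[:, i * chunk:(i + 1) * chunk].clone())
             for i in range(n_experts)])
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        expert = self.router(x).argmax(dim=-1)            # one expert per token
        out = torch.zeros_like(x)
        for i, (u, d) in enumerate(zip(self.up, self.down)):
            mask = expert == i
            if mask.any():
                # Only the chosen slice runs: sparse compute per token.
                out[mask] = torch.relu(x[mask] @ u.T) @ d.T
        return out
```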
arXiv Detail & Related papers (2024-08-15T16:45:16Z)
- ProgGen: Generating Named Entity Recognition Datasets Step-by-step with Self-Reflexive Large Language Models [25.68491572293656]
Large Language Models fall short in structured knowledge extraction tasks such as named entity recognition.
This paper explores an innovative, cost-efficient strategy to harness LLMs with modest NER capabilities for producing superior NER datasets.
arXiv Detail & Related papers (2024-03-17T06:12:43Z)
- EMBRE: Entity-aware Masking for Biomedical Relation Extraction [12.821610050561256]
We introduce the Entity-aware Masking for Biomedical Relation Extraction (EMBRE) method.
Specifically, we integrate entity knowledge into a deep neural network by pretraining the backbone model with an entity masking objective.
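The entity-masking objective is easy to picture: rather than masking tokens uniformly at random (BERT-style), preferentially mask tokens inside annotated entity mentions so the model must reconstruct entity knowledge. A minimal sketch with an illustrative BIO-tagged input; EMBRE's exact masking recipe may differ:

```python
import random

# Entity-aware masking: hide entity tokens (by BIO tag) so pretraining
# focuses on recovering entity knowledge. Rates and tags are illustrative.

MASK = "[MASK]"

def entity_mask(tokens: list[str], bio_tags: list[str],
                entity_rate: float = 0.5) -> list[str]:
    masked = list(tokens)
    for i, tag in enumerate(bio_tags):
        if tag != "O" and random.random() < entity_rate:
            masked[i] = MASK   # the model must recover the entity token
    return masked

tokens = ["Aspirin", "inhibits", "COX-1", "activity", "."]
tags   = ["B-Drug",  "O",        "B-Gene", "O",       "O"]
print(entity_mask(tokens, tags))
```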
arXiv Detail & Related papers (2024-01-15T18:12:01Z)
- Knowledge Plugins: Enhancing Large Language Models for Domain-Specific Recommendations [50.81844184210381]
We propose a general paradigm that augments large language models with DOmain-specific KnowledgE to enhance their performance on practical applications, namely DOKE.
This paradigm relies on a domain knowledge extractor, working in three steps: 1) preparing effective knowledge for the task; 2) selecting the knowledge for each specific sample; and 3) expressing the knowledge in an LLM-understandable way.
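Those three steps can be pictured with a toy retriever: a prepared knowledge base, a per-sample selector, and a verbalizer that writes the selected facts into the prompt. Everything below (the facts and the overlap-based selector) is illustrative, not DOKE's actual extractor:

```python
# The three-step pattern in miniature: 1) prepare task knowledge,
# 2) select what is relevant for one sample, 3) express it in natural
# language inside the prompt.

KNOWLEDGE_BASE = [
    "Users who buy hiking boots often also need waterproof jackets.",
    "Down jackets are preferred in sub-zero temperatures.",
    "Trail-running shoes suit light, fast day hikes.",
]

def select_knowledge(sample: str, k: int = 2) -> list[str]:
    # Step 2: rank knowledge pieces by naive word overlap with the sample.
    words = set(sample.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda fact: len(words & set(fact.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(sample: str) -> str:
    # Step 3: express the selected knowledge in an LLM-understandable way.
    facts = "\n".join(f"- {f}" for f in select_knowledge(sample))
    return f"Relevant domain knowledge:\n{facts}\n\nTask: {sample}"

print(build_prompt("Recommend gear for a winter day hike with trail running"))
```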
arXiv Detail & Related papers (2023-11-16T07:09:38Z)
- Incorporating Class-based Language Model for Named Entity Recognition in Factorized Neural Transducer [50.572974726351504]
We propose C-FNT, a novel E2E model that incorporates class-based LMs into FNT.
In C-FNT, the LM score of named entities can be associated with the name class instead of its surface form.
The experimental results show that our proposed C-FNT significantly reduces error in named entities without hurting performance in general word recognition.
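The class-based factorization that C-FNT builds on is classical: P(w | h) = P(class(w) | h) * P(w | class), so a named entity is scored through its class rather than its surface form, and a new name only needs a lexicon entry. A toy illustration with invented probabilities:

```python
import math

# Class-based LM factorization: P(w | h) = P(class(w) | h) * P(w | class).
# All probabilities below are toy numbers for illustration only.

P_CLASS_GIVEN_HIST = {("call",): {"NAME": 0.4, "WORD": 0.6}}
P_WORD_GIVEN_CLASS = {"NAME": {"alice": 0.01, "bob": 0.01},
                      "WORD": {"me": 0.05, "back": 0.04}}
WORD_CLASS = {"alice": "NAME", "bob": "NAME", "me": "WORD", "back": "WORD"}

def log_prob(word: str, history: tuple[str, ...]) -> float:
    cls = WORD_CLASS[word]
    p = P_CLASS_GIVEN_HIST[history][cls] * P_WORD_GIVEN_CLASS[cls][word]
    return math.log(p)

# "alice" and "bob" share one class score, so adding a new name only
# requires a lexicon entry, not retraining the language model.
print(log_prob("alice", ("call",)), log_prob("bob", ("call",)))
```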
arXiv Detail & Related papers (2023-09-14T12:14:49Z)
- Nested Named Entity Recognition from Medical Texts: An Adaptive Shared Network Architecture with Attentive CRF [53.55504611255664]
We propose a novel method, referred to as ASAC, to address the difficulties caused by nested entities.
The proposed method contains two key modules: the adaptive shared (AS) part and the attentive conditional random field (ACRF) module.
Our model could learn better entity representations by capturing the implicit distinctions and relationships between different categories of entities.
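ASAC's attention mechanism and adaptive sharing are paper-specific, but the CRF at its core is the standard linear-chain model. A compact sketch of the forward-algorithm log-partition and the sequence negative log-likelihood (PyTorch; not ASAC's ACRF itself):

```python
import torch

# Standard linear-chain CRF: log-partition via the forward algorithm,
# then NLL of a gold tag sequence. Batch handling is omitted for brevity.

def crf_log_partition(emissions: torch.Tensor,
                      transitions: torch.Tensor) -> torch.Tensor:
    # emissions: (seq_len, n_tags), transitions: (n_tags, n_tags)
    alpha = emissions[0]                       # log-scores of length-1 prefixes
    for t in range(1, emissions.size(0)):
        # alpha[j] = logsumexp_i(alpha[i] + trans[i, j]) + emit[t, j]
        alpha = torch.logsumexp(alpha.unsqueeze(1) + transitions, dim=0) \
                + emissions[t]
    return torch.logsumexp(alpha, dim=0)       # log Z over all tag paths

def crf_nll(emissions: torch.Tensor, transitions: torch.Tensor,
            tags: list[int]) -> torch.Tensor:
    # Score of the gold path minus the log-partition gives log P(tags).
    score = emissions[0, tags[0]]
    for t in range(1, len(tags)):
        score = score + transitions[tags[t - 1], tags[t]] + emissions[t, tags[t]]
    return crf_log_partition(emissions, transitions) - score
```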
arXiv Detail & Related papers (2022-11-09T09:23:56Z)
- Boosting Low-Resource Biomedical QA via Entity-Aware Masking Strategies [25.990479833023166]
Biomedical question-answering (QA) has gained increased attention for its capability to provide users with high-quality information from the vast scientific literature.
We propose a simple yet unexplored approach, which we call biomedical entity-aware masking (BEM).
We encourage masked language models to learn entity-centric knowledge based on the pivotal entities characterizing the domain at hand, and employ those entities to drive the LM fine-tuning.
Experimental results show performance on par with state-of-the-art models on several biomedical QA datasets.
arXiv Detail & Related papers (2021-02-16T18:51:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.