The Joint Entity-Relation Extraction Model Based on Span and Interactive Fusion Representation for Chinese Medical Texts with Complex Semantics
- URL: http://arxiv.org/abs/2502.09247v1
- Date: Thu, 13 Feb 2025 12:03:36 GMT
- Title: The Joint Entity-Relation Extraction Model Based on Span and Interactive Fusion Representation for Chinese Medical Texts with Complex Semantics
- Authors: Danni Feng, Runzhi Li, Jing Wang, Siyu Yan, Lihong Ma, Yunli Xing
- Abstract summary: Joint entity-relation extraction is a critical task in transforming unstructured or semi-structured text into triplets. We introduce CH-DDI, a Chinese drug-drug interactions dataset designed to capture the intricacies of medical text. We propose the SEA module, which enhances the extraction of complex contextual semantic information.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Joint entity-relation extraction is a critical task in transforming unstructured or semi-structured text into triplets, facilitating the construction of large-scale knowledge graphs, and supporting various downstream applications. Despite its importance, research on Chinese text, particularly with complex semantics in specialized domains like medicine, remains limited. To address this gap, we introduce the CH-DDI, a Chinese drug-drug interactions dataset designed to capture the intricacies of medical text. Leveraging the strengths of attention mechanisms in capturing long-range dependencies, we propose the SEA module, which enhances the extraction of complex contextual semantic information, thereby improving entity recognition and relation extraction. Additionally, to address the inefficiencies of existing methods in facilitating information exchange between entity recognition and relation extraction, we present an interactive fusion representation module. This module employs Cross Attention for bidirectional information exchange between the tasks and further refines feature extraction through BiLSTM. Experimental results on both our CH-DDI dataset and public CoNLL04 dataset demonstrate that our model exhibits strong generalization capabilities. On the CH-DDI dataset, our model achieves an F1-score of 96.73% for entity recognition and 78.43% for relation extraction. On the CoNLL04 dataset, it attains an entity recognition precision of 89.54% and a relation extraction accuracy of 71.64%.
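The interactive fusion idea in the abstract (each task attends to the other's features via cross-attention) can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the authors' implementation: the feature shapes, the residual addition, and the absence of learned projection matrices and the BiLSTM refinement step are simplifications for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Attend from one task's token features (queries) to the other's (keys/values)."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)  # (Lq, Lk) similarity
    weights = softmax(scores, axis=-1)             # rows sum to 1
    return weights @ keys_values                   # (Lq, d) fused features

# Toy token features for the two subtasks (seq_len=4, dim=8).
rng = np.random.default_rng(0)
ner_feats = rng.standard_normal((4, 8))  # entity-recognition branch
re_feats = rng.standard_normal((4, 8))   # relation-extraction branch

# Bidirectional exchange: each branch queries the other's representation,
# then keeps its own features via a residual connection.
ner_enriched = ner_feats + cross_attention(ner_feats, re_feats)
re_enriched = re_feats + cross_attention(re_feats, ner_feats)
print(ner_enriched.shape, re_enriched.shape)
```

In the paper's full model, these enriched representations would then pass through a BiLSTM for further refinement before span classification; here they are simply returned.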
Related papers
- Enhanced Multi-Tuple Extraction for Alloys: Integrating Pointer Networks and Augmented Attention [6.938202451113495]
We present a novel framework that combines an extraction model based on MatSciBERT with pointer networks and an allocation model.
Our experiments on extraction demonstrate impressive F1 scores of 0.947, 0.93 and 0.753 across datasets.
These results highlight the model's capacity to deliver precise and structured information.
arXiv Detail & Related papers (2025-03-10T02:39:06Z) - Learning to Extract Structured Entities Using Language Models [52.281701191329]
Recent advances in machine learning have significantly impacted the field of information extraction.
We reformulate the task to be entity-centric, enabling the use of diverse metrics.
We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP metric.
arXiv Detail & Related papers (2024-02-06T22:15:09Z) - Joint Extraction of Uyghur Medicine Knowledge with Edge Computing [1.4223082738595538]
CoEx-Bert is a joint extraction model with parameter sharing in edge computing.
It achieves accuracy, recall, and F1 scores of 90.65%, 92.45%, and 91.54%, respectively, in the Uyghur traditional medical dataset.
arXiv Detail & Related papers (2024-01-13T08:27:24Z) - Benchmarking Large Language Models in Biomedical Triple Extraction [13.022101126299269]
This work mainly focuses on sentence-level biomedical triple extraction.
The absence of a high-quality biomedical triple extraction dataset impedes the progress in developing robust triple extraction systems.
We present GIT, an expert-annotated biomedical triple extraction dataset.
arXiv Detail & Related papers (2023-10-27T20:15:23Z) - Exploring Attention Mechanisms in Integration of Multi-Modal Information for Sign Language Recognition and Translation [2.634214928675537]
We propose a plugin module based on cross-attention that properly attends to each modality with respect to the others.
We have evaluated the performance of our approaches on the RWTH-PHOENIX-2014 dataset for sign language recognition and the RWTH-PHOENIX-2014T dataset for the sign language translation task.
arXiv Detail & Related papers (2023-09-04T23:31:29Z) - CARE: Co-Attention Network for Joint Entity and Relation Extraction [0.0]
We propose a Co-Attention network for joint entity and relation extraction.
Our approach includes adopting a parallel encoding strategy to learn separate representations for each subtask.
At the core of our approach is the co-attention module that captures two-way interaction between the two subtasks.
arXiv Detail & Related papers (2023-08-24T03:40:54Z) - Multi-Grained Multimodal Interaction Network for Entity Linking [65.30260033700338]
Multimodal entity linking task aims at resolving ambiguous mentions to a multimodal knowledge graph.
We propose a novel Multi-GraIned Multimodal InteraCtion Network (MIMIC) framework for solving the MEL task.
arXiv Detail & Related papers (2023-07-19T02:11:19Z) - Integrating Heterogeneous Domain Information into Relation Extraction: A Case Study on Drug-Drug Interaction Extraction [1.0152838128195465]
This thesis works on Drug-Drug Interactions (DDIs) from the literature as a case study.
A deep neural relation extraction model is prepared and its attention mechanism is analyzed.
In order to further exploit the heterogeneous information, drug-related items, such as protein entries, medical terms and pathways are collected.
arXiv Detail & Related papers (2022-12-21T01:26:07Z) - SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z) - D-REX: Dialogue Relation Extraction with Explanations [65.3862263565638]
This work focuses on extracting explanations that indicate that a relation exists while using only partially labeled data.
We propose our model-agnostic framework, D-REX, a policy-guided semi-supervised algorithm that explains and ranks relations.
We find that about 90% of the time, human annotators prefer D-REX's explanations over a strong BERT-based joint relation extraction and explanation model.
arXiv Detail & Related papers (2021-09-10T22:30:48Z) - A Trigger-Sense Memory Flow Framework for Joint Entity and Relation Extraction [5.059120569845976]
We present a Trigger-Sense Memory Flow Framework (TriMF) for joint entity and relation extraction.
We build a memory module to remember category representations learned in entity recognition and relation extraction tasks.
We also design a multi-level memory flow attention mechanism to enhance the bi-directional interaction between entity recognition and relation extraction.
arXiv Detail & Related papers (2021-01-25T16:24:04Z) - Cross-Supervised Joint-Event-Extraction with Heterogeneous Information Networks [61.950353376870154]
Joint-event-extraction is a sequence-to-sequence labeling task with a tag set composed of tags of triggers and entities.
We propose a Cross-Supervised Mechanism (CSM) to alternately supervise the extraction of triggers or entities.
Our approach outperforms the state-of-the-art methods in both entity and trigger extraction.
arXiv Detail & Related papers (2020-10-13T11:51:17Z) - Leveraging Semantic Parsing for Relation Linking over Knowledge Bases [80.99588366232075]
We present SLING, a relation linking framework which leverages semantic parsing using AMR and distant supervision.
SLING integrates multiple relation linking approaches that capture complementary signals such as linguistic cues, rich semantic representation, and information from the knowledgebase.
Experiments on relation linking using three KBQA datasets (QALD-7, QALD-9, and LC-QuAD 1.0) demonstrate that the proposed approach achieves state-of-the-art performance on all benchmarks.
arXiv Detail & Related papers (2020-09-16T14:56:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.