A Trigger-Sense Memory Flow Framework for Joint Entity and Relation
Extraction
- URL: http://arxiv.org/abs/2101.10213v2
- Date: Wed, 17 Feb 2021 14:59:53 GMT
- Title: A Trigger-Sense Memory Flow Framework for Joint Entity and Relation
Extraction
- Authors: Yongliang Shen, Xinyin Ma, Yechun Tang, Weiming Lu
- Abstract summary: We present a Trigger-Sense Memory Flow Framework (TriMF) for joint entity and relation extraction.
We build a memory module to remember category representations learned in entity recognition and relation extraction tasks.
We also design a multi-level memory flow attention mechanism to enhance the bi-directional interaction between entity recognition and relation extraction.
- Score: 5.059120569845976
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A joint entity and relation extraction framework constructs a
unified model that performs entity recognition and relation extraction
simultaneously, exploiting the dependency between the two tasks to mitigate
the error propagation that pipeline models suffer from. Current efforts on
joint entity and relation extraction focus on enhancing the interaction
between entity recognition and relation extraction through parameter sharing,
joint decoding, or other ad-hoc tricks (e.g., modeling the task as a
semi-Markov decision process or casting it as a multi-round reading
comprehension task). However, two issues remain. First, the interaction
exploited by most methods is still weak and uni-directional, and therefore
cannot model the mutual dependency between the two tasks. Second, most
methods ignore relation triggers, the words or phrases that explain why a
human would extract a given relation from the sentence; these triggers are
essential cues for relation extraction. To this end, we present a
Trigger-Sense Memory Flow Framework (TriMF) for joint entity and relation
extraction. We build a memory module to remember category representations
learned in the entity recognition and relation extraction tasks, and on top
of it we design a multi-level memory flow attention mechanism to enhance the
bi-directional interaction between entity recognition and relation
extraction. Moreover, without any human annotation of triggers, the model can
amplify relation trigger information in a sentence through a trigger sensor
module, which improves performance and makes predictions more interpretable.
Experimental results show that the proposed framework achieves
state-of-the-art performance, improving relation F1 to 52.44% (+3.2%) on
SciERC, 66.49% (+4.9%) on ACE05, 72.35% (+0.6%) on CoNLL04 and 80.66% (+2.3%)
on ADE.
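To make the memory-flow idea concrete, here is a minimal sketch, in PyTorch, of how a category memory and a trigger sensor could re-weight token representations. The class names, tensor shapes, and the simplified scheme (learnable memory slots read via attention, a sigmoid trigger gate) are illustrative assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CategoryMemory(nn.Module):
    """One learnable slot per category (entity types or relation types)."""

    def __init__(self, num_categories: int, hidden_size: int):
        super().__init__()
        # Illustrative assumption: the memory is a plain learnable matrix;
        # TriMF itself writes these slots from task-specific representations.
        self.slots = nn.Parameter(torch.randn(num_categories, hidden_size))

    def read(self, token_states: torch.Tensor) -> torch.Tensor:
        """Attend from each token to the memory slots and add the read-out
        back to the token states (one 'memory flow' direction)."""
        scores = token_states @ self.slots.t()        # (batch, seq, categories)
        weights = F.softmax(scores, dim=-1)
        read_out = weights @ self.slots               # (batch, seq, hidden)
        return token_states + read_out


class TriggerSensor(nn.Module):
    """Scores each token as a potential relation trigger, without trigger
    labels, and uses the score to gate the token representation."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.scorer(token_states))   # (batch, seq, 1)
        return token_states * gate


if __name__ == "__main__":
    batch, seq_len, hidden = 2, 16, 64
    tokens = torch.randn(batch, seq_len, hidden)

    # Entity-category memory enhances the relation-extraction branch; a
    # symmetric relation-category memory would enhance entity recognition.
    entity_memory = CategoryMemory(num_categories=7, hidden_size=hidden)
    trigger_sensor = TriggerSensor(hidden)

    enhanced = trigger_sensor(entity_memory.read(tokens))
    print(enhanced.shape)  # torch.Size([2, 16, 64])
```

In the full framework the memories are written from category representations learned by each task and read at multiple encoder levels, so the enhancement flows in both directions; the sketch shows only a single read step.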
Related papers
- Joint Extraction of Uyghur Medicine Knowledge with Edge Computing [1.4223082738595538]
CoEx-Bert is a joint extraction model with parameter sharing, designed for edge computing.
It achieves accuracy, recall, and F1 scores of 90.65%, 92.45%, and 91.54%, respectively, on the Uyghur traditional medicine dataset.
arXiv Detail & Related papers (2024-01-13T08:27:24Z) - CARE: Co-Attention Network for Joint Entity and Relation Extraction [0.0]
We propose a Co-Attention network for joint entity and relation extraction.
Our approach includes adopting a parallel encoding strategy to learn separate representations for each subtask.
At the core of our approach is the co-attention module that captures two-way interaction between the two subtasks.
arXiv Detail & Related papers (2023-08-24T03:40:54Z) - Mutually Guided Few-shot Learning for Relational Triple Extraction [10.539566491939844]
We propose a Mutually Guided Few-shot learning framework for Triple Extraction (MG-FTE).
Our method consists of an entity-guided relation-decoder to classify relations and a proto-decoder to extract entities.
Our method outperforms many state-of-the-art methods by 12.6 F1 points on FewRel 1.0 (single-domain) and 20.5 F1 points on FewRel 2.0 (cross-domain).
arXiv Detail & Related papers (2023-06-23T06:15:54Z) - HIORE: Leveraging High-order Interactions for Unified Entity Relation
Extraction [85.80317530027212]
We propose HIORE, a new method for unified entity relation extraction.
The key insight is to leverage the complex association among word pairs, which contains richer information than the first-order word-by-word interactions.
Experiments show that HIORE achieves state-of-the-art performance on relation extraction, with an improvement of 1.1-1.8 F1 points over the prior best unified model.
arXiv Detail & Related papers (2023-05-07T14:57:42Z) - UniRel: Unified Representation and Interaction for Joint Relational
Triple Extraction [29.15806644012706]
We propose UniRel to address the challenges of capturing rich correlations between entities and relations.
Specifically, we unify the representations of entities and relations by jointly encoding them within a concatenated natural language sequence.
With comprehensive experiments on two popular triple extraction datasets, we demonstrate that UniRel is more effective and computationally efficient.
arXiv Detail & Related papers (2022-11-16T16:53:13Z) - SAIS: Supervising and Augmenting Intermediate Steps for Document-Level
Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z) - D-REX: Dialogue Relation Extraction with Explanations [65.3862263565638]
This work focuses on extracting explanations that indicate that a relation exists while using only partially labeled data.
We propose our model-agnostic framework, D-REX, a policy-guided semi-supervised algorithm that explains and ranks relations.
We find that about 90% of the time, human annotators prefer D-REX's explanations over a strong BERT-based joint relation extraction and explanation model.
arXiv Detail & Related papers (2021-09-10T22:30:48Z) - A Frustratingly Easy Approach for Entity and Relation Extraction [25.797992240847833]
We present a simple pipelined approach for entity and relation extraction.
We establish a new state of the art on standard benchmarks (ACE04, ACE05 and SciERC).
Our approach essentially builds on two independent encoders and merely uses the entity model to construct the input for the relation model (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-10-24T07:14:01Z) - Cross-Supervised Joint-Event-Extraction with Heterogeneous Information
Networks [61.950353376870154]
Joint-event-extraction is a sequence-to-sequence labeling task with a tag set composed of tags of triggers and entities.
We propose a Cross-Supervised Mechanism (CSM) to alternately supervise the extraction of triggers or entities.
Our approach outperforms the state-of-the-art methods in both entity and trigger extraction.
arXiv Detail & Related papers (2020-10-13T11:51:17Z) - A Co-Interactive Transformer for Joint Slot Filling and Intent Detection [61.109486326954205]
Intent detection and slot filling are two main tasks for building a spoken language understanding (SLU) system.
Previous studies either model the two tasks separately or only consider the single information flow from intent to slot.
We propose a Co-Interactive Transformer to consider the cross-impact between the two tasks simultaneously.
arXiv Detail & Related papers (2020-10-08T10:16:52Z) - Relation of the Relations: A New Paradigm of the Relation Extraction
Problem [52.21210549224131]
We propose a new paradigm of Relation Extraction (RE) that considers as a whole the predictions of all relations in the same context.
We develop a data-driven approach that does not require hand-crafted rules but learns by itself the relation of relations (RoR) using Graph Neural Networks and a relation matrix transformer.
Experiments show that our model outperforms the state-of-the-art approaches by +1.12% on the ACE05 dataset and +2.55% on SemEval 2018 Task 7.2.
arXiv Detail & Related papers (2020-06-05T22:25:27Z)
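As referenced in the entry on the pipelined approach above, the sketch below shows one way an entity model's predictions can be used to construct the relation model's input, by wrapping a candidate entity pair in typed markers. The marker format and the type names in the example are illustrative assumptions, not taken from that paper's released code.

```python
from typing import List, Tuple

# (start_token, end_token_inclusive, predicted_type) from the entity model
EntitySpan = Tuple[int, int, str]


def build_relation_input(tokens: List[str], subject: EntitySpan, obj: EntitySpan) -> List[str]:
    """Insert typed markers around a candidate subject/object pair so the
    relation encoder sees the entity model's predictions directly."""
    s_start, s_end, s_type = subject
    o_start, o_end, o_type = obj
    out: List[str] = []
    for i, tok in enumerate(tokens):
        if i == s_start:
            out.append(f"<S:{s_type}>")
        if i == o_start:
            out.append(f"<O:{o_type}>")
        out.append(tok)
        if i == s_end:
            out.append(f"</S:{s_type}>")
        if i == o_end:
            out.append(f"</O:{o_type}>")
    return out


if __name__ == "__main__":
    sentence = "BERT improves relation extraction on SciERC".split()
    subject = (0, 0, "Method")   # "BERT", as predicted by the entity model
    obj = (2, 3, "Task")         # "relation extraction"
    print(" ".join(build_relation_input(sentence, subject, obj)))
    # <S:Method> BERT </S:Method> improves <O:Task> relation extraction </O:Task> on SciERC
```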