Recurrent Interaction Network for Jointly Extracting Entities and
Classifying Relations
- URL: http://arxiv.org/abs/2005.00162v2
- Date: Thu, 17 Sep 2020 02:54:49 GMT
- Authors: Kai Sun, Richong Zhang, Samuel Mensah, Yongyi Mao, Xudong Liu
- Abstract summary: We design a multi-task learning model that allows interactions between tasks to be learned dynamically.
Empirical studies on two real-world datasets confirm the superiority of the proposed model.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The idea of using multi-task learning approaches to address the joint
extraction of entities and relations is motivated by the relatedness between the
entity recognition task and the relation classification task. Existing methods
that apply multi-task learning to this problem learn interactions between the
two tasks through a shared network, whose shared information is passed into the
task-specific networks for prediction. However, such an approach prevents the
model from learning explicit interactions between the two tasks that could
improve performance on each individual task. As a solution, we design a
multi-task learning model, which we refer to as a recurrent interaction
network, that learns interactions dynamically to effectively model
task-specific features for classification. Empirical studies on two real-world
datasets confirm the superiority of the proposed model.
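The contrast the abstract draws is between passing shared features one way into task-specific networks and letting the two task representations update each other repeatedly. The following is a minimal NumPy sketch of that second idea only, not the paper's actual architecture: the helper functions, dimensions, number of rounds, and the concatenation-based update rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(in_dim, out_dim):
    # Hypothetical helper: a fixed random linear layer (weights, bias).
    return rng.normal(scale=0.1, size=(in_dim, out_dim)), np.zeros(out_dim)

def apply(layer, x):
    W, b = layer
    return np.tanh(x @ W + b)

d = 16                                  # illustrative feature dimension
shared = rng.normal(size=(5, d))        # shared features for 5 tokens

ner_layer = dense(2 * d, d)  # NER update sees its own state plus the RE state
re_layer = dense(2 * d, d)   # RE update sees its own state plus the NER state

h_ner, h_re = shared.copy(), shared.copy()
for _ in range(3):  # a few rounds of recurrent interaction between the tasks
    h_ner_new = apply(ner_layer, np.concatenate([h_ner, h_re], axis=-1))
    h_re_new = apply(re_layer, np.concatenate([h_re, h_ner], axis=-1))
    h_ner, h_re = h_ner_new, h_re_new   # both states refined jointly

print(h_ner.shape, h_re.shape)  # each task keeps its own refined features
```

The point of the sketch is that each task's representation is conditioned on the other's at every round, rather than both merely reading from one shared encoder.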
Related papers
- A Decoupling and Aggregating Framework for Joint Extraction of Entities and Relations [7.911978021993282]
We propose a novel model to jointly extract entities and relations.
We propose to decouple the feature encoding process into three parts, namely encoding subjects, encoding objects, and encoding relations.
Our model outperforms several previous state-of-the-art models.
arXiv Detail & Related papers (2024-05-14T04:27:16Z)
- CARE: Co-Attention Network for Joint Entity and Relation Extraction [0.0]
We propose a Co-Attention network for joint entity and relation extraction.
Our approach includes adopting a parallel encoding strategy to learn separate representations for each subtask.
At the core of our approach is the co-attention module that captures two-way interaction between the two subtasks.
arXiv Detail & Related papers (2023-08-24T03:40:54Z)
- Similarity-based Memory Enhanced Joint Entity and Relation Extraction [3.9659135716762894]
Document-level joint entity and relation extraction is a challenging information extraction problem.
We present a multi-task learning framework with bidirectional memory-like dependency between tasks.
Our empirical studies show that the proposed approach outperforms the existing methods.
arXiv Detail & Related papers (2023-07-14T12:26:56Z)
- Saliency-Regularized Deep Multi-Task Learning [7.3810864598379755]
Multitask learning enforces multiple learning tasks to share knowledge to improve their generalization abilities.
Modern deep multitask learning can jointly learn latent features and task sharing, but the learned task relations remain obscure.
This paper proposes a new multitask learning framework that jointly learns latent features and explicit task relations.
arXiv Detail & Related papers (2022-07-03T20:26:44Z)
- Towards Effective Multi-Task Interaction for Entity-Relation Extraction: A Unified Framework with Selection Recurrent Network [4.477310325275069]
Entity-relation extraction aims to jointly solve named entity recognition (NER) and relation extraction (RE).
Recent approaches use either one-way sequential information propagation in a pipeline manner or two-way implicit interaction with a shared encoder.
We propose a novel and unified cascade framework that combines the advantages of both sequential information propagation and implicit interaction.
arXiv Detail & Related papers (2022-02-15T09:54:33Z)
- On the relationship between disentanglement and multi-task learning [62.997667081978825]
We take a closer look at the relationship between disentanglement and multi-task learning based on hard parameter sharing.
We show that disentanglement appears naturally during the process of multi-task neural network training.
arXiv Detail & Related papers (2021-10-07T14:35:34Z)
- DCR-Net: A Deep Co-Interactive Relation Network for Joint Dialog Act Recognition and Sentiment Classification [77.59549450705384]
In dialog systems, dialog act recognition and sentiment classification are two correlated tasks.
Most existing systems either treat them as separate tasks or model the two tasks jointly without explicit interaction.
We propose a Deep Co-Interactive Relation Network (DCR-Net) to explicitly consider the cross-impact and model the interaction between the two tasks.
arXiv Detail & Related papers (2020-08-16T14:13:32Z)
- Reparameterizing Convolutions for Incremental Multi-Task Learning without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which have been shown to significantly degrade single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z)
- Adversarial Continual Learning [99.56738010842301]
We propose a hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features.
Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills.
arXiv Detail & Related papers (2020-03-21T02:08:17Z)
- Cascaded Human-Object Interaction Recognition [175.60439054047043]
We introduce a cascade architecture for a multi-stage, coarse-to-fine HOI understanding.
At each stage, an instance localization network progressively refines HOI proposals and feeds them into an interaction recognition network.
With our carefully-designed human-centric relation features, these two modules work collaboratively towards effective interaction understanding.
arXiv Detail & Related papers (2020-03-09T17:05:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.