TPLinker: Single-stage Joint Extraction of Entities and Relations
Through Token Pair Linking
- URL: http://arxiv.org/abs/2010.13415v1
- Date: Mon, 26 Oct 2020 08:35:06 GMT
- Title: TPLinker: Single-stage Joint Extraction of Entities and Relations
Through Token Pair Linking
- Authors: Yucheng Wang, Bowen Yu, Yueyang Zhang, Tingwen Liu, Hongsong Zhu and
Limin Sun
- Abstract summary: We propose a one-stage joint extraction model, TPLinker, which is capable of discovering overlapping relations sharing one or both entities.
Experimental results show that TPLinker performs significantly better on overlapping and multiple relation extraction, and achieves state-of-the-art performance on two public datasets.
- Score: 20.526728682326358
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extracting entities and relations from unstructured text has attracted
increasing attention in recent years but remains challenging, due to the
intrinsic difficulty in identifying overlapping relations with shared entities.
Prior works show that joint learning can result in a noticeable performance
gain. However, they usually involve sequential interrelated steps and suffer
from the problem of exposure bias: at training time, they predict with
ground-truth conditions, while at inference they have to extract from
scratch. This discrepancy leads to error accumulation. To mitigate this issue,
we propose in this paper a one-stage joint extraction model, namely, TPLinker,
which is capable of discovering overlapping relations sharing one or both
entities while remaining immune to exposure bias. TPLinker formulates joint
extraction as a token pair linking problem and introduces a novel handshaking
tagging scheme that aligns the boundary tokens of entity pairs under each
relation type. Experimental results show that TPLinker performs significantly
better on overlapping and multiple relation extraction, and achieves
state-of-the-art performance on two public datasets.
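To make the token-pair formulation concrete, here is a minimal Python sketch of how a handshaking-style tagging could encode and decode one triple. It assumes three kinds of boundary-token links (entity head to entity tail, plus per-relation subject-head to object-head and subject-tail to object-tail); the example sentence, link names, and decoding routine are illustrative assumptions, not the paper's reference implementation.

```python
# Illustrative sketch of token-pair (handshaking-style) tagging for one
# sentence. Names, spans, and the decoding logic are assumptions made for
# illustration, not TPLinker's official code.

tokens = ["New", "York", "City", "is", "in", "New", "York", "State"]

# Gold triple: ("New York City", located_in, "New York State"),
# with token spans given as (start, end) inclusive:
#   subject span = (0, 2), object span = (5, 7)

# 1) Relation-independent links from each entity's head token to its tail token.
eh_to_et = {(0, 2), (5, 7)}

# 2) Per-relation links between boundary tokens of subject and object:
#    (relation, subject head, object head) and (relation, subject tail, object tail).
sh_to_oh = {("located_in", 0, 5)}
st_to_ot = {("located_in", 2, 7)}


def decode(eh_to_et, sh_to_oh, st_to_ot):
    """Recover (subject span, relation, object span) triples from the link sets."""
    triples = []
    for rel, s_head, o_head in sh_to_oh:
        subjects = [(h, t) for (h, t) in eh_to_et if h == s_head]
        objects = [(h, t) for (h, t) in eh_to_et if h == o_head]
        for s in subjects:
            for o in objects:
                # Emit a triple only if the tails are also linked under the
                # same relation, so spurious head matches are dropped.
                if (rel, s[1], o[1]) in st_to_ot:
                    triples.append((s, rel, o))
    return triples


print(decode(eh_to_et, sh_to_oh, st_to_ot))
# [((0, 2), 'located_in', (5, 7))]
```

Because subjects and objects are matched only through boundary-token links, triples that share an entity can reuse the same entity-head-to-tail entry, which is one way overlapping relations can be recovered in a single tagging pass rather than through sequential, interdependent steps.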
Related papers
- Entity or Relation Embeddings? An Analysis of Encoding Strategies for Relation Extraction [19.019881161010474]
Relation extraction is essentially a text classification problem, which can be tackled by fine-tuning a pre-trained language model (LM).
Existing approaches therefore solve the problem in an indirect way: they fine-tune an LM to learn embeddings of the head and tail entities, and then predict the relationship from these entity embeddings.
Our hypothesis in this paper is that relation extraction models can be improved by capturing relationships in a more direct way.
arXiv Detail & Related papers (2023-12-18T09:58:19Z)
- Learning Complete Topology-Aware Correlations Between Relations for Inductive Link Prediction [121.65152276851619]
We show that semantic correlations between relations are inherently edge-level and entity-independent.
We propose a novel subgraph-based method, namely TACO, to model Topology-Aware COrrelations between relations.
To further exploit the potential of RCN, we propose a Complete Common Neighbor induced subgraph.
arXiv Detail & Related papers (2023-09-20T08:11:58Z)
- Open Set Relation Extraction via Unknown-Aware Training [72.10462476890784]
We propose an unknown-aware training method, regularizing the model by dynamically synthesizing negative instances.
Inspired by text adversarial attacks, we adaptively apply small but critical perturbations to original training instances.
Experimental results show that this method achieves SOTA unknown relation detection without compromising the classification of known relations.
arXiv Detail & Related papers (2023-06-08T05:45:25Z)
- Document-level Relation Extraction with Relation Correlations [15.997345900917058]
Document-level relation extraction faces two overlooked challenges: the long-tail problem and the multi-label problem.
We analyze the co-occurrence correlation of relations and introduce it into the DocRE task for the first time.
arXiv Detail & Related papers (2022-12-20T11:17:52Z)
- Extracting all Aspect-polarity Pairs Jointly in a Text with Relation Extraction Approach [6.844982778392037]
We propose to generate aspect-polarity pairs directly from a text with relation extraction technology.
We present a position- and aspect-aware sequence2sequence model for joint extraction of aspect-polarity pairs.
arXiv Detail & Related papers (2021-09-01T09:00:39Z)
- Link Prediction on N-ary Relational Data Based on Relatedness Evaluation [61.61555159755858]
We propose a method called NaLP to conduct link prediction on n-ary relational data.
We represent each n-ary relational fact as a set of its role and role-value pairs.
Experimental results validate the effectiveness and merits of the proposed methods.
arXiv Detail & Related papers (2021-04-21T09:06:54Z)
- Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot Relational Triple Extraction [40.00702385889112]
We propose a novel multi-prototype embedding network model to jointly extract the composition of relational triples.
We design a hybrid learning mechanism that bridges text and knowledge concerning both entities and relations.
Experimental results demonstrate that the proposed method can improve the performance of few-shot triple extraction.
arXiv Detail & Related papers (2020-10-30T04:18:39Z)
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an EGA mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
- Joint Constrained Learning for Event-Event Relation Extraction [94.3499255880101]
We propose a joint constrained learning framework for modeling event-event relations.
Specifically, the framework enforces logical constraints within and across multiple temporal and subevent relations.
We show that our joint constrained learning approach effectively compensates for the lack of jointly labeled data.
arXiv Detail & Related papers (2020-10-13T22:45:28Z)
- Cross-Supervised Joint-Event-Extraction with Heterogeneous Information Networks [61.950353376870154]
Joint-event-extraction is a sequence-to-sequence labeling task with a tag set composed of tags of triggers and entities.
We propose a Cross-Supervised Mechanism (CSM) to alternately supervise the extraction of triggers or entities.
Our approach outperforms the state-of-the-art methods in both entity and trigger extraction.
arXiv Detail & Related papers (2020-10-13T11:51:17Z)