Sentence-Level Relation Extraction via Contrastive Learning with
Descriptive Relation Prompts
- URL: http://arxiv.org/abs/2304.04935v1
- Date: Tue, 11 Apr 2023 02:15:13 GMT
- Title: Sentence-Level Relation Extraction via Contrastive Learning with
Descriptive Relation Prompts
- Authors: Jiewen Zheng, Ze Chen
- Abstract summary: We propose a new paradigm, Contrastive Learning with Descriptive Relation Prompts (CTL-DRP), to jointly consider entity information, relational knowledge and entity type restrictions.
CTL-DRP obtains a competitive F1-score of 76.7% on TACRED.
The newly presented paradigm achieves F1-scores of 85.8% and 91.6% on TACREV and Re-TACRED respectively, both of which are state-of-the-art results.
- Score: 1.5736899098702974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sentence-level relation extraction aims to identify the relation between two
entities for a given sentence. The existing works mostly focus on obtaining a
better entity representation and adopting a multi-label classifier for relation
extraction. A major limitation of these works is that they ignore background
relational knowledge and the interrelation between entity types and candidate
relations. In this work, we propose a new paradigm, Contrastive Learning with
Descriptive Relation Prompts (CTL-DRP), to jointly consider entity information,
relational knowledge and entity type restrictions. In particular, we introduce
an improved entity marker and descriptive relation prompts when generating
contextual embedding, and utilize contrastive learning to rank the restricted
candidate relations. The CTL-DRP obtains a competitive F1-score of 76.7% on
TACRED. Furthermore, the newly presented paradigm achieves F1-scores of 85.8% and
91.6% on TACREV and Re-TACRED respectively, both of which are state-of-the-art
results.
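To make the paradigm concrete, here is a minimal sketch (not the authors' released code) of the two ingredients the abstract names: entity-marked input paired with a descriptive relation prompt, and a contrastive margin loss that ranks the restricted candidate relations. The marker tokens, prompt templates, and margin value are all assumptions.

```python
import torch
import torch.nn.functional as F

# Hypothetical descriptive prompts for a few TACRED-style relations,
# e.g. RELATION_PROMPTS["org:founded_by"].format(h="Apple", t="Steve Jobs").
RELATION_PROMPTS = {
    "per:employee_of": "The person {h} works for the organization {t}.",
    "org:founded_by": "The organization {h} was founded by the person {t}.",
    "no_relation": "There is no relation between {h} and {t}.",
}

def mark_entities(tokens, h_span, t_span):
    """Insert entity markers around the head and tail spans.

    The marker tokens ([H]/[/H], [T]/[/T]) are illustrative; the paper's
    improved marker design may differ."""
    out = []
    for i, tok in enumerate(tokens):
        if i == h_span[0]: out.append("[H]")
        if i == t_span[0]: out.append("[T]")
        out.append(tok)
        if i == h_span[1]: out.append("[/H]")
        if i == t_span[1]: out.append("[/T]")
    return out

def contrastive_rank_loss(sent_emb, prompt_embs, gold_idx, margin=0.2):
    """Rank the gold relation prompt above the other (type-restricted)
    candidates by a hinge margin.

    sent_emb: (d,) sentence embedding; prompt_embs: (k, d) candidate prompts."""
    sims = F.cosine_similarity(sent_emb.unsqueeze(0), prompt_embs)  # (k,)
    gold = sims[gold_idx]
    neg = torch.cat([sims[:gold_idx], sims[gold_idx + 1:]])
    # The gold similarity should beat every negative by `margin`.
    return F.relu(margin - gold + neg).mean()
```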
Related papers
- RGAT: A Deeper Look into Syntactic Dependency Information for
Coreference Resolution [8.017036537163008]
We propose an end-to-end coreference resolution approach that combines pre-trained BERT with a Syntactic Relation Graph Attention Network (RGAT).
In particular, the RGAT model is first proposed, then used to understand the syntactic dependency graph and learn better task-specific syntactic embeddings.
An integrated architecture incorporating BERT embeddings and syntactic embeddings is constructed to generate blending representations for the downstream task.
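A minimal sketch of the graph-attention idea, assuming a standard single-head GAT whose attention is restricted by a dependency adjacency matrix; the actual RGAT layer is more elaborate.

```python
import torch
import torch.nn as nn

class DependencyGAT(nn.Module):
    """Single-head graph attention over a syntactic dependency graph.

    Attention is only allowed between token pairs connected by a
    dependency edge, yielding task-specific syntactic embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, x, adj):
        # x: (n, dim) token embeddings; adj: (n, n) 0/1 dependency adjacency.
        n = x.size(0)
        h = self.proj(x)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = self.attn(pairs).squeeze(-1)             # (n, n)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)
        weights = torch.nan_to_num(weights)               # rows with no edges
        return weights @ h                                # syntactic embeddings
```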
arXiv Detail & Related papers (2023-09-10T09:46:38Z)
- HIORE: Leveraging High-order Interactions for Unified Entity Relation Extraction [85.80317530027212]
We propose HIORE, a new method for unified entity relation extraction.
The key insight is to leverage the complex association among word pairs, which contains richer information than the first-order word-by-word interactions.
Experiments show that HIORE achieves state-of-the-art performance on relation extraction, with an improvement of 1.1-1.8 F1 points over the prior best unified model.
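The high-order intuition can be illustrated roughly as follows: a biaffine layer builds a first-order word-pair table, and a convolution over that table lets neighbouring pairs interact. This is a stand-in, not HIORE's actual architecture.

```python
import torch
import torch.nn as nn

class PairTableScorer(nn.Module):
    """Score word pairs beyond first-order word-by-word interactions.

    A biaffine map builds a word-pair table; a convolution over the table
    lets nearby pairs exchange information (a proxy for high-order
    association among word pairs)."""
    def __init__(self, dim, n_labels):
        super().__init__()
        self.biaffine = nn.Bilinear(dim, dim, n_labels)
        self.refine = nn.Conv2d(n_labels, n_labels, kernel_size=3, padding=1)

    def forward(self, x):
        # x: (n, dim) token embeddings.
        n = x.size(0)
        left = x.unsqueeze(1).expand(n, n, -1).reshape(n * n, -1)
        right = x.unsqueeze(0).expand(n, n, -1).reshape(n * n, -1)
        table = self.biaffine(left, right).view(n, n, -1)  # (n, n, labels)
        table = table.permute(2, 0, 1).unsqueeze(0)        # (1, labels, n, n)
        return self.refine(table).squeeze(0).permute(1, 2, 0)
```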
arXiv Detail & Related papers (2023-05-07T14:57:42Z)
- FECANet: Boosting Few-Shot Semantic Segmentation with Feature-Enhanced Context-Aware Network [48.912196729711624]
Few-shot semantic segmentation is the task of learning to locate each pixel of a novel class in a query image with only a few annotated support images.
We propose a Feature-Enhanced Context-Aware Network (FECANet) to suppress the matching noise caused by inter-class local similarity.
In addition, we propose a novel correlation reconstruction module that encodes extra correspondence relations between foreground and background and multi-scale context semantic features.
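A toy version of the underlying correlation computation (the quantity that FECANet enhances and reconstructs), assuming simple cosine similarity between flattened feature maps:

```python
import torch.nn.functional as F

def dense_correlation(query_feat, support_feat):
    """Cosine correlation between every query and support location.

    query_feat, support_feat: (c, h, w) feature maps; returns (h*w, h*w).
    FECANet builds on such a correlation map, suppressing matching noise
    and encoding foreground/background and multi-scale context on top."""
    c = query_feat.size(0)
    q = F.normalize(query_feat.view(c, -1), dim=0)   # (c, hw), unit columns
    s = F.normalize(support_feat.view(c, -1), dim=0)
    return q.t() @ s                                 # pairwise cosine map
```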
arXiv Detail & Related papers (2023-01-19T16:31:13Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
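In spirit, the training objective augments the relation loss with supervision on intermediate steps; a minimal two-task sketch follows, where the task set and weighting are assumptions:

```python
import torch.nn.functional as F

def sais_style_loss(rel_logits, rel_gold, evid_logits, evid_gold, alpha=0.5):
    """Joint objective in the spirit of supervising intermediate steps.

    The relation loss is augmented with supervision on supporting evidence
    (here a per-sentence binary evidence label). `alpha` and the exact set
    of intermediate tasks are illustrative choices, not the paper's."""
    rel_loss = F.cross_entropy(rel_logits, rel_gold)
    evid_loss = F.binary_cross_entropy_with_logits(evid_logits, evid_gold)
    return rel_loss + alpha * evid_loss
```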
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- D-REX: Dialogue Relation Extraction with Explanations [65.3862263565638]
This work focuses on extracting explanations that indicate that a relation exists while using only partially labeled data.
We propose our model-agnostic framework, D-REX, a policy-guided semi-supervised algorithm that explains and ranks relations.
We find that about 90% of the time, human annotators prefer D-REX's explanations over a strong BERT-based joint relation extraction and explanation model.
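As a rough intuition for ranking explanations, one can score tokens by how much masking them changes the relation score; note that D-REX itself uses a policy-guided semi-supervised ranker, not this leave-one-out probe:

```python
import torch

def rank_explanation_tokens(score_fn, token_embs, relation_idx):
    """Rank tokens by how much removing each one lowers the relation score.

    score_fn maps (n, d) token embeddings to per-relation scores. This is
    only an intuition sketch for 'explaining and ranking relations'."""
    base = score_fn(token_embs)[relation_idx]
    drops = []
    for i in range(token_embs.size(0)):
        masked = token_embs.clone()
        masked[i] = 0.0                     # zero out one token
        drops.append(base - score_fn(masked)[relation_idx])
    return torch.stack(drops).argsort(descending=True)  # most influential first
```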
arXiv Detail & Related papers (2021-09-10T22:30:48Z)
- Distantly Supervised Relation Extraction via Recursive Hierarchy-Interactive Attention and Entity-Order Perception [3.8651116146455533]
In a sentence, the appearance order of two entities contributes to the understanding of its semantics.
We introduce a new training objective, called Entity-Order Perception (EOP), to make the sentence encoder retain more entity appearance information.
Our approach achieves state-of-the-art performance in terms of precision-recall (P-R) curves, AUC, Top-N precision and other evaluation metrics.
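A minimal sketch of an EOP-style auxiliary head, assuming a binary "head entity comes first" label; the paper's actual objective formulation may differ.

```python
import torch.nn as nn
import torch.nn.functional as F

class EntityOrderHead(nn.Module):
    """Auxiliary head: predict from the sentence encoding whether the head
    entity precedes the tail, encouraging the encoder to retain entity
    appearance-order information."""
    def __init__(self, dim):
        super().__init__()
        self.classifier = nn.Linear(dim, 2)

    def forward(self, sent_emb, head_first):
        # sent_emb: (batch, dim); head_first: (batch,) long, 1 if head precedes tail.
        logits = self.classifier(sent_emb)
        return F.cross_entropy(logits, head_first)
```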
arXiv Detail & Related papers (2021-05-18T00:45:25Z)
- ZS-BERT: Towards Zero-Shot Relation Extraction with Attribute Representation Learning [10.609715843964263]
We formulate the zero-shot relation extraction problem by incorporating the text description of seen and unseen relations.
We propose a novel multi-task learning model, ZS-BERT, to directly predict unseen relations without hand-crafted labeling and multiple pairwise attribute classifications.
Experiments conducted on two well-known datasets show that ZS-BERT can outperform existing methods by at least 13.54% in F1 score.
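The zero-shot step reduces to nearest-neighbour matching between a sentence embedding and embeddings of the unseen relations' text descriptions; a sketch assuming cosine similarity:

```python
import torch.nn.functional as F

def zero_shot_predict(sent_emb, unseen_desc_embs):
    """Predict an unseen relation by matching the sentence embedding
    against embeddings of relation text descriptions.

    sent_emb: (d,); unseen_desc_embs: (k, d). The cosine metric is an
    assumption; ZS-BERT learns the shared space with a multi-task loss."""
    sims = F.cosine_similarity(sent_emb.unsqueeze(0), unseen_desc_embs)
    return int(sims.argmax())  # index of the predicted unseen relation
```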
arXiv Detail & Related papers (2021-04-10T06:53:41Z)
- Learning Relation Prototype from Unlabeled Texts for Long-tail Relation Extraction [84.64435075778988]
We propose a general approach to learn relation prototypes from unlabeled texts.
We learn relation prototypes as an implicit factor between entities.
We conduct experiments on two publicly available datasets: New York Times and Google Distant Supervision.
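One hedged reading of "an implicit factor between entities" is a translation-style prototype, sketched below; the paper's actual scoring function may differ.

```python
import torch

def prototype_score(head_emb, tail_emb, prototypes):
    """Score each relation prototype as the implicit factor carrying the
    head entity embedding to the tail (a translation-style reading;
    illustrative only).

    head_emb, tail_emb: (d,); prototypes: (k, d)."""
    offset = tail_emb - head_emb                     # (d,)
    dists = torch.norm(prototypes - offset, dim=-1)  # (k,) one per prototype
    return -dists                                    # higher = better match
```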
arXiv Detail & Related papers (2020-11-27T06:21:12Z)
- A Frustratingly Easy Approach for Entity and Relation Extraction [25.797992240847833]
We present a simple pipelined approach for entity and relation extraction.
We establish a new state-of-the-art on standard benchmarks (ACE04, ACE05 and SciERC).
Our approach essentially builds on two independent encoders and merely uses the entity model to construct the input for the relation model.
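The pipeline's key data flow is easy to sketch: the entity model's predicted typed spans are spliced into the text as markers, and the relation model reads only this marked text. The marker strings here are illustrative.

```python
def build_relation_input(tokens, ent_spans):
    """Construct the relation model's input from the entity model's output.

    ent_spans: list of (start, end, type) predicted spans (inclusive
    token indices). Typed markers are inserted around each span."""
    out = list(tokens)
    # Insert from the right so earlier offsets stay valid.
    for (start, end, etype) in sorted(ent_spans, key=lambda s: -s[0]):
        out.insert(end + 1, f"[/{etype}]")
        out.insert(start, f"[{etype}]")
    return out

# e.g. build_relation_input(["Marie", "joined", "CERN"],
#                           [(0, 0, "PER"), (2, 2, "ORG")])
# -> ['[PER]', 'Marie', '[/PER]', 'joined', '[ORG]', 'CERN', '[/ORG]']
```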
arXiv Detail & Related papers (2020-10-24T07:14:01Z)
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an EGA mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
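A minimal sketch of an entity-guided attention gate in the spirit of EGA, assuming a simple average of the two entity embeddings as the guide vector:

```python
import torch

def entity_guided_attention(token_embs, head_emb, tail_emb):
    """Weight tokens by their affinity to the two entities, so context that
    causes confusion between relations is filtered out before classification.

    token_embs: (n, d); head_emb, tail_emb: (d,). The averaging-based gate
    is an assumption, not CTEG's exact mechanism."""
    guide = (head_emb + tail_emb) / 2                      # (d,)
    scores = token_embs @ guide                            # (n,) affinities
    weights = torch.softmax(scores, dim=0)
    return (weights.unsqueeze(-1) * token_embs).sum(dim=0) # pooled sentence vector
```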
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.