RAPL: A Relation-Aware Prototype Learning Approach for Few-Shot
Document-Level Relation Extraction
- URL: http://arxiv.org/abs/2310.15743v1
- Date: Tue, 24 Oct 2023 11:35:23 GMT
- Title: RAPL: A Relation-Aware Prototype Learning Approach for Few-Shot
Document-Level Relation Extraction
- Authors: Shiao Meng, Xuming Hu, Aiwei Liu, Shu'ang Li, Fukun Ma, Yawen Yang,
Lijie Wen
- Abstract summary: We propose a relation-aware prototype learning method for FSDLRE.
Our method effectively refines the relation prototypes and generates task-specific NOTA prototypes.
- Score: 35.246592734300414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How to identify semantic relations among entities in a document when only a
few labeled documents are available? Few-shot document-level relation
extraction (FSDLRE) is crucial for addressing the pervasive data scarcity
problem in real-world scenarios. Metric-based meta-learning is an effective
framework widely adopted for FSDLRE, which constructs class prototypes for
classification. However, existing works often struggle to obtain class
prototypes with accurate relational semantics: 1) To build a prototype for a
target relation type, they aggregate the representations of all entity pairs
holding that relation, even though these entity pairs may also hold other
relations, which disturbs the prototype. 2) They use a set of generic NOTA
(none-of-the-above) prototypes across all tasks, neglecting that the NOTA
semantics differ across tasks with different target relation types. In this paper,
we propose a relation-aware prototype learning method for FSDLRE to strengthen
the relational semantics of prototype representations. By judiciously
leveraging the relation descriptions and realistic NOTA instances as guidance,
our method effectively refines the relation prototypes and generates
task-specific NOTA prototypes. Extensive experiments demonstrate that our
method outperforms state-of-the-art approaches by an average of 2.61% $F_1$ across
various settings of two FSDLRE benchmarks.
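The metric-based meta-learning framework described in the abstract can be sketched in plain Python: each relation prototype mixes the mean embedding of the support entity pairs holding that relation with an embedding of the relation description, and a task-specific NOTA prototype is built from realistic NOTA instances in the current task. This is a minimal illustrative sketch, not the paper's actual architecture; the function names, the mixing weight `alpha`, and the nearest-prototype decision rule are all assumptions.

```python
import math

def mean_vec(vecs):
    # Element-wise mean of a list of equal-length embedding vectors.
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def dist(a, b):
    # Euclidean distance between two embeddings.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_prototypes(support_embs, support_labels, desc_embs, alpha=0.5):
    # One prototype per target relation: mix the mean embedding of the
    # support entity pairs labeled with that relation with the embedding
    # of the relation description. `alpha` is a hypothetical mixing
    # weight; the paper's relation-aware refinement is more involved.
    protos = {}
    for rel, desc in desc_embs.items():
        pairs = [e for e, y in zip(support_embs, support_labels) if y == rel]
        m = mean_vec(pairs)
        protos[rel] = [alpha * mi + (1 - alpha) * di for mi, di in zip(m, desc)]
    return protos

def build_nota_prototype(nota_embs):
    # Task-specific NOTA prototype built from realistic NOTA entity
    # pairs in the current task, rather than a generic one shared
    # across all tasks.
    return mean_vec(nota_embs)

def classify(query_emb, protos, nota_proto):
    # Nearest-prototype decision; NOTA competes like an ordinary class.
    cands = dict(protos, NOTA=nota_proto)
    return min(cands, key=lambda r: dist(query_emb, cands[r]))
```

A query entity pair is then assigned to whichever prototype (including NOTA) lies closest in the embedding space, which is how metric-based meta-learning turns prototype construction into classification.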
Related papers
- Generative Retrieval Meets Multi-Graded Relevance [104.75244721442756]
We introduce a framework called GRaded Generative Retrieval (GR$2$)
GR$2$ focuses on two key components: ensuring relevant and distinct identifiers, and implementing multi-graded constrained contrastive training.
Experiments on datasets with both multi-graded and binary relevance demonstrate the effectiveness of GR$2$.
arXiv Detail & Related papers (2024-09-27T02:55:53Z)
- A Semantic Mention Graph Augmented Model for Document-Level Event Argument Extraction [12.286432133599355]
Document-level Event Argument Extraction (DEAE) aims to identify arguments and their specific roles from an unstructured document.
Advanced approaches to DEAE utilize prompt-based methods to guide pre-trained language models (PLMs) in extracting arguments from input documents.
We propose a semantic mention Graph Augmented Model (GAM) to address these two problems in this paper.
arXiv Detail & Related papers (2024-03-12T08:58:07Z)
- ProtoEM: A Prototype-Enhanced Matching Framework for Event Relation Extraction [69.74158631862652]
Event Relation Extraction (ERE) aims to extract multiple kinds of relations among events in texts.
Existing methods simply categorize event relations as different classes, which fails to capture the intrinsic semantics of these relations.
We propose a Prototype-Enhanced Matching (ProtoEM) framework for the joint extraction of multiple kinds of event relations.
arXiv Detail & Related papers (2023-09-22T14:26:06Z)
- Prototype-based Embedding Network for Scene Graph Generation [105.97836135784794]
Current Scene Graph Generation (SGG) methods explore contextual information to predict relationships among entity pairs.
Due to the diverse visual appearance of numerous possible subject-object combinations, there is a large intra-class variation within each predicate category.
Prototype-based Embedding Network (PE-Net) models entities/predicates with prototype-aligned compact and distinctive representations.
Prototype-guided Learning (PL) is introduced to help PE-Net efficiently learn such entity-predicate matching, and Prototype Regularization (PR) is devised to relieve ambiguous entity-predicate matching.
arXiv Detail & Related papers (2023-03-13T13:30:59Z)
- A Prototypical Semantic Decoupling Method via Joint Contrastive Learning for Few-Shot Named Entity Recognition [24.916377682689955]
Few-shot named entity recognition (NER) aims at identifying named entities based on only a few labeled instances.
We propose a Prototypical Semantic Decoupling method via joint Contrastive learning (PSDC) for few-shot NER.
Experimental results on two few-shot NER benchmarks demonstrate that PSDC consistently outperforms the previous SOTA methods in terms of overall performance.
arXiv Detail & Related papers (2023-02-27T09:20:00Z)
- A Novel Few-Shot Relation Extraction Pipeline Based on Adaptive Prototype Fusion [5.636675879040131]
Few-shot relation extraction (FSRE) aims at recognizing unseen relations by learning with merely a handful of annotated instances.
This paper proposes a novel pipeline for the FSRE task based on adaptive prototype fusion.
Experiments on the benchmark dataset FewRel 1.0 show a significant improvement of our method against state-of-the-art methods.
arXiv Detail & Related papers (2022-10-15T09:44:21Z)
- Document-Level Relation Extraction with Sentences Importance Estimation and Focusing [52.069206266557266]
Document-level relation extraction (DocRE) aims to determine the relation between two entities from a document of multiple sentences.
We propose a Sentence Estimation and Focusing (SIEF) framework for DocRE, where we design a sentence importance score and a sentence focusing loss.
Experimental results on two domains show that our SIEF not only improves overall performance, but also makes DocRE models more robust.
arXiv Detail & Related papers (2022-04-27T03:20:07Z)
- Prototypical Representation Learning for Relation Extraction [56.501332067073065]
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models.
arXiv Detail & Related papers (2021-03-22T08:11:43Z)
- Learning Relation Prototype from Unlabeled Texts for Long-tail Relation Extraction [84.64435075778988]
We propose a general approach to learn relation prototypes from unlabeled texts.
We learn relation prototypes as an implicit factor between entities.
We conduct experiments on two publicly available datasets: New York Times and Google Distant Supervision.
arXiv Detail & Related papers (2020-11-27T06:21:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.