Synergistic Anchored Contrastive Pre-training for Few-Shot Relation
Extraction
- URL: http://arxiv.org/abs/2312.12021v3
- Date: Mon, 11 Mar 2024 14:42:28 GMT
- Title: Synergistic Anchored Contrastive Pre-training for Few-Shot Relation
Extraction
- Authors: Da Luo, Yanglei Gan, Rui Hou, Run Lin, Qiao Liu, Yuxiang Cai, Wannian
Gao
- Abstract summary: Few-shot Relation Extraction (FSRE) aims to extract relational facts from a sparse set of labeled corpora.
Recent studies have shown promising results in FSRE by employing Pre-trained Language Models.
We introduce a novel synergistic anchored contrastive pre-training framework.
- Score: 4.7220779071424985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot Relation Extraction (FSRE) aims to extract relational facts from a
sparse set of labeled corpora. Recent studies have shown promising results in
FSRE by employing Pre-trained Language Models (PLMs) within the framework of
supervised contrastive learning, which considers both instances and label
facts. However, how to effectively harness massive instance-label pairs so that
the learned representations are imbued with semantic richness has not been fully
explored in this learning paradigm. To address this gap, we introduce a novel
synergistic anchored contrastive pre-training framework. This framework is
motivated by the insight that the diverse viewpoints conveyed through
instance-label pairs capture incomplete yet complementary intrinsic textual
semantics. Specifically, our framework involves a symmetrical contrastive
objective that encompasses both sentence-anchored and label-anchored
contrastive losses. By combining these two losses, the model establishes a
robust and uniform representation space. This space effectively captures the
reciprocal alignment of feature distributions between instances and relational
facts, while maximizing mutual information across diverse perspectives within
the same relation. Experimental results demonstrate
that our framework achieves significant performance enhancements compared to
baseline models in downstream FSRE tasks. Furthermore, our approach exhibits
superior adaptability in handling the challenges of domain shift and zero-shot
relation extraction. Our code is available online at
https://github.com/AONE-NLP/FSRE-SaCon.
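To make the symmetrical objective concrete, below is a minimal PyTorch sketch of a sentence-anchored plus label-anchored contrastive loss over a batch of aligned instance-label pairs. The function name, temperature value, and in-batch negative sampling are illustrative assumptions rather than the paper's exact formulation; the authors' actual implementation is in the repository linked above.

```python
# A minimal sketch of a symmetrical anchored contrastive objective:
# a sentence-anchored InfoNCE loss plus a label-anchored InfoNCE loss.
# Hyperparameters and names here are illustrative assumptions.
import torch
import torch.nn.functional as F

def symmetric_anchored_contrastive_loss(sent_emb: torch.Tensor,
                                        label_emb: torch.Tensor,
                                        temperature: float = 0.05) -> torch.Tensor:
    """sent_emb, label_emb: (batch, dim) embeddings of aligned instance-label pairs."""
    # Normalize so dot products are cosine similarities.
    sent = F.normalize(sent_emb, dim=-1)
    label = F.normalize(label_emb, dim=-1)

    # Pairwise similarity matrix: row i = sentence i vs. every label in the batch.
    logits = sent @ label.t() / temperature
    targets = torch.arange(sent.size(0), device=sent.device)

    # Sentence-anchored: each sentence must retrieve its own relation label.
    loss_s2l = F.cross_entropy(logits, targets)
    # Label-anchored: each label must retrieve its own sentence (transposed view).
    loss_l2s = F.cross_entropy(logits.t(), targets)

    # Averaging both directions aligns the two feature distributions reciprocally.
    return 0.5 * (loss_s2l + loss_l2s)
```

A faithful reproduction would additionally handle batches containing multiple instances of the same relation (for example, by masking same-relation negatives), which this sketch omits for brevity.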
Related papers
- Advancing Semantic Textual Similarity Modeling: A Regression Framework with Translated ReLU and Smooth K2 Loss [3.435381469869212]
This paper presents an innovative regression framework for Sentence-BERT STS tasks.
It proposes two simple yet effective loss functions: Translated ReLU and Smooth K2 Loss.
Experimental results demonstrate that our method achieves convincing performance across seven established STS benchmarks.
arXiv Detail & Related papers (2024-06-08T02:52:43Z)
- Deep Boosting Learning: A Brand-new Cooperative Approach for Image-Text Matching [53.05954114863596]
We propose a brand-new Deep Boosting Learning (DBL) algorithm for image-text matching.
An anchor branch is first trained to provide insights into the data properties.
A target branch is concurrently tasked with more adaptive margin constraints to further enlarge the relative distance between matched and unmatched samples.
arXiv Detail & Related papers (2024-04-28T08:44:28Z)
- Towards Distribution-Agnostic Generalized Category Discovery [51.52673017664908]
Data imbalance and open-ended distribution are intrinsic characteristics of the real visual world.
We propose a Self-Balanced Co-Advice contrastive framework (BaCon).
BaCon consists of a contrastive-learning branch and a pseudo-labeling branch, working collaboratively to provide interactive supervision to resolve the DA-GCD task.
arXiv Detail & Related papers (2023-10-02T17:39:58Z)
- Siamese Representation Learning for Unsupervised Relation Extraction [5.776369192706107]
Unsupervised relation extraction (URE) aims at discovering underlying relations between named entity pairs from open-domain plain text.
Existing URE models that utilize contrastive learning, attracting positive samples and repulsing negative samples to promote better separation, have achieved decent results.
We propose Siamese Representation Learning for Unsupervised Relation Extraction -- a novel framework that leverages only positive pairs for representation learning.
arXiv Detail & Related papers (2023-10-01T02:57:43Z)
- Identical and Fraternal Twins: Fine-Grained Semantic Contrastive Learning of Sentence Representations [6.265789210037749]
We introduce a novel Identical and Fraternal Twins of Contrastive Learning framework, capable of simultaneously adapting to various positive pairs generated by different augmentation techniques.
We also present proof-of-concept experiments combined with the contrastive objective to demonstrate the validity of the proposed Twins Loss.
arXiv Detail & Related papers (2023-07-20T15:02:42Z)
- Beyond Instance Discrimination: Relation-aware Contrastive Self-supervised Learning [75.46664770669949]
We present relation-aware contrastive self-supervised learning (ReCo) to integrate instance relations.
Our ReCo consistently achieves remarkable performance improvements.
arXiv Detail & Related papers (2022-11-02T03:25:28Z)
- HiURE: Hierarchical Exemplar Contrastive Learning for Unsupervised Relation Extraction [60.80849503639896]
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.
We propose a novel contrastive learning framework named HiURE, which derives hierarchical signals from the relational feature space using cross-hierarchy attention.
Experimental results on two public datasets demonstrate the advanced effectiveness and robustness of HiURE on unsupervised relation extraction when compared with state-of-the-art models.
arXiv Detail & Related papers (2022-05-04T17:56:48Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- Unpaired Adversarial Learning for Single Image Deraining with Rain-Space Contrastive Constraints [61.40893559933964]
We develop an effective unpaired single image deraining (SID) method, named CDR-GAN, which explores mutual properties of the unpaired exemplars in a contrastive learning manner within a GAN framework.
Our method performs favorably against existing unpaired deraining approaches on both synthetic and real-world datasets, and even outperforms several fully-supervised or semi-supervised models.
arXiv Detail & Related papers (2021-09-07T10:00:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.