Relation Extraction with Weighted Contrastive Pre-training on Distant Supervision
- URL: http://arxiv.org/abs/2205.08770v1
- Date: Wed, 18 May 2022 07:45:59 GMT
- Title: Relation Extraction with Weighted Contrastive Pre-training on Distant Supervision
- Authors: Zhen Wan, Fei Cheng, Qianying Liu, Zhuoyuan Mao, Haiyue Song, Sadao Kurohashi
- Abstract summary: We propose a weighted contrastive learning method by leveraging the supervised data to estimate the reliability of pre-training instances.
Experimental results on three supervised datasets demonstrate the advantages of our proposed weighted contrastive learning approach.
- Score: 22.904752492573504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive pre-training on distant supervision has shown remarkable
effectiveness for improving supervised relation extraction tasks. However, the
existing methods ignore the intrinsic noise of distant supervision during the
pre-training stage. In this paper, we propose a weighted contrastive learning
method by leveraging the supervised data to estimate the reliability of
pre-training instances and explicitly reduce the effect of noise. Experimental
results on three supervised datasets demonstrate the advantages of our proposed
weighted contrastive learning approach, compared to two state-of-the-art
non-weighted baselines.
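The abstract describes the method only at a high level. As a minimal sketch of the core idea, assuming a PyTorch setup, an InfoNCE-style contrastive loss can be scaled per instance by an estimated reliability weight; the function and parameter names below are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def weighted_info_nce(anchors, positives, weights, temperature=0.07):
    """InfoNCE-style contrastive loss in which each distantly supervised
    instance pair is down-weighted by an estimated reliability score.

    anchors, positives: (N, d) representations of instance pairs that share
        a (possibly noisy) distant-supervision relation label.
    weights: (N,) reliability estimates in [0, 1]; likely-noisy pairs get
        low weight, so they contribute little to the gradient.
    """
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    # Row i should score positives[i] above all other candidates.
    logits = anchors @ positives.t() / temperature
    targets = torch.arange(anchors.size(0), device=anchors.device)
    per_instance = F.cross_entropy(logits, targets, reduction="none")
    # Weighted mean: unreliable instances barely affect the loss.
    return (weights * per_instance).sum() / weights.sum().clamp_min(1e-8)
```

In the paper's setting, the weights would be estimated by leveraging the supervised data to score how likely each distantly labeled instance is correct; here they are simply a tensor the caller supplies.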
Related papers
- Denoising Pre-Training and Customized Prompt Learning for Efficient Multi-Behavior Sequential Recommendation [69.60321475454843]
We propose DPCPL, the first pre-training and prompt-tuning paradigm tailored for Multi-Behavior Sequential Recommendation.
In the pre-training stage, we propose a novel Efficient Behavior Miner (EBM) to filter out the noise at multiple time scales.
Subsequently, we propose to tune the pre-trained model in a highly efficient manner with the proposed Customized Prompt Learning (CPL) module.
arXiv Detail & Related papers (2024-08-21T06:48:38Z)
- A Double Machine Learning Approach to Combining Experimental and Observational Data [59.29868677652324]
We propose a double machine learning approach to combine experimental and observational studies.
Our framework tests for violations of external validity and ignorability under milder assumptions.
arXiv Detail & Related papers (2023-07-04T02:53:11Z)
- Sparsity-Aware Optimal Transport for Unsupervised Restoration Learning [17.098664719423404]
In this paper, we exploit the sparsity of degradation in the unsupervised restoration learning framework to significantly boost its performance on complex restoration tasks.
Experiments on real-world super-resolution, deraining, and dehazing demonstrate that the proposed sparsity-aware optimal transport (SOT) criterion improves the PSNR of plain optimal transport (OT) by about 2.6 dB, 2.7 dB, and 1.3 dB, respectively.
arXiv Detail & Related papers (2023-04-29T15:09:48Z)
- Sample Efficient Deep Reinforcement Learning via Uncertainty Estimation [12.415463205960156]
In model-free deep reinforcement learning (RL) algorithms, using noisy value estimates to supervise policy evaluation and optimization is detrimental to sample efficiency.
We provide a systematic analysis of the sources of uncertainty in the noisy supervision that occurs in RL.
We propose a method whereby two complementary uncertainty estimation methods account for both the Q-value and the environment stochasticity to better mitigate the negative impacts of noisy supervision.
arXiv Detail & Related papers (2022-01-05T15:46:06Z)
- Improve Unsupervised Pretraining for Few-label Transfer [80.58625921631506]
We find that the common conclusion that unsupervised pretraining transfers comparably to supervised pretraining may not hold when the target dataset has very few labeled samples for finetuning.
We propose a new progressive few-label transfer algorithm for real applications.
arXiv Detail & Related papers (2021-07-26T17:59:56Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that these algorithms learn visual representations that transfer better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Making Attention Mechanisms More Robust and Interpretable with Virtual Adversarial Training for Semi-Supervised Text Classification [9.13755431537592]
We propose a new general training technique for attention mechanisms based on virtual adversarial training (VAT).
In a semi-supervised setting, VAT computes adversarial perturbations from unlabeled data for attention mechanisms, which previous studies have reported to be vulnerable to perturbations (a generic sketch of the VAT objective follows).
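As context for the mechanism, below is a minimal sketch of the standard VAT objective on raw inputs, assuming a PyTorch classifier; the paper applies the perturbations to attention mechanisms specifically, which this generic sketch does not reproduce, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=1.0, n_power=1):
    """Virtual adversarial loss on (possibly unlabeled) inputs x: the KL
    divergence between predictions on x and on x plus a small perturbation
    chosen, via power iteration, to maximally change the prediction.
    """
    with torch.no_grad():
        p = F.softmax(model(x), dim=-1)  # current (fixed) predictions
    d = torch.randn_like(x)              # random initial direction
    for _ in range(n_power):
        d = xi * F.normalize(d.flatten(1), dim=-1).view_as(x)
        d.requires_grad_(True)
        p_hat = F.log_softmax(model(x + d), dim=-1)
        adv_dist = F.kl_div(p_hat, p, reduction="batchmean")
        d = torch.autograd.grad(adv_dist, d)[0]
    # Final adversarial perturbation of radius eps.
    r_adv = eps * F.normalize(d.flatten(1), dim=-1).view_as(x)
    p_hat = F.log_softmax(model(x + r_adv), dim=-1)
    return F.kl_div(p_hat, p, reduction="batchmean")
```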
arXiv Detail & Related papers (2021-04-18T07:51:45Z)
- Disambiguation of weak supervision with exponential convergence rates [88.99819200562784]
In weakly supervised learning, data are annotated with incomplete yet discriminative information.
In this paper, we focus on partial labelling, an instance of weak supervision where, from a given input, we are given a set of potential targets.
We propose an empirical disambiguation algorithm to recover full supervision from weak supervision.
arXiv Detail & Related papers (2021-02-04T18:14:32Z)
- Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z)
- Supervision Accelerates Pre-training in Contrastive Semi-Supervised Learning of Visual Representations [12.755943669814236]
We propose a semi-supervised loss, SuNCEt, that aims to distinguish examples of different classes in addition to self-supervised instance-wise pretext tasks.
On ImageNet, we find that SuNCEt can be used to match the semi-supervised learning accuracy of previous contrastive approaches.
Our main insight is that leveraging even a small amount of labeled data during pre-training, and not only during fine-tuning, provides an important signal (a sketch of such a class-level term follows this entry).
arXiv Detail & Related papers (2020-06-18T18:44:13Z)
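As an illustration of the entry above, the labeled examples can contribute a class-discrimination term alongside the self-supervised instance-discrimination loss. The sketch below is a simplified, SupCon-style stand-in for that idea, not the exact SuNCEt loss, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def class_contrastive_term(embeddings, labels, temperature=0.1):
    """Contrastive term over labeled examples: each embedding should be
    closer to same-class embeddings than to different-class ones.
    """
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.t() / temperature
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs
    log_prob = sim.log_softmax(dim=-1)
    same_class = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # Mean log-probability of retrieving a same-class example per anchor.
    pos_counts = same_class.sum(dim=-1).clamp_min(1)
    per_anchor = -(log_prob.masked_fill(~same_class, 0.0).sum(dim=-1) / pos_counts)
    return per_anchor.mean()

# During pre-training this term would be added to the usual self-supervised
# loss on unlabeled data, e.g.:
#   loss = unsupervised_contrastive_loss + class_contrastive_term(z_l, y_l)
```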
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.