Enhancing Contrastive Learning with Noise-Guided Attack: Towards
Continual Relation Extraction in the Wild
- URL: http://arxiv.org/abs/2305.07085v1
- Date: Thu, 11 May 2023 18:48:18 GMT
- Title: Enhancing Contrastive Learning with Noise-Guided Attack: Towards
Continual Relation Extraction in the Wild
- Authors: Ting Wu, Jingyi Liu, Rui Zheng, Qi Zhang, Tao Gui, Xuanjing Huang
- Abstract summary: We develop a noise-resistant contrastive framework named Noise-guided Attack in Contrastive Learning (NaCL).
Rather than directly discarding noisy samples or relying on inaccessible relabeling, NaCL modifies the feature space via attacks so that representations match the given noisy labels.
- Score: 57.468184469589744
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The principle of continual relation extraction~(CRE) involves adapting to
emerging novel relations while preserving old knowledge. While current endeavors
in CRE succeed in preserving old knowledge, they tend to fail when exposed to
contaminated data streams. We attribute this to their reliance on the artificial
assumption that the data stream contains no annotation errors, which hinders
real-world applications of CRE. Considering the ubiquity of noisy labels in
real-world datasets, in this paper we formalize a more practical learning
scenario, termed \textit{noisy-CRE}. Building upon this challenging setting, we
develop a noise-resistant contrastive framework named \textbf{N}oise-guided
\textbf{a}ttack in \textbf{C}ontrastive \textbf{L}earning~(NaCL) to learn
incremental corrupted relations. Rather than directly discarding noise or
relying on inaccessible noise relabeling, we show that modifying the feature
space to match the given noisy labels via attacking can better enrich
contrastive representations. Extensive empirical validations highlight that
NaCL achieves consistent performance improvements with increasing noise rates,
outperforming state-of-the-art baselines.
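To make the attack idea concrete, here is a minimal PyTorch sketch, not the authors' released code: inputs are perturbed so that their features fit the given, possibly noisy, labels, and contrastive learning then runs on the attacked batch. The `encoder`, `classifier`, step size `epsilon`, and temperature are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def noise_guided_attack(encoder, classifier, inputs, noisy_labels, epsilon=0.01):
    """Perturb inputs so their features better fit the given (possibly noisy)
    labels. A sketch: encoder, classifier, and epsilon are placeholders."""
    inputs = inputs.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(classifier(encoder(inputs)), noisy_labels)
    loss.backward()  # also populates model grads; zero them before the real update
    with torch.no_grad():
        # A standard adversarial attack steps WITH the gradient (raising the loss);
        # stepping against it drags each example toward its assigned label.
        attacked = inputs - epsilon * inputs.grad.sign()
    return attacked.detach()

def sup_con_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over L2-normalized features: pull together
    samples that share a (noisy) label, push apart the rest."""
    features = F.normalize(features, dim=1)
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = (features @ features.T / temperature).masked_fill(self_mask, float("-inf"))
    log_prob = (sim - torch.logsumexp(sim, dim=1, keepdim=True)).masked_fill(self_mask, 0.0)
    per_anchor = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return per_anchor.mean()
```

Under this sketch, a training step would attack the batch first and then minimize `sup_con_loss(encoder(attacked), noisy_labels)`; the direction of the perturbation, descending the label-fit loss rather than ascending it, is what distinguishes the noise-guided attack from a standard adversarial one.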
Related papers
- Label Noise: Ignorance Is Bliss [20.341746708177055]
We establish a new theoretical framework for learning under multi-class, instance-dependent label noise.
Our findings support the simple Noise Ignorant Empirical Risk Minimization (NI-ERM) principle, which minimizes empirical risk while ignoring label noise.
arXiv Detail & Related papers (2024-10-31T17:03:25Z)
- Disentangled Noisy Correspondence Learning [56.06801962154915]
Cross-modal retrieval is crucial in understanding latent correspondences across modalities.
DisNCL is a novel information-theoretic framework for feature Disentanglement in Noisy Correspondence Learning.
arXiv Detail & Related papers (2024-08-10T09:49:55Z)
- NoisyAG-News: A Benchmark for Addressing Instance-Dependent Noise in Text Classification [7.464154519547575]
Existing research on learning with noisy labels predominantly focuses on synthetic noise patterns.
We constructed a benchmark dataset to better understand label noise in real-world text classification settings.
Our findings reveal that while pre-trained models are resilient to synthetic noise, they struggle against instance-dependent noise (the contrast is sketched after this entry).
arXiv Detail & Related papers (2024-07-09T06:18:40Z)
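The synthetic-versus-instance-dependent distinction behind the NoisyAG-News findings above can be illustrated with a small NumPy sketch. The construction below, where a reference model's confidence decides which instances get flipped, is an assumed, simplified mechanism, not the benchmark's actual annotation process.

```python
import numpy as np

def symmetric_noise(labels, num_classes, rate, rng):
    """Synthetic, class-agnostic noise: a fixed fraction of labels is flipped
    to a uniformly random class -- the pattern most prior work simulates."""
    labels = labels.copy()
    flip = rng.random(len(labels)) < rate
    labels[flip] = rng.integers(0, num_classes, size=int(flip.sum()))
    return labels

def instance_dependent_noise(labels, probs, rate):
    """Instance-dependent noise: the hardest examples (lowest reference-model
    confidence on their true class) get mislabeled, and the wrong label is the
    most confusable alternative. `probs` is an (n, C) matrix of predictive
    probabilities from any reference classifier (an assumption of this sketch)."""
    labels = labels.copy()
    idx = np.arange(len(labels))
    confidence = probs[idx, labels]
    flip = np.argsort(confidence)[: int(rate * len(labels))]
    alt = probs.copy()
    alt[idx, labels] = -np.inf          # exclude the true class
    labels[flip] = alt.argmax(axis=1)[flip]
    return labels
```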
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to construct in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- ERASE: Error-Resilient Representation Learning on Graphs for Label Noise Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience.
Our method outperforms multiple baselines by clear margins across broad noise levels and enjoys great scalability (a rough sketch of the denoising step follows this entry).
arXiv Detail & Related papers (2023-12-13T17:59:07Z)
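A rough sketch of the two denoising signals ERASE combines, as described above: prototype pseudo-labels from class centroids blended with labels propagated over the graph. The dense normalized adjacency, the temperature 0.1, the hop count, and the mixing weight `alpha` are assumptions of this illustration, not values from the paper.

```python
import torch.nn.functional as F

def denoised_targets(features, adj_norm, noisy_onehot, hops=2, alpha=0.5):
    """Blend (1) prototype pseudo-labels with (2) graph-propagated labels."""
    # (1) Prototype pseudo-labels: similarity of each node's feature to the
    # per-class mean feature (prototype) computed under the noisy labels.
    counts = noisy_onehot.sum(0, keepdim=True).clamp(min=1)      # (1, C)
    prototypes = (noisy_onehot.T @ features) / counts.T          # (C, d)
    proto_sim = F.normalize(features, dim=1) @ F.normalize(prototypes, dim=1).T
    proto_labels = F.softmax(proto_sim / 0.1, dim=1)             # sharpened
    # (2) Propagated labels: smooth the noisy one-hot labels over the graph
    # (adj_norm is a row-normalized adjacency matrix, assumed dense here).
    propagated = noisy_onehot
    for _ in range(hops):
        propagated = adj_norm @ propagated
    # Convex combination of the two signals; alpha is illustrative.
    return alpha * proto_labels + (1 - alpha) * propagated
```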
- Label Noise: Correcting the Forward-Correction [0.0]
Training neural network classifiers on datasets with label noise poses a risk of overfitting them to the noisy labels.
We propose an approach to tackling overfitting caused by label noise.
Specifically, we propose imposing a lower bound on the training loss to mitigate overfitting (sketched after this entry).
arXiv Detail & Related papers (2023-07-24T19:41:19Z)
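One concrete way to impose the lower bound proposed above is the flooding objective of Ishida et al. (2020); whether the paper's bound takes exactly this form is not claimed here. The floor `b` is a tuned hyperparameter.

```python
import torch.nn.functional as F

def flooded_loss(logits, targets, b=0.2):
    """Impose a floor b on the training loss: once the loss drops below b, the
    gradient sign flips, so the model 'floats' at loss level b instead of
    memorizing noisy labels."""
    loss = F.cross_entropy(logits, targets)
    return (loss - b).abs() + b
```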
- How to Enhance Causal Discrimination of Utterances: A Case on Affective Reasoning [22.11437627661179]
We propose incorporating i.i.d. noise terms into the conversation process, thereby constructing a structural causal model (SCM).
To facilitate the implementation of deep learning, we introduce the cogn frameworks to handle unstructured conversation data, and employ an autoencoder architecture to treat the unobservable noise as learnable "implicit causes".
arXiv Detail & Related papers (2023-05-04T07:45:49Z)
- Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations [54.400167806154535]
Existing research on learning with noisy labels mainly focuses on synthetic label noise.
This work presents two new benchmark datasets (CIFAR-10N, CIFAR-100N).
We show that real-world noisy labels follow an instance-dependent pattern rather than the classically adopted class-dependent ones.
arXiv Detail & Related papers (2021-10-22T22:42:11Z)
- Open-set Label Noise Can Improve Robustness Against Inherent Label Noise [27.885927200376386]
We show that open-set noisy labels can be non-toxic and even benefit the robustness against inherent noisy labels.
We propose a simple yet effective regularization by introducing Open-set samples with Dynamic Noisy Labels (ODNL) into training (sketched after this entry).
arXiv Detail & Related papers (2021-06-21T07:15:50Z)
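The ODNL regularizer described above reduces to a few lines: open-set samples join each training step with freshly resampled random labels. This is a hedged sketch of the idea; `open_x`, the weight `lam`, and the uniform label distribution are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def odnl_loss(model, x, y, open_x, num_classes, lam=1.0):
    """Regularize with open-set (out-of-distribution) samples whose labels are
    re-drawn uniformly at random every step ('dynamic noisy labels')."""
    clean = F.cross_entropy(model(x), y)
    dyn_y = torch.randint(0, num_classes, (open_x.size(0),), device=open_x.device)
    return clean + lam * F.cross_entropy(model(open_x), dyn_y)
```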
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements in robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z)