Selective-Supervised Contrastive Learning with Noisy Labels
- URL: http://arxiv.org/abs/2203.04181v1
- Date: Tue, 8 Mar 2022 16:12:08 GMT
- Title: Selective-Supervised Contrastive Learning with Noisy Labels
- Authors: Shikun Li, Xiaobo Xia, Shiming Ge, Tongliang Liu
- Abstract summary: We propose selective-supervised contrastive learning (Sel-CL) to learn robust representations and handle noisy labels.
Specifically, Sel-CL extends supervised contrastive learning (Sup-CL), which is powerful in representation learning but degrades when labels are noisy.
Sel-CL tackles the direct cause of the problem of Sup-CL: noisy pairs built by noisy labels mislead representation learning.
- Score: 73.81900964991092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep networks have strong capacities of embedding data into latent
representations and performing downstream tasks. However, the capacities largely
come from high-quality annotated labels, which are expensive to collect. Noisy
labels are more affordable, but result in corrupted representations, leading to
poor generalization performance. To learn robust representations and handle
noisy labels, we propose selective-supervised contrastive learning (Sel-CL) in
this paper. Specifically, Sel-CL extends supervised contrastive learning
(Sup-CL), which is powerful in representation learning but degrades when
labels are noisy. Sel-CL tackles the direct cause of the problem of
Sup-CL. That is, as Sup-CL works in a \textit{pair-wise} manner, noisy pairs
built by noisy labels mislead representation learning. To alleviate the issue,
we select confident pairs out of noisy ones for Sup-CL without knowing noise
rates. In the selection process, by measuring the agreement between learned
representations and given labels, we first identify confident examples that are
exploited to build confident pairs. Then, the representation similarity
distribution in the built confident pairs is exploited to identify more
confident pairs out of noisy pairs. All obtained confident pairs are finally
used for Sup-CL to enhance representations. Experiments on multiple noisy
datasets demonstrate the robustness of the representations learned by our
method, which achieves state-of-the-art performance. Source code is available
at https://github.com/ShikunLi/Sel-CL
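The abstract describes a two-stage selection: first identify confident examples by measuring agreement between learned representations and given labels, then use the similarity distribution over the resulting pairs to select confident pairs for Sup-CL. The sketch below is only an illustration of that idea, not the paper's actual implementation (see the linked repository for that): the k-NN majority vote for "agreement" and the fixed similarity threshold are simplifying assumptions.

```python
import numpy as np

def select_confident_pairs(features, labels, sim_threshold=0.8, k=10):
    """Illustrative two-stage confident-pair selection.

    Stage 1: treat an example as confident if its given label agrees
    with the majority label among its k nearest neighbors in
    representation space (one simple proxy for representation/label
    agreement).
    Stage 2: among same-label pairs of confident examples, keep those
    whose cosine similarity exceeds a threshold.
    """
    # L2-normalize so dot products are cosine similarities.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = feats @ feats.T
    n = len(labels)

    # Stage 1: label/representation agreement via k-NN majority vote.
    confident = []
    for i in range(n):
        nn = np.argsort(-sims[i])[1:k + 1]  # skip self (most similar)
        votes = np.bincount(labels[nn])
        if votes.argmax() == labels[i]:
            confident.append(i)

    # Stage 2: build same-label pairs among confident examples and
    # keep only the highly similar ones.
    pairs = []
    for a in range(len(confident)):
        for b in range(a + 1, len(confident)):
            i, j = confident[a], confident[b]
            if labels[i] == labels[j] and sims[i, j] >= sim_threshold:
                pairs.append((i, j))
    return confident, pairs
```

In the paper the selected pairs would then feed the supervised contrastive loss; here they are simply returned, since the point is the selection step, not the loss.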
Related papers
- Channel-Wise Contrastive Learning for Learning with Noisy Labels [60.46434734808148]
We introduce channel-wise contrastive learning (CWCL) to distinguish authentic label information from noise.
Unlike conventional instance-wise contrastive learning (IWCL), CWCL tends to yield more nuanced and resilient features aligned with the authentic labels.
Our strategy is twofold: firstly, using CWCL to extract pertinent features to identify cleanly labeled samples, and secondly, progressively fine-tuning using these samples.
arXiv Detail & Related papers (2023-08-14T06:04:50Z) - Adversary-Aware Partial label learning with Label distillation [47.18584755798137]
We present Adversary-Aware Partial Label Learning and introduce the rival, a set of noisy labels, into the collection of candidate labels for each instance.
Our method achieves promising results on the CIFAR10, CIFAR100 and CUB200 datasets.
arXiv Detail & Related papers (2023-04-02T10:18:30Z) - Twin Contrastive Learning with Noisy Labels [45.31997043789471]
We present TCL, a novel twin contrastive learning model to learn robust representations and handle noisy labels for classification.
TCL achieves a 7.5% improvement on CIFAR-10 with 90% noisy labels -- an extremely noisy scenario.
arXiv Detail & Related papers (2023-03-13T08:53:47Z) - Transductive CLIP with Class-Conditional Contrastive Learning [68.51078382124331]
We propose Transductive CLIP, a novel framework for learning a classification network with noisy labels from scratch.
A class-conditional contrastive learning mechanism is proposed to mitigate the reliance on pseudo labels.
An ensemble of labels is adopted as a pseudo-label updating strategy to stabilize the training of deep neural networks with noisy labels.
arXiv Detail & Related papers (2022-06-13T14:04:57Z) - SELC: Self-Ensemble Label Correction Improves Learning with Noisy Labels [4.876988315151037]
Deep neural networks are prone to overfitting noisy labels, resulting in poor generalization performance.
We present a method self-ensemble label correction (SELC) to progressively correct noisy labels and refine the model.
SELC obtains more promising and stable results in the presence of class-conditional, instance-dependent, and real-world label noise.
arXiv Detail & Related papers (2022-05-02T18:42:47Z) - Trustable Co-label Learning from Multiple Noisy Annotators [68.59187658490804]
Supervised deep learning depends on massive accurately annotated examples.
A typical alternative is learning from multiple noisy annotators.
This paper proposes a data-efficient approach, called Trustable Co-label Learning (TCL).
arXiv Detail & Related papers (2022-03-08T16:57:00Z) - Instance-dependent Label-noise Learning under a Structural Causal Model [92.76400590283448]
Label noise will degenerate the performance of deep learning algorithms.
By leveraging a structural causal model, we propose a novel generative approach for instance-dependent label-noise learning.
arXiv Detail & Related papers (2021-09-07T10:42:54Z) - A Second-Order Approach to Learning with Instance-Dependent Label Noise [58.555527517928596]
The presence of label noise often misleads the training of deep neural networks.
We show that the errors in human-annotated labels are more likely to be dependent on the difficulty levels of tasks.
arXiv Detail & Related papers (2020-12-22T06:36:58Z) - Combining Self-Supervised and Supervised Learning with Noisy Labels [41.627404715407586]
Convolutional neural networks (CNNs) can easily overfit noisy labels.
Training CNNs robustly against noisy labels remains a great challenge.
arXiv Detail & Related papers (2020-11-16T18:13:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.