Towards Harnessing Feature Embedding for Robust Learning with Noisy
Labels
- URL: http://arxiv.org/abs/2206.13025v1
- Date: Mon, 27 Jun 2022 02:45:09 GMT
- Title: Towards Harnessing Feature Embedding for Robust Learning with Noisy
Labels
- Authors: Chuang Zhang, Li Shen, Jian Yang, Chen Gong
- Abstract summary: The memorization effect of deep neural networks (DNNs) plays a pivotal role in recent label noise learning methods.
We propose a novel feature embedding-based method for deep learning with label noise, termed LabEl Noise Dilution (LEND).
- Score: 44.133307197696446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The memorization effect of deep neural networks (DNNs) plays a pivotal role
in recent label noise learning methods. To exploit this effect, the model
prediction-based methods have been widely adopted, which aim to exploit the
outputs of DNNs in the early stage of learning to correct noisy labels.
However, we observe that the model makes mistakes during label prediction,
resulting in unsatisfactory performance. By contrast, the features produced in
the early stage of learning are more robust. Inspired by this
observation, in this paper, we propose a novel feature embedding-based method
for deep learning with label noise, termed LabEl Noise Dilution (LEND). To be
specific, we first compute a similarity matrix based on current embedded
features to capture the local structure of training data. Then, the noisy
supervision signals carried by mislabeled data are overwhelmed by those of
nearby correctly labeled samples (i.e., label noise dilution), whose
effectiveness is guaranteed by the inherent robustness of the feature embedding.
Finally, the training data with diluted labels are further used to train a
robust classifier. Empirically, we conduct extensive experiments on both
synthetic and real-world noisy datasets by comparing our LEND with several
representative robust learning approaches. The results verify the effectiveness
of our LEND.
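Below is a minimal sketch of the label-dilution step the abstract describes, assuming a cosine-similarity matrix over the embedded features and a fixed neighborhood size k; the function name dilute_labels and these specific choices are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dilute_labels(embeddings, noisy_labels, num_classes, k=10):
    """Replace each (possibly noisy) hard label with a soft label formed by
    a similarity-weighted vote over the sample's k nearest neighbors."""
    # Normalize embeddings so that the inner product is cosine similarity.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T                                # (n, n) similarity matrix
    np.fill_diagonal(sim, -np.inf)               # exclude self-votes

    one_hot = np.eye(num_classes)[noisy_labels]  # (n, c) hard labels
    diluted = np.zeros_like(one_hot)
    for i in range(len(z)):
        nbrs = np.argpartition(sim[i], -k)[-k:]  # k most similar samples
        w = np.clip(sim[i, nbrs], 0.0, None)     # non-negative weights
        w /= w.sum() + 1e-12
        diluted[i] = w @ one_hot[nbrs]           # similarity-weighted soft label
    return diluted
```

A mislabeled sample surrounded by correctly labeled neighbors thus receives a soft label dominated by the correct class, which is the dilution effect the abstract appeals to; the resulting soft labels are then used to train the final classifier.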
Related papers
- Extracting Clean and Balanced Subset for Noisy Long-tailed Classification [66.47809135771698]
We develop a novel pseudo labeling method using class prototypes from the perspective of distribution matching.
By setting a manually specified probability measure, the method reduces the side effects of noisy and long-tailed data simultaneously.
Our method can extract this class-balanced subset with clean labels, which brings effective performance gains for long-tailed classification with label noise.
arXiv Detail & Related papers (2024-04-10T07:34:37Z) - ERASE: Error-Resilient Representation Learning on Graphs for Label Noise
Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience.
Our method outperforms multiple baselines by clear margins across broad noise levels and scales well.
arXiv Detail & Related papers (2023-12-13T17:59:07Z) - Fine tuning Pre trained Models for Robustness Under Noisy Labels [34.68018860186995]
The presence of noisy labels in a training dataset can significantly impact the performance of machine learning models.
We introduce a novel algorithm called TURN, which robustly and efficiently transfers the prior knowledge of pre-trained models.
arXiv Detail & Related papers (2023-10-24T20:28:59Z) - Rethinking Noisy Label Learning in Real-world Annotation Scenarios from
the Noise-type Perspective [38.24239397999152]
We propose a novel sample selection-based approach for noisy label learning, called Proto-semi.
Proto-semi first divides all samples into confident and unconfident subsets via a warm-up stage.
By leveraging the confident dataset, prototype vectors are constructed to capture class characteristics.
Empirical evaluations on a real-world annotated dataset substantiate the robustness of Proto-semi in handling the problem of learning from noisy labels.
arXiv Detail & Related papers (2023-07-28T10:57:38Z) - Robust Long-Tailed Learning under Label Noise [50.00837134041317]
This work investigates the label noise problem under long-tailed label distribution.
We propose a robust framework that performs noise detection for long-tailed learning.
Our framework can naturally leverage semi-supervised learning algorithms to further improve generalisation.
arXiv Detail & Related papers (2021-08-26T03:45:00Z) - INN: A Method Identifying Clean-annotated Samples via Consistency Effect
in Deep Neural Networks [1.1470070927586016]
We introduce a new method called INN to identify cleanly labeled samples within training data that contain noisy labels.
The INN method requires more computation but is much more stable and powerful than the small-loss strategy.
arXiv Detail & Related papers (2021-06-29T09:06:21Z) - Open-set Label Noise Can Improve Robustness Against Inherent Label Noise [27.885927200376386]
We show that open-set noisy labels can be non-toxic and can even benefit robustness against inherent label noise.
We propose a simple yet effective regularization by introducing Open-set samples with Dynamic Noisy Labels (ODNL) into training; a minimal sketch of this idea appears after this list.
arXiv Detail & Related papers (2021-06-21T07:15:50Z) - Noise-resistant Deep Metric Learning with Ranking-based Instance
Selection [59.286567680389766]
We propose a noise-resistant training technique for DML, which we name Probabilistic Ranking-based Instance Selection with Memory (PRISM).
PRISM identifies noisy data in a minibatch using average similarity against image features extracted from several previous versions of the neural network.
To alleviate the high computational cost brought by the memory bank, we introduce an acceleration method that replaces individual data points with the class centers; a simplified sketch of this selection step appears after this list.
arXiv Detail & Related papers (2021-03-30T03:22:17Z) - Learning with Out-of-Distribution Data for Audio Classification [60.48251022280506]
We show that detecting and relabelling certain OOD instances, rather than discarding them, can have a positive effect on learning.
The proposed method is shown to improve the performance of convolutional neural networks by a significant margin.
arXiv Detail & Related papers (2020-02-11T21:08:06Z)