Learning with Noisy Labels for Robust Point Cloud Segmentation
- URL: http://arxiv.org/abs/2107.14230v1
- Date: Thu, 29 Jul 2021 17:59:54 GMT
- Title: Learning with Noisy Labels for Robust Point Cloud Segmentation
- Authors: Shuquan Ye and Dongdong Chen and Songfang Han and Jing Liao
- Abstract summary: Object class labels are often mislabeled in real-world point cloud datasets.
We propose a novel Point Noise-Adaptive Learning (PNAL) framework.
We conduct extensive experiments to demonstrate the effectiveness of PNAL on both synthetic and real-world noisy datasets.
- Score: 22.203927159777123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud segmentation is a fundamental task in 3D. Despite recent progress
on point cloud segmentation with the power of deep networks, current deep
learning methods based on the clean-label assumption may fail with noisy
labels. Yet, object class labels are often mislabeled in real-world point cloud
datasets. In this work, we take the lead in solving this issue by proposing a
novel Point Noise-Adaptive Learning (PNAL) framework. Compared to existing
noise-robust methods on image tasks, our PNAL is noise-rate blind, to cope with
the spatially variant noise rate problem specific to point clouds.
Specifically, we propose a novel point-wise confidence selection to obtain
reliable labels based on the historical predictions of each point. A novel
cluster-wise label correction is proposed with a voting strategy to generate
the best possible label taking the neighbor point correlations into
consideration. We conduct extensive experiments to demonstrate the
effectiveness of PNAL on both synthetic and real-world noisy datasets. In
particular, even with $60\%$ symmetric noisy labels, our proposed method
produces much better results than its baseline counterpart without PNAL and is
comparable to the ideal upper bound trained on a completely clean dataset.
Moreover, we fully re-labeled the test set of a popular but noisy real-world
scene dataset ScanNetV2 to make it clean, for rigorous experiment and future
research. Our code and data will be available at
\url{https://shuquanye.com/PNAL_website/}.
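The abstract describes two mechanisms: point-wise confidence selection from each point's historical predictions, and cluster-wise label correction by voting. As a rough illustration only (not the authors' implementation; function names, the majority-vote rule, and the stability threshold are all hypothetical simplifications), the two steps might look like:

```python
import numpy as np

def pointwise_confidence_selection(history, threshold=0.9):
    """Select points whose predictions were stable over past epochs.

    history: (T, N) int array of predicted labels over T epochs.
    Returns a boolean 'reliable' mask and each point's consensus label.
    """
    T, N = history.shape
    consensus = np.zeros(N, dtype=int)
    reliable = np.zeros(N, dtype=bool)
    for i in range(N):
        labels, counts = np.unique(history[:, i], return_counts=True)
        j = counts.argmax()
        consensus[i] = labels[j]
        reliable[i] = counts[j] / T >= threshold  # stable across epochs
    return reliable, consensus

def clusterwise_label_correction(clusters, reliable, consensus, noisy_labels):
    """Within each spatial cluster, reliable points vote; the winning
    label overwrites the (possibly noisy) labels of the whole cluster."""
    corrected = noisy_labels.copy()
    for members in clusters:
        members = np.asarray(members)
        voters = members[reliable[members]]
        if voters.size == 0:
            continue  # no reliable evidence; keep the original labels
        labels, counts = np.unique(consensus[voters], return_counts=True)
        corrected[members] = labels[counts.argmax()]
    return corrected
```

The cluster vote is what exploits neighbor-point correlations: a single mislabeled point inside a consistently predicted cluster is outvoted and corrected.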
Related papers
- Extracting Clean and Balanced Subset for Noisy Long-tailed Classification [66.47809135771698]
We develop a novel pseudo labeling method using class prototypes from the perspective of distribution matching.
By setting a manually specified probability measure, we can reduce the side effects of noisy and long-tailed data simultaneously.
Our method can extract this class-balanced subset with clean labels, which brings effective performance gains for long-tailed classification with label noise.
arXiv Detail & Related papers (2024-04-10T07:34:37Z)
- Group Benefits Instances Selection for Data Purification [21.977432359384835]
Existing methods for combating label noise are typically designed and tested on synthetic datasets.
We propose a method named GRIP to alleviate the noisy label problem for both synthetic and real-world datasets.
arXiv Detail & Related papers (2024-03-23T03:06:19Z)
- Combating Label Noise With A General Surrogate Model For Sample Selection [84.61367781175984]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically.
We validate the effectiveness of our proposed method on both real-world and synthetic noisy datasets.
arXiv Detail & Related papers (2023-10-16T14:43:27Z)
- Rethinking Noisy Label Learning in Real-world Annotation Scenarios from the Noise-type Perspective [38.24239397999152]
We propose a novel sample selection-based approach for noisy label learning, called Proto-semi.
Proto-semi divides all samples into the confident and unconfident datasets via warm-up.
By leveraging the confident dataset, prototype vectors are constructed to capture class characteristics.
Empirical evaluations on a real-world annotated dataset substantiate the robustness of Proto-semi in handling the problem of learning from noisy labels.
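The prototype step summarized above can be sketched as follows. This is a hypothetical illustration of the general idea (mean-feature prototypes from the confident subset, nearest-prototype assignment for the rest), not Proto-semi's exact procedure; all names are invented:

```python
import numpy as np

def build_prototypes(features, labels, confident_mask):
    """One prototype (mean feature vector) per class, computed only from
    samples in the confident subset."""
    classes = np.unique(labels[confident_mask])
    protos = np.stack([features[confident_mask & (labels == c)].mean(0)
                       for c in classes])
    return classes, protos

def assign_by_prototype(features, classes, protos):
    """Label each (unconfident) sample with its nearest prototype's class."""
    d = ((features[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(1)]
```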
arXiv Detail & Related papers (2023-07-28T10:57:38Z)
- Robust Point Cloud Segmentation with Noisy Annotations [32.991219357321334]
Class labels are often mislabeled at both instance-level and boundary-level in real-world datasets.
We take the lead in solving the instance-level label noise by proposing a Point Noise-Adaptive Learning framework.
Our framework significantly outperforms its baselines, and is comparable to the upper bound trained on completely clean data.
arXiv Detail & Related papers (2022-12-06T18:59:58Z)
- Neighborhood Collective Estimation for Noisy Label Identification and Correction [92.20697827784426]
Learning with noisy labels (LNL) aims at designing strategies to improve model performance and generalization by mitigating the effects of model overfitting to noisy labels.
Recent advances employ the predicted label distributions of individual samples to perform noise verification and noisy label correction, easily giving rise to confirmation bias.
We propose Neighborhood Collective Estimation, in which the predictive reliability of a candidate sample is re-estimated by contrasting it against its feature-space nearest neighbors.
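A minimal sketch of the neighborhood-contrast idea, assuming reliability is simply the fraction of k nearest feature-space neighbors that share a sample's annotated label (the paper's actual estimator is more involved; the function below is hypothetical):

```python
import numpy as np

def neighborhood_reliability(features, given_labels, k=5):
    """Re-estimate each sample's label reliability from its k nearest
    neighbors in feature space: the fraction of neighbors whose given
    label agrees with the sample's own."""
    n = len(features)
    # pairwise squared Euclidean distances
    d = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)  # exclude each sample from its own neighbors
    reliability = np.empty(n)
    for i in range(n):
        nbrs = np.argsort(d[i])[:k]
        reliability[i] = np.mean(given_labels[nbrs] == given_labels[i])
    return reliability
```

Because the estimate pools evidence from neighbors rather than trusting a single sample's predicted distribution, it is less prone to the confirmation bias mentioned above.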
arXiv Detail & Related papers (2022-08-05T14:47:22Z)
- Towards Harnessing Feature Embedding for Robust Learning with Noisy Labels [44.133307197696446]
The memorization effect of deep neural networks (DNNs) plays a pivotal role in recent label noise learning methods.
We propose a novel feature embedding-based method for deep learning with label noise, termed LabEl NoiseDilution (LEND).
arXiv Detail & Related papers (2022-06-27T02:45:09Z)
- Robust Meta-learning with Sampling Noise and Label Noise via Eigen-Reptile [78.1212767880785]
The meta-learner is prone to overfitting since only a few samples are available.
When handling data with noisy labels, the meta-learner can be extremely sensitive to label noise.
We present Eigen-Reptile (ER), which updates the meta-parameters along the main direction of historical task-specific parameters.
arXiv Detail & Related papers (2022-06-04T08:48:02Z)
- S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
In the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighborhood in the feature space.
Our method significantly surpasses previous methods on both CIFAR-10/CIFAR-100 with artificial noise and real-world noisy datasets such as WebVision and ANIMAL-10N.
arXiv Detail & Related papers (2021-11-22T15:49:20Z)
- Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations [54.400167806154535]
Existing research on learning with noisy labels mainly focuses on synthetic label noise.
This work presents two new benchmark datasets (CIFAR-10N, CIFAR-100N).
We show that real-world noisy labels follow an instance-dependent pattern rather than the classically adopted class-dependent ones.
arXiv Detail & Related papers (2021-10-22T22:42:11Z)
- An Ensemble Noise-Robust K-fold Cross-Validation Selection Method for Noisy Labels [0.9699640804685629]
Large-scale datasets tend to contain mislabeled samples that can be memorized by deep neural networks (DNNs).
We present Ensemble Noise-robust K-fold Cross-Validation Selection (E-NKCVS) to effectively select clean samples from noisy data.
We evaluate our approach on various image and text classification tasks where the labels have been manually corrupted with different noise ratios.
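The cross-validation selection idea can be illustrated with a toy sketch: split the data into k folds, train a classifier on the other folds, and keep held-out samples whose given label matches the prediction. This is a hypothetical distillation, using deterministic striding folds and a nearest-centroid classifier in place of a DNN, and omitting the ensembling that E-NKCVS adds on top:

```python
import numpy as np

def kfold_clean_selection(X, y, k=5):
    """Keep samples whose held-out prediction agrees with their given label.

    X: (n, d) feature array; y: (n,) int labels (possibly noisy).
    """
    n = len(X)
    keep = np.zeros(n, dtype=bool)
    for j in range(k):
        held = np.arange(j, n, k)                 # every k-th sample held out
        train = np.setdiff1d(np.arange(n), held)  # remaining folds
        classes = np.unique(y[train])
        # nearest-centroid "classifier": one mean vector per class
        centroids = np.stack([X[train][y[train] == c].mean(0) for c in classes])
        d = ((X[held][:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        keep[held] = classes[d.argmin(1)] == y[held]
    return keep
```

A mislabeled sample sits far from the centroid of its given class, so its held-out prediction disagrees with its label and it is filtered out.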
arXiv Detail & Related papers (2021-07-06T02:14:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.