Robust Point Cloud Segmentation with Noisy Annotations
- URL: http://arxiv.org/abs/2212.03242v1
- Date: Tue, 6 Dec 2022 18:59:58 GMT
- Title: Robust Point Cloud Segmentation with Noisy Annotations
- Authors: Shuquan Ye and Dongdong Chen and Songfang Han and Jing Liao
- Abstract summary: Class labels are often mislabeled at both instance-level and boundary-level in real-world datasets.
We are the first to address instance-level label noise by proposing a Point Noise-Adaptive Learning (PNAL) framework.
Our framework significantly outperforms its baselines, and is comparable to the upper bound trained on completely clean data.
- Score: 32.991219357321334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud segmentation is a fundamental task in 3D. Despite recent progress
on point cloud segmentation with the power of deep networks, current learning
methods based on the clean-label assumption may fail with noisy labels. Yet,
class labels are often mislabeled at both instance-level and boundary-level in
real-world datasets. In this work, we are the first to address instance-level
label noise by proposing a Point Noise-Adaptive Learning (PNAL)
framework. Unlike noise-robust methods designed for image tasks, our framework
is noise-rate blind, so it can cope with the spatially variant noise rate
specific to point clouds. Specifically, we propose a point-wise confidence selection to
obtain reliable labels from the historical predictions of each point. A
cluster-wise label correction with a voting strategy is further proposed to generate
the best possible label by considering neighbor correlations. To handle
boundary-level label noise, we also propose a variant "PNAL-boundary" with a
progressive boundary label cleaning strategy. Extensive experiments demonstrate
its effectiveness on both synthetic and real-world noisy datasets. Even with
60% symmetric noise and high-level boundary noise, our framework
significantly outperforms its baselines, and is comparable to the upper bound
trained on completely clean data. Moreover, we cleaned the popular real-world
dataset ScanNetV2 for rigorous experiments. Our code and data are available at
https://github.com/pleaseconnectwifi/PNAL.
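The abstract outlines two components: point-wise confidence selection over each point's historical predictions, and cluster-wise label correction by voting among neighboring points. The snippet below is a minimal NumPy sketch of that idea, written under our own assumptions (the fixed history window, confidence threshold, and pre-computed clusters are all hypothetical choices); it is not the authors' released implementation, which lives in the repository linked above.

```python
import numpy as np

def select_confident_points(history_probs, threshold=0.9):
    # history_probs: (W, N, C) softmax outputs of the network over the last
    # W epochs for N points and C classes (illustrative history window).
    mean_probs = history_probs.mean(axis=0)            # (N, C)
    pred = mean_probs.argmax(axis=1)                    # (N,) consensus class
    conf = mean_probs.max(axis=1)                       # (N,) mean confidence
    agree = (history_probs.argmax(axis=2) == pred).all(axis=0)
    return agree & (conf >= threshold), pred            # reliable mask, labels

def correct_labels_by_cluster(noisy_labels, clusters, reliable, pred):
    # clusters: (N,) cluster id per point, e.g. from an unsupervised
    # over-segmentation; reliable points in a cluster vote for one
    # corrected label, clusters without reliable points keep their labels.
    corrected = noisy_labels.copy()
    for c in np.unique(clusters):
        members = clusters == c
        voters = members & reliable
        if voters.any():
            corrected[members] = np.bincount(pred[voters]).argmax()
    return corrected
```

In the full framework the corrected labels would then supervise further training rounds; the sketch covers only the selection and voting steps.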
Related papers
- Extracting Clean and Balanced Subset for Noisy Long-tailed Classification [66.47809135771698]
We develop a novel pseudo labeling method using class prototypes from the perspective of distribution matching.
By setting a manually-specified probability measure, we can reduce the side effects of noisy and long-tailed data simultaneously.
Our method can extract a class-balanced subset with clean labels, which brings effective performance gains for long-tailed classification with label noise.
arXiv Detail & Related papers (2024-04-10T07:34:37Z)
- Group Benefits Instances Selection for Data Purification [21.977432359384835]
Existing methods for combating label noise are typically designed and tested on synthetic datasets.
We propose a method named GRIP to alleviate the noisy label problem for both synthetic and real-world datasets.
arXiv Detail & Related papers (2024-03-23T03:06:19Z)
- PNT-Edge: Towards Robust Edge Detection with Noisy Labels by Learning Pixel-level Noise Transitions [119.17602768128806]
It is hard to manually label edges accurately, especially for large datasets.
This paper proposes to learn Pixel-level Noise Transitions to model the label-corruption process.
arXiv Detail & Related papers (2023-07-26T09:45:17Z)
- Lifting Weak Supervision To Structured Prediction [12.219011764895853]
Weak supervision (WS) is a rich set of techniques that produce pseudolabels by aggregating easily obtained but potentially noisy label estimates.
We introduce techniques new to weak supervision based on pseudo-Euclidean embeddings and tensor decompositions.
Several of our results, which can be viewed as robustness guarantees in structured prediction with noisy labels, may be of independent interest.
arXiv Detail & Related papers (2022-11-24T02:02:58Z)
- S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
At the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighborhood in the feature space.
Our method significantly surpasses previous methods on both CIFAR10 and CIFAR100 with artificial noise and on real-world noisy datasets such as WebVision and ANIMAL-10N.
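The neighborhood-consistency selection summarized above can be pictured roughly as follows; this is a hypothetical sketch using plain k-nearest neighbors on extracted features and a majority vote, not the paper's actual procedure.

```python
import numpy as np

def select_consistent_samples(features, labels, k=10):
    # Keep a sample when its annotated label matches the dominant label
    # among its k nearest neighbors in feature space (illustrative only;
    # labels are integer class ids, features are (N, D) embeddings).
    d = ((features[:, None, :] - features[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude the sample itself
    nn = np.argsort(d, axis=1)[:, :k]            # (N, k) neighbor indices
    keep = np.zeros(len(labels), dtype=bool)
    for i, idx in enumerate(nn):
        keep[i] = np.bincount(labels[idx]).argmax() == labels[i]
    return keep
```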
arXiv Detail & Related papers (2021-11-22T15:49:20Z)
- Learning with Noisy Labels for Robust Point Cloud Segmentation [22.203927159777123]
Object class labels are often mislabeled in real-world point cloud datasets.
We propose a novel Point Noise-Adaptive Learning (PNAL) framework.
We conduct extensive experiments to demonstrate the effectiveness of PNAL on both synthetic and real-world noisy datasets.
arXiv Detail & Related papers (2021-07-29T17:59:54Z)
- Training Classifiers that are Universally Robust to All Label Noise Levels [91.13870793906968]
Deep neural networks are prone to overfitting in the presence of label noise.
We propose a distillation-based framework that incorporates a new subcategory of Positive-Unlabeled learning.
Our framework generally outperforms existing approaches at medium to high noise levels.
arXiv Detail & Related papers (2021-05-27T13:49:31Z)
- Co-Seg: An Image Segmentation Framework Against Label Corruption [8.219887855003648]
Supervised deep learning performance is heavily tied to the availability of high-quality labels for training.
We propose a novel framework, namely Co-Seg, to collaboratively train segmentation networks on datasets which include low-quality noisy labels.
Our framework can be easily implemented in any segmentation algorithm to increase its robustness to noisy labels.
arXiv Detail & Related papers (2021-01-31T20:01:40Z)
- A Second-Order Approach to Learning with Instance-Dependent Label Noise [58.555527517928596]
The presence of label noise often misleads the training of deep neural networks.
We show that the errors in human-annotated labels are more likely to be dependent on the difficulty levels of tasks.
arXiv Detail & Related papers (2020-12-22T06:36:58Z)
- Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels [98.13491369929798]
We propose a framework called Class2Simi, which transforms data points with noisy class labels to data pairs with noisy similarity labels.
Class2Simi is computationally efficient because the transformation is performed on-the-fly within mini-batches and it only changes the loss on top of the model's predictions into a pairwise form.
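As a rough, hypothetical illustration of that transformation (not the paper's code), class labels in a mini-batch can be converted to pairwise similarity labels on the fly, with the loss computed on top of the model's class posteriors in a pairwise manner:

```python
import torch
import torch.nn.functional as F

def class_to_similarity(labels):
    # 1 if two samples in the batch share a (possibly noisy) class label, else 0.
    return (labels.unsqueeze(0) == labels.unsqueeze(1)).float()

def pairwise_similarity_loss(logits, labels):
    # Predicted probability that two samples belong to the same class,
    # obtained from the class posteriors, compared against the noisy
    # similarity labels with binary cross-entropy (an illustrative choice).
    probs = F.softmax(logits, dim=1)                       # (B, C)
    sim_pred = (probs @ probs.t()).clamp(1e-6, 1 - 1e-6)   # (B, B)
    return F.binary_cross_entropy(sim_pred, class_to_similarity(labels))
```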
arXiv Detail & Related papers (2020-06-14T07:55:32Z)
- Label Noise Types and Their Effects on Deep Learning [0.0]
In this work, we provide a detailed analysis of the effects of different kinds of label noise on learning.
We propose a generic framework to generate feature-dependent label noise, which we show to be the most challenging case for learning.
To make it easy for other researchers to test their algorithms with noisy labels, we share corrupted labels for the most commonly used benchmark datasets.
arXiv Detail & Related papers (2020-03-23T18:03:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.