Towards Robust Few-shot Point Cloud Semantic Segmentation
- URL: http://arxiv.org/abs/2309.11228v1
- Date: Wed, 20 Sep 2023 11:40:10 GMT
- Title: Towards Robust Few-shot Point Cloud Semantic Segmentation
- Authors: Yating Xu, Na Zhao, Gim Hee Lee
- Abstract summary: Few-shot point cloud semantic segmentation aims to train a model to quickly adapt to new unseen classes with only a handful of support set samples.
We propose Component-level Clean Noise Separation (CCNS) representation learning to learn discriminative feature representations.
We also propose a Multi-scale Degree-based Noise Suppression (MDNS) scheme to remove the noisy shots from the support set.
- Score: 57.075074484313
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot point cloud semantic segmentation aims to train a model to quickly
adapt to new unseen classes with only a handful of support set samples.
However, the noise-free assumption on the support set can easily be violated in
many practical real-world settings. In this paper, we focus on improving the
robustness of few-shot point cloud segmentation under the detrimental influence
of noisy support sets at test time. To this end, we first propose
Component-level Clean Noise Separation (CCNS) representation learning to learn
discriminative feature representations that separate the clean samples of the
target classes from the noisy samples. Leveraging the well-separated clean and
noisy support samples from our CCNS, we further propose a Multi-scale
Degree-based Noise Suppression (MDNS) scheme to remove the noisy shots from the
support set. We conduct extensive experiments under various noise settings on two
benchmark datasets. Our results show that the combination of CCNS and MDNS
significantly improves the performance. Our code is available at
https://github.com/Pixie8888/R3DFSSeg.
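The abstract does not spell out how the degree-based suppression works, but the general idea admits a minimal sketch. The following is an illustrative guess, not the authors' implementation: build k-nearest-neighbor graphs over the support-shot embeddings at several scales, accumulate each shot's in-degree, and drop low-degree shots as noisy. The scales, the cosine metric, the keep ratio, and the name `mdns_filter` are all assumptions.

```python
import torch
import torch.nn.functional as F

def mdns_filter(support_feats: torch.Tensor, scales=(2, 3, 4), keep_ratio=0.6):
    """Keep the support shots that look 'clean' under a degree criterion.

    support_feats: (N, D) embeddings of the N support shots of one class.
    For each scale k, build a k-nearest-neighbor graph from cosine similarity
    and accumulate each shot's in-degree; shots that are rarely selected as
    neighbors are treated as noisy and dropped.
    """
    n = support_feats.shape[0]
    feats = F.normalize(support_feats, dim=1)
    sim = feats @ feats.t()                     # (N, N) cosine similarity
    sim.fill_diagonal_(float("-inf"))           # ignore self-similarity
    degree = torch.zeros(n)
    for k in scales:
        k = min(k, n - 1)
        nn_idx = sim.topk(k, dim=1).indices.flatten()
        degree.index_add_(0, nn_idx, torch.ones(nn_idx.numel()))
    n_keep = max(1, int(round(keep_ratio * n)))
    return degree.topk(n_keep).indices          # indices of the retained shots
```

In the paper, this suppression step operates on features already separated by CCNS, which is what would make a simple degree criterion effective; the exact degree definition and removal rule are design choices this sketch only guesses at.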
Related papers
- Fast Learning of Signed Distance Functions from Noisy Point Clouds via Noise to Noise Mapping [54.38209327518066]
Learning signed distance functions from point clouds is an important task in 3D computer vision.
We propose to learn SDFs via a noise-to-noise mapping, which does not require any clean point cloud or ground-truth supervision.
Our novelty lies in the noise-to-noise mapping, which can infer a highly accurate SDF for a single object or scene from multiple or even a single noisy observation.
arXiv Detail & Related papers (2024-07-04T03:35:02Z)
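The essence of the noise-to-noise idea above is that a prediction made from one noisy observation can be supervised by a second noisy observation of the same shape, with no clean data. A minimal sketch follows; the cited paper actually learns a signed distance function with an EMD-based loss, whereas this sketch substitutes a Chamfer distance and a hypothetical `denoiser` callable to stay short.

```python
import torch

def chamfer(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets a: (N, 3) and b: (M, 3)."""
    d = torch.cdist(a, b)                        # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def noise_to_noise_loss(denoiser, noisy_a: torch.Tensor, noisy_b: torch.Tensor):
    """Supervise a denoiser with a *second noisy* observation of the same shape.

    No clean point cloud is needed: the prediction from one noisy scan is
    matched against another noisy scan of the same object. Chamfer distance is
    an illustrative stand-in for the paper's EMD-based SDF objective.
    """
    pred = denoiser(noisy_a)                     # (N, 3) denoised points
    return chamfer(pred, noisy_b)
```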
- Stable Neighbor Denoising for Source-free Domain Adaptive Segmentation [91.83820250747935]
Pseudo-label noise is mainly contained in unstable samples in which predictions of most pixels undergo significant variations during self-training.
We introduce the Stable Neighbor Denoising (SND) approach, which effectively discovers highly correlated stable and unstable samples.
SND consistently outperforms state-of-the-art methods in various SFUDA semantic segmentation settings.
arXiv Detail & Related papers (2024-06-10T21:44:52Z)
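The notion of an "unstable sample" above can be made concrete with a small sketch: compare a sample's pixel-wise predictions across two self-training rounds and measure how many flip. This only illustrates the stability criterion; the flip threshold and function name are assumptions, not the SND method itself.

```python
import torch

def unstable_fraction(logits_prev: torch.Tensor, logits_curr: torch.Tensor) -> float:
    """Fraction of pixels whose predicted class flips between two checkpoints.

    logits_*: (C, H, W) segmentation logits for the same image from two
    consecutive self-training rounds. A high flip rate marks the sample as
    unstable in the spirit of SND.
    """
    flips = logits_prev.argmax(dim=0) != logits_curr.argmax(dim=0)
    return flips.float().mean().item()

# Hypothetical usage: treat a sample as unstable if over 30% of pixels flip.
# is_unstable = unstable_fraction(prev_logits, curr_logits) > 0.3
```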
- Negative Pre-aware for Noisy Cross-modal Matching [46.5591267410225]
Cross-modal noise-robust learning is a challenging task since noisy correspondence is hard to recognize and rectify.
We present a novel Negative Pre-aware Cross-modal matching solution for fine-tuning large visual-language models on noisy downstream tasks.
arXiv Detail & Related papers (2023-12-10T05:52:36Z)
- Neighborhood Collective Estimation for Noisy Label Identification and Correction [92.20697827784426]
Learning with noisy labels (LNL) aims at designing strategies to improve model performance and generalization by mitigating the effects of model overfitting to noisy labels.
Recent advances employ the predicted label distributions of individual samples to perform noise verification and noisy label correction, which easily gives rise to confirmation bias.
We propose Neighborhood Collective Estimation, in which the predictive reliability of a candidate sample is re-estimated by contrasting it against its feature-space nearest neighbors.
arXiv Detail & Related papers (2022-08-05T14:47:22Z)
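The neighborhood re-estimation above is straightforward to illustrate: instead of trusting a sample's own predicted distribution, score it by how well it agrees with the average distribution of its feature-space nearest neighbors. The dot-product agreement and the value of k below are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def neighbor_reliability(feats: torch.Tensor, probs: torch.Tensor, k: int = 10):
    """Re-estimate per-sample label reliability from feature-space neighbors.

    feats: (N, D) sample features; probs: (N, C) predicted label distributions.
    Each sample's distribution is compared with the average distribution of its
    k nearest neighbors, rather than being trusted on its own.
    """
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t()                      # (N, N) cosine similarity
    sim.fill_diagonal_(float("-inf"))            # exclude the sample itself
    nn_idx = sim.topk(k, dim=1).indices          # (N, k) nearest neighbors
    neighbor_probs = probs[nn_idx].mean(dim=1)   # (N, C) collective estimate
    return (probs * neighbor_probs).sum(dim=1)   # high score = neighbors agree
```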
- Identifying Hard Noise in Long-Tailed Sample Distribution [76.16113794808001]
We introduce Noisy Long-Tailed Classification (NLT).
Most de-noising methods fail to identify the hard noise.
We design an iterative noisy learning framework called Hard-to-Easy (H2E).
arXiv Detail & Related papers (2022-07-27T09:03:03Z)
- ProMix: Combating Label Noise via Maximizing Clean Sample Utility [18.305972075220765]
ProMix is a framework that maximizes the utility of clean samples to boost performance.
It achieves an average improvement of 2.48% on the CIFAR-N dataset.
arXiv Detail & Related papers (2022-07-21T03:01:04Z)
- Training Classifiers that are Universally Robust to All Label Noise Levels [91.13870793906968]
Deep neural networks are prone to overfitting in the presence of label noise.
We propose a distillation-based framework that incorporates a new subcategory of Positive-Unlabeled learning.
Our framework generally outperforms existing methods at medium to high noise levels.
arXiv Detail & Related papers (2021-05-27T13:49:31Z)
- Differentiable Manifold Reconstruction for Point Cloud Denoising [23.33652755967715]
3D point clouds are often perturbed by noise due to the inherent limitations of acquisition equipment.
We propose to learn the underlying manifold of a noisy point cloud from differentiably subsampled points.
We show that our method significantly outperforms state-of-the-art denoising methods under both synthetic noise and real world noise.
arXiv Detail & Related papers (2020-07-27T13:31:41Z)
- PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling [39.36827481232841]
We present a novel end-to-end network for robust point clouds processing, named PointASNL.
The key component of our approach is the adaptive sampling (AS) module.
Our AS module not only benefits the feature learning of point clouds, but also mitigates the biased effect of outliers.
arXiv Detail & Related papers (2020-03-01T14:04:08Z)
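To illustrate what an adaptive sampling module can do for outliers, here is a heavily simplified sketch: each initially sampled point is re-positioned as an attention-weighted blend of its nearest neighbors, pulling isolated outlier samples back toward dense regions. PointASNL learns its attention from both coordinates and features; the fixed distance-based softmax and temperature `tau` below are illustrative stand-ins.

```python
import torch

def adaptive_shift(points: torch.Tensor, sampled: torch.Tensor,
                   k: int = 8, tau: float = 0.1) -> torch.Tensor:
    """Re-position sampled points as attention-weighted blends of neighbors.

    points: (N, 3) full cloud; sampled: (M, 3) initial samples, e.g. from
    farthest point sampling. Each sampled point is replaced by a softmax-
    weighted average of its k nearest neighbors, so an outlier sample drifts
    toward the underlying surface instead of staying isolated.
    """
    d = torch.cdist(sampled, points)             # (M, N) pairwise distances
    nn_d, nn_idx = d.topk(k, dim=1, largest=False)
    weights = torch.softmax(-nn_d / tau, dim=1)  # closer neighbors weigh more
    neighbors = points[nn_idx]                   # (M, k, 3) neighbor coordinates
    return (weights.unsqueeze(-1) * neighbors).sum(dim=1)
```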
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.