All Points Matter: Entropy-Regularized Distribution Alignment for
Weakly-supervised 3D Segmentation
- URL: http://arxiv.org/abs/2305.15832v2
- Date: Fri, 20 Oct 2023 14:03:05 GMT
- Title: All Points Matter: Entropy-Regularized Distribution Alignment for
Weakly-supervised 3D Segmentation
- Authors: Liyao Tang, Zhe Chen, Shanshan Zhao, Chaoyue Wang, Dacheng Tao
- Abstract summary: Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
- Score: 67.30502812804271
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks
where only sparse ground-truth labels are available for learning. Existing
methods often rely on empirical label selection strategies, such as confidence
thresholding, to generate beneficial pseudo-labels for model training. This
approach may, however, hinder the comprehensive exploitation of unlabeled data
points. We hypothesize that this selective usage arises from the noise in
pseudo-labels generated on unlabeled data. The noise in pseudo-labels may
result in significant discrepancies between pseudo-labels and model
predictions, thus confusing and affecting the model training greatly. To
address this issue, we propose a novel learning strategy to regularize the
generated pseudo-labels and effectively narrow the gaps between pseudo-labels
and model predictions. More specifically, our method introduces an Entropy
Regularization loss and a Distribution Alignment loss for weakly supervised
learning in 3D segmentation tasks, resulting in an ERDA learning strategy.
Interestingly, when the distribution alignment loss is formulated with the KL distance, the combined objective reduces to a deceptively simple cross-entropy-based loss that simultaneously optimizes both the pseudo-label generation network and the 3D segmentation
network. Despite its simplicity, our method delivers promising performance gains.
We validate its effectiveness through extensive experiments on
various baselines and large-scale datasets. Results show that ERDA enables the
effective use of all unlabeled data points for learning and achieves
state-of-the-art performance under different settings. Remarkably, our
method can outperform fully-supervised baselines using only 1% of true
annotations. Code and model will be made publicly available at
https://github.com/LiyaoTang/ERDA.
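The reduction described in the abstract can be illustrated with a short sketch (not the authors' implementation; function names, the absence of loss weighting, and the epsilon smoothing are assumptions). Since KL(p || q) = CE(p, q) - H(p), adding the entropy regularizer H(p) on the pseudo-label distribution to the KL-based alignment loss collapses the sum into a single cross-entropy term:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def erda_loss(pseudo_logits, pred_logits, eps=1e-8):
    """ER + DA objective for one batch of points.

    pseudo_logits: (num_points, num_classes) from the pseudo-label branch
    pred_logits:   (num_points, num_classes) from the segmentation network
    """
    p = softmax(pseudo_logits)  # pseudo-label distribution
    q = softmax(pred_logits)    # model prediction

    # Entropy regularization: H(p), pushes pseudo-labels to be confident.
    entropy = -(p * np.log(p + eps)).sum(axis=-1).mean()
    # Distribution alignment: KL(p || q) = CE(p, q) - H(p).
    kl = (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1).mean()

    # Their sum is exactly the cross-entropy CE(p, q), which provides a
    # gradient to both branches at once.
    return entropy + kl
```

Equivalently, `-(p * np.log(q + eps)).sum(axis=-1).mean()` yields the same value, which is the "deceptively simple" cross-entropy form the abstract refers to.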
Related papers
- Towards Modality-agnostic Label-efficient Segmentation with Entropy-Regularized Distribution Alignment [62.73503467108322]
Label-efficient segmentation is widely studied for 3D point clouds due to the difficulty of densely annotating them.
Until recently, pseudo-labels have been widely employed to facilitate training with limited ground-truth labels.
Existing pseudo-labeling approaches can suffer heavily from the noise and variation in unlabelled data.
We propose a novel learning strategy to regularize the pseudo-labels generated for training, thus effectively narrowing the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2024-08-29T13:31:15Z) - Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784]
Test-time adaptation (TTA) is a task that continually adapts a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning model with cross-entropy loss according to estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample makes the cross-entropy loss vulnerable to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
arXiv Detail & Related papers (2024-01-15T03:33:39Z) - ERASE: Error-Resilient Representation Learning on Graphs for Label Noise
Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience.
Our method outperforms multiple baselines by clear margins across a broad range of noise levels and offers strong scalability.
arXiv Detail & Related papers (2023-12-13T17:59:07Z) - Dist-PU: Positive-Unlabeled Learning from a Label Distribution
Perspective [89.5370481649529]
We approach positive-unlabeled (PU) learning from a label distribution perspective in this paper.
Motivated by this view, we propose to pursue consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
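The label-distribution idea above can be sketched as a toy objective (illustrative only, not the Dist-PU implementation; the L1 discrepancy measure and the binary positive/negative setup are assumptions): the mean predicted positive probability over unlabeled data is pushed toward the known positive-class prior.

```python
import numpy as np

def label_distribution_loss(pred_pos_probs, class_prior):
    """Penalize the gap between predicted and expected label distributions.

    pred_pos_probs: (n,) predicted positive-class probabilities on
                    unlabeled samples
    class_prior:    assumed fraction of positives among those samples
    """
    predicted_rate = pred_pos_probs.mean()
    # L1 distance between the aggregate prediction and the prior.
    return abs(predicted_rate - class_prior)
```

Minimizing such a term discourages the degenerate solutions (e.g. predicting everything negative) that per-sample losses alone permit in PU learning.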
arXiv Detail & Related papers (2022-12-06T07:38:29Z) - Pseudo-Label Noise Suppression Techniques for Semi-Supervised Semantic
Segmentation [21.163070161951868]
Semi-supervised learning (SSL) can reduce the need for large labelled datasets by incorporating unlabelled data into training.
Current SSL approaches use an initially supervised trained model to generate predictions for unlabelled images, called pseudo-labels.
We use three mechanisms to control pseudo-label noise and errors.
arXiv Detail & Related papers (2022-10-19T09:46:27Z) - Re-distributing Biased Pseudo Labels for Semi-supervised Semantic
Segmentation: A Baseline Investigation [30.688753736660725]
We present a simple and yet effective Distribution Alignment and Random Sampling (DARS) method to produce unbiased pseudo labels.
Our method performs favorably in comparison with state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-23T14:45:14Z) - Weakly Supervised Pseudo-Label assisted Learning for ALS Point Cloud
Semantic Segmentation [1.4620086904601473]
Competitive point cloud segmentation results usually rely on large amounts of labeled data.
In this study, we propose a pseudo-labeling strategy to obtain accurate results with limited ground truth.
arXiv Detail & Related papers (2021-05-05T08:07:21Z) - PseudoSeg: Designing Pseudo Labels for Semantic Segmentation [78.35515004654553]
We present a re-design of pseudo-labeling to generate structured pseudo labels for training with unlabeled or weakly-labeled data.
We demonstrate the effectiveness of the proposed pseudo-labeling strategy in both low-data and high-data regimes.
arXiv Detail & Related papers (2020-10-19T17:59:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.