Active Self-Training for Weakly Supervised 3D Scene Semantic
Segmentation
- URL: http://arxiv.org/abs/2209.07069v1
- Date: Thu, 15 Sep 2022 06:00:25 GMT
- Title: Active Self-Training for Weakly Supervised 3D Scene Semantic
Segmentation
- Authors: Gengxin Liu, Oliver van Kaick, Hui Huang, Ruizhen Hu
- Abstract summary: We introduce a method for weakly supervised segmentation of 3D scenes that combines self-training and active learning.
We demonstrate that our approach leads to an effective method that provides improvements in scene segmentation over previous works and baselines.
- Score: 17.27850877649498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since the preparation of labeled data for training semantic segmentation
networks of point clouds is a time-consuming process, weakly supervised
approaches have been introduced to learn from only a small fraction of data.
These methods are typically based on learning with contrastive losses while
automatically deriving per-point pseudo-labels from a sparse set of
user-annotated labels. In this paper, our key observation is that the selection
of what samples to annotate is as important as how these samples are used for
training. Thus, we introduce a method for weakly supervised segmentation of 3D
scenes that combines self-training with active learning. The active learning
selects points for annotation that likely result in performance improvements to
the trained model, while the self-training makes efficient use of the
user-provided labels for learning the model. We demonstrate that our approach
leads to an effective method that provides improvements in scene segmentation
over previous works and baselines, while requiring only a small number of user
annotations.
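The interplay the abstract describes, active selection of informative points for annotation combined with self-training on the resulting labels, can be sketched as follows. This is a hypothetical illustration only: entropy-based selection and confidence-thresholded pseudo-labels are common choices for these two steps, but the function names, thresholds, and toy data here are assumptions, not the paper's actual implementation.

```python
import numpy as np

def entropy(probs):
    """Per-point predictive entropy; higher means more uncertain."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def select_points_to_annotate(probs, labeled_mask, budget):
    """Active learning step (assumed entropy criterion): pick the
    `budget` most uncertain unlabeled points for user annotation."""
    scores = entropy(probs)
    scores[labeled_mask] = -np.inf        # never re-select labeled points
    return np.argsort(scores)[::-1][:budget]

def pseudo_label(probs, labeled_mask, threshold=0.5):
    """Self-training step: turn confident predictions on unlabeled
    points into pseudo-labels for the next training round."""
    conf = probs.max(axis=-1)
    mask = (conf >= threshold) & ~labeled_mask
    return np.where(mask, probs.argmax(axis=-1), -1)  # -1 = no pseudo-label

# Toy example: 6 points, 3 classes, 2 points already user-labeled.
rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
labeled = np.array([True, True, False, False, False, False])

to_annotate = select_points_to_annotate(probs, labeled, budget=2)
pseudo = pseudo_label(probs, labeled)
print(to_annotate, pseudo)
```

In a full pipeline these two steps would alternate: annotate the selected points, retrain the segmentation network on user labels plus pseudo-labels, and repeat until the annotation budget is exhausted.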
Related papers
- Self-Training for Sample-Efficient Active Learning for Text Classification with Pre-Trained Language Models [3.546617486894182]
We introduce HAST, a new and effective self-training strategy, which is evaluated on four text classification benchmarks.
Results show that it outperforms the reproduced self-training approaches and reaches classification results comparable to previous experiments for three out of four datasets.
arXiv Detail & Related papers (2024-06-13T15:06:11Z)
- Weakly Supervised 3D Instance Segmentation without Instance-level Annotations [57.615325809883636]
3D semantic scene understanding tasks have achieved great success with the emergence of deep learning, but often require a huge amount of manually annotated training data.
We propose the first weakly-supervised 3D instance segmentation method that only requires categorical semantic labels as supervision.
By generating pseudo instance labels from categorical semantic labels, our designed approach can also assist existing methods for learning 3D instance segmentation at reduced annotation cost.
arXiv Detail & Related papers (2023-08-03T12:30:52Z)
- You Only Need One Thing One Click: Self-Training for Weakly Supervised 3D Scene Understanding [107.06117227661204]
We propose "One Thing One Click", meaning that the annotator only needs to label one point per object.
We iteratively conduct the training and label propagation, facilitated by a graph propagation module.
Our model is also compatible with 3D instance segmentation when equipped with a point-clustering strategy.
arXiv Detail & Related papers (2023-03-26T13:57:00Z)
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method is highly competitive even with the fully supervised counterpart trained on 100% of the labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- Active Self-Semi-Supervised Learning for Few Labeled Samples [4.713652957384158]
Training deep models with limited annotations poses a significant challenge when applied to diverse practical domains.
We propose a simple yet effective framework, active self-semi-supervised learning (AS3L).
AS3L bootstraps semi-supervised models with prior pseudo-labels (PPL).
We develop active learning and label propagation strategies to obtain accurate PPL.
arXiv Detail & Related papers (2022-03-09T07:45:05Z)
- Reducing Label Effort: Self-Supervised meets Active Learning [32.4747118398236]
Recent developments in self-training have achieved very impressive results rivaling supervised learning on some datasets.
Our experiments reveal that self-training is remarkably more efficient than active learning at reducing the labeling effort.
The performance gap between active learning trained either with self-training or from scratch diminishes as we approach the point where almost half of the dataset is labeled.
arXiv Detail & Related papers (2021-08-25T20:04:44Z)
- One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation [78.36781565047656]
We propose "One Thing One Click," meaning that the annotator only needs to label one point per object.
We iteratively conduct the training and label propagation, facilitated by a graph propagation module.
Our results are also comparable to those of the fully supervised counterparts.
arXiv Detail & Related papers (2021-04-06T02:27:25Z)
- Uncertainty-aware Self-training for Text Classification with Few Labels [54.13279574908808]
We study self-training as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck.
We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network.
We show that our methods, leveraging only 20-30 labeled samples per class per task for training and validation, can perform within 3% of fully supervised pre-trained language models.
arXiv Detail & Related papers (2020-06-27T08:13:58Z)
- Improving Semantic Segmentation via Self-Training [75.07114899941095]
We show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm.
We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data.
Our robust training framework can digest human-annotated and pseudo labels jointly and achieve top performances on Cityscapes, CamVid and KITTI datasets.
arXiv Detail & Related papers (2020-04-30T17:09:17Z)
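The teacher-student pipeline summarized in the last entry, training a teacher on labeled data, generating pseudo-labels on unlabeled data, then training a student on both, can be sketched in miniature. The `NearestCentroid` class below is a hypothetical dependency-free stand-in for the paper's segmentation network, and the confidence threshold and toy data are assumptions for illustration only.

```python
import numpy as np

class NearestCentroid:
    """Toy stand-in classifier exposing fit / predict_proba / predict."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict_proba(self, X):
        # Softmax over negative distances to class centroids.
        d = np.linalg.norm(X[:, None] - self.centroids_[None], axis=-1)
        w = np.exp(-d)
        return w / w.sum(axis=-1, keepdims=True)
    def predict(self, X):
        return self.classes_[self.predict_proba(X).argmax(axis=-1)]

def self_training_round(X_lab, y_lab, X_unlab, threshold=0.7):
    # 1. Train a teacher on the human-labeled data.
    teacher = NearestCentroid().fit(X_lab, y_lab)
    # 2. Pseudo-label unlabeled data, keeping only confident predictions.
    probs = teacher.predict_proba(X_unlab)
    keep = probs.max(axis=-1) >= threshold
    # 3. Train a student on human labels and pseudo-labels jointly.
    X_joint = np.concatenate([X_lab, X_unlab[keep]])
    y_joint = np.concatenate([y_lab, teacher.predict(X_unlab[keep])])
    return NearestCentroid().fit(X_joint, y_joint)

# Toy 2D data: two labeled points, three unlabeled points.
X_lab = np.array([[0.0, 0.0], [1.0, 1.0]])
y_lab = np.array([0, 1])
X_unlab = np.array([[0.1, 0.0], [0.9, 1.0], [0.5, 0.5]])
student = self_training_round(X_lab, y_lab, X_unlab)
print(student.predict(X_unlab))
```

The confidence threshold is the key knob: the ambiguous midpoint `[0.5, 0.5]` falls below it and is excluded from the student's training set, which is how self-training avoids amplifying the teacher's uncertain mistakes.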
This list is automatically generated from the titles and abstracts of the papers in this site.