Label-Efficient LiDAR Panoptic Segmentation
- URL: http://arxiv.org/abs/2503.02372v1
- Date: Tue, 04 Mar 2025 07:58:15 GMT
- Title: Label-Efficient LiDAR Panoptic Segmentation
- Authors: Ahmet Selim Çanakçı, Niclas Vödisch, Kürsat Petek, Wolfram Burgard, Abhinav Valada
- Abstract summary: We propose Limited-Label LiDAR Panoptic Segmentation (L3PS). We develop a label-efficient 2D network to generate panoptic pseudo-labels from annotated images. We then introduce a novel 3D refinement module that capitalizes on the geometric properties of point clouds.
- Score: 22.440065488051047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A main bottleneck of learning-based robotic scene understanding methods is the heavy reliance on extensive annotated training data, which often limits their generalization ability. In LiDAR panoptic segmentation, this challenge becomes even more pronounced due to the need to simultaneously address both semantic and instance segmentation from complex, high-dimensional point cloud data. In this work, we address the challenge of LiDAR panoptic segmentation with very few labeled samples by leveraging recent advances in label-efficient vision panoptic segmentation. To this end, we propose a novel method, Limited-Label LiDAR Panoptic Segmentation (L3PS), which requires only a minimal amount of labeled data. Our approach first utilizes a label-efficient 2D network to generate panoptic pseudo-labels from a small set of annotated images, which are subsequently projected onto point clouds. We then introduce a novel 3D refinement module that capitalizes on the geometric properties of point clouds. By incorporating clustering techniques, sequential scan accumulation, and ground point separation, this module significantly enhances the accuracy of the pseudo-labels, improving segmentation quality by up to +10.6 PQ and +7.9 mIoU. We demonstrate that these refined pseudo-labels can be used to effectively train off-the-shelf LiDAR segmentation networks. Through extensive experiments, we show that L3PS not only outperforms existing methods but also substantially reduces the annotation burden. We release the code of our work at https://l3ps.cs.uni-freiburg.de.
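To make the pipeline above concrete, the following is a minimal sketch of its two core steps: projecting 2D panoptic pseudo-labels onto a LiDAR scan, and refining a projected instance by re-clustering it in 3D. This is an illustration under assumed interfaces rather than the released L3PS code; the function names, calibration inputs, and DBSCAN thresholds are all hypothetical.
```python
# Minimal sketch (not the released L3PS implementation): project 2D panoptic
# pseudo-labels onto LiDAR points, then refine one projected instance in 3D.
import numpy as np
from sklearn.cluster import DBSCAN

def project_labels(points, panoptic_img, T_cam_lidar, K):
    """Assign each LiDAR point the panoptic ID of the pixel it projects to.

    points       : (N, 3) LiDAR points in the sensor frame
    panoptic_img : (H, W) integer panoptic IDs from the 2D network
    T_cam_lidar  : (4, 4) LiDAR-to-camera extrinsics; K: (3, 3) intrinsics
    """
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    labels = np.full(len(points), -1, dtype=np.int64)       # -1 = unlabeled
    front = np.flatnonzero(pts_cam[:, 2] > 0)               # in front of camera
    uv = (K @ pts_cam[front].T).T
    uv = np.round(uv[:, :2] / uv[:, 2:3]).astype(int)       # pixel coordinates
    h, w = panoptic_img.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    labels[front[ok]] = panoptic_img[uv[ok, 1], uv[ok, 0]]
    return labels

def refine_instance(points, instance_mask, eps=0.5):
    """Keep only the dominant spatial cluster of one projected instance.

    2D masks often 'bleed' onto background points along depth boundaries;
    re-clustering in 3D and keeping the largest cluster suppresses this.
    """
    idx = np.flatnonzero(instance_mask)
    clusters = DBSCAN(eps=eps, min_samples=5).fit_predict(points[idx])
    if (clusters >= 0).sum() == 0:
        return instance_mask
    largest = np.bincount(clusters[clusters >= 0]).argmax()
    refined = np.zeros_like(instance_mask)
    refined[idx[clusters == largest]] = True
    return refined
```
The paper's refinement module additionally accumulates sequential scans for denser geometry and separates ground points before clustering, so that the ground does not bridge nearby instances; both steps are omitted here for brevity.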
Related papers
- Seg2Box: 3D Object Detection by Point-Wise Semantics Supervision [15.996707255179668]
LiDAR-based 3D object detection and semantic segmentation are critical tasks in 3D scene understanding.
Traditional detection and segmentation methods supervise their models through bounding box labels and semantic mask labels.
This paper aims to eliminate the redundancy by supervising 3D object detection using only semantic labels.
arXiv Detail & Related papers (2025-03-21T02:39:32Z)
- Towards Modality-agnostic Label-efficient Segmentation with Entropy-Regularized Distribution Alignment [62.73503467108322]
This topic is widely studied in 3D point cloud segmentation due to the difficulty of annotating point clouds densely.
Recently, pseudo-labels have been widely employed to facilitate training with limited ground-truth labels.
However, existing pseudo-labeling approaches can suffer heavily from the noise and variation in unlabeled data.
We propose a novel learning strategy to regularize the pseudo-labels generated for training, effectively narrowing the gap between pseudo-labels and model predictions (a minimal loss sketch follows this entry).
arXiv Detail & Related papers (2024-08-29T13:31:15Z)
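As a rough illustration of what an entropy-regularized alignment objective can look like (an assumption, not this paper's exact formulation), the sketch below combines a cross-entropy term that fits the pseudo-labels with an entropy penalty on the predictions; `lam` is a hypothetical trade-off weight.
```python
# Illustrative entropy-regularized pseudo-label objective (an assumption,
# not this paper's exact loss): fit pseudo-labels with cross-entropy while
# penalizing high-entropy (uncertain) predictions.
import torch
import torch.nn.functional as F

def entropy_regularized_loss(logits, pseudo_labels, lam=0.1):
    """logits: (N, C) per-point scores; pseudo_labels: (N,) int64 class IDs."""
    ce = F.cross_entropy(logits, pseudo_labels, ignore_index=-1)  # fit pseudo-labels
    p = F.softmax(logits, dim=1)
    ent = -(p * torch.log(p.clamp_min(1e-8))).sum(dim=1).mean()   # prediction entropy
    return ce + lam * ent  # lam is an assumed trade-off weight
```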
- Beyond the Label Itself: Latent Labels Enhance Semi-supervised Point Cloud Panoptic Segmentation [46.01433705072047]
We find two types of latent labels behind the displayed label embedded in LiDAR and image data.
We propose a novel augmentation, Cylinder-Mix, which generates additional yet reliable samples for training.
We also propose the Instance Position-scale Learning (IPSL) Module to learn and fuse the information of instance position and scale.
arXiv Detail & Related papers (2023-12-13T15:56:24Z)
- You Only Need One Thing One Click: Self-Training for Weakly Supervised 3D Scene Understanding [107.06117227661204]
We propose "One Thing One Click," meaning that the annotator only needs to label one point per object.
We iteratively conduct the training and label propagation, facilitated by a graph propagation module (a propagation sketch follows this entry).
Our model is also compatible with 3D instance segmentation when equipped with a point-clustering strategy.
arXiv Detail & Related papers (2023-03-26T13:57:00Z)
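The sketch below illustrates the general idea of propagating one-click labels over a k-nearest-neighbor graph by majority voting. It is a simplified assumption: the paper's module operates on super-voxels with learned features, which are omitted here, and all names and parameters are hypothetical.
```python
# Illustrative label propagation over a kNN graph from one click per object
# (simplified assumption; written for clarity, not speed).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def propagate_labels(points, labels, k=8, iters=10):
    """Iteratively spread sparse labels (-1 = unlabeled) over a kNN graph."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
    _, neigh = nn.kneighbors(points)   # (N, k+1); column 0 is the point itself
    neigh = neigh[:, 1:]
    out = labels.copy()
    for _ in range(iters):
        for i in np.flatnonzero(out == -1):
            votes = out[neigh[i]]
            votes = votes[votes >= 0]
            if len(votes) > 0:         # adopt the majority label of neighbors
                out[i] = np.bincount(votes).argmax()
    return out
```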
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method is highly competitive even with the fully supervised counterpart trained on 100% of the labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- Image Understands Point Cloud: Weakly Supervised 3D Semantic Segmentation via Association Learning [59.64695628433855]
We propose a novel cross-modality weakly supervised method for 3D segmentation, incorporating complementary information from unlabeled images.
We design a dual-branch network equipped with an active labeling strategy to maximize the utility of a tiny fraction of labels.
Our method even outperforms the state-of-the-art fully supervised competitors with less than 1% actively selected annotations.
arXiv Detail & Related papers (2022-09-16T07:59:04Z)
- Dense Supervision Propagation for Weakly Supervised Semantic Segmentation on 3D Point Clouds [59.63231842439687]
We train a semantic point cloud segmentation network with only a small portion of points being labeled.
We propose a cross-sample feature reallocating module to transfer similar features and therefore re-route the gradients across two samples.
Our weakly supervised method with only 10% and 1% of labels can produce results comparable to the fully supervised counterpart.
arXiv Detail & Related papers (2021-07-23T14:34:57Z)
- One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation [78.36781565047656]
We propose "One Thing One Click," meaning that the annotator only needs to label one point per object.
We iteratively conduct the training and label propagation, facilitated by a graph propagation module.
Our results are also comparable to those of the fully supervised counterparts.
arXiv Detail & Related papers (2021-04-06T02:27:25Z)