LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds
- URL: http://arxiv.org/abs/2210.08064v1
- Date: Fri, 14 Oct 2022 19:13:36 GMT
- Title: LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds
- Authors: Minghua Liu, Yin Zhou, Charles R. Qi, Boqing Gong, Hao Su, Dragomir
Anguelov
- Abstract summary: We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method is even highly competitive compared to the fully supervised counterpart with 100% labels.
- Score: 62.49198183539889
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic segmentation of LiDAR point clouds is an important task in
autonomous driving. However, training deep models via conventional supervised
methods requires large datasets which are costly to label. It is critical to
have label-efficient segmentation approaches to scale up the model to new
operational domains or to improve performance on rare cases. While most prior
works focus on indoor scenes, we are one of the first to propose a
label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR
point clouds. Our method co-designs an efficient labeling process with
semi/weakly supervised learning and is applicable to nearly any 3D semantic
segmentation backbones. Specifically, we leverage geometry patterns in outdoor
scenes to have a heuristic pre-segmentation to reduce the manual labeling and
jointly design the learning targets with the labeling process. In the learning
step, we leverage prototype learning to get more descriptive point embeddings
and use multi-scan distillation to exploit richer semantics from temporally
aggregated point clouds to boost the performance of single-scan models.
Evaluated on the SemanticKITTI and the nuScenes datasets, we show that our
proposed method outperforms existing label-efficient methods. With extremely
limited human annotations (e.g., 0.1% point labels), our proposed method is
even highly competitive compared to the fully supervised counterpart with 100%
labels.
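
The two learning-stage components named in the abstract can be illustrated with short sketches. Below is a minimal, hypothetical sketch of prototype learning on per-point embeddings, assuming a backbone that outputs (N, D) point features and sparse point labels; all names, shapes, and the temperature value are illustrative and not taken from the paper's code. Class prototypes are the mean embedding of the labeled points, and every point is scored by cosine similarity to those prototypes.

```python
# Minimal sketch of prototype learning for per-point embeddings (illustrative
# only; not the authors' released code). Class prototypes are the mean
# embedding of the sparsely labeled points, and every point is scored by
# cosine similarity to the prototypes.
import torch
import torch.nn.functional as F


def prototype_logits(embeddings, labels, num_classes, temperature=0.1):
    """embeddings: (N, D) per-point features; labels: (N,), -1 = unlabeled."""
    emb = F.normalize(embeddings, dim=1)                      # unit-length embeddings
    prototypes = []
    for c in range(num_classes):
        mask = labels == c
        # mean embedding of labeled points; zero placeholder for empty classes
        proto = emb[mask].mean(dim=0) if mask.any() else emb.new_zeros(emb.shape[1])
        prototypes.append(proto)
    prototypes = F.normalize(torch.stack(prototypes), dim=1)  # (C, D)
    return emb @ prototypes.t() / temperature                 # (N, C) cosine logits


# Usage: cross-entropy on the few labeled points only (~1% of points here).
emb = torch.randn(4096, 64, requires_grad=True)   # stand-in for backbone features
labels = torch.full((4096,), -1)
labels[torch.randperm(4096)[:40]] = torch.randint(0, 20, (40,))
logits = prototype_logits(emb, labels, num_classes=20)
loss = F.cross_entropy(logits[labels >= 0], labels[labels >= 0])
loss.backward()
```

Multi-scan distillation can likewise be sketched as a temperature-scaled soft-label loss, where a teacher that sees temporally aggregated scans supplies targets for the single-scan student. The loss below is a generic knowledge-distillation formulation under that assumption, not the paper's exact objective.

```python
# Hypothetical sketch of multi-scan distillation (shapes and names are
# illustrative): a teacher trained on temporally aggregated scans provides
# soft targets for the single-scan student via a temperature-scaled KL loss.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label KL divergence between teacher and student, averaged over points."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2


# Usage: (N, C) per-point logits from the two models on the same single scan.
student_logits = torch.randn(4096, 20, requires_grad=True)
teacher_logits = torch.randn(4096, 20)   # teacher is kept frozen
distillation_loss(student_logits, teacher_logits).backward()
```

In the paper's pipeline these learning objectives sit on top of the sparse labels produced by the heuristic pre-segmentation; the sketches above only show the shape of the computation, not the co-designed labeling process.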
Related papers
- A Data-efficient Framework for Robotics Large-scale LiDAR Scene Parsing [10.497309421830671]
Existing state-of-the-art 3D point cloud understanding methods perform well only in a fully supervised manner.
This work presents a general and simple framework for tackling point cloud understanding when labels are limited.
arXiv Detail & Related papers (2023-12-03T02:38:51Z)
- Weakly Supervised 3D Instance Segmentation without Instance-level Annotations [57.615325809883636]
3D semantic scene understanding tasks have achieved great success with the emergence of deep learning, but often require a huge amount of manually annotated training data.
We propose the first weakly supervised 3D instance segmentation method that requires only categorical semantic labels as supervision.
By generating pseudo instance labels from categorical semantic labels, our approach can also assist existing methods in learning 3D instance segmentation at reduced annotation cost.
arXiv Detail & Related papers (2023-08-03T12:30:52Z)
- Active Self-Training for Weakly Supervised 3D Scene Semantic Segmentation [17.27850877649498]
We introduce a method for weakly supervised segmentation of 3D scenes that combines self-training and active learning.
We demonstrate that our approach yields an effective method that improves scene segmentation over previous works and baselines.
arXiv Detail & Related papers (2022-09-15T06:00:25Z)
- Collaborative Propagation on Multiple Instance Graphs for 3D Instance Segmentation with Single-point Supervision [63.429704654271475]
We propose a novel weakly supervised method, RWSeg, that requires labeling only one point per object.
With these sparse weak labels, we introduce a unified framework with two branches to propagate semantic and instance information.
Specifically, we propose a Cross-graph Competing Random Walks (CRW) algorithm that encourages competition among different instance graphs.
arXiv Detail & Related papers (2022-08-10T02:14:39Z)
- One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation [78.36781565047656]
We propose "One Thing One Click," meaning that the annotator only needs to label one point per object.
We iteratively conduct training and label propagation, facilitated by a graph propagation module.
Our results are also comparable to those of the fully supervised counterparts.
arXiv Detail & Related papers (2021-04-06T02:27:25Z)
- Label-Efficient Point Cloud Semantic Segmentation: An Active Learning Approach [35.23982484919796]
We propose a more realistic annotation counting scheme so that a fair benchmark is possible.
To better exploit the labeling budget, we adopt a super-point-based active learning strategy.
Experiments on two benchmark datasets demonstrate the efficacy of our proposed active learning strategy.
arXiv Detail & Related papers (2021-01-18T08:37:21Z)
- Few-shot 3D Point Cloud Semantic Segmentation [138.80825169240302]
We propose a novel attention-aware multi-prototype transductive few-shot point cloud semantic segmentation method.
Our proposed method shows significant and consistent improvements over baselines in different few-shot point cloud semantic segmentation settings.
arXiv Detail & Related papers (2020-06-22T08:05:25Z)
- Weakly Supervised Semantic Point Cloud Segmentation: Towards 10X Fewer Labels [77.65554439859967]
We propose a weakly supervised point cloud segmentation approach which requires only a tiny fraction of points to be labelled in the training stage.
Experiments are done on three public datasets with different degrees of weak supervision.
arXiv Detail & Related papers (2020-04-08T16:14:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.