Scribble-Supervised LiDAR Semantic Segmentation
- URL: http://arxiv.org/abs/2203.08537v1
- Date: Wed, 16 Mar 2022 11:01:23 GMT
- Title: Scribble-Supervised LiDAR Semantic Segmentation
- Authors: Ozan Unal and Dengxin Dai and Luc Van Gool
- Abstract summary: We propose using scribbles to annotate LiDAR point clouds and release ScribbleKITTI, the first scribble-annotated dataset for LiDAR semantic segmentation.
Our pipeline comprises three stand-alone contributions that can be combined with any LiDAR semantic segmentation model to achieve up to 95.7% of the fully-supervised performance.
- Score: 102.62963605429508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Densely annotating LiDAR point clouds remains too expensive and
time-consuming to keep up with the ever-growing volume of data. While the
current literature focuses on fully-supervised performance, efficient methods
that take advantage of realistic weak supervision have yet to be
explored. In this paper, we propose using scribbles to annotate LiDAR point
clouds and release ScribbleKITTI, the first scribble-annotated dataset for
LiDAR semantic segmentation. Furthermore, we present a pipeline to reduce the
performance gap that arises when using such weak annotations. Our pipeline
comprises three stand-alone contributions that can be combined with any
LiDAR semantic segmentation model to achieve up to 95.7% of the
fully-supervised performance while using only 8% labeled points. Our scribble
annotations and code are available at github.com/ouenal/scribblekitti.
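To make the weak-supervision setting concrete (the paper's three pipeline contributions are not detailed on this page), a common baseline for scribble-annotated scans is to compute the segmentation loss only over the small fraction of labeled points. A minimal PyTorch-style sketch; the ignore-index value, class count, and 8% label rate are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = 255  # assumed marker for points left unlabeled by the scribbles

def partial_cross_entropy(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over labeled points only.

    logits: (N, C) per-point class scores from any LiDAR segmentation backbone.
    labels: (N,) scribble labels; unlabeled points carry IGNORE_INDEX.
    """
    # F.cross_entropy skips targets equal to ignore_index, so the loss is
    # averaged over only the sparsely scribble-labeled points.
    return F.cross_entropy(logits, labels, ignore_index=IGNORE_INDEX)

# Toy example: 10,000 points, 19 classes, roughly 8% of points labeled.
logits = torch.randn(10_000, 19)
labels = torch.full((10_000,), IGNORE_INDEX, dtype=torch.long)
mask = torch.rand(10_000) < 0.08
labels[mask] = torch.randint(0, 19, (int(mask.sum()),))
loss = partial_cross_entropy(logits, labels)
```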
Related papers
- Label-Efficient LiDAR Panoptic Segmentation [22.440065488051047]
For the Limited-Label LiDAR Panoptic Segmentation (L3PS) task, we develop a label-efficient 2D network to generate panoptic pseudo-labels from annotated images.
We then introduce a novel 3D refinement module that capitalizes on the geometric properties of point clouds.
arXiv Detail & Related papers (2025-03-04T07:58:15Z)
- TeFF: Tracking-enhanced Forgetting-free Few-shot 3D LiDAR Semantic Segmentation [10.628870775939161]
This paper addresses the limitations of current few-shot semantic segmentation by exploiting the temporal continuity of LiDAR data.
We employ a tracking model to generate pseudo-ground-truths from a sequence of LiDAR frames, enriching the data available for learning novel classes.
We incorporate LoRA, a technique that reduces the number of trainable parameters, thereby preserving the model's performance on base classes while improving its adaptability to novel classes.
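As a hedged illustration of the LoRA idea referenced above (a generic sketch, not TeFF's integration), a frozen linear layer can be wrapped with a trainable low-rank update so that only the small rank-r matrices receive gradients; the rank and scaling values are assumptions:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (generic LoRA sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)      # base weights stay frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only lora_a and lora_b are trainable, shrinking the parameter budget.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(256, 128), rank=4)
out = layer(torch.randn(32, 256))  # (32, 128)
```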
arXiv Detail & Related papers (2024-08-28T09:18:36Z)
- Weakly Supervised LiDAR Semantic Segmentation via Scatter Image Annotation [38.715754110667916]
We implement LiDAR semantic segmentation using scatter image annotation.
We also propose ScatterNet, a network that includes three pivotal strategies to reduce the performance gap.
Our method requires less than 0.02% of the labeled points to achieve over 95% of the performance of fully-supervised methods.
arXiv Detail & Related papers (2024-04-19T13:01:30Z)
- Learning Tracking Representations from Single Point Annotations [49.47550029470299]
We propose to learn tracking representations from single point annotations in a weakly supervised manner.
Specifically, we propose a soft contrastive learning framework that incorporates a target objectness prior into end-to-end contrastive learning.
arXiv Detail & Related papers (2024-04-15T06:50:58Z)
- Self-Supervised Pre-Training Boosts Semantic Scene Segmentation on LiDAR Data [0.0]
We propose to train a self-supervised encoder with Barlow Twins and use it as a pre-trained network in the task of semantic scene segmentation.
The experimental results demonstrate that our unsupervised pre-training boosts performance once fine-tuned on the supervised task.
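For reference, the Barlow Twins objective behind this pre-training can be sketched as below; the LiDAR encoder, projector, and augmentations are omitted, and the embedding size and lambda weight are assumptions:

```python
import torch

def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor, lambd: float = 5e-3) -> torch.Tensor:
    """Barlow Twins loss on projector outputs for two augmented views of the same batch."""
    b, _ = z1.shape
    # Standardize each embedding dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.T @ z2) / b                                   # cross-correlation matrix (D, D)
    on_diag = (torch.diagonal(c) - 1.0).pow(2).sum()      # pull matching dimensions to correlation 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # decorrelate the rest
    return on_diag + lambd * off_diag

loss = barlow_twins_loss(torch.randn(64, 128), torch.randn(64, 128))
```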
arXiv Detail & Related papers (2023-09-05T11:29:30Z)
- Exploring Active 3D Object Detection from a Generalization Perspective [58.597942380989245]
Uncertainty-based active learning policies fail to balance the trade-off between point cloud informativeness and box-level annotation costs.
We propose CRB, which hierarchically filters out point clouds with redundant 3D bounding-box labels.
Experiments show that the proposed approach outperforms existing active learning strategies.
arXiv Detail & Related papers (2023-01-23T02:43:03Z)
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method is highly competitive even with the fully supervised counterpart trained on 100% of the labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- LaserMix for Semi-Supervised LiDAR Semantic Segmentation [56.73779694312137]
We study the underexplored problem of semi-supervised learning (SSL) for LiDAR segmentation.
Our core idea is to leverage the strong spatial cues of LiDAR point clouds to better exploit unlabeled data.
We propose LaserMix to mix laser beams from different LiDAR scans, and then encourage the model to make consistent and confident predictions.
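A rough sketch of the beam-mixing idea (an illustrative approximation, not the authors' implementation): split two scans into bands of laser inclination angle and alternate the bands between them; the bin count and band layout are assumptions:

```python
import numpy as np

def lasermix(points_a: np.ndarray, points_b: np.ndarray, n_bins: int = 6) -> np.ndarray:
    """Mix two LiDAR scans by alternating inclination-angle bands.

    points_a, points_b: (N, 3+) arrays with x, y, z in the first three columns.
    Returns a mixed scan taking even bands from scan A and odd bands from scan B
    (labels or pseudo-labels would be sliced with the same masks in practice).
    """
    def inclination(pts):
        planar = np.linalg.norm(pts[:, :2], axis=1)
        return np.arctan2(pts[:, 2], planar)     # pitch angle of each point

    lo = min(inclination(points_a).min(), inclination(points_b).min())
    hi = max(inclination(points_a).max(), inclination(points_b).max())
    edges = np.linspace(lo, hi + 1e-6, n_bins + 1)

    band_a = np.digitize(inclination(points_a), edges) - 1
    band_b = np.digitize(inclination(points_b), edges) - 1
    return np.concatenate([points_a[band_a % 2 == 0], points_b[band_b % 2 == 1]])

mixed_scan = lasermix(np.random.randn(1000, 4), np.random.randn(1000, 4))
```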
arXiv Detail & Related papers (2022-06-30T18:00:04Z)
- PointMatch: A Consistency Training Framework for Weakly Supervised Semantic Segmentation of 3D Point Clouds [117.77841399002666]
We propose a novel framework, PointMatch, that leverages both the data and the labels by applying consistency regularization to sufficiently probe the information in the data itself.
The proposed PointMatch achieves the state-of-the-art performance under various weakly-supervised schemes on both ScanNet-v2 and S3DIS datasets.
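As a stand-in for the consistency-regularization idea (not PointMatch's exact formulation), a FixMatch-style term in which confident predictions on one augmented view supervise another can be sketched as:

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_weak: torch.Tensor, logits_strong: torch.Tensor,
                     threshold: float = 0.9) -> torch.Tensor:
    """Pseudo-label consistency between two augmented views of the same points.

    logits_weak / logits_strong: (N, C) predictions for weakly and strongly
    augmented versions of the same point cloud.
    """
    with torch.no_grad():
        probs = logits_weak.softmax(dim=1)
        conf, pseudo = probs.max(dim=1)          # per-point pseudo-labels
        keep = conf > threshold                  # trust only confident points
    if keep.sum() == 0:
        return logits_strong.sum() * 0.0         # no confident points in this batch
    return F.cross_entropy(logits_strong[keep], pseudo[keep])

loss = consistency_loss(torch.randn(5_000, 20), torch.randn(5_000, 20))
```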
arXiv Detail & Related papers (2022-02-22T07:26:31Z)
- Semi-supervised Implicit Scene Completion from Sparse LiDAR [11.136332180451308]
We develop a novel formulation that conditions the semi-supervised implicit function on localized shape embeddings.
It exploits the strong representation learning power of sparse convolutional networks to generate shape-aware dense feature volumes.
We demonstrate intrinsic properties of this new learning system and its usefulness in real-world road scenes.
arXiv Detail & Related papers (2021-11-29T18:50:09Z)