SuperLine3D: Self-supervised Line Segmentation and Description for LiDAR
Point Cloud
- URL: http://arxiv.org/abs/2208.01925v1
- Date: Wed, 3 Aug 2022 09:06:14 GMT
- Title: SuperLine3D: Self-supervised Line Segmentation and Description for LiDAR
Point Cloud
- Authors: Xiangrui Zhao, Sheng Yang, Tianxin Huang, Jun Chen, Teng Ma, Mingyang
Li and Yong Liu
- Abstract summary: We propose the first learning-based feature segmentation and description model for 3D lines in LiDAR point clouds.
Our model can extract lines under arbitrary scale perturbations, and shared EdgeConv encoder layers allow the segmentation and descriptor heads to be trained jointly.
Experiments have demonstrated that our line-based registration method is highly competitive with state-of-the-art point-based approaches.
- Score: 35.16632339908634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Poles and building edges are frequently observable objects on urban roads,
conveying reliable hints for various computer vision tasks. To repeatably
extract them as features and associate them across discrete LiDAR frames for
registration, we propose the first learning-based feature segmentation and
description model for 3D lines in LiDAR point clouds. To train our model
without the time-consuming and tedious data labeling process, we first generate
synthetic primitives for the basic appearance of target lines, and build an
iterative line auto-labeling process to gradually refine line labels on real
LiDAR scans. Our segmentation model can extract lines under arbitrary scale
perturbations, and we use shared EdgeConv encoder layers to train the
segmentation and descriptor heads jointly. Based on this model, we can build a
highly available global registration module for point cloud registration that
works without initial transformation hints. Experiments have demonstrated
that our line-based registration method is highly competitive with
state-of-the-art point-based approaches. Our code is available at
https://github.com/zxrzju/SuperLine3D.git.
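The abstract describes a specific layout: shared EdgeConv encoder layers feeding a line-segmentation head and a per-point descriptor head. Below is a minimal PyTorch sketch of that layout; the layer widths, neighborhood size k, and descriptor dimension are illustrative assumptions rather than the authors' published configuration (see the linked repository for the actual implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def knn_graph(xyz, k):
    # xyz: (B, N, 3) point coordinates -> indices of the k nearest neighbors, (B, N, k)
    dist = torch.cdist(xyz, xyz)                               # pairwise Euclidean distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]    # drop the self-match


class EdgeConv(nn.Module):
    """DGCNN-style EdgeConv: MLP over [x_i, x_j - x_i], max-pooled over neighbors."""
    def __init__(self, in_dim, out_dim, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim), nn.ReLU())

    def forward(self, feat, idx):
        # feat: (B, N, C) per-point features, idx: (B, N, k) neighbor indices
        B, N, C = feat.shape
        nbrs = torch.gather(feat.unsqueeze(1).expand(B, N, N, C), 2,
                            idx.unsqueeze(-1).expand(B, N, self.k, C))   # (B, N, k, C)
        center = feat.unsqueeze(2).expand_as(nbrs)
        edge = torch.cat([center, nbrs - center], dim=-1)                # edge features
        return self.mlp(edge).max(dim=2).values                          # (B, N, out_dim)


class LineSegDescNet(nn.Module):
    """Shared EdgeConv encoder with joint segmentation and descriptor heads."""
    def __init__(self, k=20, desc_dim=64):
        super().__init__()
        self.k = k
        self.enc1 = EdgeConv(3, 64, k)
        self.enc2 = EdgeConv(64, 128, k)
        self.seg_head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
        self.desc_head = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, desc_dim))

    def forward(self, xyz):
        idx = knn_graph(xyz, self.k)                    # neighborhood graph from coordinates
        f = self.enc2(self.enc1(xyz, idx), idx)         # shared encoder features
        seg_logits = self.seg_head(f)                   # per-point line / non-line logits
        desc = F.normalize(self.desc_head(f), dim=-1)   # unit-norm per-point descriptors
        return seg_logits, desc


# Toy forward pass on a random 1024-point cloud
seg_logits, desc = LineSegDescNet()(torch.rand(1, 1024, 3))
```

Sharing the encoder lets a single forward pass serve both tasks; the abstract does not state the training losses, but presumably the segmentation head is supervised by the auto-generated line labels and the descriptor head by correspondences between frames.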
Related papers
- Better Call SAL: Towards Learning to Segment Anything in Lidar [63.9984147657437]
We propose a text-promptable zero-shot model for segmenting and classifying any object in Lidar.
We utilize 2D vision foundation models to generate 3D supervision "for free" using pseudo-labels.
Our model achieves 91% in terms of class-agnostic segmentation and 54% in terms of zero-shot Lidar Panoptic Segmentation.
arXiv Detail & Related papers (2024-03-19T19:58:54Z)
- From CAD models to soft point cloud labels: An automatic annotation pipeline for cheaply supervised 3D semantic segmentation [0.0]
We propose a fully automatic annotation scheme that takes a raw 3D point cloud with a set of fitted CAD models as input and outputs convincing point-wise labels.
Compared with manual annotations, we show that our automatic labels are accurate while drastically reducing the annotation time.
We evaluate the label quality and segmentation performance of PointNet++ on a dataset of real industrial point clouds and Scan2CAD, a public dataset of indoor scenes.
arXiv Detail & Related papers (2023-02-06T20:33:16Z)
- LWSIS: LiDAR-guided Weakly Supervised Instance Segmentation for Autonomous Driving [34.119642131912485]
We present a more artful framework, LiDAR-guided Weakly Supervised Instance Segmentation (LWSIS).
LWSIS uses off-the-shelf 3D data, i.e., point clouds together with 3D boxes, as natural weak supervision for training 2D image instance segmentation models.
LWSIS not only exploits the complementary information in multimodal data during training, but also significantly reduces the cost of dense 2D mask annotation.
arXiv Detail & Related papers (2022-12-07T08:08:01Z)
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method remains highly competitive even when compared with the fully supervised counterpart trained on 100% labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- Image Understands Point Cloud: Weakly Supervised 3D Semantic Segmentation via Association Learning [59.64695628433855]
We propose a novel cross-modality weakly supervised method for 3D segmentation, incorporating complementary information from unlabeled images.
We design a dual-branch network equipped with an active labeling strategy to maximize the value of a tiny fraction of labels.
Our method even outperforms the state-of-the-art fully supervised competitors with less than 1% actively selected annotations.
arXiv Detail & Related papers (2022-09-16T07:59:04Z)
- Dual Adaptive Transformations for Weakly Supervised Point Cloud Segmentation [78.6612285236938]
We propose a novel DAT (Dual Adaptive Transformations) model for weakly supervised point cloud segmentation.
We evaluate our proposed DAT model with two popular backbones on the large-scale S3DIS and ScanNet-V2 datasets.
arXiv Detail & Related papers (2022-07-19T05:43:14Z)
- SCSS-Net: Superpoint Constrained Semi-supervised Segmentation Network for 3D Indoor Scenes [6.3364439467281315]
We propose a superpoint constrained semi-supervised segmentation network for 3D point clouds, named SCSS-Net.
Specifically, we use pseudo labels predicted from unlabeled point clouds for self-training, and superpoints produced by geometry-based and color-based region growing algorithms are combined to correct or discard low-confidence pseudo labels.
arXiv Detail & Related papers (2021-07-08T04:43:21Z)
- SOLD2: Self-supervised Occlusion-aware Line Description and Detection [95.8719432775724]
We introduce the first joint detection and description of line segments in a single deep network.
Our method does not require any annotated line labels and can therefore generalize to any dataset.
We evaluate our approach against previous line detection and description methods on several multi-view datasets.
arXiv Detail & Related papers (2021-04-07T19:27:17Z)
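Both SOLD2 (for image line segments) and SuperLine3D (for LiDAR lines) pair detection with a descriptor head so that features can be associated across frames. As a generic illustration of that downstream use, the following NumPy sketch shows mutual-nearest-neighbor descriptor matching followed by a Kabsch/SVD rigid fit on the matched 3D points; the random inputs are placeholders, and this is not code from either paper.

```python
import numpy as np


def mutual_nearest_matches(desc_a, desc_b):
    """Match unit-norm descriptors (Na, D) and (Nb, D) by mutual nearest neighbor."""
    sim = desc_a @ desc_b.T                          # cosine similarity matrix
    nn_ab = sim.argmax(axis=1)                       # best match in b for each a
    nn_ba = sim.argmax(axis=0)                       # best match in a for each b
    mutual = nn_ba[nn_ab] == np.arange(len(desc_a))  # keep only mutual agreements
    return np.stack([np.nonzero(mutual)[0], nn_ab[mutual]], axis=1)   # (M, 2) index pairs


def rigid_transform(src, dst):
    """Least-squares SE(3) fit (Kabsch) mapping matched 3D points src -> dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # correct an improper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s


# Placeholder scans: normalized random descriptors and random 3D feature locations
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(500, 64)); desc_a /= np.linalg.norm(desc_a, axis=1, keepdims=True)
desc_b = rng.normal(size=(400, 64)); desc_b /= np.linalg.norm(desc_b, axis=1, keepdims=True)
pts_a, pts_b = rng.random((500, 3)), rng.random((400, 3))

pairs = mutual_nearest_matches(desc_a, desc_b)
if len(pairs) >= 3:                                  # need at least 3 non-degenerate matches
    R, t = rigid_transform(pts_a[pairs[:, 0]], pts_b[pairs[:, 1]])
```

In a real registration pipeline the matches would come from corresponding line features rather than random arrays, and the rigid fit would typically be wrapped in RANSAC to reject outlier correspondences.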