A Spatiotemporal Correspondence Approach to Unsupervised LiDAR
Segmentation with Traffic Applications
- URL: http://arxiv.org/abs/2308.12433v1
- Date: Wed, 23 Aug 2023 21:32:46 GMT
- Title: A Spatiotemporal Correspondence Approach to Unsupervised LiDAR
Segmentation with Traffic Applications
- Authors: Xiao Li, Pan He, Aotian Wu, Sanjay Ranka, Anand Rangarajan
- Abstract summary: The key idea is to leverage the spatiotemporal nature of a dynamic point cloud sequence and introduce drastically stronger augmentation via correspondences across frames.
We alternate between clustering points into semantic groups and optimizing models with point-wise pseudo-spatiotemporal labels.
Our method can learn discriminative features in an unsupervised learning fashion.
- Score: 16.260518238832887
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the problem of unsupervised semantic segmentation of outdoor LiDAR
point clouds in diverse traffic scenarios. The key idea is to leverage the
spatiotemporal nature of a dynamic point cloud sequence and introduce
drastically stronger augmentation by establishing spatiotemporal
correspondences across multiple frames. We dovetail clustering and pseudo-label
learning in this work. Essentially, we alternate between clustering points into
semantic groups and optimizing models using point-wise pseudo-spatiotemporal
labels with a simple learning objective. Therefore, our method can learn
discriminative features in an unsupervised learning fashion. We show promising
segmentation performance on Semantic-KITTI, SemanticPOSS, and FLORIDA benchmark
datasets covering scenarios in autonomous vehicle and intersection
infrastructure, which is competitive when compared against many existing fully
supervised learning methods. This general framework can lead to a unified
representation learning approach for LiDAR point clouds incorporating domain
knowledge.
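The alternation described in the abstract, clustering points into semantic groups and then optimizing a model against the resulting point-wise pseudo-labels, can be illustrated with a minimal sketch. This is not the paper's implementation: the k-means routine and the nearest-centroid "model" step stand in for the authors' clustering and gradient-trained segmentation network, and all function names here are my own.

```python
import numpy as np

def kmeans(points, k, iters=10, seed=0):
    """Plain k-means with farthest-point initialization."""
    rng = np.random.default_rng(seed)
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        # distance of each point to its nearest already-chosen center
        d = np.min([np.square(points - c).sum(1) for c in centers], axis=0)
        centers.append(points[np.argmax(d)])
    centers = np.stack(centers).astype(float)
    for _ in range(iters):
        # assignment step: label each point with its nearest center
        labels = np.argmin(np.square(points[:, None] - centers[None]).sum(-1), axis=1)
        # update step: move each center to the mean of its assigned points
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = points[mask].mean(0)
    return labels, centers

def cluster_pseudolabel_rounds(features, k, rounds=3):
    """Toy alternation: (1) cluster features into k pseudo-classes,
    (2) fit class centroids to the pseudo-labels. The centroid fit is a
    stand-in for gradient training of a segmentation network; in the real
    method, features would be re-extracted by the updated network before
    the next clustering round."""
    for _ in range(rounds):
        pseudo_labels, centroids = kmeans(features, k)
    return pseudo_labels, centroids
```

In the paper the second step is a learning objective over spatiotemporal pseudo-labels pooled across frames; here it degenerates to re-fitting centroids, which keeps the loop self-contained.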
Related papers
- Point Contrastive Prediction with Semantic Clustering for
Self-Supervised Learning on Point Cloud Videos [71.20376514273367]
We propose a unified point cloud video self-supervised learning framework for object-centric and scene-centric data.
Our method outperforms supervised counterparts on a wide range of downstream tasks.
arXiv Detail & Related papers (2023-08-18T02:17:47Z) - Spatiotemporal Self-supervised Learning for Point Clouds in the Wild [65.56679416475943]
We introduce an SSL strategy that leverages positive pairs in both the spatial and temporal domain.
We demonstrate the benefits of our approach via extensive experiments performed by self-supervised training on two large-scale LiDAR datasets.
arXiv Detail & Related papers (2023-03-28T18:06:22Z) - Rethinking Range View Representation for LiDAR Segmentation [66.73116059734788]
"Many-to-one" mapping, semantic incoherence, and shape deformation are possible impediments to effective learning from range view projections.
We present RangeFormer, a full-cycle framework comprising novel designs across network architecture, data augmentation, and post-processing.
We show that, for the first time, a range view method is able to surpass the point, voxel, and multi-view fusion counterparts in the competing LiDAR semantic and panoptic segmentation benchmarks.
arXiv Detail & Related papers (2023-03-09T16:13:27Z) - Few-Shot Point Cloud Semantic Segmentation via Contrastive
Self-Supervision and Multi-Resolution Attention [6.350163959194903]
We propose a contrastive self-supervision framework to pretrain models for few-shot learning.
Specifically, we implement a novel contrastive learning approach with a learnable augmentor for a 3D point cloud.
We develop a multi-resolution attention module using both the nearest and farthest points to extract the local and global point information more effectively.
arXiv Detail & Related papers (2023-02-21T07:59:31Z) - LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method remains highly competitive with the fully supervised counterpart trained on 100% of the labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - Unsupervised Representation Learning for Time Series with Temporal
Neighborhood Coding [8.45908939323268]
We propose a self-supervised framework for learning generalizable representations for non-stationary time series.
Our motivation stems from the medical field, where the ability to model the dynamic nature of time series data is especially valuable.
arXiv Detail & Related papers (2021-06-01T19:53:24Z) - Self-Supervised Learning of Lidar Segmentation for Autonomous Indoor
Navigation [17.46116398744719]
We present a self-supervised learning approach for the semantic segmentation of lidar frames.
Our method is used to train a deep point cloud segmentation architecture without any human annotation.
We provide insights into our network predictions and show that our approach can also improve the performances of common localization techniques.
arXiv Detail & Related papers (2020-12-10T18:58:10Z) - Panoster: End-to-end Panoptic Segmentation of LiDAR Point Clouds [81.12016263972298]
We present Panoster, a novel proposal-free panoptic segmentation method for LiDAR point clouds.
Unlike previous approaches, Panoster proposes a simplified framework incorporating a learning-based clustering solution to identify instances.
At inference time, this acts as a class-agnostic segmentation, allowing Panoster to be fast, while outperforming prior methods in terms of accuracy.
arXiv Detail & Related papers (2020-10-28T18:10:20Z)
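The few-shot segmentation summary above mentions using both the nearest and the farthest points to capture local and global information. As a hedged illustration (a standard construction, not that paper's module), k-nearest-neighbour lookup gathers local context while farthest point sampling spreads picks for global coverage; the function names are my own.

```python
import numpy as np

def knn_indices(points, query, k):
    """Indices of the k points nearest to `query` (local context)."""
    d = np.square(points - query).sum(axis=1)
    return np.argsort(d)[:k]

def farthest_point_sample(points, m, start=0):
    """Greedily pick m indices, each maximally far from those already
    chosen (global coverage); standard FPS, not the paper's exact module."""
    chosen = [start]
    # running distance of every point to its nearest chosen point
    d = np.square(points - points[start]).sum(axis=1)
    for _ in range(m - 1):
        nxt = int(np.argmax(d))
        chosen.append(nxt)
        d = np.minimum(d, np.square(points - points[nxt]).sum(axis=1))
    return np.array(chosen)
```

kNN neighbourhoods feed fine-grained local features, while FPS picks a sparse, well-spread subset that summarizes the whole cloud cheaply.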
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.