3D Semantic Segmentation in the Wild: Learning Generalized Models for
Adverse-Condition Point Clouds
- URL: http://arxiv.org/abs/2304.00690v1
- Date: Mon, 3 Apr 2023 02:39:46 GMT
- Title: 3D Semantic Segmentation in the Wild: Learning Generalized Models for
Adverse-Condition Point Clouds
- Authors: Aoran Xiao, Jiaxing Huang, Weihao Xuan, Ruijie Ren, Kangcheng Liu,
Dayan Guan, Abdulmotaleb El Saddik, Shijian Lu, Eric Xing
- Abstract summary: We introduce SemanticSTF, an adverse-weather point cloud dataset that provides dense point-level annotations.
We study all-weather 3DSS modeling under two setups: 1) domain adaptive 3DSS that adapts from normal-weather data to adverse-weather data; 2) domain generalizable 3DSS that learns all-weather 3DSS models from normal-weather data.
- Score: 39.93598343454411
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Robust point cloud parsing under all-weather conditions is crucial to level-5
autonomy in autonomous driving. However, how to learn a universal 3D semantic
segmentation (3DSS) model is largely neglected as most existing benchmarks are
dominated by point clouds captured under normal weather. We introduce
SemanticSTF, an adverse-weather point cloud dataset that provides dense
point-level annotations and enables the study of 3DSS under various adverse weather
conditions. We study all-weather 3DSS modeling under two setups: 1) domain
adaptive 3DSS that adapts from normal-weather data to adverse-weather data; 2)
domain generalizable 3DSS that learns all-weather 3DSS models from
normal-weather data. Our studies reveal the challenges that existing 3DSS
methods encounter on adverse-weather data, showing the great value of SemanticSTF
in steering future endeavors along this meaningful research direction.
In addition, we design a domain randomization technique that alternately
randomizes the geometry styles of point clouds and aggregates their embeddings,
ultimately leading to a generalizable model that effectively improves 3DSS under
various adverse weather conditions. The SemanticSTF dataset and related code are available at
\url{https://github.com/xiaoaoran/SemanticSTF}.
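The abstract only names the technique (alternately randomizing point cloud geometry styles and aggregating embeddings); the concrete perturbations and aggregation below are assumptions for illustration, not the authors' actual implementation. A minimal sketch with NumPy, using a toy statistics-based encoder as a stand-in for a real 3DSS backbone:

```python
import numpy as np

def randomize_geometry(points, rng):
    """Apply one randomly chosen geometry-style perturbation to an
    (N, 3) point cloud: jitter, dropout, or anisotropic scaling.
    (Illustrative choices; the paper's actual styles may differ.)"""
    choice = rng.integers(3)
    if choice == 0:                      # noise jitter (weather-like clutter)
        return points + rng.normal(scale=0.05, size=points.shape)
    if choice == 1:                      # random point dropout (attenuation)
        keep = rng.random(len(points)) > 0.2
        return points[keep]
    scale = rng.uniform(0.9, 1.1, size=3)  # anisotropic scaling
    return points * scale

def aggregate_embeddings(encoder, points, rng, n_views=2):
    """Encode several randomized views and average their embeddings,
    encouraging a style-invariant representation."""
    embs = [encoder(randomize_geometry(points, rng)) for _ in range(n_views)]
    return np.mean(embs, axis=0)

def toy_encoder(pts):
    # hypothetical stand-in: per-axis mean and std as a 6-d embedding
    return np.concatenate([pts.mean(0), pts.std(0)])

rng = np.random.default_rng(0)
cloud = rng.uniform(-10, 10, size=(1024, 3))
emb = aggregate_embeddings(toy_encoder, cloud, rng, n_views=4)
print(emb.shape)  # (6,)
```

In a real training loop the encoder would be a learned 3D backbone and the aggregated embedding would feed the segmentation head, so the model sees many geometry styles per scene.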
Related papers
- 3DSES: an indoor Lidar point cloud segmentation dataset with real and pseudo-labels from a 3D model [1.7249361224827533]
We present 3DSES, a new dataset of indoor dense TLS colorized point clouds covering 427 m².
3DSES has a unique double annotation format: semantic labels annotated at the point level alongside a full 3D CAD model of the building.
We show that our model-to-cloud alignment can produce pseudo-labels on our point clouds with a > 95% accuracy, allowing us to train deep models with significant time savings.
arXiv Detail & Related papers (2025-01-29T10:09:32Z) - Robust Single Object Tracking in LiDAR Point Clouds under Adverse Weather Conditions [4.133835011820212]
3D single object tracking in LiDAR point clouds is a critical task for outdoor perception.
Despite the impressive performance of current 3DSOT methods, evaluating them only on clean datasets does not fully reflect their real-world robustness.
One of the main obstacles is the lack of adverse weather benchmarks for the evaluation of 3DSOT.
arXiv Detail & Related papers (2025-01-13T08:44:35Z) - U3DS$^3$: Unsupervised 3D Semantic Scene Segmentation [19.706172244951116]
This paper presents U3DS$^3$, a step towards completely unsupervised point cloud segmentation of holistic 3D scenes.
The initial step of our proposed approach involves generating superpoints based on the geometric characteristics of each scene.
Learning then proceeds through a spatial clustering-based methodology, followed by iterative training using pseudo-labels generated from the cluster centroids.
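The clustering-to-pseudo-label step can be sketched in a few lines. This is a generic k-means pass over per-superpoint features, assigning each superpoint the index of its nearest centroid as a pseudo-label; it is an illustration of the general idea, not the U3DS$^3$ pipeline itself:

```python
import numpy as np

def kmeans_pseudo_labels(features, k, iters=10, seed=0):
    """Cluster (N, D) superpoint features with k-means and return each
    superpoint's cluster index as its pseudo-label."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # distance of every feature to every centroid: (N, k)
        d = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):          # skip empty clusters
                centroids[c] = features[labels == c].mean(axis=0)
    return labels

# two well-separated synthetic feature clusters
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (50, 8)),
                   rng.normal(3, 0.1, (50, 8))])
labels = kmeans_pseudo_labels(feats, k=2)
print(labels.shape)  # (100,)
```

In an iterative scheme, the network is trained on these pseudo-labels, its refined features are re-clustered, and the loop repeats.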
arXiv Detail & Related papers (2023-11-10T12:05:35Z) - Point-Syn2Real: Semi-Supervised Synthetic-to-Real Cross-Domain Learning
for Object Classification in 3D Point Clouds [14.056949618464394]
Object classification using LiDAR 3D point cloud data is critical for modern applications such as autonomous driving.
We propose a semi-supervised cross-domain learning approach that does not rely on manual annotations of point clouds.
We introduce Point-Syn2Real, a new benchmark dataset for cross-domain learning on point clouds.
arXiv Detail & Related papers (2022-10-31T01:53:51Z) - LiDAR Snowfall Simulation for Robust 3D Object Detection [116.10039516404743]
We propose a physically based method to simulate the effect of snowfall on real clear-weather LiDAR point clouds.
Our method samples snow particles in 2D space for each LiDAR line and uses the induced geometry to modify the measurement for each LiDAR beam.
We use our simulation to generate partially synthetic snowy LiDAR data and leverage these data for training 3D object detection models that are robust to snowfall.
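The core idea of the beam-level simulation can be shown with a toy range-only model: with some probability a snow particle intersects a beam closer than the true return, so the measured range shortens. The sampling distributions and rates below are placeholders, not the paper's physically based model:

```python
import numpy as np

def simulate_snow_on_beams(ranges, rng, particle_rate=0.05, max_particle_range=8.0):
    """For each beam's clear-weather range, occasionally replace the
    measurement with a closer snow-particle return. Snow can only
    shorten a range, never lengthen it."""
    ranges = np.asarray(ranges, dtype=float)
    hit = rng.random(ranges.shape) < particle_rate          # particle on beam?
    particle_r = rng.uniform(0.5, max_particle_range, size=ranges.shape)
    out = ranges.copy()
    mask = hit & (particle_r < ranges)                      # only closer returns win
    out[mask] = particle_r[mask]
    return out

rng = np.random.default_rng(0)
clean = rng.uniform(5.0, 60.0, size=1000)   # clear-weather beam ranges (m)
snowy = simulate_snow_on_beams(clean, rng)
print((snowy <= clean).all())  # True: snow only shortens returns
```

Training detectors on such partially synthetic snowy scans is what gives the robustness the abstract describes; the real method additionally models intensity attenuation and per-line 2D particle geometry.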
arXiv Detail & Related papers (2022-03-28T21:48:26Z) - 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D
Object Detection [111.32054128362427]
In safety-critical settings, robustness on out-of-distribution and long-tail samples is fundamental to circumvent dangerous issues.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share CrashD, an open-source synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z) - Lifting 2D Object Locations to 3D by Discounting LiDAR Outliers across
Objects and Views [70.1586005070678]
We present a system for automatically converting 2D mask object predictions and raw LiDAR point clouds into full 3D bounding boxes of objects.
Our method significantly outperforms previous work, even though those methods rely on far more complex pipelines, 3D models, and additional human-annotated external sources of prior information.
arXiv Detail & Related papers (2021-09-16T13:01:13Z) - Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based
Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize the 3D voxelization and 3D convolution network.
We propose a new framework for the outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
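The cylindrical partition can be illustrated by mapping Cartesian points to (rho, phi, z) voxel indices; because bins are uniform in cylindrical coordinates, distant sparse regions get larger voxels than dense near-range regions. The bin counts and ranges below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def cylindrical_voxel_index(points, rho_bins=480, phi_bins=360, z_bins=32,
                            rho_max=50.0, z_range=(-4.0, 2.0)):
    """Map an (N, 3) Cartesian point cloud to integer (rho, phi, z)
    cylinder-voxel indices, clipped to the grid bounds."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x**2 + y**2)                 # radial distance from sensor
    phi = np.arctan2(y, x)                     # azimuth in [-pi, pi]
    i = np.clip((rho / rho_max * rho_bins).astype(int), 0, rho_bins - 1)
    j = np.clip(((phi + np.pi) / (2 * np.pi) * phi_bins).astype(int), 0, phi_bins - 1)
    zlo, zhi = z_range
    k = np.clip(((z - zlo) / (zhi - zlo) * z_bins).astype(int), 0, z_bins - 1)
    return np.stack([i, j, k], axis=1)

pts = np.array([[10.0, 0.0, 0.0], [0.0, -10.0, -1.0]])
idx = cylindrical_voxel_index(pts)
print(idx.shape)  # (2, 3)
```

Features within each cylindrical voxel are then processed by the asymmetrical 3D convolutions the paper proposes.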
arXiv Detail & Related papers (2021-09-12T06:25:11Z) - Spatio-temporal Self-Supervised Representation Learning for 3D Point
Clouds [96.9027094562957]
We introduce a spatio-temporal representation learning framework capable of learning from unlabeled 3D point clouds.
Inspired by how infants learn from visual data in the wild, we explore rich cues derived from the 3D data.
STRL takes two temporally-related frames from a 3D point cloud sequence as input, transforms them with spatial data augmentation, and learns an invariant representation in a self-supervised manner.
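The invariance objective can be sketched as follows: augment two temporally adjacent frames, embed both, and penalize dissimilar embeddings via negative cosine similarity. The augmentations and the pooled-statistics "encoder" here are toy assumptions standing in for STRL's actual networks:

```python
import numpy as np

def augment(points, rng):
    """Spatial augmentation: random rotation about the z-axis plus jitter."""
    theta = rng.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.T + rng.normal(scale=0.01, size=points.shape)

def invariance_loss(emb_a, emb_b):
    """Negative cosine similarity: minimized when the representation is
    the same for both augmented frames."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return -float(a @ b)

def toy_encoder(pts):
    # permutation-invariant pooled statistics as a stand-in embedding
    return np.concatenate([pts.mean(0), pts.max(0)])

rng = np.random.default_rng(0)
frame_t = rng.uniform(-5, 5, size=(512, 3))
frame_t1 = frame_t + rng.normal(scale=0.02, size=frame_t.shape)  # next frame
loss = invariance_loss(toy_encoder(augment(frame_t, rng)),
                       toy_encoder(augment(frame_t1, rng)))
```

Minimizing this loss over many frame pairs (with a learned encoder and the usual collapse-prevention machinery) is what drives the self-supervised pretraining.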
arXiv Detail & Related papers (2021-09-01T04:17:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.