SynLiDAR: Learning From Synthetic LiDAR Sequential Point Cloud for
Semantic Segmentation
- URL: http://arxiv.org/abs/2107.05399v1
- Date: Mon, 12 Jul 2021 12:51:08 GMT
- Authors: Aoran Xiao, Jiaxing Huang, Dayan Guan, Fangneng Zhan, Shijian Lu
- Abstract summary: SynLiDAR is a synthetic LiDAR point cloud dataset with accurate geometric shapes and comprehensive semantic classes.
PCT-Net is a point cloud translation network that narrows the gap to real-world point cloud data.
Experiments on multiple data augmentation and semi-supervised semantic segmentation tasks show very positive outcomes.
- Score: 37.00112978096702
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transfer learning from synthetic to real data has proven to be an effective
way of mitigating data annotation constraints in various computer vision tasks.
However, progress has focused on 2D images and lags far behind for 3D point
clouds, due to the lack of large-scale, high-quality synthetic point cloud data
and effective transfer methods. We address this issue by collecting SynLiDAR, a
synthetic LiDAR point cloud dataset that contains large-scale, point-wise
annotated point clouds with accurate geometric shapes and comprehensive semantic
classes, and by designing PCT-Net, a point cloud translation network that aims to
narrow the gap with real-world point cloud data. For SynLiDAR, we leverage
graphics tools and professionals to construct multiple realistic virtual
environments with rich scene types and layouts, from which annotated LiDAR points
can be generated automatically. On top of that, PCT-Net disentangles
synthetic-to-real gaps into an appearance component and a sparsity component,
and translates SynLiDAR by aligning each component with real-world data
separately. Extensive experiments on multiple data augmentation and
semi-supervised semantic segmentation tasks show very positive outcomes:
SynLiDAR can either train better models or reduce the amount of real-world
annotated data without sacrificing performance, and PCT-Net-translated data
further improve model performance consistently.
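The abstract's PCT-Net idea (disentangling the synthetic-to-real gap into an appearance component and a sparsity component, then aligning each with real data separately) can be illustrated with a toy sketch. The uniform subsampling and Gaussian jitter below are illustrative assumptions standing in for PCT-Net's learned translations, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsity_align(points: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Subsample points to mimic real-sensor sparsity (toy stand-in
    for a learned sparsity translation)."""
    n_keep = max(1, int(len(points) * keep_ratio))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

def appearance_align(points: np.ndarray, noise_std: float) -> np.ndarray:
    """Jitter coordinates to mimic real-sensor appearance (toy stand-in
    for a learned appearance translation)."""
    return points + rng.normal(0.0, noise_std, size=points.shape)

def translate(points: np.ndarray, keep_ratio: float = 0.6,
              noise_std: float = 0.02) -> np.ndarray:
    """Handle the two gap components separately, then compose them."""
    return appearance_align(sparsity_align(points, keep_ratio), noise_std)
```

In PCT-Net itself, both components are learned by aligning against real-world scans rather than being fixed hyperparameters as in this sketch.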
Related papers
- AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud Registration [69.21282992341007]
AutoSynth automatically generates 3D training data for point cloud registration.
We replace the point cloud registration network with a much smaller surrogate network, leading to a 4056.43x speedup.
Our results on TUD-L, LINEMOD and Occluded-LINEMOD show that a neural network trained on our searched dataset consistently outperforms the same network trained on the widely used ModelNet40 dataset.
arXiv Detail & Related papers (2023-09-20T09:29:44Z)
- A New Benchmark: On the Utility of Synthetic Data with Blender for Bare Supervised Learning and Downstream Domain Adaptation [42.2398858786125]
Deep learning in computer vision has achieved great success at the price of large-scale labeled training data.
The uncontrollable data collection process produces non-IID training and test data, in which undesired duplication may exist.
To circumvent these issues, an alternative is to generate synthetic data via 3D rendering with domain randomization.
arXiv Detail & Related papers (2023-03-16T09:03:52Z)
- Point-Syn2Real: Semi-Supervised Synthetic-to-Real Cross-Domain Learning for Object Classification in 3D Point Clouds [14.056949618464394]
Object classification using LiDAR 3D point cloud data is critical for modern applications such as autonomous driving.
We propose a semi-supervised cross-domain learning approach that does not rely on manual annotations of point clouds.
We introduce Point-Syn2Real, a new benchmark dataset for cross-domain learning on point clouds.
arXiv Detail & Related papers (2022-10-31T01:53:51Z)
- TRoVE: Transforming Road Scene Datasets into Photorealistic Virtual Environments [84.6017003787244]
This work proposes a synthetic data generation pipeline to address the difficulties and domain-gaps present in simulated datasets.
We show that using annotations and visual cues from existing datasets, we can facilitate automated multi-modal data generation.
arXiv Detail & Related papers (2022-08-16T20:46:08Z)
- Dual Adaptive Transformations for Weakly Supervised Point Cloud Segmentation [78.6612285236938]
We propose a novel DAT (Dual Adaptive Transformations) model for weakly supervised point cloud segmentation.
We evaluate our proposed DAT model with two popular backbones on the large-scale S3DIS and ScanNet-V2 datasets.
arXiv Detail & Related papers (2022-07-19T05:43:14Z)
- STPLS3D: A Large-Scale Synthetic and Real Aerial Photogrammetry 3D Point Cloud Dataset [6.812704277866377]
We introduce a synthetic aerial photogrammetry point clouds generation pipeline.
Unlike generating synthetic data in virtual games, the proposed pipeline simulates the reconstruction process of the real environment.
We present a richly-annotated synthetic 3D aerial photogrammetry point cloud dataset.
arXiv Detail & Related papers (2022-03-17T03:50:40Z)
- Quasi-Balanced Self-Training on Noise-Aware Synthesis of Object Point Clouds for Closing Domain Gap [34.590531549797355]
We propose an integrated scheme for physically realistic synthesis of object point clouds, rendering stereo images by projecting speckle patterns onto CAD models.
Experiment results can verify the effectiveness of our method as well as both of its modules for unsupervised domain adaptation on point cloud classification.
arXiv Detail & Related papers (2022-03-08T03:44:49Z)
- SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z)
- ePointDA: An End-to-End Simulation-to-Real Domain Adaptation Framework for LiDAR Point Cloud Segmentation [111.56730703473411]
Training deep neural networks (DNNs) on LiDAR data requires large-scale point-wise annotations.
Simulation-to-real domain adaptation (SRDA) trains a DNN using unlimited synthetic data with automatically generated labels.
ePointDA consists of three modules: self-supervised dropout noise rendering, statistics-invariant and spatially-adaptive feature alignment, and transferable segmentation learning.
arXiv Detail & Related papers (2020-09-07T23:46:08Z)
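The first ePointDA module listed above, self-supervised dropout noise rendering, simulates the missing returns of real LiDAR on synthetic data. A minimal, hypothetical stand-in applied to a range image is sketched below; the uniform random dropout here is an illustrative assumption, whereas the actual module learns the dropout pattern from real scans:

```python
import numpy as np

rng = np.random.default_rng(0)

def render_dropout_noise(range_image: np.ndarray,
                         drop_prob: float = 0.1) -> np.ndarray:
    """Randomly zero out pixels of a synthetic LiDAR range image to
    mimic real-sensor dropout noise (simplified stand-in: real dropout
    is structured, not uniformly random)."""
    keep = rng.random(range_image.shape) >= drop_prob
    return range_image * keep
```

Rendering such noise onto synthetic data narrows the sim-to-real gap before the feature-alignment and segmentation stages.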
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.