PolarMix: A General Data Augmentation Technique for LiDAR Point Clouds
- URL: http://arxiv.org/abs/2208.00223v1
- Date: Sat, 30 Jul 2022 13:52:19 GMT
- Title: PolarMix: A General Data Augmentation Technique for LiDAR Point Clouds
- Authors: Aoran Xiao, Jiaxing Huang, Dayan Guan, Kaiwen Cui, Shijian Lu, Ling
Shao
- Abstract summary: PolarMix is a point cloud augmentation technique that is simple and generic.
It works plug-and-play with various 3D deep architectures and also performs well for unsupervised domain adaptation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LiDAR point clouds, which are usually scanned by rotating LiDAR sensors
continuously, capture precise geometry of the surrounding environment and are
crucial to many autonomous detection and navigation tasks. Though many 3D deep
architectures have been developed, efficient collection and annotation of large
amounts of point clouds remain a major challenge in the analysis and
understanding of point cloud data. This paper presents PolarMix, a point cloud
augmentation technique that is simple and generic but can mitigate the data
constraint effectively across different perception tasks and scenarios.
PolarMix enriches point cloud distributions and preserves point cloud fidelity
via two cross-scan augmentation strategies that cut, edit, and mix point clouds
along the scanning direction. The first is scene-level swapping, which exchanges
point cloud sectors of two LiDAR scans that are cut along the azimuth axis. The
second is instance-level rotation and paste, which crops point instances from
one LiDAR scan, rotates them by multiple angles (to create multiple copies),
and pastes the rotated point instances into other scans. Extensive experiments
show that PolarMix achieves superior performance consistently across different
perception tasks and scenarios. In addition, it works plug-and-play with
various 3D deep architectures and also performs well for unsupervised domain
adaptation.
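The abstract spells out two concrete augmentation operations, so a short code sketch can make them precise. Below is a minimal NumPy sketch reconstructed from the abstract alone: the function names, the (N, 4) x/y/z/intensity point layout, the label arrays, and the choice of rotation angles are all illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of PolarMix's two operations, inferred from the abstract.
# Assumed data layout: points are (N, 4) arrays of x, y, z, intensity;
# labels are (N,) arrays; azimuth angles are in radians in (-pi, pi].
import numpy as np


def azimuth(points):
    """Azimuth angle of each point about the sensor's vertical axis."""
    return np.arctan2(points[:, 1], points[:, 0])


def scene_level_swap(pts_a, lbl_a, pts_b, lbl_b, start, end):
    """Exchange the azimuth sector [start, end) between two LiDAR scans."""
    in_a = (azimuth(pts_a) >= start) & (azimuth(pts_a) < end)
    in_b = (azimuth(pts_b) >= start) & (azimuth(pts_b) < end)
    # Scan A keeps its points outside the sector and receives B's sector;
    # swapping the roles of A and B gives the symmetric mixed scan.
    mixed_pts = np.concatenate([pts_a[~in_a], pts_b[in_b]], axis=0)
    mixed_lbl = np.concatenate([lbl_a[~in_a], lbl_b[in_b]], axis=0)
    return mixed_pts, mixed_lbl


def rotate_about_z(points, angle):
    """Rotate x/y/z about the vertical axis; intensity is left untouched."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    out = points.copy()
    out[:, :3] = points[:, :3] @ rot.T
    return out


def instance_rotate_and_paste(pts, lbl, inst_pts, inst_lbl, angles):
    """Paste several rotated copies of cropped instance points into a scan."""
    copies = [rotate_about_z(inst_pts, a) for a in angles]
    mixed_pts = np.concatenate([pts] + copies, axis=0)
    mixed_lbl = np.concatenate([lbl] + [inst_lbl] * len(angles), axis=0)
    return mixed_pts, mixed_lbl
```

For instance, cropping all points labelled as cars from one scan and pasting them into another with angles = [np.pi / 2, np.pi, 3 * np.pi / 2] creates three rotated copies, matching the "rotates them by multiple angles (to create multiple copies)" step described above.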
Related papers
- Simultaneous Diffusion Sampling for Conditional LiDAR Generation [24.429704313319398]
This paper proposes a novel simultaneous diffusion sampling methodology to generate point clouds conditioned on the 3D structure of the scene.
Our method produces accurate and geometrically consistent enhancements to point cloud scans, outperforming existing methods by a large margin on a variety of benchmarks.
arXiv Detail & Related papers (2024-10-15T14:15:04Z)
- P2P-Bridge: Diffusion Bridges for 3D Point Cloud Denoising [81.92854168911704]
We tackle the task of point cloud denoising through a novel framework that adapts Diffusion Schrödinger bridges to point clouds.
Experiments on object datasets show that P2P-Bridge achieves significant improvements over existing methods.
arXiv Detail & Related papers (2024-08-29T08:00:07Z)
- Leveraging Single-View Images for Unsupervised 3D Point Cloud Completion [53.93172686610741]
Cross-PCC is an unsupervised point cloud completion method without requiring any 3D complete point clouds.
To take advantage of the complementary information from 2D images, we use a single-view RGB image to extract 2D features.
Our method even achieves comparable performance to some supervised methods.
arXiv Detail & Related papers (2022-12-01T15:11:21Z)
- Dynamic 3D Scene Analysis by Point Cloud Accumulation [32.491921765128936]
Multi-beam LiDAR sensors are used on autonomous vehicles and mobile robots.
Each frame covers the scene sparsely, due to limited angular scanning resolution and occlusion.
We propose a method that exploits inductive biases of outdoor street scenes, including their geometric layout and object-level rigidity.
arXiv Detail & Related papers (2022-07-25T17:57:46Z)
- Sparse Fuse Dense: Towards High Quality 3D Detection with Depth Completion [31.52721107477401]
Current LiDAR-only 3D detection methods inevitably suffer from the sparsity of point clouds.
We present a novel multi-modal framework SFD (Sparse Fuse Dense), which utilizes pseudo point clouds generated from depth completion.
Our method holds the highest entry on the KITTI car 3D object detection leaderboard, demonstrating the effectiveness of our SFD.
arXiv Detail & Related papers (2022-03-18T07:56:35Z)
- CrossPoint: Self-Supervised Cross-Modal Contrastive Learning for 3D Point Cloud Understanding [2.8661021832561757]
CrossPoint is a simple cross-modal contrastive learning approach to learn transferable 3D point cloud representations.
Our approach outperforms the previous unsupervised learning methods on a diverse range of downstream tasks including 3D object classification and segmentation.
arXiv Detail & Related papers (2022-03-01T18:59:01Z)
- Deep Point Cloud Reconstruction [74.694733918351]
Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have separately sought to densify, denoise, and complete inaccurate point clouds.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) refinement via transformers that convert the discrete voxels into 3D points.
arXiv Detail & Related papers (2021-11-23T07:53:28Z)
- Lifting 2D Object Locations to 3D by Discounting LiDAR Outliers across Objects and Views [70.1586005070678]
We present a system for automatically converting 2D mask object predictions and raw LiDAR point clouds into full 3D bounding boxes of objects.
Our method significantly outperforms previous work, even though those methods use far more complex pipelines, 3D models, and additional human-annotated external sources of prior information.
arXiv Detail & Related papers (2021-09-16T13:01:13Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.