Generating Photo-realistic Images from LiDAR Point Clouds with Generative Adversarial Networks
- URL: http://arxiv.org/abs/2112.11245v1
- Date: Mon, 20 Dec 2021 05:25:15 GMT
- Title: Generating Photo-realistic Images from LiDAR Point Clouds with Generative Adversarial Networks
- Authors: Nuriel Shalom Mor
- Abstract summary: We created a dataset of paired point clouds and camera images and trained a GAN to predict images from LiDAR point clouds containing reflectance and distance information. Our models learned to predict realistic-looking images from point cloud data alone, even images containing black cars.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We examined the feasibility of using generative adversarial networks (GANs) to generate photorealistic images from LiDAR point clouds. For this purpose, we created a dataset of paired point clouds and camera images and trained a GAN to predict photorealistic images from LiDAR point clouds containing reflectance and distance information. Our models learned to predict realistic-looking images from point cloud data alone, even images containing black cars. Black cars are
difficult to detect directly from point clouds because of their low level of
reflectivity. This approach might be used in the future to perform visual
object recognition on photorealistic images generated from LiDAR point clouds.
Alongside the conventional LiDAR system, a second system that generates photorealistic images from the LiDAR point clouds would run simultaneously, enabling visual object recognition in real time. In this way, we might retain the strengths of LiDAR while benefiting from photorealistic images for visual object recognition without using any camera. This approach could also be used to colorize point clouds without any camera images.
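The abstract describes a conditional image-to-image GAN in the pix2pix style: a generator maps a 2-channel LiDAR projection (reflectance and distance) to an RGB image, and a discriminator judges (LiDAR, RGB) pairs. The PyTorch sketch below is illustrative only; the layer sizes, names, and the L1 weight are assumptions, not the authors' released implementation.

```python
# Illustrative sketch only: a pix2pix-style conditional GAN mapping 2-channel
# LiDAR projections (reflectance + distance) to RGB images. All layer sizes,
# names, and hyperparameters are assumptions, not the authors' code.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Small encoder-decoder: 2-channel LiDAR image -> 3-channel RGB image."""
    def __init__(self, in_ch=2, out_ch=3, base=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.BatchNorm2d(base), nn.ReLU(),
            nn.ConvTranspose2d(base, out_ch, 4, stride=2, padding=1),
            nn.Tanh(),  # RGB scaled to [-1, 1]
        )

    def forward(self, lidar_img):
        return self.dec(self.enc(lidar_img))

class Discriminator(nn.Module):
    """PatchGAN-style critic on the concatenated (LiDAR, RGB) pair."""
    def __init__(self, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 + 3, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, 1, 4, stride=1, padding=1),  # per-patch logits
        )

    def forward(self, lidar_img, rgb):
        return self.net(torch.cat([lidar_img, rgb], dim=1))

def training_step(G, D, opt_g, opt_d, lidar_img, real_rgb, l1_weight=100.0):
    """One adversarial + L1 reconstruction step over a paired batch."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Discriminator: push real pairs toward 1, generated pairs toward 0.
    fake_rgb = G(lidar_img).detach()
    d_real = D(lidar_img, real_rgb)
    d_fake = D(lidar_img, fake_rgb)
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator and stay close to the paired photo.
    fake_rgb = G(lidar_img)
    d_fake = D(lidar_img, fake_rgb)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake_rgb, real_rgb)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

In pix2pix-style training the L1 term pulls the generator toward the paired ground-truth photo while the adversarial term sharpens texture; this is one plausible reading of the abstract, not a confirmed detail of the paper.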
Related papers
- Towards Fusing Point Cloud and Visual Representations for Imitation Learning
We propose FPV-Net, a novel imitation learning method that effectively combines the strengths of both point cloud and RGB modalities.
Our method conditions the point-cloud encoder on global and local image tokens using adaptive layer-norm conditioning (a minimal AdaLN sketch follows this list).
arXiv Detail & Related papers (2025-02-17T20:46:54Z)
- Real-time Neural Rendering of LiDAR Point Clouds
A naive projection of the point cloud to the output view using 1x1 pixels is fast and retains the available detail, but also results in unintelligible renderings as background points leak in between the foreground pixels.
A deep convolutional model in the form of a U-Net is used to transform these projections into a realistic result (the naive projection step is sketched after this list).
We also describe a method to generate synthetic training data to deal with imperfectly-aligned ground truth images.
arXiv Detail & Related papers (2025-02-17T10:01:13Z)
- HVDistill: Transferring Knowledge from Images to Point Clouds via Unsupervised Hybrid-View Distillation
We present a hybrid-view-based knowledge distillation framework, termed HVDistill, to guide the feature learning of a point cloud neural network.
Our method achieves consistent improvements over the baseline trained from scratch and significantly outperforms existing schemes.
arXiv Detail & Related papers (2024-03-18T14:18:08Z)
- GAN-Based LiDAR Intensity Simulation
We train GANs to translate between camera images and LiDAR scans from real test drives.
We evaluate the LiDAR simulation by testing how well an object detection network generalizes between real and synthetic point clouds.
arXiv Detail & Related papers (2023-11-26T20:44:09Z)
- PRED: Pre-training via Semantic Rendering on LiDAR Point Clouds
We propose PRED, a novel image-assisted pre-training framework for outdoor point clouds.
The main ingredient of our framework is semantic rendering conditioned on a bird's-eye-view (BEV) feature map.
We further enhance our model's performance by incorporating point-wise masking with a high mask ratio.
arXiv Detail & Related papers (2023-11-08T07:26:09Z)
- UltraLiDAR: Learning Compact Representations for LiDAR Completion and Generation
We present UltraLiDAR, a data-driven framework for scene-level LiDAR completion, LiDAR generation, and LiDAR manipulation.
We show that by aligning the representation of a sparse point cloud to that of a dense point cloud, we can densify the sparse point clouds.
By learning a prior over the discrete codebook, we can generate diverse, realistic LiDAR point clouds for self-driving (a vector-quantization sketch follows this list).
arXiv Detail & Related papers (2023-11-02T17:57:03Z)
- NeRF-LiDAR: Generating Realistic LiDAR Point Clouds with Neural Radiance Fields
We present NeRF-LiDAR, a novel LiDAR simulation method that leverages real-world information to generate realistic LiDAR point clouds.
We verify the effectiveness of our NeRF-LiDAR by training different 3D segmentation models on the generated LiDAR point clouds.
arXiv Detail & Related papers (2023-04-28T12:41:28Z)
- Point2Pix: Photo-Realistic Point Cloud Rendering via Neural Radiance Fields
Recent radiance fields and their extensions synthesize realistic images from 2D input.
We present Point2Pix as a novel point renderer that links sparse 3D point clouds with dense 2D image pixels.
arXiv Detail & Related papers (2023-03-29T06:26:55Z)
- Ponder: Point Cloud Pre-training via Neural Rendering
We propose a novel approach to self-supervised learning of point cloud representations by differentiable neural rendering.
The learned point-cloud encoder can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image rendering.
arXiv Detail & Related papers (2022-12-31T08:58:39Z)
- Learning to Simulate Realistic LiDARs
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
- Lateral Ego-Vehicle Control without Supervision using Point Clouds
Existing vision-based supervised approaches to lateral vehicle control can directly map RGB images to the appropriate steering commands.
This paper proposes a framework for training a more robust and scalable model for lateral vehicle control.
Online experiments show that the performance of our method is superior to that of the supervised model.
arXiv Detail & Related papers (2022-03-20T21:57:32Z)
- Privacy Preserving Visual SLAM
This study proposes a privacy-preserving Visual SLAM framework for estimating camera poses and performing bundle adjustment with mixed line and point clouds in real time.
Previous studies have proposed localization methods to estimate a camera pose using a line-cloud map for a single image or a reconstructed point cloud.
Our framework achieves the intended privacy preservation and real-time performance using a line-cloud map.
arXiv Detail & Related papers (2020-07-20T18:00:06Z)
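For the FPV-Net entry above: adaptive layer-norm (AdaLN) conditioning normalizes the point features and lets an image token predict the per-channel scale and shift. A minimal PyTorch sketch; all shapes and names are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of adaptive layer-norm (AdaLN) conditioning, as named in
# the FPV-Net summary: an image token predicts per-channel scale/shift that
# modulate normalized point-cloud features. Shapes and names are assumptions.
import torch
import torch.nn as nn

class AdaLN(nn.Module):
    def __init__(self, feat_dim, cond_dim):
        super().__init__()
        # Elementwise affine is disabled: scale/shift come from the condition.
        self.norm = nn.LayerNorm(feat_dim, elementwise_affine=False)
        self.to_scale_shift = nn.Linear(cond_dim, 2 * feat_dim)

    def forward(self, point_feats, image_token):
        # point_feats: (B, N, feat_dim); image_token: (B, cond_dim)
        scale, shift = self.to_scale_shift(image_token).chunk(2, dim=-1)
        return self.norm(point_feats) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)

# Usage: a pooled image embedding conditions per-point features.
feats = torch.randn(4, 1024, 256)       # 4 clouds, 1024 points, 256-d features
img_token = torch.randn(4, 512)         # global image token
print(AdaLN(256, 512)(feats, img_token).shape)  # torch.Size([4, 1024, 256])
```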
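For the real-time neural rendering entry: the "naive projection" it starts from splats each LiDAR point to a single pixel with a z-buffer, producing exactly the sparse, leaky image that its U-Net then refines. A sketch assuming a pinhole camera with intrinsics K; all names are illustrative.

```python
# Hypothetical sketch of the "naive 1x1-pixel projection" described in the
# real-time neural rendering entry. Assumes camera-frame points and a pinhole
# camera with intrinsics K; none of this is the authors' code.
import numpy as np

def project_points(points_xyz, intensity, K, height, width):
    """Splat camera-frame points (N, 3) to a 1x1-pixel image with a z-buffer."""
    z = points_xyz[:, 2]
    front = z > 0.1                       # keep points in front of the camera
    pts, inten, z = points_xyz[front], intensity[front], z[front]

    uvw = (K @ pts.T).T                   # pinhole projection, (N, 3)
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, inten, z = u[ok], v[ok], inten[ok], z[ok]

    image = np.zeros((height, width), dtype=np.float32)
    depth = np.full((height, width), np.inf, dtype=np.float32)
    order = np.argsort(-z)                # draw far points first, near points last
    image[v[order], u[order]] = inten[order]   # later (nearer) writes win
    depth[v[order], u[order]] = z[order]
    # Pixels hit by no point stay empty, and distant background still appears
    # between foreground points -- the leakage the U-Net cleans up.
    return image, depth
```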
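For the UltraLiDAR entry: learning a "prior over the discrete codebook" presupposes vector-quantizing encoder features into token ids, VQ-VAE style. A minimal sketch of that quantization step; the shapes, names, and the straight-through estimator are all assumptions rather than details from the paper.

```python
# Hypothetical sketch of the "discrete codebook" idea named in the UltraLiDAR
# entry: continuous LiDAR encoder features are snapped to their nearest
# codebook entries, giving discrete tokens over which a prior can be learned.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=1024, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, feats):
        # feats: (B, T, dim) continuous features from a LiDAR encoder.
        flat = feats.reshape(-1, feats.size(-1))                # (B*T, dim)
        dists = torch.cdist(flat, self.codebook.weight)         # (B*T, num_codes)
        codes = dists.argmin(dim=-1).view(feats.shape[:-1])     # (B, T) token ids
        quantized = self.codebook(codes)                        # nearest codes
        # Straight-through estimator: gradients reach the encoder as if
        # quantization were the identity map.
        quantized = feats + (quantized - feats).detach()
        return quantized, codes
```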