GeoSim: Photorealistic Image Simulation with Geometry-Aware Composition
- URL: http://arxiv.org/abs/2101.06543v1
- Date: Sat, 16 Jan 2021 23:00:33 GMT
- Title: GeoSim: Photorealistic Image Simulation with Geometry-Aware Composition
- Authors: Yun Chen, Frieda Rong, Shivam Duggal, Shenlong Wang, Xinchen Yan,
Sivabalan Manivasagam, Shangjie Xue, Ersin Yumer, Raquel Urtasun
- Abstract summary: We present GeoSim, a geometry-aware image composition process that synthesizes novel urban driving scenes.
We first build a diverse bank of 3D objects with both realistic geometry and appearance from sensor data.
The resulting synthetic images are photorealistic, traffic-aware, and geometrically consistent, allowing image simulation to scale to complex use cases.
- Score: 81.24107630746508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scalable sensor simulation is an important yet challenging open problem for
safety-critical domains such as self-driving. Current work in image simulation
either fails to be photorealistic or does not model the 3D environment and the
dynamic objects within it, losing high-level control and physical realism. In this
paper, we present GeoSim, a geometry-aware image composition process that
synthesizes novel urban driving scenes by augmenting existing images with
dynamic objects extracted from other scenes and rendered at novel poses.
Towards this goal, we first build a diverse bank of 3D objects with both
realistic geometry and appearance from sensor data. During simulation, we
perform a novel geometry-aware simulation-by-composition procedure which 1)
proposes plausible and realistic object placements into a given scene, 2)
renders novel views of dynamic objects from the asset bank, and 3) composes and
blends the rendered image segments. The resulting synthetic images are
photorealistic, traffic-aware, and geometrically consistent, allowing image
simulation to scale to complex use cases. We demonstrate two such important
applications: long-range realistic video simulation across multiple camera
sensors, and synthetic data generation for data augmentation on downstream
segmentation tasks.
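
The simulation-by-composition procedure described above (propose placements, render novel views, compose and blend) is concrete enough to sketch as code. The following toy, written with numpy, mirrors that three-stage structure on a reduced 2D compositing problem; every function here is an illustrative stand-in for the corresponding geometry-aware stage, not the authors' implementation.

```python
# A minimal, illustrative sketch (not the authors' code) of the three-stage
# simulation-by-composition loop described in the abstract, reduced to a toy
# 2D alpha-compositing problem. Placement proposal, novel-view rendering,
# and blending are all stand-ins for the real geometry-aware steps.
import numpy as np

def propose_placements(rng, n, h, w):
    # Stage 1 (toy): sample n plausible top-left corners for object sprites;
    # the real system reasons about lanes, traffic, and collisions in 3D.
    return [(rng.integers(0, h - 32), rng.integers(0, w - 32)) for _ in range(n)]

def render_segment(rng):
    # Stage 2 (toy): produce a 32x32 RGBA "rendered view" of an asset;
    # the real system renders a reconstructed 3D asset at a novel pose.
    rgb = rng.random((32, 32, 3))
    alpha = np.ones((32, 32, 1))
    return np.concatenate([rgb, alpha], axis=-1)

def compose_and_blend(background, segments, placements):
    # Stage 3 (toy): alpha-blend each segment into the background;
    # the real system also handles occlusion ordering and boundary
    # harmonization so lighting and edges stay consistent.
    out = background.copy()
    for seg, (y, x) in zip(segments, placements):
        rgb, a = seg[..., :3], seg[..., 3:]
        out[y:y+32, x:x+32] = a * rgb + (1 - a) * out[y:y+32, x:x+32]
    return out

rng = np.random.default_rng(0)
background = np.zeros((128, 256, 3))
placements = propose_placements(rng, n=3, h=128, w=256)
segments = [render_segment(rng) for _ in placements]
composite = compose_and_blend(background, segments, placements)
print(composite.shape)  # (128, 256, 3)
```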
Related papers
- Automated 3D Physical Simulation of Open-world Scene with Gaussian Splatting [22.40115216094332]
We present Sim Anything, a physics-based approach that endows static 3D objects with interactive dynamics.
Inspired by human visual reasoning, we propose MLLM-based Physical Property Perception.
We also simulate objects in an open-world scene with particles sampled via the Physical-Geometric Adaptive Sampling.
arXiv Detail & Related papers (2024-11-19T12:52:21Z)
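
As a hedged illustration of the Sim Anything recipe above, the sketch below hard-codes the two perception steps and runs a toy particle simulation; `query_property_model` and `sample_particles` are hypothetical stand-ins for the paper's MLLM-based Physical Property Perception and Physical-Geometric Adaptive Sampling.

```python
# Illustrative-only sketch: estimate physical properties for a static object
# (the paper uses an MLLM for this), sample particles from its geometry, and
# step a simple particle simulation. All names and constants are assumptions.
import numpy as np

def query_property_model(object_name: str) -> dict:
    # Stand-in for MLLM-based Physical Property Perception: the paper predicts
    # properties with a multimodal LLM; here we hard-code plausible values.
    return {"mass_kg": 1.0, "restitution": 0.4}

def sample_particles(rng, n: int) -> np.ndarray:
    # Stand-in for Physical-Geometric Adaptive Sampling: uniform points in a
    # unit cube instead of geometry-adapted samples on a reconstructed object.
    return rng.random((n, 3))

def step(pos, vel, restitution, dt=1e-2, g=9.81):
    # One explicit Euler step under gravity, bouncing on the plane z = 0.
    vel = vel + dt * np.array([0.0, 0.0, -g])
    pos = pos + dt * vel
    hit = pos[:, 2] < 0.0
    pos[hit, 2] = 0.0
    vel[hit, 2] *= -restitution
    return pos, vel

rng = np.random.default_rng(0)
props = query_property_model("mug")
pos, vel = sample_particles(rng, 1024), np.zeros((1024, 3))
for _ in range(200):
    pos, vel = step(pos, vel, props["restitution"])
print(float(pos[:, 2].min()))  # particles settle at or above the ground plane
```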
- DrivingSphere: Building a High-fidelity 4D World for Closed-loop Simulation [54.02069690134526]
We propose DrivingSphere, a realistic and closed-loop simulation framework.
Its core idea is to build 4D world representation and generate real-life and controllable driving scenarios.
By providing a dynamic and realistic simulation environment, DrivingSphere enables comprehensive testing and validation of autonomous driving algorithms.
arXiv Detail & Related papers (2024-11-18T03:00:33Z)
- URDFormer: A Pipeline for Constructing Articulated Simulation Environments from Real-World Images [39.0780707100513]
We present an integrated end-to-end pipeline that generates simulation scenes complete with articulated kinematic and dynamic structures from real-world images.
In doing so, our work provides both a pipeline for large-scale generation of simulation environments and an integrated system for training robust robotic control policies.
arXiv Detail & Related papers (2024-05-19T20:01:29Z)
- Zero-Shot Multi-Object Scene Completion [59.325611678171974]
We present a 3D scene completion method that recovers the complete geometry of multiple unseen objects in complex scenes from a single RGB-D image.
Our method outperforms the current state-of-the-art on both synthetic and real-world datasets.
arXiv Detail & Related papers (2024-03-21T17:59:59Z)
- Reconstructing Objects in-the-wild for Realistic Sensor Simulation [41.55571880832957]
We present NeuSim, a novel approach that estimates accurate geometry and realistic appearance from sparse in-the-wild data.
We model the object appearance with a robust physics-inspired reflectance representation effective for in-the-wild data.
Our experiments show that NeuSim has strong view synthesis performance on challenging scenarios with sparse training views.
arXiv Detail & Related papers (2023-11-09T18:58:22Z)
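
The NeuSim summary names a "physics-inspired reflectance representation" without specifying its form. As a hedged illustration of the kind of model such representations build on, here is a basic Lambertian-plus-Blinn-Phong shading function; the actual NeuSim representation may differ substantially.

```python
# Hedged illustration only: a classic diffuse + specular reflectance model,
# not the paper's actual representation.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def shade(normal, light_dir, view_dir, albedo, specular=0.2, shininess=32.0):
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    # Diffuse term: Lambert's cosine law.
    diffuse = albedo * max(float(n @ l), 0.0)
    # Specular term: Blinn-Phong half-vector model.
    h = normalize(l + v)
    spec = specular * max(float(n @ h), 0.0) ** shininess
    return diffuse + spec

rgb = shade(
    normal=np.array([0.0, 0.0, 1.0]),
    light_dir=np.array([0.3, 0.2, 1.0]),
    view_dir=np.array([0.0, 0.0, 1.0]),
    albedo=np.array([0.6, 0.1, 0.1]),
)
print(rgb)  # shaded RGB for one surface point
```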
- A Shared Representation for Photorealistic Driving Simulators [83.5985178314263]
We propose to improve the quality of generated images by rethinking the discriminator architecture.
The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses.
We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation, content reconstruction, and coarse-to-fine adversarial reasoning.
arXiv Detail & Related papers (2021-12-09T18:59:21Z)
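
A minimal sketch of the shared-representation idea above: one discriminator backbone feeding a segmentation head, a reconstruction head, and a patch-level real/fake head. Layer sizes and the 19-class label set are assumptions, not the paper's architecture.

```python
# Hedged sketch: a discriminator backbone whose shared features feed three
# heads (semantic segmentation, content reconstruction, real/fake scoring).
import torch
import torch.nn as nn

class SharedDiscriminator(nn.Module):
    def __init__(self, num_classes: int = 19):
        super().__init__()
        self.encoder = nn.Sequential(           # shared latent representation
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.seg_head = nn.Conv2d(128, num_classes, 1)  # semantic segmentation
        self.rec_head = nn.Conv2d(128, 3, 1)            # coarse reconstruction
        self.adv_head = nn.Conv2d(128, 1, 1)            # patch real/fake scores

    def forward(self, x):
        z = self.encoder(x)
        return self.seg_head(z), self.rec_head(z), self.adv_head(z)

disc = SharedDiscriminator()
seg, rec, adv = disc(torch.randn(1, 3, 64, 64))
print(seg.shape, rec.shape, adv.shape)  # all heads share one latent code
```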
- Photorealism in Driving Simulations: Blending Generative Adversarial Image Synthesis with Rendering [0.0]
We introduce a hybrid generative neural graphics pipeline for improving the visual fidelity of driving simulations.
We form 2D semantic images from 3D scenery consisting of simple object models without textures.
These semantic images are then converted into photorealistic RGB images with a state-of-the-art Generative Adversarial Network (GAN) trained on real-world driving scenes.
arXiv Detail & Related papers (2020-07-31T03:25:17Z)
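
The two-step recipe above (untextured 3D scenery to semantic image, semantic image to RGB via a GAN) can be sketched as follows. `SemanticToRGBGenerator` and `rasterize_semantics` are hypothetical stand-ins; the paper uses a state-of-the-art GAN trained on real driving scenes.

```python
# Hedged sketch: rasterize label IDs from simple untextured 3D models, then
# translate them to RGB with an image-to-image generator (a toy stand-in for
# a pix2pixHD/SPADE-class network).
import torch
import torch.nn as nn

NUM_CLASSES = 19  # Cityscapes-style label set (an assumption)

class SemanticToRGBGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(NUM_CLASSES, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, onehot):
        return self.net(onehot)

def rasterize_semantics(h, w):
    # Stand-in for rendering label IDs from the simple object models:
    # bottom half "road" (class 0), top half "sky" (class 10).
    labels = torch.full((h, w), 10, dtype=torch.long)
    labels[h // 2:, :] = 0
    return labels

labels = rasterize_semantics(64, 128)
onehot = torch.nn.functional.one_hot(labels, NUM_CLASSES).permute(2, 0, 1).float()
rgb = SemanticToRGBGenerator()(onehot.unsqueeze(0))
print(rgb.shape)  # (1, 3, 64, 128): photorealistic only after training
```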
- Towards Realistic 3D Embedding via View Alignment [53.89445873577063]
This paper presents an innovative View Alignment GAN (VA-GAN) that composes new images by embedding 3D models into 2D background images realistically and automatically.
VA-GAN consists of a texture generator and a differential discriminator that are inter-connected and end-to-end trainable.
arXiv Detail & Related papers (2020-07-14T14:45:00Z)
- LiDARsim: Realistic LiDAR Simulation by Leveraging the Real World [84.57894492587053]
We develop a novel simulator that captures both the power of physics-based and learning-based simulation.
We first utilize ray casting over the 3D scene and then use a deep neural network to produce deviations from the physics-based simulation.
We showcase LiDARsim's usefulness for perception algorithms-testing on long-tail events and end-to-end closed-loop evaluation on safety-critical scenarios.
arXiv Detail & Related papers (2020-06-16T17:44:35Z)
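
LiDARsim's split between a physics-based ray caster and a learned deviation model suggests a simple structure, sketched below. The ray caster and `DeviationNet` are toy stand-ins under assumed interfaces, not the paper's models.

```python
# Hedged sketch of the LiDARsim recipe: a physics-based ray caster produces
# ideal ranges, then a learned model perturbs them toward real sensor data.
import torch
import torch.nn as nn

def raycast_ranges(n_rays: int) -> torch.Tensor:
    # Stand-in for casting rays against the reconstructed 3D scene; returns
    # one ideal range per ray (here: a synthetic smooth profile in meters).
    angles = torch.linspace(-1.0, 1.0, n_rays)
    return 10.0 + 5.0 * torch.cos(angles)

class DeviationNet(nn.Module):
    # Predicts a range residual and a drop probability per ray, mimicking
    # real-sensor noise and missing returns on top of the physics result.
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, ranges):
        out = self.mlp(ranges.unsqueeze(-1))
        residual, drop_logit = out[..., 0], out[..., 1]
        return residual, torch.sigmoid(drop_logit)

ideal = raycast_ranges(2048)
residual, drop_prob = DeviationNet()(ideal)
nan = torch.full_like(ideal, float("nan"))  # dropped returns, as in real scans
simulated = torch.where(drop_prob > 0.5, nan, ideal + residual)
print(simulated.shape)  # per-ray simulated ranges with learned deviations
```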
- SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving [27.948417322786575]
We present a simple yet effective approach to generate realistic scenario sensor data.
Our approach uses texture-mapped surfels to efficiently reconstruct the scene from an initial vehicle pass or set of passes.
We then leverage a SurfelGAN network to reconstruct realistic camera images for novel positions and orientations of the self-driving vehicle.
arXiv Detail & Related papers (2020-05-08T04:01:14Z)
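
A hedged sketch of the SurfelGAN pipeline above: splat textured surfels into a novel camera view and refine the imperfect render with a GAN generator. The pinhole splatting and `Refiner` network here are illustrative assumptions, not the paper's components.

```python
# Hedged sketch: project colored surfels into a novel view with a toy pinhole
# camera, then let a generator network clean up holes and artifacts.
import torch
import torch.nn as nn

def splat_surfels(points, colors, h=64, w=64, focal=50.0):
    # Toy pinhole projection + nearest-pixel splatting of colored surfels.
    img = torch.zeros(3, h, w)
    z = points[:, 2].clamp(min=1e-3)
    u = (focal * points[:, 0] / z + w / 2).long().clamp(0, w - 1)
    v = (focal * points[:, 1] / z + h / 2).long().clamp(0, h - 1)
    img[:, v, u] = colors.t()
    return img

class Refiner(nn.Module):
    # Stand-in for the SurfelGAN generator that fills holes and fixes artifacts.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

points = torch.rand(500, 3) + torch.tensor([0.0, 0.0, 2.0])  # surfel centers
colors = torch.rand(500, 3)                                   # surfel textures
splat = splat_surfels(points, colors)
realistic = Refiner()(splat.unsqueeze(0))
print(realistic.shape)  # (1, 3, 64, 64): realistic only after GAN training
```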
This list is automatically generated from the titles and abstracts of the papers on this site.