Car-Studio: Learning Car Radiance Fields from Single-View and Endless
In-the-wild Images
- URL: http://arxiv.org/abs/2307.14009v1
- Date: Wed, 26 Jul 2023 07:44:34 GMT
- Title: Car-Studio: Learning Car Radiance Fields from Single-View and Endless
In-the-wild Images
- Authors: Tianyu Liu, Hao Zhao, Yang Yu, Guyue Zhou, Ming Liu
- Abstract summary: In this letter, we propose a pipeline for learning from unconstrained images and building a dataset from the processed images.
To meet the requirements of the simulator, we design a radiance field for the vehicle, a crucial part of the urban scene foreground.
Using the datasets built from in-the-wild images, our method further provides controllable appearance editing.
- Score: 16.075690774805622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compositional neural scene graph studies have shown that radiance fields can
be an efficient tool in an editable autonomous driving simulator. However,
previous studies learned from sequences in autonomous driving datasets,
resulting in unsatisfactory blurring when the car is rotated in the simulator. In
this letter, we propose a pipeline for learning from unconstrained images and
for building a dataset from the processed images. To meet the requirements of the
simulator, which demand that the vehicle remain sharp as the viewpoint
changes and that its contour be cleanly separated from the background to avoid
artifacts during editing, we design a radiance field for the vehicle, a crucial
part of the urban scene foreground. Through experiments, we demonstrate that
our model achieves competitive performance compared to baselines. Using the
datasets built from in-the-wild images, our method further provides
controllable appearance editing. We will release the dataset and code
on https://lty2226262.github.io/car-studio/ to facilitate further research in
the field.
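The abstract does not describe the network itself, so the following is only a minimal sketch, assuming a NeRF-style, latent-conditioned formulation: per-point density and view-dependent color predicted by an MLP, with a per-instance appearance code standing in for the controllable appearance editing mentioned above. The class name CarRadianceField, the positional-encoding depth, and the latent dimension are illustrative assumptions, not the released Car-Studio model.

```python
# Illustrative sketch only -- NOT the Car-Studio architecture.
import torch
import torch.nn as nn


class CarRadianceField(nn.Module):
    """Latent-conditioned car radiance field (hypothetical example)."""

    def __init__(self, latent_dim=128, hidden=256, pe_freqs=10):
        super().__init__()
        self.pe_freqs = pe_freqs
        in_dim = 3 + 3 * 2 * pe_freqs  # xyz plus sin/cos positional encoding
        self.trunk = nn.Sequential(
            nn.Linear(in_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)          # volume density
        self.rgb_head = nn.Sequential(                  # view-dependent color
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def positional_encoding(self, x):
        freqs = 2.0 ** torch.arange(self.pe_freqs, device=x.device)
        sin = torch.sin(x[..., None] * freqs).flatten(-2)
        cos = torch.cos(x[..., None] * freqs).flatten(-2)
        return torch.cat([x, sin, cos], dim=-1)

    def forward(self, xyz, view_dir, appearance_code):
        # xyz: (N, 3) sample points in the car's canonical frame
        # view_dir: (N, 3) unit viewing directions
        # appearance_code: (N, latent_dim) per-instance latent used for editing
        h = self.trunk(torch.cat([self.positional_encoding(xyz), appearance_code], dim=-1))
        sigma = torch.relu(self.sigma_head(h))                 # density >= 0
        rgb = self.rgb_head(torch.cat([h, view_dir], dim=-1))  # color in [0, 1]
        return rgb, sigma


# Usage: query 1024 points with a shared (here, zero) appearance code.
model = CarRadianceField()
pts = torch.rand(1024, 3) * 2 - 1
dirs = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
code = torch.zeros(1024, 128)
rgb, sigma = model(pts, dirs, code)
```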
Related papers
- Learning autonomous driving from aerial imagery [67.06858775696453]
Photogrammetric simulators synthesize novel views by transforming pre-generated assets.
We use a Neural Radiance Field (NeRF) as an intermediate representation to synthesize novel views from the point of view of a ground vehicle.
arXiv Detail & Related papers (2024-10-18T05:09:07Z)
- SimGen: Simulator-conditioned Driving Scene Generation [50.03358485083602]
We introduce a simulator-conditioned scene generation framework called SimGen.
SimGen learns to generate diverse driving scenes by mixing data from the simulator and the real world.
It achieves superior generation quality and diversity while preserving controllability based on the text prompt and the layout pulled from a simulator.
arXiv Detail & Related papers (2024-06-13T17:58:32Z)
- GarchingSim: An Autonomous Driving Simulator with Photorealistic Scenes and Minimalist Workflow [24.789118651720045]
We introduce an autonomous driving simulator with photorealistic scenes.
The simulator is able to communicate with external algorithms through ROS2 or Socket.IO.
We implement a highly accurate vehicle dynamics model within the simulator to enhance the realism of the vehicle's physical effects.
arXiv Detail & Related papers (2024-01-28T23:26:15Z)
- Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data [56.49494318285391]
We present Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving.
This is done by learning to translate randomized simulation images into simulated segmentation and depth maps.
This allows us to train an end-to-end RL policy in simulation, and directly deploy in the real-world.
arXiv Detail & Related papers (2022-10-25T17:50:36Z)
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
- Data generation using simulation technology to improve perception mechanism of autonomous vehicles [0.0]
We will demonstrate the effectiveness of combining data gathered from the real world with data generated in the simulated world to train perception systems.
We will also propose a multi-level deep learning perception framework that aims to emulate a human learning experience.
arXiv Detail & Related papers (2022-07-01T03:42:33Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving [27.948417322786575]
We present a simple yet effective approach to generate realistic scenario sensor data.
Our approach uses texture-mapped surfels to efficiently reconstruct the scene from an initial vehicle pass or set of passes.
We then leverage a SurfelGAN network to reconstruct realistic camera images for novel positions and orientations of the self-driving vehicle.
arXiv Detail & Related papers (2020-05-08T04:01:14Z)