S-NeRF: Neural Radiance Fields for Street Views
- URL: http://arxiv.org/abs/2303.00749v1
- Date: Wed, 1 Mar 2023 18:59:30 GMT
- Title: S-NeRF: Neural Radiance Fields for Street Views
- Authors: Ziyang Xie, Junge Zhang, Wenye Li, Feihu Zhang, Li Zhang
- Abstract summary: We propose a new street-view NeRF (S-NeRF) that considers novel view synthesis of both the large-scale background scenes and the foreground moving vehicles jointly.
Specifically, we improve the scene parameterization function and the camera poses for learning better neural representations from street views.
Our method beats state-of-the-art rivals, reducing the mean-squared error by 7% to 40% in street-view synthesis and achieving a 45% PSNR gain for moving-vehicle rendering.
- Score: 23.550385153214688
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRFs) aim to synthesize novel views of objects and
scenes, given object-centric camera views with large overlaps. However, we
conjecture that this paradigm does not fit the nature of the street views
collected by many self-driving cars in large-scale, unbounded scenes.
Moreover, the onboard cameras perceive these scenes with little overlap. Thus,
existing NeRFs often produce blurs, 'floaters' and other artifacts in
street-view synthesis. In this paper, we propose a new street-view NeRF
(S-NeRF) that considers novel view synthesis of both the large-scale background
scenes and the foreground moving vehicles jointly. Specifically, we improve the
scene parameterization function and the camera poses for learning better neural
representations from street views. We also use the noisy and sparse LiDAR
points to boost the training, learning a robust geometry together with a
reprojection-based confidence to address depth outliers. Moreover, we extend
S-NeRF to reconstruct moving vehicles, which is impracticable for conventional NeRFs.
Thorough experiments on the large-scale driving datasets (e.g., nuScenes and
Waymo) demonstrate that our method beats the state-of-the-art rivals,
reducing the mean-squared error by 7% to 40% in street-view synthesis and
achieving a 45% PSNR gain for moving-vehicle rendering.
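The abstract mentions a reprojection-based confidence that down-weights noisy LiDAR depths. Below is a minimal sketch of one way such a confidence could be computed, by checking whether a LiDAR point reprojects photometrically consistently into two camera views; the pinhole projection helper, the temperature `tau`, and the exponential weighting are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: weight LiDAR depth supervision by a reprojection-
# consistency confidence. Names, thresholds, and the simple pinhole model
# are illustrative assumptions, not the S-NeRF implementation.
import numpy as np

def project(points_w, K, T_cw):
    """Project Nx3 world points with a pinhole camera (K: 3x3, T_cw: 4x4 world->camera)."""
    pts_h = np.hstack([points_w, np.ones((len(points_w), 1))])
    pts_c = (T_cw @ pts_h.T).T[:, :3]              # world -> camera coordinates
    uv = (K @ pts_c.T).T
    return uv[:, :2] / uv[:, 2:3], pts_c[:, 2]     # pixel coords, depth

def reprojection_confidence(points_w, img_a, img_b, K, T_a, T_b, tau=0.1):
    """Map photometric disagreement between two views to a confidence in [0, 1]."""
    uv_a, z_a = project(points_w, K, T_a)
    uv_b, z_b = project(points_w, K, T_b)
    conf = np.zeros(len(points_w))
    h, w = img_a.shape[:2]
    for i, ((ua, va), (ub, vb)) in enumerate(zip(uv_a, uv_b)):
        if z_a[i] <= 0 or z_b[i] <= 0:
            continue                                # behind a camera
        if not (0 <= ua < w and 0 <= va < h and 0 <= ub < w and 0 <= vb < h):
            continue                                # falls outside an image
        err = np.abs(img_a[int(va), int(ua)] - img_b[int(vb), int(ub)]).mean()
        conf[i] = np.exp(-err / tau)                # consistent points get weight near 1
    return conf

# Usage idea: scale the per-point depth loss by this confidence so that LiDAR
# returns that reproject inconsistently (e.g. on moving objects or reflective
# surfaces) contribute less to training.
```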
Related papers
- NeRF On-the-go: Exploiting Uncertainty for Distractor-free NeRFs in the Wild [55.154625718222995]
We introduce NeRF On-the-go, a simple yet effective approach that enables the robust synthesis of novel views in complex, in-the-wild scenes.
Our method demonstrates a significant improvement over state-of-the-art techniques.
arXiv Detail & Related papers (2024-05-29T02:53:40Z)
- LidaRF: Delving into Lidar for Neural Radiance Field on Street Scenes [73.65115834242866]
Photorealistic simulation plays a crucial role in applications such as autonomous driving.
However, reconstruction quality suffers on street scenes due to collinear camera motions and sparser samplings at higher speeds.
We propose several insights that allow a better utilization of Lidar data to improve NeRF quality on street scenes.
arXiv Detail & Related papers (2024-05-01T23:07:12Z)
- SPARF: Neural Radiance Fields from Sparse and Noisy Poses [58.528358231885846]
We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis.
Our approach exploits multi-view geometry constraints in order to jointly learn the NeRF and refine the camera poses.
arXiv Detail & Related papers (2022-11-21T18:57:47Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
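The decoupling described above, one MLP for occupancy supervised by LiDAR and one for color supervised by camera images, can be illustrated with a minimal sketch; the layer widths, activations, and input encodings below are assumptions for illustration and do not reproduce CLONeR's actual architecture.

```python
# Illustrative sketch of decoupled occupancy / color networks in the spirit
# of CLONeR. Widths, encodings, and losses are assumptions, not the paper's
# actual configuration.
import torch
import torch.nn as nn

def make_mlp(in_dim, out_dim, width=128, depth=4):
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

occupancy_mlp = make_mlp(in_dim=3, out_dim=1)   # xyz -> occupancy logit (LiDAR-supervised)
color_mlp     = make_mlp(in_dim=6, out_dim=3)   # xyz + view dir -> RGB (camera-supervised)

xyz      = torch.rand(1024, 3)                  # sampled points along rays
view_dir = torch.rand(1024, 3)

sigma = torch.sigmoid(occupancy_mlp(xyz))                               # occupancy in [0, 1]
rgb   = torch.sigmoid(color_mlp(torch.cat([xyz, view_dir], dim=-1)))    # view-dependent color

# The two heads can then be trained with separate losses: a LiDAR-derived
# occupancy target for `sigma` and a photometric loss on volume-rendered
# colors for `rgb`.
```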
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields [43.69542675078766]
We present an extension of mip-NeRF that uses a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenges presented by unbounded scenes.
Our model, which we dub "mip-NeRF 360," reduces mean-squared error by 54% compared to mip-NeRF, and is able to produce realistic synthesized views and detailed depth maps.
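The non-linear scene parameterization referred to here is a contraction that maps unbounded coordinates into a bounded ball so that far-away content can still be represented. A small sketch of a contraction of the kind mip-NeRF 360 uses is shown below; treat it as an illustration rather than the paper's full parameterization, which also handles Gaussian covariances.

```python
# Sketch of an unbounded-scene contraction: points inside the unit ball are
# left alone, points outside are squashed so that all of space lands inside
# a ball of radius 2.
import numpy as np

def contract(x, eps=1e-9):
    """Map R^3 into a ball of radius 2: identity for ||x|| <= 1, squashed otherwise."""
    x = np.asarray(x, dtype=np.float64)
    norm = np.linalg.norm(x, axis=-1, keepdims=True)
    squashed = (2.0 - 1.0 / np.maximum(norm, eps)) * (x / np.maximum(norm, eps))
    return np.where(norm <= 1.0, x, squashed)

print(contract([0.5, 0.0, 0.0]))    # unchanged: already inside the unit ball
print(contract([100.0, 0.0, 0.0]))  # ~[1.99, 0, 0]: distant points compressed toward radius 2
```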
arXiv Detail & Related papers (2021-11-23T18:51:18Z)
- BARF: Bundle-Adjusting Neural Radiance Fields [104.97810696435766]
We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
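Jointly optimizing the radiance field and the camera poses amounts to making the pose parameters part of the computation graph, so the photometric loss backpropagates into both. The toy sketch below shows that idea with a translation-only pose correction and a tiny MLP; it is a deliberate simplification for illustration, not BARF itself (which additionally anneals the positional encoding coarse-to-fine).

```python
# Toy illustration of jointly optimizing scene weights and per-view pose
# corrections. The se(3)-style parameterization, sizes, and single-step loop
# are illustrative assumptions.
import torch
import torch.nn as nn

num_views = 5
pose_delta = nn.Parameter(torch.zeros(num_views, 6))   # learnable correction per view
scene_mlp  = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))  # xyz -> (rgb, sigma)

optimizer = torch.optim.Adam([{"params": scene_mlp.parameters(), "lr": 5e-4},
                              {"params": [pose_delta],           "lr": 1e-3}])

def render(points, view_idx):
    """Shift sample points by the (learnable) translational part of the pose correction."""
    corrected = points + pose_delta[view_idx, :3]       # rotation handling omitted for brevity
    out = scene_mlp(corrected)
    return torch.sigmoid(out[..., :3])                  # predicted color only, for the toy loss

points = torch.rand(256, 3)                             # fake sample points for one view
target = torch.rand(256, 3)                             # fake ground-truth colors
loss = ((render(points, view_idx=0) - target) ** 2).mean()
loss.backward()
optimizer.step()                                        # updates both the MLP and the pose correction
```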
arXiv Detail & Related papers (2021-04-13T17:59:51Z)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
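One common way to extend a radiance field to a dynamic domain is to learn a time-conditioned deformation field that warps each query point into a shared canonical space. The sketch below shows that pattern; the layer sizes and structure are chosen for illustration and are not a faithful reproduction of D-NeRF's architecture.

```python
# Illustrative pattern for a dynamic radiance field: a deformation MLP warps
# a query point at time t into a canonical space, where a static NeRF-style
# MLP is evaluated. Sizes and names are assumptions for brevity.
import torch
import torch.nn as nn

deform_mlp    = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 3))   # (x, t) -> Δx
canonical_mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 4))   # x_canonical -> (rgb, sigma)

def query(x, t):
    """Evaluate the dynamic field at points x of shape (N, 3) and scalar time t in [0, 1]."""
    t_col = torch.full((x.shape[0], 1), float(t))
    delta = deform_mlp(torch.cat([x, t_col], dim=-1))    # per-point displacement at time t
    out = canonical_mlp(x + delta)                        # evaluate in canonical space
    rgb, sigma = torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])
    return rgb, sigma

rgb, sigma = query(torch.rand(1024, 3), t=0.5)
```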
arXiv Detail & Related papers (2020-11-27T19:06:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.