Urban Radiance Fields
- URL: http://arxiv.org/abs/2111.14643v1
- Date: Mon, 29 Nov 2021 15:58:16 GMT
- Title: Urban Radiance Fields
- Authors: Konstantinos Rematas, Andrew Liu, Pratul P. Srinivasan, Jonathan T.
Barron, Andrea Tagliasacchi, Thomas Funkhouser, Vittorio Ferrari
- Abstract summary: We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings.
Each of the three extensions (lidar supervision, exposure compensation, and sky segmentation) provides significant performance improvements in experiments on Street View data.
- Score: 77.43604458481637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of this work is to perform 3D reconstruction and novel view
synthesis from data captured by scanning platforms commonly deployed for world
mapping in urban outdoor environments (e.g., Street View). Given a sequence of
posed RGB images and lidar sweeps acquired by cameras and scanners moving
through an outdoor scene, we produce a model from which 3D surfaces can be
extracted and novel RGB images can be synthesized. Our approach extends Neural
Radiance Fields, which has been demonstrated to synthesize realistic novel
images for small scenes in controlled settings, with new methods for leveraging
asynchronously captured lidar data, for addressing exposure variation between
captured images, and for leveraging predicted image segmentations to supervise
densities on rays pointing at the sky. Each of these three extensions provides
significant performance improvements in experiments on Street View data. Our
system produces state-of-the-art 3D surface reconstructions and synthesizes
higher quality novel views in comparison to both traditional methods
(e.g., COLMAP) and recent neural representations (e.g., Mip-NeRF).
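The abstract names three extensions: leveraging asynchronously captured lidar, compensating exposure variation between images, and using predicted sky segmentations to supervise densities on sky rays. A minimal numpy sketch of how those three signals might enter a NeRF-style training loss follows; the specific loss forms, weightings, and the per-image affine exposure model are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def urf_losses(weights, z_vals, rgb_pred, rgb_gt, lidar_depth, is_sky,
               exposure_affine, lam_lidar=0.1, lam_sky=0.01):
    """Illustrative composite loss for the three URF extensions.

    weights:  (R, S) volume-rendering weights per ray sample
    z_vals:   (R, S) sample depths along each ray
    rgb_pred: (R, 3) rendered colors before exposure compensation
    rgb_gt:   (R, 3) pixel colors from the captured image
    lidar_depth: (R,) lidar depth per ray (NaN where no return)
    is_sky:   (R,) bool mask from the predicted sky segmentation
    exposure_affine: (3, 4) hypothetical per-image affine color map
    """
    # Exposure handling: map the rendered color through a per-image affine
    # transform before comparing against the (differently exposed) photo.
    A, b = exposure_affine[:, :3], exposure_affine[:, 3]
    rgb_comp = rgb_pred @ A.T + b
    loss_rgb = np.mean((rgb_comp - rgb_gt) ** 2)

    # Lidar supervision: the expected ray termination depth should match
    # the lidar return wherever one exists.
    has_lidar = ~np.isnan(lidar_depth)
    depth_pred = np.sum(weights * z_vals, axis=-1)
    loss_lidar = np.mean((depth_pred[has_lidar] - lidar_depth[has_lidar]) ** 2)

    # Sky supervision: rays the segmenter labels as sky should accumulate
    # (near) zero density, i.e. all rendering weights pushed to zero.
    loss_sky = np.mean(np.sum(weights[is_sky] ** 2, axis=-1))

    return loss_rgb + lam_lidar * loss_lidar + lam_sky * loss_sky
```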
Related papers
- ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis [63.169364481672915]
We propose ViewCrafter, a novel method for synthesizing high-fidelity novel views of generic scenes from single or sparse images.
Our method exploits the powerful generative capabilities of video diffusion models and the coarse 3D clues offered by a point-based representation to generate high-quality video frames.
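A minimal sketch of the coarse-geometry-then-diffusion idea described above; all three callables are hypothetical placeholders standing in for a point-cloud lifter, a point rasterizer, and a video diffusion sampler, not ViewCrafter's actual API.

```python
import numpy as np

def viewcrafter_style_pipeline(src_images, lift_fn, rasterize_fn,
                               video_diffusion_fn, target_poses):
    """Sketch: coarse point-based renders condition a video diffusion model."""
    # 1. Lift the sparse input views to a colored point cloud (coarse 3D clues).
    points = lift_fn(src_images)
    # 2. Rasterize the points along the target trajectory; these renders are
    #    geometrically consistent but contain holes and artifacts.
    coarse = np.stack([rasterize_fn(points, pose) for pose in target_poses])
    # 3. Condition the video diffusion model on the coarse renders to produce
    #    temporally coherent, high-fidelity frames.
    return video_diffusion_fn(coarse)
```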
arXiv Detail & Related papers (2024-09-03T16:53:19Z)
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model; the result is then transformed into a scene representation in a feed-forward manner.
Experiments in two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
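Stage one, a 3D diffusion model generating texture colors at the point level, can be pictured as ancestral diffusion sampling over a per-point RGB array. The sketch below is a generic DDPM sampling loop with a placeholder denoiser; the schedule, parameterization, and `denoise_fn` interface are assumptions, not Sat2Scene's actual network.

```python
import numpy as np

def sample_point_colors(points_xyz, denoise_fn, steps=50, rng=None):
    """Generic DDPM ancestral sampling of per-point RGB for fixed geometry.

    denoise_fn(colors, xyz, t) -> predicted noise; a stand-in for a sparse
    3D diffusion network (hypothetical interface).
    """
    rng = rng or np.random.default_rng(0)
    n = len(points_xyz)
    colors = rng.standard_normal((n, 3))        # start from pure noise
    betas = np.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    for t in reversed(range(steps)):
        eps = denoise_fn(colors, points_xyz, t)
        # standard epsilon-prediction update: remove predicted noise, rescale
        colors = (colors - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) \
                 / np.sqrt(alphas[t])
        if t > 0:                               # re-inject noise except at t=0
            colors += np.sqrt(betas[t]) * rng.standard_normal((n, 3))
    return colors
```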
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- ReconFusion: 3D Reconstruction with Diffusion Priors [104.73604630145847]
We present ReconFusion to reconstruct real-world scenes using only a few photos.
Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets.
Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions.
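The stated goal of preserving observed regions while regularizing underconstrained ones suggests a two-term objective: a data term on captured views plus a diffusion-prior term on held-out poses. A hedged sketch, with `render_fn` and `diffusion_sample_fn` as hypothetical placeholders rather than ReconFusion's real interfaces:

```python
import numpy as np

def reconfusion_style_loss(render_fn, observed, novel_poses,
                           diffusion_sample_fn, lam=0.5):
    """Illustrative few-view objective (assumed form, not the paper's exact one).

    observed: list of (pose, image) pairs from the captured photos
    novel_poses: held-out camera poses used only for regularization
    """
    # Data term: renders at observed poses must match the photos.
    data = np.mean([np.mean((render_fn(p) - img) ** 2)
                    for p, img in observed])
    # Prior term: renders at novel poses are pulled toward what the
    # view-conditioned diffusion model considers plausible from that pose.
    prior = np.mean([np.mean((render_fn(p) - diffusion_sample_fn(p)) ** 2)
                     for p in novel_poses])
    return data + lam * prior
```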
arXiv Detail & Related papers (2023-12-05T18:59:58Z) - rpcPRF: Generalizable MPI Neural Radiance Field for Satellite Camera [0.76146285961466]
This paper presents rpcPRF, a Multiplane Image (MPI)-based planar neural radiance field for the Rational Polynomial Camera (RPC) model.
We propose reprojection supervision to induce the predicted MPI to learn the correct geometry relating 3D coordinates to the images.
By introducing radiance-field rendering techniques, we remove the stringent dense depth supervision required by deep multi-view-stereo-based methods.
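The MPI side of this can be sketched concretely: back-to-front alpha compositing for color, plus an expected-depth map that a reprojection loss could push through an RPC camera into other views. The RPC projection itself is omitted here; shapes and conventions are assumptions for illustration.

```python
import numpy as np

def composite_mpi(colors, alphas):
    """Back-to-front alpha compositing of a Multiplane Image.

    colors: (D, H, W, 3) per-plane RGB, plane 0 nearest the camera
    alphas: (D, H, W, 1) per-plane opacity in [0, 1]
    """
    out = np.zeros(colors.shape[1:])
    # iterate far -> near so nearer planes occlude farther ones
    for rgb, a in zip(colors[::-1], alphas[::-1]):
        out = rgb * a + out * (1.0 - a)
    return out

def expected_depth(alphas, plane_depths):
    """Per-pixel expected depth from MPI opacities; with an RPC camera this
    depth could be reprojected into other views for reprojection supervision.

    plane_depths: (D,) depth of each fronto-parallel plane
    """
    D = len(plane_depths)
    # transmittance T_i = prod_{j<i} (1 - a_j), computed via shifted cumprod
    trans = np.cumprod(np.concatenate(
        [np.ones_like(alphas[:1]), 1.0 - alphas[:-1]], axis=0), axis=0)
    w = trans * alphas                           # rendering weight per plane
    depths = np.asarray(plane_depths).reshape(D, 1, 1, 1)
    return np.sum(w * depths, axis=0) / np.clip(np.sum(w, axis=0), 1e-8, None)
```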
arXiv Detail & Related papers (2023-10-11T04:05:11Z) - Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Enhancement of Novel View Synthesis Using Omnidirectional Image Completion [61.78187618370681]
We present a method for synthesizing novel views from a single 360-degree RGB-D image based on the neural radiance field (NeRF).
Experiments demonstrated that the proposed method can synthesize plausible novel views while preserving the features of the scene for both artificial and real-world data.
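Supervising a NeRF from a single 360-degree RGB-D capture starts by turning each equirectangular pixel into a ray; with per-pixel depth, each ray also yields a 3D surface point. A sketch under a standard equirectangular parameterization (the paper's exact angle conventions and axis ordering may differ):

```python
import numpy as np

def equirect_rays(H, W):
    """Unit ray directions for every pixel of an equirectangular panorama.

    Returns a (H, W, 3) array; y is up, angles measured from pixel centers
    (a common convention, assumed here).
    """
    v, u = np.meshgrid(np.arange(H) + 0.5, np.arange(W) + 0.5, indexing="ij")
    theta = (u / W) * 2.0 * np.pi - np.pi       # longitude in [-pi, pi)
    phi = (v / H) * np.pi                        # colatitude in [0, pi]
    dirs = np.stack([np.sin(phi) * np.sin(theta),
                     np.cos(phi),
                     np.sin(phi) * np.cos(theta)], axis=-1)
    return dirs

# With the panorama's per-pixel depth d, points p = o + d * dirs give dense
# geometric supervision for fitting the radiance field from one position.
```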
arXiv Detail & Related papers (2022-03-18T13:49:25Z)