Progressively Optimized Local Radiance Fields for Robust View Synthesis
- URL: http://arxiv.org/abs/2303.13791v1
- Date: Fri, 24 Mar 2023 04:03:55 GMT
- Title: Progressively Optimized Local Radiance Fields for Robust View Synthesis
- Authors: Andreas Meuleman and Yu-Lun Liu and Chen Gao and Jia-Bin Huang and
Changil Kim and Min H. Kim and Johannes Kopf
- Abstract summary: We present an algorithm for reconstructing the radiance field of a large-scale scene from a single casually captured video.
For handling unknown poses, we jointly estimate the camera poses and the radiance field in a progressive manner.
For handling large unbounded scenes, we dynamically allocate new local radiance fields trained with frames within a temporal window.
- Score: 76.55036080270347
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an algorithm for reconstructing the radiance field of a
large-scale scene from a single casually captured video. The task poses two
core challenges. First, most existing radiance field reconstruction approaches
rely on accurate pre-estimated camera poses from Structure-from-Motion
algorithms, which frequently fail on in-the-wild videos. Second, using a
single, global radiance field with finite representational capacity does not
scale to longer trajectories in an unbounded scene. For handling unknown poses,
we jointly estimate the camera poses and the radiance field in a progressive
manner. We show that progressive optimization significantly improves the
robustness of the reconstruction. For handling large unbounded scenes, we
dynamically allocate new local radiance fields trained with frames within a
temporal window. This further improves robustness (e.g., performs well even
under moderate pose drifts) and allows us to scale to large scenes. Our
extensive evaluation on the Tanks and Temples dataset and our collected outdoor
dataset, Static Hikes, shows that our approach compares favorably with the
state-of-the-art.
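As a concrete illustration of the two mechanisms above, here is a minimal PyTorch-style toy, assuming a stand-in photometric loss in place of real volume rendering; the class names, window size, and allocation rule are assumptions, not the paper's implementation:
```python
# Toy sketch: camera poses and a "local field" (a tiny MLP from 3D points to
# RGB + density) are optimized jointly over a sliding temporal window, and a
# fresh local field is allocated at fixed intervals. All names, losses, and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class LocalField(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, xyz):                  # (N,3) -> (N,4): RGB + density
        return self.mlp(xyz)

def photometric_loss(field, frame, pose):
    # Stand-in for volume rendering: move sample points by the current pose
    # estimate, query the field, and compare colors against the frame.
    pts = frame["points"] @ pose[:3, :3].T + pose[:3, 3]
    return ((field(pts)[:, :3] - frame["rgb"]) ** 2).mean()

def progressive_reconstruction(frames, window=30, frames_per_field=100):
    fields, poses = [LocalField()], [torch.eye(4)]
    for t in range(1, len(frames)):
        # Warm-start each new pose from its predecessor (smooth video prior);
        # a real method would parameterize SE(3) rather than a raw 4x4 matrix.
        pose = poses[-1].detach().clone().requires_grad_(True)
        poses.append(pose)
        field = fields[-1]
        opt = torch.optim.Adam([pose, *field.parameters()], lr=1e-3)
        for _ in range(50):                  # refine newest pose + local field
            lo = max(0, t - window)
            loss = sum(photometric_loss(field, frames[i], poses[i])
                       for i in range(lo, t + 1))
            opt.zero_grad(); loss.backward(); opt.step()
        if t % frames_per_field == 0:        # allocate a new local field
            fields.append(LocalField())
    return fields, [p.detach() for p in poses]
```
Because each local field only ever sees frames inside its window, pose errors stay local instead of corrupting a single global model, which is the robustness argument the abstract makes.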
Related papers
- Cinematic Gaussians: Real-Time HDR Radiance Fields with Depth of Field [23.92087253022495]
Radiance field methods represent the state of the art in reconstructing complex scenes from multi-view photos.
Their reliance on a pinhole camera model, assuming all scene elements are in focus in the input images, presents practical challenges and complicates refocusing during novel-view synthesis.
We present a lightweight analytical approach based on 3D Gaussian Splatting that utilizes multi-view LDR images with varying exposure times, apertures, and focus distances as input to reconstruct a high-dynamic-range scene.
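To illustrate the multi-exposure input this entry describes, a classic weighted LDR-to-HDR merge can be sketched in a few lines, assuming a linear camera response; the paper's actual 3D Gaussian Splatting pipeline is more involved:
```python
# Hedged sketch: merge differently exposed LDR observations into linear HDR
# radiance by exposure-normalized weighted averaging. Assumes a linear camera
# response; function and argument names are illustrative.
import torch

def merge_ldr_to_hdr(ldr_images, exposure_times, eps=1e-6):
    """ldr_images: (N,H,W,3) in [0,1]; exposure_times: (N,) seconds."""
    hdr_num = torch.zeros_like(ldr_images[0])
    hdr_den = torch.zeros_like(ldr_images[0])
    for img, t in zip(ldr_images, exposure_times):
        w = 1.0 - (2.0 * img - 1.0) ** 2     # trust mid-range pixels most
        hdr_num += w * img / t               # per-image radiance estimate
        hdr_den += w
    return hdr_num / (hdr_den + eps)
```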
arXiv Detail & Related papers (2024-06-11T15:00:24Z) - MetaCap: Meta-learning Priors from Multi-View Imagery for Sparse-view Human Performance Capture and Rendering [91.76893697171117]
We propose a method for efficient and high-quality geometry recovery and novel view synthesis given very sparse views or even a single view of the human.
Our key idea is to meta-learn the radiance field weights solely from potentially sparse multi-view videos.
We collect a new dataset, WildDynaCap, which contains subjects captured in, both, a dense camera dome and in-the-wild sparse camera rigs.
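The meta-learning idea here follows the familiar MAML pattern: adapt copied weights on support views, then update the shared initialization from held-out query views. A self-contained toy with a linear stand-in "field"; the model and loss are assumptions, not MetaCap's code:
```python
# Toy MAML-style loop for meta-learning radiance-field weights. The "field"
# is a two-layer linear map from ray features to colors, standing in for a
# real NeRF with volume rendering.
import torch

def render_loss(weights, views):
    w1, w2 = weights
    pred = torch.tanh(views["feat"] @ w1) @ w2
    return ((pred - views["rgb"]) ** 2).mean()

def meta_train(weights, tasks, inner_lr=1e-2, outer_lr=1e-3, inner_steps=3):
    # weights: leaf tensors with requires_grad=True, e.g.
    # [torch.randn(8, 16, requires_grad=True),
    #  torch.randn(16, 3, requires_grad=True)]
    opt = torch.optim.Adam(weights, lr=outer_lr)
    for support, query in tasks:             # one task = one subject/capture
        fast = [w.clone() for w in weights]
        for _ in range(inner_steps):         # adapt on the support views
            g = torch.autograd.grad(render_loss(fast, support), fast,
                                    create_graph=True)
            fast = [w - inner_lr * gi for w, gi in zip(fast, g)]
        opt.zero_grad()
        render_loss(fast, query).backward()  # meta-gradient from query views
        opt.step()
    return weights
```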
arXiv Detail & Related papers (2024-03-27T17:59:54Z) - RadSplat: Radiance Field-Informed Gaussian Splatting for Robust Real-Time Rendering with 900+ FPS [47.47003067842151]
We present RadSplat, a lightweight method for robust real-time rendering of complex scenes.
First, we use radiance fields as a prior and supervision signal for optimizing point-based scene representations, leading to improved quality and more robust optimization.
Next, we develop a novel pruning technique reducing the overall point count while maintaining high quality, leading to smaller and more compact scene representations with faster inference speeds.
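A plausible sketch of such importance-based pruning, assuming per-view accumulated blend weights as the contribution metric (a stand-in, not RadSplat's exact criterion):
```python
# Drop points whose maximum blending contribution across all training views
# falls below a threshold; the survivors form a smaller, faster scene.
import torch

def prune_points(positions, opacities, contributions, keep_thresh=0.01):
    """positions: (N,3); opacities: (N,); contributions: (V,N) accumulated
    blend weight of each point in each of V training views (assumed given)."""
    importance = contributions.max(dim=0).values  # peak influence per point
    keep = importance > keep_thresh
    return positions[keep], opacities[keep], keep
```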
arXiv Detail & Related papers (2024-03-20T17:59:55Z) - ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with the existing works, our approach restores much sharper 3D scenes with the order of 10 times less training time and GPU memory consumption.
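The blur formulation can be sketched as averaging sharp renders along the exposure's camera trajectory; the linear pose blend and render_fn interface below are assumptions, not ExBluRF's parameterization:
```python
# Model a blurry image as the mean of sharp renders at poses sampled along a
# 6-DOF trajectory covering the exposure; optimizing a photometric loss
# w.r.t. the trajectory (and the radiance field) recovers a sharp scene.
import torch

def pose_at(traj, t):
    # Toy trajectory: linear blend of start/end 4x4 poses; a real method
    # would interpolate on SE(3), e.g. with splines.
    return (1.0 - t) * traj["start"] + t * traj["end"]

def render_blurred(render_fn, traj, n_samples=8):
    times = torch.linspace(0.0, 1.0, n_samples)
    frames = [render_fn(pose_at(traj, t)) for t in times]
    return torch.stack(frames).mean(dim=0)
```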
arXiv Detail & Related papers (2023-09-16T11:17:25Z) - DiffDreamer: Towards Consistent Unsupervised Single-view Scene
Extrapolation with Conditional Diffusion Models [91.94566873400277]
DiffDreamer is an unsupervised framework capable of synthesizing novel views depicting a long camera trajectory.
We show that image-conditioned diffusion models can effectively perform long-range scene extrapolation while preserving consistency significantly better than prior GAN-based methods.
arXiv Detail & Related papers (2022-11-22T10:06:29Z) - SPARF: Neural Radiance Fields from Sparse and Noisy Poses [58.528358231885846]
We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis from sparse input views with noisy camera poses.
Our approach exploits multi-view geometry constraints in order to jointly learn the NeRF and refine the camera poses.
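One way such a multi-view geometry constraint can look, as a hedged sketch: correspondences between views should reproject consistently under the rendered depth and the current pose estimates (the matcher and intrinsics are assumed given):
```python
# Reprojection-consistency loss: backproject matched pixels from view i using
# NeRF-rendered depth, transform into view j, and penalize the pixel error.
# Minimizing this w.r.t. poses and the field couples pose refinement to NeRF
# training, in the spirit of SPARF's constraint (details are assumptions).
import torch

def reprojection_loss(depth_i, K, pose_i, pose_j, matches_i, matches_j):
    """matches_*: (M,2) pixel coords of correspondences; depth_i: (M,) depth
    rendered at matches_i; K: (3,3) intrinsics; poses: camera-to-world 4x4."""
    ones = torch.ones(matches_i.shape[0], 1)
    pix = torch.cat([matches_i, ones], dim=1)                 # homogeneous
    pts_i = (torch.inverse(K) @ pix.T).T * depth_i[:, None]   # backproject
    rel = torch.inverse(pose_j) @ pose_i                      # cam i -> cam j
    pts_j = pts_i @ rel[:3, :3].T + rel[:3, 3]
    proj = (K @ pts_j.T).T
    proj = proj[:, :2] / proj[:, 2:3]                         # perspective div
    return ((proj - matches_j) ** 2).sum(dim=1).mean()        # pixel error
```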
arXiv Detail & Related papers (2022-11-21T18:57:47Z) - ERF: Explicit Radiance Field Reconstruction From Scratch [12.254150867994163]
We propose a novel explicit dense 3D reconstruction approach that processes a set of images of a scene with sensor poses and calibrations and estimates a photo-real digital model.
One of the key innovations is that the underlying volumetric representation is completely explicit.
We show that our method is general and practical. It does not require a highly controlled lab setup for capturing, but allows for reconstructing scenes with a vast variety of objects.
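A minimal sketch of what "completely explicit" can mean in practice: values stored directly in a dense voxel grid and read back by trilinear interpolation, with no network in the loop (a toy, not ERF's actual data structure):
```python
# Explicit volumetric representation: RGB + density live directly in grid
# cells as optimizable parameters; queries are trilinear lookups.
import torch
import torch.nn.functional as F

class VoxelGrid:
    def __init__(self, res=128):
        # 4 channels: RGB + density, all directly optimizable.
        self.grid = torch.zeros(1, 4, res, res, res, requires_grad=True)

    def query(self, xyz):
        """xyz: (N,3) in [-1,1]^3 (ordered x,y,z) -> (N,4) interpolated."""
        pts = xyz.view(1, -1, 1, 1, 3)
        out = F.grid_sample(self.grid, pts, align_corners=True)
        return out.view(4, -1).T
```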
arXiv Detail & Related papers (2022-02-28T19:37:12Z) - Dense Depth Priors for Neural Radiance Fields from Sparse Input Views [37.92064060160628]
We propose a method to synthesize novel views of whole rooms from an order of magnitude fewer images.
Our method enables data-efficient novel view synthesis on challenging indoor scenes, using as few as 18 images for an entire scene.
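The core idea behind dense depth priors can be sketched as an uncertainty-weighted depth term added to the usual color loss; names and weighting below are assumptions:
```python
# Supervise the rendered expected ray depth with (possibly noisy) dense depth
# estimates, trusting confident pixels more via their standard deviation.
import torch

def depth_prior_loss(rendered_depth, prior_depth, prior_std):
    return (((rendered_depth - prior_depth) / prior_std) ** 2).mean()

def total_loss(rgb_pred, rgb_gt, rendered_depth, prior_depth, prior_std,
               lambda_depth=0.1):
    color = ((rgb_pred - rgb_gt) ** 2).mean()
    return color + lambda_depth * depth_prior_loss(
        rendered_depth, prior_depth, prior_std)
```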
arXiv Detail & Related papers (2021-12-06T19:00:02Z) - GNeRF: GAN-based Neural Radiance Field without Posed Camera [67.80805274569354]
We introduce GNeRF, a framework to marry Generative Adversarial Networks (GAN) with Neural Radiance Field reconstruction for the complex scenarios with unknown and even randomly camera poses.
Our approach outperforms the baselines in scenes with repeated patterns or low texture, which were previously regarded as extremely challenging.
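A hedged sketch of the GAN coupling: patches rendered from randomly sampled poses are pushed toward the real-image distribution by a discriminator, which removes the need for known poses (the interfaces below are assumptions, not GNeRF's API):
```python
# Alternate discriminator and generator updates; the "generator" is the
# radiance field rendered from random poses.
import torch
import torch.nn.functional as F

def gan_step(render_patch, sample_pose, disc, real, d_opt, g_opt, n=4):
    fake = torch.stack([render_patch(sample_pose()) for _ in range(n)])
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)
    # Discriminator: tell real image patches from NeRF renders.
    d_loss = (F.binary_cross_entropy_with_logits(disc(real),
                                                 torch.ones(len(real), 1))
              + F.binary_cross_entropy_with_logits(disc(fake.detach()), zeros))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator (the radiance field): make renders look real.
    g_loss = F.binary_cross_entropy_with_logits(disc(fake), ones)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```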
arXiv Detail & Related papers (2021-03-29T13:36:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.