RefiNeRF: Modelling dynamic neural radiance fields with inconsistent or missing camera parameters
- URL: http://arxiv.org/abs/2303.08695v1
- Date: Wed, 15 Mar 2023 15:27:18 GMT
- Title: RefiNeRF: Modelling dynamic neural radiance fields with inconsistent or missing camera parameters
- Authors: Shuja Khalid, Frank Rudzicz
- Abstract summary: Novel view synthesis (NVS) is a challenging task in computer vision that involves synthesizing new views of a scene from a limited set of input images.
We propose a novel technique that leverages unposed images from dynamic datasets, such as the NVIDIA dynamic scenes dataset, to learn camera parameters directly from data.
We demonstrate the effectiveness of our method on a variety of static and dynamic scenes and show that it outperforms traditional SfM and MVS approaches.
- Score: 16.7345472998388
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Novel view synthesis (NVS) is a challenging task in computer vision that
involves synthesizing new views of a scene from a limited set of input images.
Neural Radiance Fields (NeRF) have emerged as a powerful approach to address
this problem, but they require accurate knowledge of camera *intrinsic* and
*extrinsic* parameters. Traditionally, structure-from-motion (SfM)
and multi-view stereo (MVS) approaches have been used to extract camera
parameters, but these methods can be unreliable and may fail in certain cases.
In this paper, we propose a novel technique that leverages unposed images from
dynamic datasets, such as the NVIDIA dynamic scenes dataset, to learn camera
parameters directly from data. Our approach is highly extensible and can be
integrated into existing NeRF architectures with minimal modifications. We
demonstrate the effectiveness of our method on a variety of static and dynamic
scenes and show that it outperforms traditional SfM and MVS approaches. The
code for our method is publicly available at https://github.com/redacted/refinerf.
Our approach offers a promising new direction for improving the accuracy and
robustness of NVS using NeRF, and we anticipate that it will be a valuable tool
for a wide range of applications in computer vision and graphics.
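The abstract describes learning camera parameters directly from data but does not spell out the mechanism. The general recipe it points at, shared with NeRF-- and BARF in the related papers below, is to register per-image extrinsics and a shared intrinsic as learnable parameters and optimize them jointly with the radiance field under a photometric loss. The following is a minimal PyTorch sketch of that recipe only; every class, function, and hyperparameter here is illustrative and not taken from the authors' repository.

```python
# Minimal sketch of joint camera/NeRF optimization (illustrative, not the
# authors' code): per-image poses (axis-angle + translation) and a shared
# focal length are nn.Parameters trained alongside a toy radiance-field MLP.
import torch
import torch.nn as nn

def axis_angle_to_matrix(w: torch.Tensor) -> torch.Tensor:
    """Rodrigues' formula: (3,) axis-angle vector -> (3, 3) rotation matrix."""
    theta = w.norm() + 1e-8
    k = w / theta
    z = torch.zeros((), dtype=w.dtype)
    K = torch.stack([torch.stack([z, -k[2], k[1]]),
                     torch.stack([k[2], z, -k[0]]),
                     torch.stack([-k[1], k[0], z])])
    return torch.eye(3, dtype=w.dtype) + torch.sin(theta) * K \
        + (1 - torch.cos(theta)) * (K @ K)

class LearnableCameras(nn.Module):
    """Per-image extrinsics and one shared focal length, all trainable."""
    def __init__(self, n_images: int, height: int, width: int):
        super().__init__()
        self.rot = nn.Parameter(torch.zeros(n_images, 3))    # axis-angle
        self.trans = nn.Parameter(torch.zeros(n_images, 3))  # camera centers
        self.log_focal = nn.Parameter(torch.zeros(()))       # shared intrinsic
        self.h, self.w = height, width

    def rays(self, i: int, px: torch.Tensor, py: torch.Tensor):
        """World-space ray directions and origins for pixels of image i."""
        f = self.log_focal.exp() * max(self.h, self.w)       # focal in pixels
        dirs = torch.stack([(px - self.w / 2) / f,
                            -(py - self.h / 2) / f,
                            -torch.ones_like(px)], dim=-1)   # camera frame
        R = axis_angle_to_matrix(self.rot[i])
        return dirs @ R.T, self.trans[i].expand_as(dirs)

class TinyNeRF(nn.Module):
    """Toy stand-in for a NeRF backbone: xyz -> (RGB, density)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                                 nn.Linear(128, 4))

def render(nerf, origins, dirs, n_samples=32, near=2.0, far=6.0):
    """Crude volume rendering along each ray (no hierarchical sampling)."""
    t = torch.linspace(near, far, n_samples)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]
    out = nerf.net(pts)                                      # (rays, S, 4)
    rgb, sigma = out[..., :3].sigmoid(), out[..., 3].relu()
    alpha = 1 - torch.exp(-sigma * (far - near) / n_samples)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1 - alpha + 1e-10], -1), -1)[:, :-1]
    return ((alpha * trans)[..., None] * rgb).sum(dim=1)     # (rays, 3)

# One optimizer over BOTH the field and the cameras: gradients from the
# photometric loss flow into poses and focal length as well as the MLP.
N, H, W = 24, 64, 64
images = torch.rand(N, H, W, 3)                              # dummy supervision
cams, nerf = LearnableCameras(N, H, W), TinyNeRF()
opt = torch.optim.Adam(list(nerf.parameters()) + list(cams.parameters()),
                       lr=5e-4)
for step in range(100):
    i = int(torch.randint(0, N, ()))
    px = torch.randint(0, W, (256,)).float()
    py = torch.randint(0, H, (256,)).float()
    dirs, origins = cams.rays(i, px, py)
    loss = ((render(nerf, origins, dirs)
             - images[i, py.long(), px.long()]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A real system built on this idea would add positional encoding, proper near/far bounds, and per-image intrinsics where sensors differ; BARF (listed below) further anneals the positional encoding so that pose gradients stay well-behaved early in training.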
Related papers
- DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features [65.8738034806085]
DistillNeRF is a self-supervised learning framework for understanding 3D environments in autonomous driving scenes.
Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs.
arXiv Detail & Related papers (2024-06-17T21:15:13Z) - D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper introduces a new method for synthesizing dynamic novel views from monocular video, such as smartphone captures.
Our approach represents the scene as a *dynamic neural point cloud*, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids.
arXiv Detail & Related papers (2024-06-14T14:35:44Z) - CTNeRF: Cross-Time Transformer for Dynamic Neural Radiance Field from Monocular Video [25.551944406980297]
We propose a novel approach to generate high-quality novel views from monocular videos of complex and dynamic scenes.
We introduce a module that operates in both the time and frequency domains to aggregate the features of object motion.
Our experiments demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
arXiv Detail & Related papers (2024-01-10T00:40:05Z) - CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental
Learning [23.080474939586654]
We propose a novel camera-parameter-free neural radiance field (CF-NeRF).
CF-NeRF incrementally reconstructs 3D representations and recovers the camera parameters, inspired by incremental structure from motion.
Results demonstrate that CF-NeRF is robust to camera rotation and achieves state-of-the-art results without providing prior information and constraints.
arXiv Detail & Related papers (2023-12-14T09:09:31Z) - NeRFtrinsic Four: An End-To-End Trainable NeRF Jointly Optimizing
Diverse Intrinsic and Extrinsic Camera Parameters [7.165373389474194]
Novel view synthesis using neural radiance fields (NeRF) is the state-of-the-art technique for generating high-quality images from novel viewpoints.
Current research on the joint optimization of camera parameters and NeRF focuses on refining noisy extrinsic camera parameters.
We propose a novel end-to-end trainable approach called NeRFtrinsic Four to address these limitations.
arXiv Detail & Related papers (2023-03-16T15:44:31Z) - DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
arXiv Detail & Related papers (2022-11-20T20:57:02Z) - Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via a neural renderer.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z) - BARF: Bundle-Adjusting Neural Radiance Fields [104.97810696435766]
We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
arXiv Detail & Related papers (2021-04-13T17:59:51Z) - NeRF--: Neural Radiance Fields Without Known Camera Parameters [31.01560143595185]
This paper tackles the problem of novel view synthesis (NVS) from 2D images without known camera poses and intrinsics.
We propose an end-to-end framework, termed NeRF--, for training NeRF models given only RGB images.
arXiv Detail & Related papers (2021-02-14T03:52:34Z) - D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
arXiv Detail & Related papers (2020-11-27T19:06:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.