CamP: Camera Preconditioning for Neural Radiance Fields
- URL: http://arxiv.org/abs/2308.10902v2
- Date: Wed, 30 Aug 2023 23:28:53 GMT
- Title: CamP: Camera Preconditioning for Neural Radiance Fields
- Authors: Keunhong Park, Philipp Henzler, Ben Mildenhall, Jonathan T. Barron,
Ricardo Martin-Brualla
- Abstract summary: NeRFs can be optimized to obtain high-fidelity 3D scene reconstructions of objects and large-scale scenes.
Extrinsic and intrinsic camera parameters are usually estimated using Structure-from-Motion (SfM) methods as a pre-processing step to NeRF.
We propose using a proxy problem to compute a whitening transform that eliminates the correlation between camera parameters and normalizes their effects.
- Score: 56.46526219931002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRF) can be optimized to obtain high-fidelity 3D
scene reconstructions of objects and large-scale scenes. However, NeRFs require
accurate camera parameters as input -- inaccurate camera parameters result in
blurry renderings. Extrinsic and intrinsic camera parameters are usually
estimated using Structure-from-Motion (SfM) methods as a pre-processing step to
NeRF, but these techniques rarely yield perfect estimates. Thus, prior works
have proposed jointly optimizing camera parameters alongside a NeRF, but these
methods are prone to local minima in challenging settings. In this work, we
analyze how different camera parameterizations affect this joint optimization
problem, and observe that standard parameterizations exhibit large differences
in magnitude with respect to small perturbations, which can lead to an
ill-conditioned optimization problem. We propose using a proxy problem to
compute a whitening transform that eliminates the correlation between camera
parameters and normalizes their effects, and we propose to use this transform
as a preconditioner for the camera parameters during joint optimization. Our
preconditioned camera optimization significantly improves reconstruction
quality on scenes from the Mip-NeRF 360 dataset: we reduce error rates (RMSE)
by 67% compared to state-of-the-art NeRF approaches that do not optimize
cameras, such as Zip-NeRF, and by 29% relative to state-of-the-art joint
optimization approaches using the camera parameterization of SCNeRF. Our
approach is easy to implement, does not significantly increase runtime, can be
applied to a wide variety of camera parameterizations, and can
straightforwardly be incorporated into other NeRF-like models.
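The preconditioning idea described in the abstract can be sketched numerically: given the Jacobian of a proxy problem's outputs with respect to the camera parameters, a whitening transform decorrelates the parameters and normalizes the magnitude of their effects, and optimization can then proceed in the whitened space. This is an illustrative sketch under stated assumptions, not the paper's implementation; the proxy Jacobian `J` and the regularizer `eps` are assumed names.

```python
import numpy as np

def whitening_preconditioner(J, eps=1e-8):
    """Compute a whitening transform P from a proxy-problem Jacobian J.

    J : (m, n) Jacobian of m proxy residuals w.r.t. n camera parameters.
    Returns P such that P.T @ cov @ P is (approximately) the identity,
    where cov = J.T @ J / m captures how parameter perturbations
    correlate and how strongly they affect the proxy outputs.
    """
    cov = J.T @ J / J.shape[0]
    # Eigendecomposition of the symmetric covariance-like matrix.
    vals, vecs = np.linalg.eigh(cov)
    # Scale each eigenvector by the inverse square root of its eigenvalue;
    # eps guards against near-zero eigenvalues (nearly unobservable directions).
    return vecs @ np.diag(1.0 / np.sqrt(vals + eps))
```

In joint optimization one would then parameterize the camera as `theta = theta0 + P @ z` and take gradient steps in `z`, so that all whitened directions have comparable effect on the rendering.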
Related papers
- RS-NeRF: Neural Radiance Fields from Rolling Shutter Images [30.719764073204423]
We present RS-NeRF, a method designed to synthesize normal images from novel views using input with RS distortions.
This involves a physical model that replicates the image formation process under RS conditions.
We further address the inherent shortcomings of the basic RS-NeRF model by delving into the RS characteristics and developing algorithms to enhance its functionality.
arXiv Detail & Related papers (2024-07-14T16:27:11Z) - CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental
Learning [23.080474939586654]
We propose a novel camera parameter free neural radiance field (CF-NeRF).
CF-NeRF incrementally reconstructs 3D representations and recovers the camera parameters inspired by incremental structure from motion.
Results demonstrate that CF-NeRF is robust to camera rotation and achieves state-of-the-art results without prior information or constraints.
arXiv Detail & Related papers (2023-12-14T09:09:31Z) - Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion [67.15935067326662]
Event cameras offer low power, low latency, high temporal resolution and high dynamic range.
NeRF is seen as the leading candidate for efficient and effective scene representation.
We propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras.
arXiv Detail & Related papers (2023-09-15T17:52:08Z) - MC-NeRF: Multi-Camera Neural Radiance Fields for Multi-Camera Image Acquisition Systems [22.494866649536018]
Neural Radiance Fields (NeRF) use multi-view images for 3D scene representation, demonstrating remarkable performance.
Most previous NeRF-based methods assume a unique camera and rarely consider multi-camera scenarios.
We propose MC-NeRF, a method that enables joint optimization of both intrinsic and extrinsic parameters alongside NeRF.
arXiv Detail & Related papers (2023-09-14T16:40:44Z) - Density Invariant Contrast Maximization for Neuromorphic Earth
Observations [55.970609838687864]
Contrast Maximization (CMax) techniques are widely used in event-based vision systems to estimate the motion parameters of the camera and generate high-contrast images.
These techniques are noise-intolerant and suffer from the multiple-extrema problem, which arises when the scene contains more noisy events than structure.
Our proposed solution overcomes the multiple extrema and noise-intolerance problems by correcting the warped event before calculating the contrast.
arXiv Detail & Related papers (2023-04-27T12:17:40Z) - NeRFtrinsic Four: An End-To-End Trainable NeRF Jointly Optimizing
Diverse Intrinsic and Extrinsic Camera Parameters [7.165373389474194]
Novel view synthesis using neural radiance fields (NeRF) is the state-of-the-art technique for generating high-quality images from novel viewpoints.
Current research on the joint optimization of camera parameters and NeRF focuses on refining noisy extrinsic camera parameters.
We propose a novel end-to-end trainable approach called NeRFtrinsic Four to address these limitations.
arXiv Detail & Related papers (2023-03-16T15:44:31Z) - Self-Calibrating Neural Radiance Fields [68.64327335620708]
We jointly learn the geometry of the scene and the accurate camera parameters without any calibration objects.
Our camera model consists of a pinhole model, a fourth order radial distortion, and a generic noise model that can learn arbitrary non-linear camera distortions.
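The camera model described above (pinhole projection plus a fourth-order radial distortion) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the generic noise model is omitted, and the parameter names (`f`, `c`, `k1`, `k2`) are assumptions for the sketch.

```python
import numpy as np

def project(X, f, c, k1, k2):
    """Project 3D camera-frame points X (n, 3) to pixel coordinates.

    f      : focal length (pixels)
    c      : (2,) principal point
    k1, k2 : radial distortion coefficients (terms up to r^4)
    """
    # Perspective divide of the pinhole model.
    x = X[:, :2] / X[:, 2:3]
    # Squared radius from the optical axis.
    r2 = np.sum(x ** 2, axis=1, keepdims=True)
    # Fourth-order radial distortion factor: 1 + k1*r^2 + k2*r^4.
    distortion = 1.0 + k1 * r2 + k2 * r2 ** 2
    # Scale by focal length and shift by the principal point.
    return f * x * distortion + c
```

A point on the optical axis, e.g. `X = [[0, 0, 1]]`, projects to the principal point `c` regardless of the distortion coefficients.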
arXiv Detail & Related papers (2021-08-31T13:34:28Z) - FLEX: Parameter-free Multi-view 3D Human Motion Reconstruction [70.09086274139504]
Multi-view algorithms strongly depend on camera parameters, in particular, the relative positions among the cameras.
We introduce FLEX, an end-to-end parameter-free multi-view model.
We demonstrate results on the Human3.6M and KTH Multi-view Football II datasets.
arXiv Detail & Related papers (2021-05-05T09:08:12Z) - Universal and Flexible Optical Aberration Correction Using Deep-Prior
Based Deconvolution [51.274657266928315]
We propose a PSF aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high quality version via incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z) - NeRF--: Neural Radiance Fields Without Known Camera Parameters [31.01560143595185]
This paper tackles the problem of novel view synthesis (NVS) from 2D images without known camera poses and intrinsics.
We propose an end-to-end framework, termed NeRF--, for training NeRF models given only RGB images.
arXiv Detail & Related papers (2021-02-14T03:52:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.