Self-Calibrating Neural Radiance Fields
- URL: http://arxiv.org/abs/2108.13826v2
- Date: Thu, 2 Sep 2021 13:26:43 GMT
- Title: Self-Calibrating Neural Radiance Fields
- Authors: Yoonwoo Jeong, Seokjun Ahn, Christopher Choy, Animashree Anandkumar,
Minsu Cho, Jaesik Park
- Abstract summary: We jointly learn the geometry of the scene and the accurate camera parameters without any calibration objects.
Our camera model consists of a pinhole model, a fourth order radial distortion, and a generic noise model that can learn arbitrary non-linear camera distortions.
- Score: 68.64327335620708
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this work, we propose a camera self-calibration algorithm for generic
cameras with arbitrary non-linear distortions. We jointly learn the geometry of
the scene and the accurate camera parameters without any calibration objects.
Our camera model consists of a pinhole model, a fourth order radial distortion,
and a generic noise model that can learn arbitrary non-linear camera
distortions. While traditional self-calibration algorithms mostly rely on
geometric constraints, we additionally incorporate photometric consistency.
This requires learning the geometry of the scene, and we use Neural Radiance
Fields (NeRF). We also propose a new geometric loss function, viz., projected
ray distance loss, to incorporate geometric consistency for complex non-linear
camera models. We validate our approach on standard real image datasets and
demonstrate that our model can learn the camera intrinsics and extrinsics
(pose) from scratch without COLMAP initialization. Also, we show that learning
accurate camera models in a differentiable manner allows us to improve PSNR
over baselines. Our module is an easy-to-use plugin that can be applied to NeRF
variants to improve performance. The code and data are currently available at
https://github.com/POSTECH-CVLab/SCNeRF.
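The camera model described in the abstract (pinhole plus fourth-order radial distortion) and a ray-consistency loss in the spirit of the projected ray distance can be sketched as below. This is a minimal illustration, not the paper's implementation: all names (`pixel_to_ray`, `ray_distance`, `k1`, `k2`) are assumed for this sketch, the undistortion step is a one-step approximation, and the plain ray-to-ray distance stands in for the paper's projected ray distance, which additionally projects the residual onto the image planes. See the SCNeRF repository for the actual code.

```python
import numpy as np

def pixel_to_ray(u, v, fx, fy, cx, cy, k1, k2):
    """Back-project a pixel to a unit camera ray under a pinhole model with
    fourth-order radial distortion (1 + k1*r^2 + k2*r^4). Names are illustrative."""
    # Normalized pinhole coordinates.
    x = (u - cx) / fx
    y = (v - cy) / fy
    # Approximate undistortion with a single fixed-point step; a real
    # implementation would iterate or invert the distortion polynomial.
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    x, y = x / scale, y / scale
    d = np.array([x, y, 1.0])
    return d / np.linalg.norm(d)  # unit ray direction in the camera frame

def ray_distance(o1, d1, o2, d2):
    """Shortest distance between two rays o + t*d (unit directions assumed).
    A stand-in for the paper's projected ray distance loss."""
    n = np.cross(d1, d2)
    n_norm = np.linalg.norm(n)
    if n_norm < 1e-9:  # (near-)parallel rays: point-to-line distance
        return np.linalg.norm(np.cross(o2 - o1, d1))
    return abs(np.dot(o2 - o1, n)) / n_norm
```

A self-calibration loss in this spirit would sum `ray_distance` over matched pixel pairs across views and backpropagate it into the intrinsics, distortion coefficients, and poses; in the paper these are optimized jointly with the NeRF through automatic differentiation.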
Related papers
- GeoCalib: Learning Single-image Calibration with Geometric Optimization [89.84142934465685]
From a single image, visual cues can help deduce intrinsic and extrinsic camera parameters like the focal length and the gravity direction.
Current approaches to this problem are based on either classical geometry with lines and vanishing points or on deep neural networks trained end-to-end.
We introduce GeoCalib, a deep neural network that leverages universal rules of 3D geometry through an optimization process.
arXiv Detail & Related papers (2024-09-10T17:59:55Z)
- Camera Calibration through Geometric Constraints from Rotation and Projection Matrices [4.100632594106989]
We propose a novel constraints-based loss for measuring the intrinsic and extrinsic parameters of a camera.
Our methodology is a hybrid approach that employs the learning power of a neural network to estimate the desired parameters.
Our proposed approach demonstrates improvements across all parameters when compared to the state-of-the-art (SOTA) benchmarks.
arXiv Detail & Related papers (2024-02-13T13:07:34Z)
- How to turn your camera into a perfect pinhole model [0.38233569758620056]
We propose a novel approach that involves a pre-processing step to remove distortions from images.
Our method does not need to assume any distortion model and can be applied to severely warped images.
The resulting pinhole model enables substantial improvements to many algorithms and applications.
arXiv Detail & Related papers (2023-09-20T13:54:29Z)
- Towards Nonlinear-Motion-Aware and Occlusion-Robust Rolling Shutter Correction [54.00007868515432]
Existing methods face challenges in estimating the accurate correction field due to the uniform velocity assumption.
We propose a geometry-based Quadratic Rolling Shutter (QRS) motion solver, which precisely estimates the high-order correction field of individual pixels.
Our method surpasses the state-of-the-art by +4.98, +0.77, and +4.33 of PSNR on Carla-RS, Fastec-RS, and BS-RSC datasets, respectively.
arXiv Detail & Related papers (2023-03-31T15:09:18Z)
- Deep Learning for Camera Calibration and Beyond: A Survey [100.75060862015945]
Camera calibration involves estimating camera parameters to infer geometric features from captured sequences.
Recent efforts show that learning-based solutions have the potential to replace repetitive manual calibration work.
arXiv Detail & Related papers (2023-03-19T04:00:05Z)
- Multi-task Learning for Camera Calibration [3.274290296343038]
We present a unique method for predicting intrinsic (principal point offset and focal length) and extrinsic (baseline, pitch, and translation) properties from a pair of images.
Our camera projection loss (CPL) method reconstructs 3D points with a camera-model neural network and uses the reconstruction loss to estimate the desired camera parameters.
arXiv Detail & Related papers (2022-11-22T17:39:31Z)
- Self-Supervised Camera Self-Calibration from Video [34.35533943247917]
We propose a learning algorithm to regress per-sequence calibration parameters using an efficient family of general camera models.
Our procedure achieves self-calibration results with sub-pixel reprojection error, outperforming other learning-based methods.
arXiv Detail & Related papers (2021-12-06T19:42:05Z)
- NeRF--: Neural Radiance Fields Without Known Camera Parameters [31.01560143595185]
This paper tackles the problem of novel view synthesis (NVS) from 2D images without known camera poses and intrinsics.
We propose an end-to-end framework, termed NeRF--, for training NeRF models given only RGB images.
arXiv Detail & Related papers (2021-02-14T03:52:34Z)
- Wide-angle Image Rectification: A Survey [86.36118799330802]
Wide-angle images contain distortions that violate the assumptions underlying the pinhole camera model.
Image rectification, which aims to correct these distortions, can solve these problems.
We present a detailed description and discussion of the camera models used in different approaches.
Next, we review both traditional geometry-based image rectification methods and deep learning-based methods.
arXiv Detail & Related papers (2020-10-30T17:28:40Z)
- Neural Ray Surfaces for Self-Supervised Learning of Depth and Ego-motion [51.19260542887099]
We show that self-supervision can be used to learn accurate depth and ego-motion estimation without prior knowledge of the camera model.
Inspired by the geometric model of Grossberg and Nayar, we introduce Neural Ray Surfaces (NRS), convolutional networks that represent pixel-wise projection rays.
We demonstrate the use of NRS for self-supervised learning of visual odometry and depth estimation from raw videos obtained using a wide variety of camera systems.
arXiv Detail & Related papers (2020-08-15T02:29:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.