NOCaL: Calibration-Free Semi-Supervised Learning of Odometry and Camera
Intrinsics
- URL: http://arxiv.org/abs/2210.07435v2
- Date: Tue, 18 Oct 2022 06:52:22 GMT
- Title: NOCaL: Calibration-Free Semi-Supervised Learning of Odometry and Camera
Intrinsics
- Authors: Ryan Griffiths, Jack Naylor, Donald G. Dansereau
- Abstract summary: We present NOCaL, Neural odometry and Calibration using Light fields, a semi-supervised learning architecture capable of interpreting previously unseen cameras without calibration.
We demonstrate NOCaL on rendered and captured imagery using conventional cameras, demonstrating calibration-free odometry and novel view synthesis.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are a multitude of emerging imaging technologies that could benefit
robotics. However, the need for bespoke models, calibration and low-level
processing represents a key barrier to their adoption. In this work we present
NOCaL, Neural odometry and Calibration using Light fields, a semi-supervised
learning architecture capable of interpreting previously unseen cameras without
calibration. NOCaL learns to estimate camera parameters, relative pose, and
scene appearance. It employs a scene-rendering hypernetwork pretrained on a
large number of existing cameras and scenes, and adapts to previously unseen
cameras using a small supervised training set to enforce metric scale. We
demonstrate NOCaL on rendered and captured imagery using conventional cameras,
demonstrating calibration-free odometry and novel view synthesis. This work
represents a key step toward automating the interpretation of general camera
geometries and emerging imaging technologies.
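The abstract describes learning camera parameters and relative pose jointly. As a point of reference for what those quantities are, here is a minimal numpy sketch of a pinhole projection parameterised by intrinsics K and a relative pose (R, t); this illustrates the geometry being estimated, not the NOCaL architecture itself, and the intrinsics values are hypothetical:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project world points into pixel coordinates with a pinhole model.

    points_3d : (N, 3) points in the world frame
    K         : (3, 3) intrinsics (focal lengths, principal point)
    R, t      : relative pose of the camera (world -> camera)
    """
    cam = points_3d @ R.T + t          # world -> camera frame
    uv = cam @ K.T                     # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide -> pixels

# Hypothetical intrinsics for a 640x480 camera (fx = fy = 500).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)          # identity pose
pts = np.array([[0.0, 0.0, 2.0]])      # a point 2 m straight ahead
print(project(pts, K, R, t))           # lands at the principal point (320, 240)
```

In a learned setting, K, R and t become network outputs and the projection above supplies the photometric or reprojection loss that trains them.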
Related papers
- Inverting the Imaging Process by Learning an Implicit Camera Model [73.81635386829846]
This paper proposes a novel implicit camera model which represents the physical imaging process of a camera as a deep neural network.
We demonstrate the power of this new implicit camera model on two inverse imaging tasks.
arXiv Detail & Related papers (2023-04-25T11:55:03Z)
- Deep Learning for Camera Calibration and Beyond: A Survey [100.75060862015945]
Camera calibration involves estimating camera parameters to infer geometric features from captured sequences.
Recent efforts show that learning-based solutions have the potential to replace the repetitive work of manual calibration.
arXiv Detail & Related papers (2023-03-19T04:00:05Z)
- A Deep Perceptual Measure for Lens and Camera Calibration [35.03926427249506]
In place of the traditional multi-image calibration process, we propose to infer the camera calibration parameters directly from a single image.
We train this network using automatically generated samples from a large-scale panorama dataset.
We conduct a large-scale human perception study where we ask participants to judge the realism of 3D objects composited with correct and biased camera calibration parameters.
arXiv Detail & Related papers (2022-08-25T18:40:45Z)
- Self-Supervised Camera Self-Calibration from Video [34.35533943247917]
We propose a learning algorithm to regress per-sequence calibration parameters using an efficient family of general camera models.
Our procedure achieves self-calibration results with sub-pixel reprojection error, outperforming other learning-based methods.
arXiv Detail & Related papers (2021-12-06T19:42:05Z)
- Self-Calibrating Neural Radiance Fields [68.64327335620708]
We jointly learn the geometry of the scene and the accurate camera parameters without any calibration objects.
Our camera model consists of a pinhole model, a fourth order radial distortion, and a generic noise model that can learn arbitrary non-linear camera distortions.
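The camera model described above (pinhole plus fourth-order radial distortion) can be sketched in a few lines of numpy; the focal length, principal point and distortion coefficients below are hypothetical values, not ones from the paper:

```python
import numpy as np

def project_distorted(points_cam, fx, fy, cx, cy, k1, k2):
    """Pinhole projection with fourth-order radial distortion.

    points_cam : (N, 3) points already in the camera frame
    k1, k2     : radial distortion coefficients (r^2 and r^4 terms)
    """
    x = points_cam[:, 0] / points_cam[:, 2]   # normalized coordinates
    y = points_cam[:, 1] / points_cam[:, 2]
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2      # radial distortion factor
    u = fx * x * scale + cx
    v = fy * y * scale + cy
    return np.stack([u, v], axis=1)

pts = np.array([[0.5, 0.0, 1.0]])  # off-axis point in the camera frame
undistorted = project_distorted(pts, 500, 500, 320, 240, 0.0, 0.0)
distorted = project_distorted(pts, 500, 500, 320, 240, -0.1, 0.01)
print(undistorted, distorted)  # negative k1 pulls the point toward the center
```

Making k1 and k2 learnable alongside the pinhole parameters is what lets such a model absorb lens distortion without a calibration target.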
arXiv Detail & Related papers (2021-08-31T13:34:28Z)
- Unsupervised Learning of Depth Estimation and Visual Odometry for Sparse Light Field Cameras [0.0]
We generalise techniques from unsupervised learning to allow a robot to autonomously interpret new kinds of cameras.
We consider emerging sparse light field (LF) cameras, which capture a subset of the 4D LF function describing the set of light rays passing through a plane.
We introduce a generalised encoding of sparse LFs that allows unsupervised learning of odometry and depth.
arXiv Detail & Related papers (2021-03-21T07:13:14Z)
- Neural Ray Surfaces for Self-Supervised Learning of Depth and Ego-motion [51.19260542887099]
We show that self-supervision can be used to learn accurate depth and ego-motion estimation without prior knowledge of the camera model.
Inspired by the geometric model of Grossberg and Nayar, we introduce Neural Ray Surfaces (NRS), convolutional networks that represent pixel-wise projection rays.
We demonstrate the use of NRS for self-supervised learning of visual odometry and depth estimation from raw videos obtained using a wide variety of camera systems.
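The ray-surface idea — one 3D ray direction stored per pixel instead of a parametric camera model — can be illustrated with a plain numpy sketch. Here the per-pixel ray table is filled analytically from hypothetical pinhole parameters, whereas NRS predicts it with a convolutional network:

```python
import numpy as np

def pinhole_ray_surface(w, h, fx, fy, cx, cy):
    """Build a per-pixel ray-direction table (a 'ray surface').

    A ray surface stores one unit ray direction per pixel, so any central
    camera can be represented without a parametric model. Here we fill it
    analytically from pinhole parameters purely as an example.
    """
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(u, float)], -1)
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)  # unit rays

def unproject(rays, depth):
    """Lift a depth map to 3D points using the ray table (z == depth)."""
    return rays * depth[..., None] / rays[..., 2:3]

rays = pinhole_ray_surface(640, 480, 500.0, 500.0, 320.0, 240.0)
pts = unproject(rays, np.full((480, 640), 2.0))
print(pts[240, 320])  # center pixel: the point 2 m straight ahead
```

Because the table itself is the camera model, replacing the analytic fill with a network output is what makes the approach camera-agnostic.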
arXiv Detail & Related papers (2020-08-15T02:29:13Z)
- Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
arXiv Detail & Related papers (2020-07-30T09:21:04Z)
- DeProCams: Simultaneous Relighting, Compensation and Shape Reconstruction for Projector-Camera Systems [91.45207885902786]
We propose a novel end-to-end trainable model named DeProCams to learn the photometric and geometric mappings of ProCams.
DeProCams explicitly decomposes the projector-camera image mappings into three subprocesses: shading attributes estimation, rough direct light estimation and photorealistic neural rendering.
In our experiments, DeProCams shows clear advantages over previous arts with promising quality and being fully differentiable.
arXiv Detail & Related papers (2020-03-06T05:49:16Z)