Minimal Solvers for Single-View Lens-Distorted Camera Auto-Calibration
- URL: http://arxiv.org/abs/2011.08988v1
- Date: Tue, 17 Nov 2020 22:32:17 GMT
- Title: Minimal Solvers for Single-View Lens-Distorted Camera Auto-Calibration
- Authors: Yaroslava Lochman, Oles Dobosevych, Rostyslav Hryniv, James Pritts
- Abstract summary: We show that solvers using feature combinations can recover more accurate calibrations than solvers using only one feature type.
State-of-the-art performance is demonstrated on a standard dataset of lens-distorted urban images.
- Score: 4.152165675786137
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes minimal solvers that use combinations of imaged
translational symmetries and parallel scene lines to jointly estimate lens
undistortion with either affine rectification or focal length and absolute
orientation. We use constraints provided by orthogonal scene planes to recover
the focal length. We show that solvers using feature combinations can recover
more accurate calibrations than solvers using only one feature type on scenes
that have a balance of lines and texture. We also show that the proposed
solvers are complementary and can be used together in a RANSAC-based estimator
to improve auto-calibration accuracy. State-of-the-art performance is
demonstrated on a standard dataset of lens-distorted urban images. The code is
available at https://github.com/ylochman/single-view-autocalib.
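The algebraic details of the proposed solvers are given in the paper and the linked repository; purely as an illustrative sketch, the Python snippet below shows the one-parameter division model commonly used to parameterize radial lens distortion, together with a generic RANSAC-style loop that alternates between a pool of complementary minimal solvers. The function names, the (sample_size, solve_fn) solver interface, and the scoring callback are assumptions for illustration, not the repository's API.

```python
# Illustrative sketch only -- not the paper's solvers or the repository's API.
import numpy as np

def undistort_division(points_d, lam, center=(0.0, 0.0)):
    """One-parameter division model: x_u = c + (x_d - c) / (1 + lam * r^2),
    where r is the distance of the distorted point x_d from the center c."""
    c = np.asarray(center, dtype=float)
    p = np.asarray(points_d, dtype=float) - c
    r2 = np.sum(p * p, axis=-1, keepdims=True)
    return c + p / (1.0 + lam * r2)

def ransac_autocalib(features, solvers, score, n_iters=1000, seed=0):
    """Generic RANSAC-style loop that alternates between complementary
    minimal solvers. `solvers` is a list of (sample_size, solve_fn) pairs;
    solve_fn maps a minimal sample of features to a list of candidate
    calibrations (e.g. a distortion parameter plus a rectifying homography).
    `score(calib, features)` returns a consensus score such as an inlier count."""
    rng = np.random.default_rng(seed)
    best_calib, best_score = None, -np.inf
    for _ in range(n_iters):
        size, solve = solvers[rng.integers(len(solvers))]
        idx = rng.choice(len(features), size=size, replace=False)
        for calib in solve([features[i] for i in idx]):
            s = score(calib, features)
            if s > best_score:
                best_calib, best_score = calib, s
    return best_calib, best_score
```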
Related papers
- Vanishing Point Estimation in Uncalibrated Images with Prior Gravity Direction [82.72686460985297]
We tackle the problem of estimating a Manhattan frame.
We derive two new 2-line solvers, one of which does not suffer from singularities affecting existing solvers.
We also design a new non-minimal method, running on an arbitrary number of lines, to boost the performance in local optimization.
arXiv Detail & Related papers (2023-08-21T13:03:25Z)
- An Adaptive Method for Camera Attribution under Complex Radial Distortion Corrections [77.34726150561087]
In-camera or out-camera software/firmware can alter the supporting grid of the image, which hampers PRNU-based camera attribution.
Existing solutions try to invert or estimate the correction using radial transformations parameterized by a few variables in order to limit the computational load.
We propose an adaptive algorithm that divides the image into concentric annuli and can handle sophisticated corrections, such as those applied out-camera by third-party software like Adobe Lightroom, Photoshop, GIMP and PT-Lens.
arXiv Detail & Related papers (2023-02-28T08:44:00Z)
- TartanCalib: Iterative Wide-Angle Lens Calibration using Adaptive SubPixel Refinement of AprilTags [23.568127229446965]
Calibrating wide-angle lenses with current state-of-the-art techniques yields poor results due to extreme distortion at the edges of the image.
We present our methodology for accurate wide-angle calibration.
arXiv Detail & Related papers (2022-10-05T18:57:07Z)
- Self-Calibrating Neural Radiance Fields [68.64327335620708]
We jointly learn the geometry of the scene and the accurate camera parameters without any calibration objects.
Our camera model consists of a pinhole model, fourth-order radial distortion, and a generic noise model that can learn arbitrary non-linear camera distortions (a minimal sketch of a pinhole-plus-radial-distortion projection appears after this list).
arXiv Detail & Related papers (2021-08-31T13:34:28Z)
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- CRLF: Automatic Calibration and Refinement based on Line Feature for LiDAR and Camera in Road Scenes [16.201111055979453]
We propose a novel method to calibrate the extrinsic parameters between a LiDAR and a camera in road scenes.
Our method introduces line features from static straight-line-shaped objects such as road lanes and poles in both image and point cloud.
We conduct extensive experiments on KITTI and our in-house dataset; quantitative and qualitative results demonstrate the robustness and accuracy of our method.
arXiv Detail & Related papers (2021-03-08T06:02:44Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
- UnRectDepthNet: Self-Supervised Monocular Depth Estimation using a Generic Framework for Handling Common Camera Distortion Models [8.484676769284578]
We propose a generic scale-aware self-supervised pipeline for estimating depth, Euclidean distance, and visual odometry from unrectified monocular videos.
The proposed algorithm is evaluated further on the KITTI rectified dataset, and we achieve state-of-the-art results.
arXiv Detail & Related papers (2020-07-13T20:35:05Z)
- Self-Calibration Supported Robust Projective Structure-from-Motion [80.15392629310507]
We propose a unified Structure-from-Motion (SfM) method, in which the matching process is supported by self-calibration constraints.
We show experimental results demonstrating robust multiview matching and accurate camera calibration by exploiting these constraints.
arXiv Detail & Related papers (2020-07-04T08:47:10Z)
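As referenced in the Self-Calibrating Neural Radiance Fields entry above, the sketch below shows a pinhole projection followed by fourth-order (two-coefficient) radial distortion. The function name and the symbols fx, fy, cx, cy, k1, k2 are generic intrinsics and distortion coefficients assumed for illustration, not parameters taken from that paper.

```python
# Minimal sketch: pinhole projection with fourth-order radial distortion.
# fx, fy are focal lengths, (cx, cy) the principal point; k1, k2 are the
# second- and fourth-order radial distortion coefficients (assumed names).
import numpy as np

def project_pinhole_radial(X, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Project 3D camera-frame points X (N,3) to distorted pixel coordinates."""
    X = np.asarray(X, dtype=float)
    x = X[:, 0] / X[:, 2]                 # normalized image coordinates
    y = X[:, 1] / X[:, 2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2      # radial distortion factor
    u = fx * (x * d) + cx
    v = fy * (y * d) + cy
    return np.stack([u, v], axis=-1)

# Example: project a point one meter in front of the camera.
print(project_pinhole_radial([[0.1, -0.05, 1.0]], fx=800, fy=800,
                             cx=320, cy=240, k1=-0.2, k2=0.05))
```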
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.