Superaccurate Camera Calibration via Inverse Rendering
- URL: http://arxiv.org/abs/2003.09177v1
- Date: Fri, 20 Mar 2020 10:26:16 GMT
- Title: Superaccurate Camera Calibration via Inverse Rendering
- Authors: Morten Hannemose and Jakob Wilm and Jeppe Revall Frisvad
- Abstract summary: We propose a new method for camera calibration using the principle of inverse rendering.
Instead of relying solely on detected feature points, we use an estimate of the internal parameters and the pose of the calibration object to implicitly render a non-photorealistic equivalent of the optical features.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The most prevalent routine for camera calibration is based on the detection
of well-defined feature points on a purpose-made calibration artifact. These
could be checkerboard saddle points, circles, rings or triangles, often printed
on a planar structure. The feature points are first detected and then used in a
nonlinear optimization to estimate the internal camera parameters. We propose a
new method for camera calibration using the principle of inverse rendering.
Instead of relying solely on detected feature points, we use an estimate of the
internal parameters and the pose of the calibration object to implicitly render
a non-photorealistic equivalent of the optical features. This enables us to
compute pixel-wise differences in the image domain without interpolation
artifacts. We can then improve our estimate of the internal parameters by
minimizing pixel-wise least-squares differences. In this way, our model
optimizes a meaningful metric in the image space, assuming the normally
distributed noise characteristic of camera sensors. We demonstrate using
synthetic and real
camera images that our method improves the accuracy of estimated camera
parameters as compared with current state-of-the-art calibration routines. Our
method also estimates these parameters more robustly in the presence of noise
and in situations where the number of calibration images is limited.
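The pixel-wise least-squares scheme the abstract describes can be sketched in a few lines. The example below is a minimal illustration, not the authors' implementation: a hypothetical soft-edged disc stands in for the rendered non-photorealistic feature, and its parameters are recovered by minimizing pixel-wise differences against a noisy observation.

```python
import numpy as np
from scipy.optimize import least_squares

H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W]

def render(params):
    """Render a soft-edged disc; params = (cx, cy, radius)."""
    cx, cy, r = params
    d = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2)
    # A sigmoid edge profile gives the optimizer smooth gradients.
    return 1.0 / (1.0 + np.exp((d - r) / 1.5))

rng = np.random.default_rng(0)
true_params = np.array([30.0, 34.0, 10.0])
# "Observed" image: true rendering plus normally distributed sensor noise.
observed = render(true_params) + rng.normal(0.0, 0.02, (H, W))

def residuals(params):
    # Pixel-wise differences in the image domain, no interpolation needed.
    return (render(params) - observed).ravel()

fit = least_squares(residuals, x0=[28.0, 30.0, 8.0])
print(fit.x)  # close to (30, 34, 10)
```

The key point mirrors the abstract: the objective is a per-pixel image-space residual, so under normally distributed sensor noise the least-squares fit is the maximum-likelihood estimate of the parameters.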
Related papers
- Accurate Checkerboard Corner Detection under Defocus [0.0]
This paper focuses on enhancing feature extraction for chessboard corner detection.
We propose a novel sub-pixel refinement approach based on symmetry, which significantly improves accuracy for visible light cameras.
Our method demonstrates superior performance, achieving substantial accuracy improvements over existing techniques.
arXiv Detail & Related papers (2024-10-17T09:23:30Z) - Learning to Make Keypoints Sub-Pixel Accurate [80.55676599677824]
This work addresses the challenge of sub-pixel accuracy in detecting 2D local features.
We propose a novel network that enhances any detector with sub-pixel precision by learning an offset vector for detected features.
arXiv Detail & Related papers (2024-07-16T12:39:56Z) - Single-image camera calibration with model-free distortion correction [0.0]
This paper proposes a method for estimating the complete set of calibration parameters from a single image of a planar speckle pattern covering the entire sensor.
The correspondence between image points and physical points on the calibration target is obtained using Digital Image Correlation.
At the end of the procedure, a dense and uniform model-free distortion map is obtained over the entire image.
arXiv Detail & Related papers (2024-03-02T16:51:35Z) - E-Calib: A Fast, Robust and Accurate Calibration Toolbox for Event
Cameras [34.71767308204867]
We present E-Calib, a novel, fast, robust, and accurate calibration toolbox for event cameras.
The proposed method is tested in a variety of rigorous experiments for different event camera models.
arXiv Detail & Related papers (2023-06-15T12:16:38Z) - A Deep Perceptual Measure for Lens and Camera Calibration [35.03926427249506]
In place of the traditional multi-image calibration process, we propose to infer the camera calibration parameters directly from a single image.
We train this network using automatically generated samples from a large-scale panorama dataset.
We conduct a large-scale human perception study where we ask participants to judge the realism of 3D objects composited with correct and biased camera calibration parameters.
arXiv Detail & Related papers (2022-08-25T18:40:45Z) - Self-Calibrating Neural Radiance Fields [68.64327335620708]
We jointly learn the geometry of the scene and the accurate camera parameters without any calibration objects.
Our camera model consists of a pinhole model, a fourth order radial distortion, and a generic noise model that can learn arbitrary non-linear camera distortions.
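The distortion component of the camera model above can be sketched as follows. This is an assumed standard form, not the paper's code, and the learned non-linear noise model is omitted: normalized pinhole coordinates are scaled by a fourth-order radial polynomial.

```python
# Fourth-order radial distortion applied to normalized image coordinates
# (x, y), i.e. pinhole coordinates before multiplication by the intrinsics.
def distort(x, y, k1, k2):
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2  # terms up to r^4
    return x * factor, y * factor

xd, yd = distort(0.5, 0.0, k1=-0.1, k2=0.01)
print(xd, yd)  # 0.5 * (1 - 0.1*0.25 + 0.01*0.0625) = 0.4878125, 0.0
```

Because the coefficients k1 and k2 enter the projection differentiably, they can be optimized jointly with scene geometry by gradient descent, which is what makes calibration-object-free self-calibration possible.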
arXiv Detail & Related papers (2021-08-31T13:34:28Z) - How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z) - Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor
Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z) - Zero-Shot Calibration of Fisheye Cameras [0.010956300138340428]
The proposed method estimates camera parameters from the horizontal and vertical field of view information of the camera without any image acquisition.
The method is particularly useful for wide-angle or fisheye cameras that have large image distortion.
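To see why field-of-view information alone can fix a calibration parameter without any image acquisition, consider the ideal pinhole case (a simplified illustration, not the paper's fisheye model): the horizontal FOV and sensor width determine the focal length in pixels directly.

```python
import math

# Hypothetical helper: focal length in pixels from horizontal FOV (degrees)
# and image width (pixels), assuming an ideal distortion-free pinhole camera.
def focal_from_fov(fov_deg, width_px):
    return (width_px / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

print(focal_from_fov(90.0, 1280))  # 640.0, since tan(45 deg) = 1
```

A fisheye lens needs a different projection model (e.g. equidistant), but the same principle applies: the FOV specification constrains the focal parameter in closed form.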
arXiv Detail & Related papers (2020-11-30T08:10:24Z) - Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
arXiv Detail & Related papers (2020-07-30T09:21:04Z) - Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all of it) and is not responsible for any consequences of its use.