A Deep Perceptual Measure for Lens and Camera Calibration
- URL: http://arxiv.org/abs/2208.12300v2
- Date: Wed, 26 Jul 2023 22:04:46 GMT
- Title: A Deep Perceptual Measure for Lens and Camera Calibration
- Authors: Yannick Hold-Geoffroy, Dominique Piché-Meunier, Kalyan Sunkavalli,
Jean-Charles Bazin, François Rameau and Jean-François Lalonde
- Abstract summary: In place of the traditional multi-image calibration process, we propose to infer the camera calibration parameters directly from a single image.
We train this network using automatically generated samples from a large-scale panorama dataset.
We conduct a large-scale human perception study where we ask participants to judge the realism of 3D objects composited with correct and biased camera calibration parameters.
- Score: 35.03926427249506
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Image editing and compositing have become ubiquitous in entertainment, from
digital art to AR and VR experiences. To produce beautiful composites, the
camera needs to be geometrically calibrated, which can be tedious and requires
a physical calibration target. In place of the traditional multi-image
calibration process, we propose to infer the camera calibration parameters such
as pitch, roll, field of view, and lens distortion directly from a single image
using a deep convolutional neural network. We train this network using
automatically generated samples from a large-scale panorama dataset, yielding
competitive accuracy in terms of standard ℓ2 error. However, we argue that
minimizing such standard error metrics might not be optimal for many
applications. In this work, we investigate human sensitivity to inaccuracies in
geometric camera calibration. To this end, we conduct a large-scale human
perception study where we ask participants to judge the realism of 3D objects
composited with correct and biased camera calibration parameters. Based on this
study, we develop a new perceptual measure for camera calibration and
demonstrate that our deep calibration network outperforms previous single-image
based calibration methods both on standard metrics as well as on this novel
perceptual measure. Finally, we demonstrate the use of our calibration network
for several applications, including virtual object insertion, image retrieval,
and compositing. A demonstration of our approach is available at
https://lvsn.github.io/deepcalib .
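The calibration parameters named in the abstract (pitch, roll, field of view, and lens distortion) and the ℓ2 error metric it mentions can be sketched as follows. This is an illustrative toy model only: the function names, the single-coefficient radial distortion, and the vertical-FoV parameterization are assumptions for the sketch, not the paper's actual formulation.

```python
import numpy as np

def rotation_from_pitch_roll(pitch, roll):
    """Camera rotation built from pitch (about x) and roll (about z), in radians."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rz @ Rx

def project(points, pitch, roll, vfov, k1, height=480, width=640):
    """Project 3D points with a pinhole model plus one radial distortion term.

    vfov is the vertical field of view in radians; k1 is a single
    radial-distortion coefficient (a simplified stand-in for whatever
    lens model the network actually predicts).
    """
    f = (height / 2) / np.tan(vfov / 2)   # focal length in pixels from vertical FoV
    cam = points @ rotation_from_pitch_roll(pitch, roll).T
    x = cam[:, 0] / cam[:, 2]
    y = cam[:, 1] / cam[:, 2]
    r2 = x ** 2 + y ** 2
    d = 1 + k1 * r2                        # radial distortion factor
    u = f * x * d + width / 2
    v = f * y * d + height / 2
    return np.stack([u, v], axis=1)

def l2_error(pred, gt):
    """Standard l2 error between predicted and ground-truth parameter vectors."""
    return float(np.linalg.norm(np.asarray(pred) - np.asarray(gt)))
```

With zero pitch, roll, and distortion, a point on the optical axis projects to the principal point, and `l2_error` is the scalar metric the abstract argues may not align with human perception of realism.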
Related papers
- CasCalib: Cascaded Calibration for Motion Capture from Sparse Unsynchronized Cameras [18.51320244029833]
It is now possible to estimate 3D human pose from monocular images with off-the-shelf 3D pose estimators.
Many practical applications require fine-grained absolute pose information for which multi-view cues and camera calibration are necessary.
Our goal is full automation, which includes temporal synchronization, as well as intrinsic and extrinsic camera calibration.
arXiv Detail & Related papers (2024-05-10T23:02:23Z) - EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable
Rendering and Space Exploration [49.90228618894857]
We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance in synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-02T03:49:54Z) - Deep Learning for Camera Calibration and Beyond: A Survey [100.75060862015945]
Camera calibration involves estimating camera parameters to infer geometric features from captured sequences.
Recent efforts show that learning-based solutions have the potential to replace the repetitive work of manual calibration.
arXiv Detail & Related papers (2023-03-19T04:00:05Z) - Online Marker-free Extrinsic Camera Calibration using Person Keypoint
Detections [25.393382192511716]
We propose a marker-free online method for the extrinsic calibration of multiple smart edge sensors.
Our method assumes the intrinsic camera parameters to be known and requires priming with a rough initial estimate of the camera poses.
We show that the calibration with our method achieves lower reprojection errors compared to a reference calibration generated by an offline method.
arXiv Detail & Related papers (2022-09-15T15:54:21Z) - SPEC: Seeing People in the Wild with an Estimated Camera [64.85791231401684]
We introduce SPEC, the first in-the-wild 3D HPS method that estimates the perspective camera from a single image.
We train a neural network to estimate the field of view, camera pitch, and roll from an input image.
We then train a novel network that combines the camera calibration with the image features and uses these together to regress 3D body shape and pose.
arXiv Detail & Related papers (2021-10-01T19:05:18Z) - Self-Calibrating Neural Radiance Fields [68.64327335620708]
We jointly learn the geometry of the scene and the accurate camera parameters without any calibration objects.
Our camera model consists of a pinhole model, a fourth order radial distortion, and a generic noise model that can learn arbitrary non-linear camera distortions.
arXiv Detail & Related papers (2021-08-31T13:34:28Z) - Dynamic Event Camera Calibration [27.852239869987947]
We present the first dynamic event camera calibration algorithm.
It calibrates directly from events captured during relative motion between camera and calibration pattern.
As demonstrated through our results, the obtained calibration method is highly convenient and reliably calibrates from data sequences spanning less than 10 seconds.
arXiv Detail & Related papers (2021-07-14T14:52:58Z) - How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z) - Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
arXiv Detail & Related papers (2020-07-30T09:21:04Z) - Superaccurate Camera Calibration via Inverse Rendering [0.19336815376402716]
We propose a new method for camera calibration using the principle of inverse rendering.
Instead of relying solely on detected feature points, we use an estimate of the internal parameters and the pose of the calibration object to implicitly render a non-photorealistic equivalent of the optical features.
arXiv Detail & Related papers (2020-03-20T10:26:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.