Feature Extraction Reimagined: Achieving Superior Accuracy in Camera Calibration
- URL: http://arxiv.org/abs/2410.13371v2
- Date: Fri, 15 Nov 2024 03:07:13 GMT
- Title: Feature Extraction Reimagined: Achieving Superior Accuracy in Camera Calibration
- Authors: Zezhun Shi
- Abstract summary: This paper focuses on improving the accuracy of feature extraction, which is a key step in calibration.
We introduce a novel dynamic calibration target that synthesizes multiple checkerboard patterns at different angles around the pattern center.
We also propose a novel cost function for feature refinement that accounts for the defocus effect, offering a more physically realistic model.
- Abstract: Camera calibration is crucial for 3D vision applications. This paper focuses on improving the accuracy of feature extraction, a key step in calibration. We address the aliasing problem of star-shaped patterns by introducing a novel dynamic calibration target that synthesizes multiple checkerboard patterns at different angles around the pattern center, which significantly improves feature refinement accuracy. Additionally, we propose a novel cost function for feature refinement that accounts for the defocus effect, offering a more physically realistic model than existing symmetry-based methods. Experiments on a large dataset demonstrate significant improvements in calibration accuracy with reduced computation time. Our code is available at https://github.com/spdfghi/Feature-Extraction-Reimagined-Achieving-Superior-Accuracy-in-Camera-Calibration.git.
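The abstract sketches two ideas: a dynamic target that synthesizes rotated checkerboards, and a defocus-aware refinement cost. As a hedged illustration of the second idea only, the sketch below models a checkerboard X-corner as ideal intensity steps convolved with a Gaussian point-spread function (a common defocus approximation; the paper's actual cost function may differ) and fits it to an observed patch by least squares. All function and parameter names are our own.

```python
# Minimal sketch (not the paper's implementation): sub-pixel refinement of a
# checkerboard corner under a Gaussian defocus model. Convolving an ideal
# intensity step with a Gaussian PSF of width sigma yields an erf profile,
# so the blurred corner can be modeled analytically and fit to the observed
# patch by least squares. All names and parameters are illustrative.
import numpy as np
from scipy.special import erf
from scipy.optimize import least_squares

def corner_model(params, xs, ys):
    """Blurred X-corner: product of two erf-smoothed edges plus offset/gain."""
    cx, cy, theta1, theta2, sigma, lo, hi = params
    # Signed distances of each pixel to the two edges through (cx, cy).
    d1 = -(xs - cx) * np.sin(theta1) + (ys - cy) * np.cos(theta1)
    d2 = -(xs - cx) * np.sin(theta2) + (ys - cy) * np.cos(theta2)
    s1 = erf(d1 / (np.sqrt(2.0) * sigma))   # smoothed step across edge 1
    s2 = erf(d2 / (np.sqrt(2.0) * sigma))   # smoothed step across edge 2
    return lo + (hi - lo) * 0.5 * (1.0 + s1 * s2)  # alternating quadrants

def refine_corner(patch, x0, y0):
    """Fit the defocus-aware model to a patch; returns sub-pixel corner."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    p0 = np.array([x0, y0, 0.0, np.pi / 2, 1.5, patch.min(), patch.max()])
    res = least_squares(
        lambda p: (corner_model(p, xs, ys) - patch).ravel(), p0)
    return res.x[0], res.x[1], res.x[4]  # cx, cy, estimated blur sigma

# Synthetic check: generate a blurred corner at a known sub-pixel location
# and verify the fit recovers it.
true = np.array([10.3, 9.7, 0.05, np.pi / 2 + 0.02, 2.0, 0.1, 0.9])
ys, xs = np.mgrid[0:21, 0:21].astype(np.float64)
patch = corner_model(true, xs, ys)
cx, cy, sigma = refine_corner(patch, 10.0, 10.0)
print(f"recovered corner: ({cx:.3f}, {cy:.3f}), blur sigma: {sigma:.3f}")
```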
Related papers
- EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration [49.90228618894857]
We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance in synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-02T03:49:54Z)
- Neural Lens Modeling [50.57409162437732]
NeuroLens is a neural lens model for distortion and vignetting that can be used for point projection and ray casting.
It can be used to perform pre-capture calibration using classical calibration targets, and can later be used to perform calibration or refinement during 3D reconstruction.
The model generalizes across many lens types and is trivial to integrate into existing 3D reconstruction and rendering systems.
arXiv Detail & Related papers (2023-04-10T20:09:17Z)
- TartanCalib: Iterative Wide-Angle Lens Calibration using Adaptive SubPixel Refinement of AprilTags [23.568127229446965]
Calibrating wide-angle lenses with current state-of-the-art techniques yields poor results due to extreme distortion at the image edges.
We present our methodology for accurate wide-angle calibration.
arXiv Detail & Related papers (2022-10-05T18:57:07Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- Pixel-Perfect Structure-from-Motion with Featuremetric Refinement [96.73365545609191]
We refine two key steps of structure-from-motion by a direct alignment of low-level image information from multiple views.
This significantly improves the accuracy of camera poses and scene geometry for a wide range of keypoint detectors.
Our system easily scales to large image collections, enabling pixel-perfect crowd-sourced localization at scale.
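As a toy stand-in for featuremetric refinement (not the paper's implementation), the sketch below refines a correspondence to sub-pixel accuracy by minimizing the difference between dense feature maps of two views. The synthetic setup and all names are our own.

```python
# Toy featuremetric refinement: given dense feature maps from two views and
# an initial integer correspondence, refine the sub-pixel offset in view 2
# by minimizing the feature difference, using bicubic sampling via scipy.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
C, H, W = 8, 64, 64
# Smooth random features stand in for a CNN's dense feature maps.
feat1 = gaussian_filter(rng.standard_normal((C, H, W)), sigma=(0, 2, 2))
# View 2: view 1 shifted by a known sub-pixel offset (ground truth).
true_offset = np.array([0.6, -0.4])       # (dy, dx)
ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
feat2 = np.stack([map_coordinates(f, [ys - true_offset[0],
                                      xs - true_offset[1]],
                                  order=3, mode='nearest') for f in feat1])

def residual(offset, y, x):
    """Difference between the view-1 feature at (y, x) and the view-2
    feature sampled at the shifted location."""
    sampled = np.array([map_coordinates(f, [[y + offset[0]],
                                            [x + offset[1]]], order=3)[0]
                        for f in feat2])
    return sampled - feat1[:, int(y), int(x)]

fit = least_squares(residual, np.zeros(2), args=(32.0, 32.0))
print("recovered offset:", fit.x, "true:", true_offset)
```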
arXiv Detail & Related papers (2021-08-18T17:58:55Z)
- Dynamic Event Camera Calibration [27.852239869987947]
We present the first dynamic event camera calibration algorithm.
It calibrates directly from events captured during relative motion between camera and calibration pattern.
As demonstrated by our results, the resulting method is highly convenient and reliably calibrates from data sequences spanning less than 10 seconds.
arXiv Detail & Related papers (2021-07-14T14:52:58Z)
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- ACSC: Automatic Calibration for Non-repetitive Scanning Solid-State LiDAR and Camera Systems [11.787271829250805]
Solid-State LiDAR (SSL) enables low-cost and efficient acquisition of 3D point clouds from the environment.
We propose a fully automatic calibration method for the non-repetitive scanning SSL and camera systems.
We evaluate the proposed method on different types of LiDAR and camera sensor combinations in real conditions.
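ACSC's pipeline detects target features in point clouds and images automatically; the sketch below illustrates only the final extrinsic step, under the assumption that matched 3D-2D correspondences are already available, using OpenCV's standard solvePnP (a generic tool, not ACSC's specific solver).

```python
# Minimal sketch of the final extrinsic step only (not ACSC's automatic
# target-detection pipeline). Synthetic check: project known LiDAR-frame
# points through a known extrinsic, then recover it with solvePnP.
import numpy as np
import cv2

rng = np.random.default_rng(0)
pts_lidar = rng.uniform([-0.5, -0.5, 2.0], [0.5, 0.5, 3.0], size=(8, 3))
K = np.array([[600.0, 0.0, 360.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])                  # camera intrinsics
rvec_true = np.array([0.05, -0.1, 0.02])         # ground-truth rotation
tvec_true = np.array([0.1, -0.05, 0.2])          # ground-truth translation
pts_pixel, _ = cv2.projectPoints(pts_lidar, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(pts_lidar, pts_pixel, K, None)
R, _ = cv2.Rodrigues(rvec)                       # rotation LiDAR -> camera
print("R =\n", R, "\nt =", tvec.ravel())
```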
arXiv Detail & Related papers (2020-11-17T09:11:28Z)
- Superaccurate Camera Calibration via Inverse Rendering [0.19336815376402716]
We propose a new method for camera calibration using the principle of inverse rendering.
Instead of relying solely on detected feature points, we use an estimate of the internal parameters and the pose of the calibration object to implicitly render a non-photorealistic equivalent of the optical features.
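As a heavily simplified toy version of the inverse-rendering idea (pose held fixed and a smooth synthetic pattern instead of a real target), the sketch below renders a planar pattern through a pinhole model and recovers the intrinsics by minimizing the photometric residual.

```python
# Toy inverse-rendering calibration (a simplified stand-in, not the paper's
# method): render a smooth pattern on the plane z=1 through a pinhole model,
# compare it photometrically to the observed image, and fit the intrinsics.
import numpy as np
from scipy.optimize import least_squares

H, W = 120, 160
vs, us = np.mgrid[0:H, 0:W].astype(np.float64)

def render(params):
    """Render the planar pattern (plane z=1, camera at origin) for given
    intrinsics; the sinusoidal pattern keeps the residual smooth."""
    fx, fy, cx, cy = params
    X = (us - cx) / fx            # ray/plane intersection at z=1
    Y = (vs - cy) / fy
    return 0.5 + 0.5 * np.sin(8.0 * X) * np.sin(8.0 * Y)

true = np.array([300.0, 300.0, 80.0, 60.0])
observed = render(true)           # stands in for the captured image

guess = np.array([280.0, 320.0, 75.0, 65.0])
fit = least_squares(lambda p: (render(p) - observed).ravel(), guess)
print("recovered intrinsics:", fit.x)   # should approach `true`
```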
arXiv Detail & Related papers (2020-03-20T10:26:16Z)
- Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
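For reference, the focal loss used above is simple to state: FL(p_t) = -(1 - p_t)^gamma * log(p_t), where p_t is the softmax probability of the true class. The sketch below gives the standard multi-class form; gamma and the toy inputs are chosen for illustration rather than taken from the paper.

```python
# Standard multi-class focal loss. gamma = 0 recovers cross-entropy;
# gamma > 0 down-weights already-confident examples, which the paper above
# shows improves confidence calibration. Inputs here are illustrative.
import numpy as np

def focal_loss(logits, labels, gamma=3.0):
    """Mean focal loss over a batch. logits: (N, C), labels: (N,) int."""
    z = logits - logits.max(axis=1, keepdims=True)   # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    p_t = probs[np.arange(len(labels)), labels]      # prob of true class
    return np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t))

logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
labels = np.array([0, 2])
print("focal loss:", focal_loss(logits, labels))
print("cross-entropy (gamma=0):", focal_loss(logits, labels, gamma=0.0))
```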
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.