Dive Deeper into Rectifying Homography for Stereo Camera Online
Self-Calibration
- URL: http://arxiv.org/abs/2309.10314v4
- Date: Mon, 4 Mar 2024 01:43:21 GMT
- Title: Dive Deeper into Rectifying Homography for Stereo Camera Online
Self-Calibration
- Authors: Hongbo Zhao, Yikang Zhang, Qijun Chen, Rui Fan
- Abstract summary: We develop a novel online self-calibration algorithm for stereo cameras.
We introduce four new evaluation metrics to quantify the robustness and accuracy of extrinsic parameter estimation.
Our source code, demo video, and supplement are publicly available at mias.group/StereoCalibrator.
- Score: 18.089940434364234
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate estimation of stereo camera extrinsic parameters is the key to
guarantee the performance of stereo matching algorithms. In prior arts, the
online self-calibration of stereo cameras has commonly been formulated as a
specialized visual odometry problem, without taking into account the principles
of stereo rectification. In this paper, we first delve deeply into the concept
of rectifying homography, which serves as the cornerstone for the development
of our novel stereo camera online self-calibration algorithm, for cases where
only a single pair of images is available. Furthermore, we introduce a simple
yet effective solution for global optimum extrinsic parameter estimation in the
presence of stereo video sequences. Additionally, we emphasize the
impracticality of using three Euler angles and three components in the
translation vectors for performance quantification. Instead, we introduce four
new evaluation metrics to quantify the robustness and accuracy of extrinsic
parameter estimation, applicable to both single-pair and multi-pair cases.
Extensive experiments conducted across indoor and outdoor environments using
various experimental setups validate the effectiveness of our proposed
algorithm. The comprehensive evaluation results demonstrate its superior
performance in comparison to the baseline algorithm. Our source code, demo
video, and supplement are publicly available at mias.group/StereoCalibrator.
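As background, the sketch below shows how a standard rectifying transform can be derived from known stereo extrinsics with OpenCV. It illustrates the rectification principle the paper builds on, not the paper's self-calibration algorithm, and every intrinsic and extrinsic value in it is an illustrative placeholder.
```python
# A minimal sketch of rectifying homographies from known stereo extrinsics.
# All intrinsics, distortion, and extrinsics below are placeholders.
import cv2
import numpy as np

# Placeholder pinhole intrinsics (identical left/right) and zero distortion.
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Placeholder extrinsics: a slight relative rotation and a 12 cm baseline.
R, _ = cv2.Rodrigues(np.array([0.0, 0.01, 0.0]))   # right camera w.r.t. left
t = np.array([[-0.12], [0.0], [0.0]])
image_size = (1280, 720)

# stereoRectify returns the rectifying rotations R1/R2 and projections P1/P2.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K, dist, K, dist, image_size, R, t)

# For undistorted pixels, the rectifying homography of each view is
# H_i = K_rect_i * R_i * K^{-1}; applying it makes epipolar lines horizontal.
H1 = P1[:, :3] @ R1 @ np.linalg.inv(K)
H2 = P2[:, :3] @ R2 @ np.linalg.inv(K)
print(H1, H2, sep="\n")
```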
Related papers
- Single-image camera calibration with model-free distortion correction [0.0]
This paper proposes a method for estimating the complete set of calibration parameters from a single image of a planar speckle pattern covering the entire sensor.
The correspondence between image points and physical points on the calibration target is obtained using Digital Image Correlation.
At the end of the procedure, a dense and uniform model-free distortion map is obtained over the entire image.
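As a rough illustration of such a model-free distortion map, the snippet below interpolates sparse displacement vectors between ideal and observed point positions onto a dense per-pixel grid. The correspondences are synthetic placeholders rather than Digital Image Correlation output, and this is not the paper's calibration pipeline.
```python
# Sketch: build a dense, model-free distortion map from sparse correspondences.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
ideal = rng.uniform([0, 0], [1280, 720], size=(500, 2))    # pinhole-predicted target points
observed = ideal + rng.normal(0.0, 1.5, size=(500, 2))     # measured (distorted) image points

# Distortion vectors at the sparse correspondences.
displacement = observed - ideal

# Interpolate onto a dense per-pixel grid to obtain a model-free distortion map.
xs, ys = np.meshgrid(np.arange(1280), np.arange(720))
dense_dx = griddata(ideal, displacement[:, 0], (xs, ys), method="linear")
dense_dy = griddata(ideal, displacement[:, 1], (xs, ys), method="linear")
print(dense_dx.shape, dense_dy.shape)   # (720, 1280) each, NaN outside the convex hull
```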
arXiv Detail & Related papers (2024-03-02T16:51:35Z) - SDGE: Stereo Guided Depth Estimation for 360$^\circ$ Camera Sets [65.64958606221069]
Multi-camera systems are often used in autonomous driving to achieve a 360$^\circ$ perception.
These 360$^\circ$ camera sets often have limited or low-quality overlap regions, making multi-view stereo methods infeasible for the entire image.
We propose the Stereo Guided Depth Estimation (SGDE) method, which enhances depth estimation of the full image by explicitly utilizing multi-view stereo results on the overlap.
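The sketch below captures only the core intuition: run a conventional stereo matcher on the rectified overlap crops of two adjacent cameras and treat the result as a geometric prior for the rest of the image. The images, crop bounds, and matcher settings are placeholders; this is not the SGDE method.
```python
# Sketch: conventional stereo matching on the overlap region of two adjacent cameras.
import cv2
import numpy as np

rng = np.random.default_rng(0)
left_full = rng.integers(0, 255, (720, 1280), dtype=np.uint8)    # stand-in for one camera's frame
right_full = rng.integers(0, 255, (720, 1280), dtype=np.uint8)   # stand-in for the adjacent camera

# Hypothetical rectified overlap region shared by the two cameras.
left_overlap = left_full[:, -400:]
right_overlap = right_full[:, :400]

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left_overlap, right_overlap).astype(np.float32) / 16.0

# depth = focal_length * baseline / disparity on the overlap would then serve as
# geometric guidance when estimating depth over the non-overlapping image regions.
print(disparity.shape)
```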
arXiv Detail & Related papers (2024-02-19T02:41:37Z) - Match and Locate: low-frequency monocular odometry based on deep feature
matching [0.65268245109828]
We introduce a novel approach to robotic odometry that requires only a single camera.
The approach is based on matching image features between the consecutive frames of the video stream using deep feature matching models.
We evaluate the performance of the approach in the AISG-SLA Visual Localisation Challenge and find that, while being computationally efficient and easy to implement, our method shows competitive results.
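A bare-bones version of such a frame-to-frame pipeline is sketched below, with classical ORB matching standing in for the deep feature matcher and a placeholder intrinsic matrix; the essential-matrix decomposition recovers translation only up to scale.
```python
# Sketch: relative pose between consecutive frames from matched features.
import cv2
import numpy as np

def relative_pose(prev_gray, curr_gray, K):
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Essential matrix with RANSAC, then a cheirality check to recover R, t
    # (t is only defined up to scale for a monocular camera).
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```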
arXiv Detail & Related papers (2023-11-16T17:32:58Z) - PS-Transformer: Learning Sparse Photometric Stereo Network using
Self-Attention Mechanism [4.822598110892846]
Existing deep calibrated photometric stereo networks aggregate observations under different lights based on pre-defined operations such as linear projection and max pooling.
Such fixed aggregation operations cannot adapt to complex inter-image interactions; to tackle this, the paper presents a deep sparse calibrated photometric stereo network named PS-Transformer, which leverages a learnable self-attention mechanism to capture these interactions properly.
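The toy snippet below shows only the general idea of replacing a fixed pooling step with learnable self-attention over per-light observations; the dimensions and heads are arbitrary placeholders, not the PS-Transformer architecture.
```python
# Sketch: aggregate per-light observations with self-attention instead of max pooling.
import torch
import torch.nn as nn

num_lights, feat_dim = 10, 64
per_light_features = torch.randn(1, num_lights, feat_dim)   # (batch, lights, features)

attention = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=4, batch_first=True)
attended, _ = attention(per_light_features, per_light_features, per_light_features)

# Pool the attended per-light features into one descriptor, from which a small
# MLP head regresses a unit surface normal.
aggregated = attended.mean(dim=1)
normal_head = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 3))
normal = torch.nn.functional.normalize(normal_head(aggregated), dim=-1)
print(normal.shape)   # (1, 3)
```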
arXiv Detail & Related papers (2022-11-21T11:58:25Z) - Degradation-agnostic Correspondence from Resolution-asymmetric Stereo [96.03964515969652]
We study the problem of stereo matching from a pair of images with different resolutions, e.g., those acquired with a tele-wide camera system.
We propose to impose the consistency between two views in a feature space instead of the image space, named feature-metric consistency.
We find that, although a stereo matching network trained with the photometric loss is not optimal, its feature extractor can produce degradation-agnostic and matching-specific features.
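A rough sketch of such a feature-metric consistency term is given below: right-view features are warped to the left view with the estimated disparity and compared in feature space rather than image space. The warp and loss here are generic stand-ins, not the paper's formulation.
```python
# Sketch: a feature-metric (rather than photometric) consistency loss.
import torch
import torch.nn.functional as F

def feature_metric_loss(feat_left, feat_right, disparity):
    """feat_*: (B, C, H, W) features; disparity: (B, 1, H, W) left-view disparity."""
    b, _, h, w = feat_left.shape
    # Build a sampling grid that shifts right-view features by the disparity.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    xs = xs.unsqueeze(0).float() - disparity.squeeze(1)
    grid_x = 2.0 * xs / (w - 1) - 1.0
    grid_y = 2.0 * ys.unsqueeze(0).float() / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y.expand_as(grid_x)), dim=-1)
    warped = F.grid_sample(feat_right, grid, align_corners=True)
    # Consistency is enforced in feature space instead of raw pixel intensities.
    return (feat_left - warped).abs().mean()
```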
arXiv Detail & Related papers (2022-04-04T12:24:34Z) - Self-Supervised Camera Self-Calibration from Video [34.35533943247917]
We propose a learning algorithm to regress per-sequence calibration parameters using an efficient family of general camera models.
Our procedure achieves self-calibration results with sub-pixel reprojection error, outperforming other learning-based methods.
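As a toy analogue of calibration-by-optimization, the snippet below treats the focal length as a learnable parameter and minimizes reprojection error by gradient descent on synthetic data; the paper's method is self-supervised from raw video and uses a richer general camera model, neither of which this sketch reproduces.
```python
# Sketch: recover a focal length by minimizing reprojection error with gradient descent.
import torch

torch.manual_seed(0)
points_3d = torch.rand(200, 3) * torch.tensor([2.0, 2.0, 5.0]) + torch.tensor([-1.0, -1.0, 2.0])
true_f, cx, cy = 600.0, 320.0, 240.0
observed = torch.stack((true_f * points_3d[:, 0] / points_3d[:, 2] + cx,
                        true_f * points_3d[:, 1] / points_3d[:, 2] + cy), dim=1)

f = torch.tensor(400.0, requires_grad=True)          # initial focal-length guess
optimizer = torch.optim.Adam([f], lr=5.0)
for _ in range(500):
    optimizer.zero_grad()
    projected = torch.stack((f * points_3d[:, 0] / points_3d[:, 2] + cx,
                             f * points_3d[:, 1] / points_3d[:, 2] + cy), dim=1)
    loss = (projected - observed).pow(2).mean()
    loss.backward()
    optimizer.step()
print(float(f))   # converges toward the true value of 600
```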
arXiv Detail & Related papers (2021-12-06T19:42:05Z) - Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z) - Robust 360-8PA: Redesigning The Normalized 8-point Algorithm for 360-FoV
Images [53.11097060367591]
We present a novel strategy for estimating an essential matrix from 360-FoV images in spherical projection.
We show that our normalization can increase camera pose accuracy by about 20% without adding significant time overhead.
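For reference, the textbook normalized 8-point algorithm for perspective images is sketched below; the paper's contribution is a redesigned normalization for spherical (360-FoV) projections, which this classic version does not include.
```python
# Sketch: the classic normalized 8-point algorithm for a fundamental matrix.
import numpy as np

def normalize(points):
    """Translate to the centroid and scale so the mean distance is sqrt(2)."""
    centroid = points.mean(axis=0)
    scale = np.sqrt(2) / np.mean(np.linalg.norm(points - centroid, axis=1))
    T = np.array([[scale, 0, -scale * centroid[0]],
                  [0, scale, -scale * centroid[1]],
                  [0, 0, 1]])
    homog = np.column_stack((points, np.ones(len(points))))
    return (T @ homog.T).T, T

def eight_point(pts1, pts2):
    x1, T1 = normalize(pts1)
    x2, T2 = normalize(pts2)
    # Each correspondence contributes one row of the linear system A f = 0.
    A = np.column_stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
                         x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
                         x1[:, 0], x1[:, 1], np.ones(len(x1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Enforce rank 2, then undo the normalization.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1
```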
arXiv Detail & Related papers (2021-04-22T07:23:11Z) - Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
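The minimal sketch below captures the infrastructure-based idea: with intrinsics known and a pre-built 3D map available, each camera's extrinsics follow from 2D-3D matches via PnP. All points, poses, and intrinsics are synthetic placeholders.
```python
# Sketch: recover a camera pose against a pre-built 3D map with PnP + RANSAC.
import cv2
import numpy as np

rng = np.random.default_rng(0)
map_points = rng.uniform(-5, 5, (100, 3)).astype(np.float32)   # pre-built map (e.g. from SLAM/SfM)
K = np.array([[800.0, 0, 640.0], [0, 800.0, 360.0], [0, 0, 1.0]])

# Synthesize image observations of the map from a "true" pose, then recover it.
rvec_true, tvec_true = np.array([0.1, -0.2, 0.05]), np.array([0.3, 0.1, 8.0])
image_points, _ = cv2.projectPoints(map_points, rvec_true, tvec_true, K, None)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(map_points, image_points, K, None)
print(ok, rvec.ravel(), tvec.ravel())   # should match rvec_true / tvec_true
```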
arXiv Detail & Related papers (2020-07-30T09:21:04Z) - Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset
for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
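To make the underlying image-formation model concrete, the toy snippet below solves single-view, Lambertian, distant-light photometric stereo by least squares; the paper addresses the much harder multi-view, perspective, nearby-light, isotropic-BRDF setting, which this sketch does not.
```python
# Sketch: classic calibrated photometric stereo for one pixel, ignoring shadows.
import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(size=(8, 3))                       # 8 known, distant light directions
L /= np.linalg.norm(L, axis=1, keepdims=True)

true_normal = np.array([0.2, -0.1, 0.97])
true_normal /= np.linalg.norm(true_normal)
albedo = 0.8
intensities = albedo * L @ true_normal            # Lambertian model, no shadowing

# Least-squares recovery of albedo * normal from the stacked measurements.
g, *_ = np.linalg.lstsq(L, intensities, rcond=None)
print(np.linalg.norm(g), g / np.linalg.norm(g))   # recovered albedo and unit normal
```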
arXiv Detail & Related papers (2020-01-18T12:26:22Z) - Correcting Decalibration of Stereo Cameras in Self-Driving Vehicles [0.0]
We address the problem of optical decalibration in mobile stereo camera setups, especially in context of autonomous vehicles.
Our method is based on optimization of camera geometry parameters and plugs directly into the output of the stereo matching algorithm.
Our simulation confirms that the method can run constantly in parallel to stereo estimation and thus help keep the system calibrated in real time.
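A heavily simplified sketch of this idea is shown below: search for the small residual rotation of the right camera that minimizes vertical disparity between matched points. Only a single pitch angle is optimized over synthetic matches, unlike the full method described above.
```python
# Sketch: estimate a pitch decalibration by minimizing vertical disparity.
import numpy as np
from scipy.optimize import minimize_scalar

K = np.array([[700.0, 0, 640.0], [0, 700.0, 360.0], [0, 0, 1.0]])
K_inv = np.linalg.inv(K)

def rotate_points(points, pitch):
    """Apply a pure pitch rotation to pixel coordinates via K R K^{-1}."""
    R = np.array([[1, 0, 0],
                  [0, np.cos(pitch), -np.sin(pitch)],
                  [0, np.sin(pitch), np.cos(pitch)]])
    homog = np.column_stack((points, np.ones(len(points))))
    rotated = (K @ R @ K_inv @ homog.T).T
    return rotated[:, :2] / rotated[:, 2:3]

rng = np.random.default_rng(0)
pts_left = rng.uniform([100, 100], [1180, 620], (300, 2))
# Simulate a decalibrated right view whose rows are shifted by a 0.5 degree pitch error.
pts_right = rotate_points(pts_left, np.deg2rad(0.5))

# Recover the pitch correction that re-aligns the rows (zero vertical disparity).
cost = lambda pitch: np.mean((rotate_points(pts_right, pitch)[:, 1] - pts_left[:, 1]) ** 2)
result = minimize_scalar(cost, bounds=(-0.05, 0.05), method="bounded")
print(np.rad2deg(result.x))   # close to -0.5
```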
arXiv Detail & Related papers (2020-01-15T12:28:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences of their use.