Rectifying homographies for stereo vision: analytical solution for minimal distortion
- URL: http://arxiv.org/abs/2203.00123v1
- Date: Mon, 28 Feb 2022 22:35:47 GMT
- Title: Rectifying homographies for stereo vision: analytical solution for minimal distortion
- Authors: Pasquale Lafiosca and Marta Ceccaroni
- Abstract summary: Rectification is used to simplify the subsequent stereo correspondence problem.
This work proposes a closed-form solution for the rectifying homographies that minimise perspective distortion.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Stereo rectification is the determination of two image transformations (or
homographies) that map corresponding points on the two images, projections of
the same point in 3D space, onto the same horizontal line in the transformed
images. Rectification is used to simplify the subsequent stereo correspondence
problem and to speed up the matching process. Rectifying transformations, in
general, introduce perspective distortion in the resulting images, which should
be minimised to improve the accuracy of the subsequent algorithm dealing with
the stereo correspondence problem. The search for the optimal transformations
is usually carried out through numerical optimisation. This work proposes a
closed-form solution for the rectifying homographies that minimise perspective
distortion. The experimental comparison confirms its capability to solve the
convergence issues of the previous formulation. Its Python implementation is
provided.
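To make the setting concrete, below is a minimal sketch of how a pair of rectifying homographies is applied to a stereo pair. `H_left` and `H_right` are placeholders for the closed-form homographies the paper derives, and OpenCV's `warpPerspective` is assumed for the warping; this is illustrative, not the paper's own implementation.

```python
# Minimal sketch: applying rectifying homographies to a stereo pair.
# H_left / H_right are hypothetical 3x3 homographies (e.g. the paper's
# closed-form solution); OpenCV performs the actual image warping.
import cv2
import numpy as np

def rectify_pair(img_left, img_right, H_left, H_right):
    """Warp both images so corresponding points land on the same row."""
    h, w = img_left.shape[:2]
    rect_left = cv2.warpPerspective(img_left, H_left, (w, h))
    rect_right = cv2.warpPerspective(img_right, H_right, (w, h))
    return rect_left, rect_right

# After rectification a match (x, y) <-> (x', y') satisfies y == y',
# so the stereo matcher only searches along one horizontal line.
```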
Related papers
- Exploring Invariance in Images through One-way Wave Equations [96.90549064390608]
In this paper, we empirically reveal an invariance over images: images share a set of one-way wave equations with latent speeds.
We demonstrate it using an intuitive encoder-decoder framework where each image is encoded into its corresponding initial condition.
arXiv Detail & Related papers (2023-10-19T17:59:37Z)
- Explicit Correspondence Matching for Generalizable Neural Radiance Fields [49.49773108695526]
We present a new NeRF method that is able to generalize to new unseen scenarios and perform novel view synthesis with as few as two source views.
The explicit correspondence matching is quantified with the cosine similarity between image features sampled at the 2D projections of a 3D point on different views (see the sketch after this entry).
Our method achieves state-of-the-art results on different evaluation settings, with the experiments showing a strong correlation between our learned cosine feature similarity and volume density.
arXiv Detail & Related papers (2023-04-24T17:46:01Z)
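A minimal sketch of that cosine-similarity matching, assuming per-view feature vectors already sampled at the 2D projections of a 3D point; the function name and shapes are illustrative, not the paper's API.

```python
import numpy as np

def cosine_feature_similarity(feat_a, feat_b, eps=1e-8):
    """Cosine similarity between feature vectors sampled at the 2D
    projections of the same 3D point on two different views."""
    num = float(np.dot(feat_a, feat_b))
    den = float(np.linalg.norm(feat_a) * np.linalg.norm(feat_b)) + eps
    return num / den

# In a NeRF-style pipeline, feat_a and feat_b would be bilinearly sampled
# from each view's feature map at the pixel where a 3D sample point along
# a camera ray projects; high similarity hints at a surface (high density).
```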
- Single-View View Synthesis with Self-Rectified Pseudo-Stereo [49.946151180828465]
We leverage the reliable and explicit stereo prior to generate a pseudo-stereo viewpoint.
We propose a self-rectified stereo synthesis to amend erroneous regions in an identify-rectify manner.
Our method outperforms state-of-the-art single-view view synthesis methods and stereo synthesis methods.
arXiv Detail & Related papers (2023-04-19T09:36:13Z)
- Spherical Transformer [17.403133838762447]
Convolutional neural networks for 360° images can yield sub-optimal performance due to the distortions entailed by a planar projection.
We leverage the transformer architecture to solve image classification problems for 360° images.
By sampling pixels directly from the sphere surface, our method avoids the error-prone planar projection step.
arXiv Detail & Related papers (2022-02-10T10:24:24Z)
- Pseudocylindrical Convolutions for Learned Omnidirectional Image Compression [42.15877732557837]
We make one of the first attempts to learn deep neural networks for omnidirectional image compression.
Under reasonable constraints on the parametric representation, the pseudocylindrical convolution can be efficiently implemented by standard convolution (see the sketch after this entry).
Experimental results show that our method consistently achieves better rate-distortion performance than competing methods.
arXiv Detail & Related papers (2021-12-25T12:18:32Z)
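A rough sketch of one way a latitude-aware convolution can reduce to a standard one: shrink each row of an equirectangular map to a latitude-dependent width, pad the rows back onto a regular grid, and convolve as usual. The row widths, padding scheme, and PyTorch framing are assumptions for illustration, not the paper's exact parameterisation.

```python
import torch
import torch.nn.functional as F

def pseudocylindrical_conv(x, weight, widths):
    """Illustrative pseudocylindrical convolution: x is (1, C, H, W),
    weight is a standard conv kernel, widths[i] is the target width of
    latitude row i (smaller near the poles, where ERP oversamples)."""
    _, _, H, W = x.shape
    rows = []
    for i in range(H):
        w_i = max(1, int(widths[i]))
        row = x[:, :, i:i + 1, :]                     # (1, C, 1, W)
        row = F.interpolate(row, size=(1, w_i), mode="bilinear",
                            align_corners=False)      # resample the row
        pad = W - w_i
        row = F.pad(row, (pad // 2, pad - pad // 2))  # centre on the grid
        rows.append(row)
    grid = torch.cat(rows, dim=2)                     # back to (1, C, H, W)
    return F.conv2d(grid, weight, padding=weight.shape[-1] // 2)
```

With `widths` proportional to the cosine of each row's latitude, this mimics pseudocylindrical map projections while still using an off-the-shelf 2D convolution.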
- Differentiable Rendering with Perturbed Optimizers [85.66675707599782]
Reasoning about 3D scenes from their 2D image projections is one of the core problems in computer vision.
Our work highlights the link between some well-known differentiable formulations and randomly smoothed renderings (see the sketch after this entry).
We apply our method to 3D scene reconstruction and demonstrate its advantages on the tasks of 6D pose estimation and 3D mesh reconstruction.
arXiv Detail & Related papers (2021-10-18T08:56:23Z)
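A generic sketch of Gaussian smoothing applied to a rendering loss, using the score-function identity so no gradients are needed from the renderer itself; this illustrates the smoothing idea in general, not the paper's specific perturbed-optimiser estimator.

```python
import torch

def smoothed_value_and_grad(theta, render_loss, sigma=0.1, n=64):
    """Monte Carlo estimate of the Gaussian-smoothed objective
    E[L(theta + sigma * eps)], eps ~ N(0, I), and of its gradient via
    grad = E[L(theta + sigma * eps) * eps] / sigma (score-function trick).
    render_loss may be non-differentiable (e.g. a hard rasteriser)."""
    eps = torch.randn(n, *theta.shape)
    losses = torch.stack([render_loss(theta + sigma * e) for e in eps])
    value = losses.mean()
    shape = (n,) + (1,) * theta.dim()
    grad = (losses.view(shape) * eps).mean(dim=0) / sigma
    return value, grad
```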
- Image Matching with Scale Adjustment [57.18604132027697]
We show how to represent and extract interest points at variable scales.
We devise a method allowing the comparison of two images at two different resolutions.
arXiv Detail & Related papers (2020-12-10T11:03:25Z)
- Revisiting Stereo Depth Estimation From a Sequence-to-Sequence Perspective with Transformers [11.669086751865091]
Stereo depth estimation relies on optimal correspondence matching between pixels on epipolar lines in the left and right images to infer depth.
In this work, we revisit the problem from a sequence-to-sequence correspondence perspective to replace cost volume construction with dense pixel matching using position information and attention (see the sketch after this entry).
We report promising results on both synthetic and real-world datasets and demonstrate that STTR generalizes across different domains, even without fine-tuning.
arXiv Detail & Related papers (2020-11-05T15:35:46Z)
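A toy sketch of attention-based dense matching along rectified epipolar lines: each left-image pixel attends over all right-image pixels in the same row, and the expected match position yields a disparity. The shapes and soft-argmax readout are illustrative assumptions, not STTR's actual architecture.

```python
import torch
import torch.nn.functional as F

def epipolar_attention_disparity(feat_left, feat_right):
    """feat_left, feat_right: (H, W, C) features from a rectified pair,
    so corresponding pixels share a row. Row-wise attention scores every
    left pixel against every right pixel on the same epipolar line."""
    H, W, C = feat_left.shape
    scores = torch.einsum("hwc,hvc->hwv", feat_left, feat_right) / C ** 0.5
    attn = F.softmax(scores, dim=-1)            # (H, W, W) match weights
    cols = torch.arange(W, dtype=feat_left.dtype)
    matched_x = attn @ cols                     # soft-argmax match column
    return cols[None, :] - matched_x            # disparity per left pixel
```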
We propose a novel distortion rectification approach that can obtain more accurate parameters with higher efficiency.
We design a local-global associated estimation network that learns the ordinal distortion to approximate the realistic distortion distribution.
Considering the redundancy of distortion information, our approach uses only a part of the distorted image for ordinal distortion estimation.
arXiv Detail & Related papers (2020-07-21T10:03:42Z)