Calculating Pose with Vanishing Points of Visual-Sphere Perspective Model
- URL: http://arxiv.org/abs/2004.08933v4
- Date: Sun, 14 May 2023 03:23:47 GMT
- Title: Calculating Pose with Vanishing Points of Visual-Sphere Perspective Model
- Authors: Jakub Maksymilian Fober
- Abstract summary: The goal of the proposed method is to directly obtain a pose matrix of a known rectangular target, without estimation.
This method is specifically tailored for real-time, extreme imaging setups exceeding a 180° field of view, such as a fish-eye camera view.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The goal of the proposed method is to directly obtain a pose matrix of a
known rectangular target, without estimation, using geometric techniques. This
method is specifically tailored for real-time, extreme imaging setups exceeding
180° field of view, such as a fish-eye camera view. The introduced
algorithm employs geometric algebra to determine the pose for a pair of
coplanar parallel lines (ideally a tangent pair as in a rectangle). This is
achieved by computing vanishing points on a visual unit sphere, which
correspond to pose matrix vectors. The algorithm can determine pose for an
extremely distorted view source without prior rectification, owing to a
visual-sphere perspective model mapping of view coordinates. Mapping can be
performed using either a perspective map lookup or a parametric universal
perspective distortion model, which is also presented in this paper. The
outcome is a robust pose matrix computation that can be executed on an embedded
system using a microcontroller, offering high accuracy and low latency. This
method can be further extended to a cubic target setup for comprehensive camera
calibration. It may also prove valuable in other applications requiring low
latency and extreme viewing angles.
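The abstract relies on mapping view (pixel) coordinates onto a visual unit sphere before any pose computation. The paper's parametric universal distortion model is not reproduced here; as a minimal sketch, the mapping below assumes a simple equidistant fisheye projection (r = f·θ), one of the models such a universal formulation would cover. The function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def equidistant_to_sphere(u, v, f, cx, cy):
    """Map a fisheye pixel (u, v) to a unit direction on the visual
    sphere, assuming an equidistant projection r = f * theta.
    (cx, cy) is the principal point; f is the focal length in pixels."""
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    if r == 0.0:
        return np.array([0.0, 0.0, 1.0])  # optical axis
    theta = r / f                 # angle from the optical axis
    s = np.sin(theta) / r         # scale radial offset onto the sphere
    return np.array([dx * s, dy * s, np.cos(theta)])
```

Note that for θ > 90° the resulting direction has a negative z component, i.e. it points behind the image plane. This is exactly what a planar (rectilinear) projection cannot represent, and why a sphere-based model accommodates fields of view beyond 180° without prior rectification.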
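On the visual sphere, a straight 3D line projects to a great circle, and two parallel lines share a vanishing point where their great circles intersect. The following sketch illustrates that geometric core: it is a plain cross-product formulation, not the paper's geometric-algebra implementation, and all function names are hypothetical.

```python
import numpy as np

def circle_normal(p, q):
    """Unit normal of the great circle through two unit-sphere points
    p and q observed on the same projected line."""
    n = np.cross(p, q)
    return n / np.linalg.norm(n)

def vanishing_direction(line_a, line_b):
    """Common (vanishing) direction of two parallel lines, each given
    as a pair of unit-sphere points. The two great circles intersect
    along this direction (up to sign)."""
    v = np.cross(circle_normal(*line_a), circle_normal(*line_b))
    return v / np.linalg.norm(v)

def pose_rotation(pair_x, pair_y):
    """Orientation matrix from the two pairs of parallel edges of a
    rectangular target: columns are the target's x, y, z axes."""
    x = vanishing_direction(*pair_x)
    y = vanishing_direction(*pair_y)
    z = np.cross(x, y)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)  # re-orthogonalize against measurement noise
    return np.column_stack([x, y, z])
```

Because the computation reduces to a handful of cross products and normalizations per frame, it is cheap enough for the microcontroller-class, low-latency setting the abstract describes. Sign disambiguation of the vanishing directions (v versus −v) is omitted here for brevity.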
Related papers
- Full-range Head Pose Geometric Data Augmentations [2.8358100463599722]
Many head pose estimation (HPE) methods promise the ability to create full-range datasets.
These methods are only accurate within a limited range of head angles; exceeding this range leads to significant inaccuracies.
Here, we present methods that accurately infer the coordinate system and Euler angles in the correct axis sequence.
arXiv Detail & Related papers (2024-08-02T20:41:18Z) - Estimating Depth of Monocular Panoramic Image with Teacher-Student Model Fusing Equirectangular and Spherical Representations [3.8240176158734194]
We propose a method of estimating the depth of a monocular panoramic image with a teacher-student model fusing equirectangular and spherical representations.
In experiments, the proposed method is tested on several well-known 360 monocular depth estimation benchmark datasets.
arXiv Detail & Related papers (2024-05-27T06:11:16Z) - DVMNet: Computing Relative Pose for Unseen Objects Beyond Hypotheses [59.51874686414509]
Current approaches approximate the continuous pose representation with a large number of discrete pose hypotheses.
We present a Deep Voxel Matching Network (DVMNet) that eliminates the need for pose hypotheses and computes the relative object pose in a single pass.
Our method delivers more accurate relative pose estimates for novel objects at a lower computational cost compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-03-20T15:41:32Z) - Point Anywhere: Directed Object Estimation from Omnidirectional Images [10.152838128195468]
We propose a method using an omnidirectional camera to eliminate the user/object position constraint and the left/right constraint of the pointing arm.
The proposed method enables highly accurate estimation by repeatedly extracting regions of interest from the equirectangular image.
arXiv Detail & Related papers (2023-08-02T08:32:43Z) - Object-Based Visual Camera Pose Estimation From Ellipsoidal Model and 3D-Aware Ellipse Prediction [2.016317500787292]
We propose a method for initial camera pose estimation from just a single image.
It exploits the ability of deep learning techniques to reliably detect objects regardless of viewing conditions.
Experiments show that our method significantly increases the accuracy of the computed pose.
arXiv Detail & Related papers (2022-03-09T10:00:52Z) - Category-Level Metric Scale Object Shape and Pose Estimation [73.92460712829188]
We propose a framework that jointly estimates a metric scale shape and pose from a single RGB image.
We validated our method on both synthetic and real-world datasets to evaluate category-level object pose and shape.
arXiv Detail & Related papers (2021-09-01T12:16:46Z) - 3D-Aware Ellipse Prediction for Object-Based Camera Pose Estimation [3.103806775802078]
We propose a method for coarse camera pose computation which is robust to viewing conditions.
It exploits the ability of deep learning techniques to reliably detect objects regardless of viewing conditions.
arXiv Detail & Related papers (2021-05-24T18:40:18Z) - Sparse Pose Trajectory Completion [87.31270669154452]
We propose a method that learns pose trajectory completion, even from a dataset where objects appear only in sparsely sampled views.
This is achieved with a cross-modal pose trajectory transfer mechanism.
Our method is evaluated on the Pix3D and ShapeNet datasets.
arXiv Detail & Related papers (2021-05-01T00:07:21Z) - Nothing But Geometric Constraints: A Model-Free Method for Articulated Object Pose Estimation [89.82169646672872]
We propose an unsupervised vision-based system to estimate the joint configurations of the robot arm from a sequence of RGB or RGB-D images without knowing the model a priori.
We combine a classical geometric formulation with deep learning and extend the use of epipolar multi-rigid-body constraints to solve this task.
arXiv Detail & Related papers (2020-11-30T20:46:48Z) - Category Level Object Pose Estimation via Neural Analysis-by-Synthesis [64.14028598360741]
In this paper we combine a gradient-based fitting procedure with a parametric neural image synthesis module.
The image synthesis network is designed to efficiently span the pose configuration space.
We experimentally show that the method can recover orientation of objects with high accuracy from 2D images alone.
arXiv Detail & Related papers (2020-08-18T20:30:47Z) - Object-Centric Multi-View Aggregation [86.94544275235454]
We present an approach for aggregating a sparse set of views of an object in order to compute a semi-implicit 3D representation in the form of a volumetric feature grid.
Key to our approach is an object-centric canonical 3D coordinate system into which views can be lifted, without explicit camera pose estimation.
We show that computing a symmetry-aware mapping from pixels to the canonical coordinate system allows us to better propagate information to unseen regions.
arXiv Detail & Related papers (2020-07-20T17:38:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences of its use.