End-to-end learning of keypoint detection and matching for relative pose estimation
- URL: http://arxiv.org/abs/2104.01085v1
- Date: Fri, 2 Apr 2021 15:16:17 GMT
- Title: End-to-end learning of keypoint detection and matching for relative pose estimation
- Authors: Antoine Fond, Luca Del Pero, Nikola Sivacki, Marco Paladini
- Abstract summary: We propose a new method for estimating the relative pose between two images.
We jointly learn keypoint detection, description extraction, matching and robust pose estimation.
We demonstrate our method for the task of visual localization of a query image within a database of images with known pose.
- Score: 1.8352113484137624
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a new method for estimating the relative pose between two images,
where we jointly learn keypoint detection, description extraction, matching and
robust pose estimation. While our architecture follows the traditional pipeline
for pose estimation from geometric computer vision, all steps are learnt in an
end-to-end fashion, including feature matching. We demonstrate our method for
the task of visual localization of a query image within a database of images
with known pose. Pairwise pose estimation has many practical applications for
robotic mapping, navigation, and AR. For example, the display of persistent AR
objects in the scene relies on a precise camera localization to make the
digital models appear anchored to the physical environment. We train our
pipeline end-to-end specifically for the problem of visual localization. We
evaluate our proposed approach on localization accuracy, robustness and runtime
speed. Our method achieves state-of-the-art localization accuracy on the 7 Scenes dataset.
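The abstract describes a single differentiable pipeline: detect keypoints, extract descriptors, match them, and estimate the relative pose, all trained end-to-end. Below is a minimal sketch of that kind of pipeline, not the authors' actual architecture: the backbone, the top-k keypoint selection, the dual-softmax soft matcher, and all layer sizes are illustrative assumptions, and the final robust pose solve is left to a downstream (learned or classical) solver.
```python
# Minimal sketch of a jointly learned detect/describe/match pipeline.
# Everything here (module names, sizes, dual-softmax matching) is an
# illustrative assumption, not the method from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Backbone(nn.Module):
    """Tiny conv encoder: one keypoint-score channel + D descriptor channels."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim + 1, 3, padding=1),
        )

    def forward(self, img):                     # img: (B, 3, H, W)
        out = self.conv(img)
        scores = out[:, :1].sigmoid()           # keypoint heat map
        desc = F.normalize(out[:, 1:], dim=1)   # unit-norm dense descriptors
        return scores, desc


def top_k_keypoints(scores, desc, k=256):
    """Pick the k strongest locations and gather their descriptors."""
    B, _, H, W = scores.shape
    flat = scores.flatten(1)                                  # (B, H*W)
    conf, idx = flat.topk(k, dim=1)                           # detection confidences
    xy = torch.stack((idx % W, torch.div(idx, W, rounding_mode="floor")),
                     dim=-1).float()                          # pixel coordinates
    d = desc.flatten(2).transpose(1, 2)                       # (B, H*W, D)
    d = torch.gather(d, 1, idx.unsqueeze(-1).expand(-1, -1, d.shape[-1]))
    return xy, d, conf


def soft_match(desc0, desc1, temperature=0.1):
    """Differentiable matching: dual-softmax over descriptor similarities."""
    sim = desc0 @ desc1.transpose(1, 2) / temperature         # (B, K, K)
    return F.softmax(sim, dim=2) * F.softmax(sim, dim=1)


# One forward pass on a random image pair (shapes only, no real data).
backbone = Backbone()
img0, img1 = torch.rand(1, 3, 240, 320), torch.rand(1, 3, 240, 320)
xy0, d0, c0 = top_k_keypoints(*backbone(img0))
xy1, d1, c1 = top_k_keypoints(*backbone(img1))
P = soft_match(d0, d1)                        # soft correspondence weights
w = P.sum(dim=2)                              # per-keypoint match confidence
xy1_hat = (P @ xy1) / w.unsqueeze(-1).clamp(min=1e-6)  # expected match location
# A robust pose solver (learned, as in the paper, or classical RANSAC-based)
# would consume the weighted correspondences (xy0, xy1_hat, w); omitted here.
print(xy0.shape, xy1_hat.shape, w.shape)
```
In the visual-localization setting described in the abstract, the relative pose estimated for a query/database pair would then be composed with the known pose of the retrieved database image to obtain the query pose.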
Related papers
- KRONC: Keypoint-based Robust Camera Optimization for 3D Car Reconstruction [58.04846444985808]
This paper introduces KRONC, a novel approach aimed at inferring view poses by leveraging prior knowledge about the object to reconstruct and its representation through semantic keypoints.
With a focus on vehicle scenes, KRONC is able to estimate the position of the views as a solution to a light optimization problem targeting the convergence of keypoints' back-projections to a singular point.
arXiv Detail & Related papers (2024-09-09T08:08:05Z)
- Mismatched: Evaluating the Limits of Image Matching Approaches and Benchmarks [9.388897214344572]
Three-dimensional (3D) reconstruction from two-dimensional images is an active research field in computer vision.
Traditionally, parametric techniques have been employed for this task.
Recent advancements have seen a shift towards learning-based methods.
arXiv Detail & Related papers (2024-08-29T11:16:34Z)
- SRPose: Two-view Relative Pose Estimation with Sparse Keypoints [51.49105161103385]
SRPose is a sparse keypoint-based framework for two-view relative pose estimation in camera-to-world and object-to-camera scenarios.
It achieves competitive or superior performance compared to state-of-the-art methods in terms of accuracy and speed.
It is robust to different image sizes and camera intrinsics, and can be deployed with low computing resources.
arXiv Detail & Related papers (2024-07-11T05:46:35Z)
- LocaliseBot: Multi-view 3D object localisation with differentiable rendering for robot grasping [9.690844449175948]
We focus on object pose estimation.
Our approach relies on three pieces of information: multiple views of the object, the camera's parameters at those viewpoints, and 3D CAD models of objects.
We show that the estimated object pose results in 99.65% grasp accuracy with the ground truth grasp candidates.
arXiv Detail & Related papers (2023-11-14T14:27:53Z)
- Learning-based Relational Object Matching Across Views [63.63338392484501]
We propose a learning-based approach which combines local keypoints with novel object-level features for matching object detections between RGB images.
We train our object-level matching features based on appearance and inter-frame and cross-frame spatial relations between objects in an associative graph neural network.
arXiv Detail & Related papers (2023-05-03T19:36:51Z)
- ImPosIng: Implicit Pose Encoding for Efficient Camera Pose Estimation [2.6808541153140077]
Implicit Pose Encoding (ImPosing) embeds images and camera poses into a common latent representation with two separate neural networks.
By evaluating candidates through the latent space in a hierarchical manner, the camera position and orientation are not directly regressed but refined; a toy sketch of this embedding-based candidate scoring is given after the list of related papers.
arXiv Detail & Related papers (2022-05-05T13:33:25Z)
- Combining Local and Global Pose Estimation for Precise Tracking of Similar Objects [2.861848675707602]
We present a multi-object 6D detection and tracking pipeline for potentially similar and non-textured objects.
A new network architecture, trained solely with synthetic images, allows simultaneous pose estimation of multiple objects.
We show how the system can be used in a real AR assistance application within the field of construction.
arXiv Detail & Related papers (2022-01-31T14:36:57Z)
- Learning Dynamics via Graph Neural Networks for Human Pose Estimation and Tracking [98.91894395941766]
We propose a novel online approach to learning the pose dynamics, which are independent of pose detections in the current frame.
Specifically, we derive this prediction of dynamics through a graph neural network (GNN) that explicitly accounts for both spatial-temporal and visual information.
Experiments on PoseTrack 2017 and PoseTrack 2018 datasets demonstrate that the proposed method achieves results superior to the state of the art on both human pose estimation and tracking tasks.
arXiv Detail & Related papers (2021-06-07T16:36:50Z)
- Leveraging Photometric Consistency over Time for Sparsely Supervised Hand-Object Reconstruction [118.21363599332493]
We present a method to leverage photometric consistency across time when annotations are only available for a sparse subset of frames in a video.
Our model is trained end-to-end on color images to jointly reconstruct hands and objects in 3D by inferring their poses.
We achieve state-of-the-art results on 3D hand-object reconstruction benchmarks and demonstrate that our approach allows us to improve the pose estimation accuracy.
arXiv Detail & Related papers (2020-04-28T12:03:14Z)
- GeoGraph: Learning graph-based multi-view object detection with geometric cues end-to-end [10.349116753411742]
We propose an end-to-end learnable approach that detects static urban objects from multiple views.
Our method relies on a Graph Neural Network (GNN) to detect all objects and output their geographic positions.
Our GNN simultaneously models relative pose and image evidence, and is further able to deal with an arbitrary number of input views.
arXiv Detail & Related papers (2020-03-23T09:40:35Z)
- Geometrically Mappable Image Features [85.81073893916414]
Vision-based localization of an agent in a map is an important problem in robotics and computer vision.
We propose a method that learns image features targeted for image-retrieval-based localization.
arXiv Detail & Related papers (2020-03-21T15:36:38Z)
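As flagged in the ImPosIng entry above, the following toy sketch illustrates the embedding-based candidate scoring that entry summarizes. It is not the authors' architecture: the encoder sizes, the 7-DoF pose format (translation plus unit quaternion), and the random candidate grid are illustrative assumptions.
```python
# Toy sketch of scoring candidate camera poses in a shared latent space,
# rather than regressing the pose directly. All details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImageEncoder(nn.Module):
    """Embeds an RGB image into a unit-norm latent vector."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )

    def forward(self, img):
        return F.normalize(self.net(img), dim=-1)


class PoseEncoder(nn.Module):
    """Embeds a 7-D pose (x, y, z + unit quaternion) into the same space."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(7, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, pose):
        return F.normalize(self.net(pose), dim=-1)


img_enc, pose_enc = ImageEncoder(), PoseEncoder()
query = torch.rand(1, 3, 120, 160)

# Coarse candidate poses (e.g. a grid over the mapped area); the best-scoring
# candidate would be refined with a finer grid around it in a second pass.
candidates = torch.randn(512, 7)
candidates[:, 3:] = F.normalize(candidates[:, 3:], dim=-1)  # unit quaternions

scores = pose_enc(candidates) @ img_enc(query).squeeze(0)   # (512,) similarities
best = candidates[scores.argmax()]
print("coarse pose estimate:", best)
```
A denser second set of candidates around the best coarse estimate would mimic the hierarchical refinement mentioned in the summary; training would pull the embeddings of an image and its true pose together, for example with a contrastive loss.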