3D-Aware Hypothesis & Verification for Generalizable Relative Object
Pose Estimation
- URL: http://arxiv.org/abs/2310.03534v1
- Date: Thu, 5 Oct 2023 13:34:07 GMT
- Title: 3D-Aware Hypothesis & Verification for Generalizable Relative Object Pose Estimation
- Authors: Chen Zhao, Tong Zhang, Mathieu Salzmann
- Abstract summary: We present a new hypothesis-and-verification framework to tackle the problem of generalizable object pose estimation.
To measure reliability, we introduce a 3D-aware verification that explicitly applies 3D transformations to the 3D object representations learned from the two input images.
- Score: 69.73691477825079
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prior methods that tackle the problem of generalizable object pose estimation
rely heavily on having dense views of the unseen object. By contrast, we address
the scenario where only a single reference view of the object is available. Our
goal then is to estimate the relative object pose between this reference view
and a query image that depicts the object in a different pose. In this
scenario, robust generalization is imperative due to the presence of unseen
objects during testing and the large-scale object pose variation between the
reference and the query. To this end, we present a new
hypothesis-and-verification framework, in which we generate and evaluate
multiple pose hypotheses, ultimately selecting the most reliable one as the
relative object pose. To measure reliability, we introduce a 3D-aware
verification that explicitly applies 3D transformations to the 3D object
representations learned from the two input images. Our comprehensive
experiments on the Objaverse, LINEMOD, and CO3D datasets evidence the superior
accuracy of our approach in relative pose estimation and its robustness to
large-scale pose variations when dealing with unseen objects.
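As a concrete illustration, the following is a minimal sketch of the hypothesis-and-verification loop described in the abstract. It is an assumed reconstruction, not the authors' implementation: the learned 3D object representations are stood in for by plain point sets, the learned verification score by a Chamfer-style alignment measure, and all names (score_hypothesis, select_relative_pose, ref_points, query_points) are hypothetical.

# Hypothetical sketch of 3D-aware hypothesis and verification (not the
# authors' code). Point sets stand in for the learned 3D representations.
import numpy as np

def score_hypothesis(ref_points, query_points, R, t):
    # Explicitly apply the hypothesized rigid 3D transformation (R, t)
    # to the reference object's 3D representation.
    transformed = ref_points @ R.T + t
    # Compare against the query representation; a Chamfer-style distance
    # stands in for the learned verification score (higher = more reliable).
    dists = np.linalg.norm(transformed[:, None, :] - query_points[None, :, :], axis=-1)
    return -dists.min(axis=1).mean()

def select_relative_pose(ref_points, query_points, hypotheses):
    # Evaluate every candidate pose (R, t) and keep the most reliable one.
    scores = [score_hypothesis(ref_points, query_points, R, t) for (R, t) in hypotheses]
    return hypotheses[int(np.argmax(scores))]

In the paper itself, both the object representations and the verification score are learned from the two input images; the geometric stand-ins above are only meant to convey the control flow of the framework.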
Related papers
- DVMNet: Computing Relative Pose for Unseen Objects Beyond Hypotheses [59.51874686414509]
Current approaches approximate the continuous pose representation with a large number of discrete pose hypotheses.
We present a Deep Voxel Matching Network (DVMNet) that eliminates the need for pose hypotheses and computes the relative object pose in a single pass.
Our method delivers more accurate relative pose estimates for novel objects at a lower computational cost compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-03-20T15:41:32Z)
- Extreme Two-View Geometry From Object Poses with Diffusion Models [21.16779160086591]
We harness the power of object priors to accurately determine two-view geometry in the face of extreme viewpoint changes.
In experiments, our method has demonstrated extraordinary robustness and resilience to large viewpoint changes.
arXiv Detail & Related papers (2024-02-05T08:18:47Z)
- LocaliseBot: Multi-view 3D object localisation with differentiable rendering for robot grasping [9.690844449175948]
We focus on object pose estimation.
Our approach relies on three pieces of information: multiple views of the object, the camera's parameters at those viewpoints, and 3D CAD models of objects.
We show that the estimated object pose results in 99.65% grasp accuracy with the ground truth grasp candidates.
arXiv Detail & Related papers (2023-11-14T14:27:53Z)
- ShapeShift: Superquadric-based Object Pose Estimation for Robotic Grasping [85.38689479346276]
Current techniques heavily rely on a reference 3D object, limiting their generalizability and making it expensive to expand to new object categories.
This paper proposes ShapeShift, a superquadric-based framework for object pose estimation that predicts the object's pose relative to a primitive shape which is fitted to the object.
arXiv Detail & Related papers (2023-04-10T20:55:41Z)
- What's in your hands? 3D Reconstruction of Generic Objects in Hands [49.12461675219253]
Our work aims to reconstruct hand-held objects given a single RGB image.
In contrast to prior works that typically assume known 3D templates and reduce the problem to 3D pose estimation, our work reconstructs generic hand-held objects without knowing their 3D templates.
arXiv Detail & Related papers (2022-04-14T17:59:02Z)
- CosyPose: Consistent multi-view multi-object 6D pose estimation [48.097599674329004]
First, we present a single-view single-object 6D pose estimation method, which we use to generate 6D object pose hypotheses.
Second, we develop a robust method for matching individual 6D object pose hypotheses across different input images.
Third, we develop a method for global scene refinement given multiple object hypotheses and their correspondences across views.
arXiv Detail & Related papers (2020-08-19T14:11:56Z)
- Single View Metrology in the Wild [94.7005246862618]
We present a novel approach to single view metrology that can recover the absolute scale of a scene represented by 3D heights of objects or camera height above the ground.
Our method relies on data-driven priors learned by a deep network specifically designed to imbibe weakly supervised constraints from the interplay of the unknown camera with 3D entities such as object heights.
We demonstrate state-of-the-art qualitative and quantitative results on several datasets as well as applications including virtual object insertion.
arXiv Detail & Related papers (2020-07-18T22:31:33Z)