Robust, Occlusion-aware Pose Estimation for Objects Grasped by Adaptive
Hands
- URL: http://arxiv.org/abs/2003.03518v1
- Date: Sat, 7 Mar 2020 05:51:03 GMT
- Title: Robust, Occlusion-aware Pose Estimation for Objects Grasped by Adaptive
Hands
- Authors: Bowen Wen, Chaitanya Mitash, Sruthi Soorian, Andrew Kimmel, Avishai
Sintov and Kostas E. Bekris
- Abstract summary: Many manipulation tasks, such as within-hand manipulation, require the object's pose relative to a robot hand.
This paper presents a depth-based framework, which aims for robust pose estimation and short response times.
- Score: 16.343365158924183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many manipulation tasks, such as placement or within-hand manipulation,
require the object's pose relative to a robot hand. The task is difficult when
the hand significantly occludes the object. It is especially hard for adaptive
hands, for which it is not easy to detect the fingers' configuration. In
addition, RGB-only approaches face issues with texture-less objects or when the
hand and the object look similar. This paper presents a depth-based framework,
which aims for robust pose estimation and short response times. The approach
detects the adaptive hand's state via an efficient parallel search for the
configuration with the highest overlap between the hand's model and the point cloud. The hand's point
cloud is pruned and robust global registration is performed to generate object
pose hypotheses, which are clustered. False hypotheses are pruned via physical
reasoning. The remaining poses' quality is evaluated given agreement with
observed data. Extensive evaluation on synthetic and real data demonstrates the
accuracy and computational efficiency of the framework when applied on
challenging, highly occluded scenarios for different object types. An ablation
study shows how each of the framework's components contributes to performance. This work
also provides a dataset for in-hand 6D object pose estimation. Code and dataset
are available at: https://github.com/wenbowen123/icra20-hand-object-pose
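The abstract outlines a multi-stage, depth-only pipeline: detect the hand's configuration by overlap with the observed point cloud, remove hand points, generate object pose hypotheses via robust global registration, then score hypotheses by agreement with the observation. Below is a minimal sketch of those stages using Open3D; this is an assumption-laden illustration, not the authors' released implementation (the GitHub repository above is authoritative). All function names, thresholds, and the choice of FPFH+RANSAC for global registration are illustrative, and the clustering and physical-reasoning pruning steps are omitted.

```python
# Sketch of the depth-based in-hand pose estimation stages (assumes Open3D >= 0.13).
import numpy as np
import open3d as o3d


def hand_overlap_score(hand_model_pcd, observed_pcd, max_dist=0.005):
    """Fraction of hand-model points within max_dist of the observed cloud.
    The paper evaluates many candidate hand configurations in parallel and
    keeps the one with the highest overlap; only the score is shown here."""
    dists = np.asarray(hand_model_pcd.compute_point_cloud_distance(observed_pcd))
    return float(np.mean(dists < max_dist))


def prune_hand_points(observed_pcd, hand_model_pcd, max_dist=0.005):
    """Remove observed points explained by the detected hand model."""
    dists = np.asarray(observed_pcd.compute_point_cloud_distance(hand_model_pcd))
    keep = np.where(dists > max_dist)[0]
    return observed_pcd.select_by_index(keep)


def global_registration(object_model_pcd, scene_pcd, voxel=0.005):
    """One object-pose hypothesis via FPFH features + RANSAC (illustrative choice)."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    src, src_f = preprocess(object_model_pcd)
    tgt, tgt_f = preprocess(scene_pcd)
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_f, tgt_f, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation


def hypothesis_score(object_model_pcd, scene_pcd, pose, max_dist=0.005):
    """Score a pose hypothesis by agreement with the observed (hand-pruned) cloud."""
    model = o3d.geometry.PointCloud(object_model_pcd)  # copy before transforming
    model.transform(pose)
    dists = np.asarray(model.compute_point_cloud_distance(scene_pcd))
    return float(np.mean(dists < max_dist))
```

In a full pipeline, several registration runs would produce a pool of hypotheses, which are then clustered, pruned by physical reasoning (e.g. rejecting poses that penetrate the hand), and ranked with a score such as `hypothesis_score` above.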
Related papers
- DVMNet: Computing Relative Pose for Unseen Objects Beyond Hypotheses [59.51874686414509]
Current approaches approximate the continuous pose representation with a large number of discrete pose hypotheses.
We present a Deep Voxel Matching Network (DVMNet) that eliminates the need for pose hypotheses and computes the relative object pose in a single pass.
Our method delivers more accurate relative pose estimates for novel objects at a lower computational cost compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-03-20T15:41:32Z)
- Context-aware 6D Pose Estimation of Known Objects using RGB-D data [3.48122098223937]
6D object pose estimation has been a research topic in the field of computer vision and robotics.
We present an architecture that, unlike prior work, is context-aware.
Our experiments show an enhancement in the accuracy of about 3.2% over the LineMOD dataset.
arXiv Detail & Related papers (2022-12-11T18:01:01Z)
- Interacting Hand-Object Pose Estimation via Dense Mutual Attention [97.26400229871888]
3D hand-object pose estimation is the key to the success of many computer vision applications.
We propose a novel dense mutual attention mechanism that is able to model fine-grained dependencies between the hand and the object.
Our method is able to produce physically plausible poses with high quality and real-time inference speed.
arXiv Detail & Related papers (2022-11-16T10:01:33Z)
- Unseen Object 6D Pose Estimation: A Benchmark and Baselines [62.8809734237213]
We propose a new task that enables and facilitates algorithms to estimate the 6D pose of novel objects during testing.
We collect a dataset with both real and synthetic images and up to 48 unseen objects in the test set.
By training an end-to-end 3D correspondences network, our method finds corresponding points between an unseen object and a partial view RGBD image accurately and efficiently.
arXiv Detail & Related papers (2022-06-23T16:29:53Z)
- What's in your hands? 3D Reconstruction of Generic Objects in Hands [49.12461675219253]
Our work aims to reconstruct hand-held objects given a single RGB image.
In contrast to prior works that typically assume known 3D templates and reduce the problem to 3D pose estimation, our work reconstructs generic hand-held objects without knowing their 3D templates.
arXiv Detail & Related papers (2022-04-14T17:59:02Z)
- CPPF: Towards Robust Category-Level 9D Pose Estimation in the Wild [45.93626858034774]
We propose a category-level PPF voting method to achieve accurate, robust, and generalizable 9D pose estimation in the wild.
A novel coarse-to-fine voting algorithm is proposed to eliminate noisy point pair samples and generate final predictions from the population.
Our method is on par with current state-of-the-art methods that use real-world training data.
arXiv Detail & Related papers (2022-03-07T01:36:22Z)
- Leveraging Photometric Consistency over Time for Sparsely Supervised Hand-Object Reconstruction [118.21363599332493]
We present a method to leverage photometric consistency across time when annotations are only available for a sparse subset of frames in a video.
Our model is trained end-to-end on color images to jointly reconstruct hands and objects in 3D by inferring their poses.
We achieve state-of-the-art results on 3D hand-object reconstruction benchmarks and demonstrate that our approach allows us to improve the pose estimation accuracy.
arXiv Detail & Related papers (2020-04-28T12:03:14Z)
- Measuring Generalisation to Unseen Viewpoints, Articulations, Shapes and Objects for 3D Hand Pose Estimation under Hand-Object Interaction [137.28465645405655]
HANDS'19 is a challenge to evaluate the abilities of current 3D hand pose estimators (HPEs) to interpolate and extrapolate the poses of a training set.
We show that the accuracy of state-of-the-art methods can drop, and that they fail mostly on poses absent from the training set.
arXiv Detail & Related papers (2020-03-30T19:28:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.