SA6D: Self-Adaptive Few-Shot 6D Pose Estimator for Novel and Occluded
Objects
- URL: http://arxiv.org/abs/2308.16528v1
- Date: Thu, 31 Aug 2023 08:19:26 GMT
- Title: SA6D: Self-Adaptive Few-Shot 6D Pose Estimator for Novel and Occluded
Objects
- Authors: Ning Gao, Ngo Anh Vien, Hanna Ziesche, Gerhard Neumann
- Abstract summary: We propose a few-shot pose estimation (FSPE) approach called SA6D.
It uses a self-adaptive segmentation module to identify the novel target object and construct a point cloud model of the target object.
We evaluate SA6D on real-world tabletop object datasets and demonstrate that SA6D outperforms existing FSPE methods.
- Score: 24.360831082478313
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: 6D pose estimation is a critical capability for meaningful robotic
manipulation of objects in the real world. Most existing approaches have
difficulty extending their predictions to scenarios where novel object instances
are continuously introduced, especially under heavy occlusions. In this work, we
propose a few-shot pose estimation (FSPE) approach called SA6D, which uses a
self-adaptive segmentation module to identify the novel target object and
construct a point cloud model of the target object using only a small number of
cluttered reference images. Unlike existing methods, SA6D does not require
object-centric reference images or any additional object information, making it
a more generalizable and scalable solution across categories. We evaluate SA6D
on real-world tabletop object datasets and demonstrate that SA6D outperforms
existing FSPE methods, particularly in cluttered scenes with occlusions, while
requiring fewer reference images.
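The abstract does not spell out implementation details, but the point-cloud model it refers to can be built in a standard way: back-project the segmented depth pixels of each reference view into 3D with the camera intrinsics, move them into a common frame with the known (or estimated) camera poses, and merge. A minimal NumPy sketch of that generic step, with hypothetical function names and toy inputs:

```python
import numpy as np

def backproject(depth, mask, K):
    """Lift masked depth pixels (meters) into 3D camera coordinates."""
    v, u = np.nonzero(mask & (depth > 0))
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)                # (N, 3)

def build_object_model(depths, masks, cam_poses, K):
    """Merge segmented depth views into one object point cloud.

    cam_poses: list of 4x4 camera-to-world transforms, one per reference view.
    """
    points = []
    for depth, mask, T in zip(depths, masks, cam_poses):
        p_cam = backproject(depth, mask, K)
        p_world = (T[:3, :3] @ p_cam.T).T + T[:3, 3]  # into the common frame
        points.append(p_world)
    return np.concatenate(points, axis=0)

# Toy usage with a single synthetic reference view.
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
depth = np.full((480, 640), 0.8)                      # flat plane 0.8 m away
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 300:340] = True                         # stand-in segmentation of the target
model = build_object_model([depth], [mask], [np.eye(4)], K)
print(model.shape)                                    # (3200, 3)
```

In the paper, SA6D's self-adaptive segmentation module would supply the masks from the cluttered reference images; here they are hard-coded purely for illustration.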
Related papers
- Learning to Estimate 6DoF Pose from Limited Data: A Few-Shot,
Generalizable Approach using RGB Images [60.0898989456276]
We present a new framework named Cas6D for few-shot 6DoF pose estimation that is generalizable and uses only RGB images.
To address the false positives of target object detection in the extreme few-shot setting, our framework utilizes a self-supervised pre-trained ViT to learn robust feature representations.
Experimental results on the LINEMOD and GenMOP datasets demonstrate that Cas6D outperforms state-of-the-art methods by 9.2% and 3.8% accuracy (Proj-5) under the 32-shot setting.
arXiv Detail & Related papers (2023-06-13T07:45:42Z)
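Cas6D's use of self-supervised ViT features to suppress false positives can be pictured as descriptor matching against the support views: score each detection candidate by its best cosine similarity to the reference descriptors and discard low-scoring ones. A hedged sketch with random vectors standing in for the actual ViT features (threshold and names are illustrative, not from the paper):

```python
import numpy as np

def filter_candidates(cand_feats, ref_feats, sim_thresh=0.6):
    """Keep detection candidates whose descriptor matches a reference view.

    cand_feats: (C, D) one pooled feature vector per detection candidate.
    ref_feats:  (R, D) one pooled feature vector per support/reference image.
    Returns indices of candidates whose best cosine similarity exceeds the threshold.
    """
    c = cand_feats / np.linalg.norm(cand_feats, axis=1, keepdims=True)
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    sim = c @ r.T                      # (C, R) cosine similarities
    best = sim.max(axis=1)             # best match against any reference view
    return np.nonzero(best >= sim_thresh)[0], best

# Toy usage: random descriptors standing in for ViT features.
rng = np.random.default_rng(0)
refs = rng.normal(size=(16, 384))
cands = np.vstack([refs[0] + 0.05 * rng.normal(size=384),   # true positive
                   rng.normal(size=(3, 384))])              # likely false positives
keep, scores = filter_candidates(cands, refs)
print(keep, scores.round(2))           # only the first candidate survives
```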
- Unseen Object 6D Pose Estimation: A Benchmark and Baselines [62.8809734237213]
We propose a new task that enables and facilitates algorithms to estimate the 6D pose of novel objects during testing.
We collect a dataset with both real and synthetic images and up to 48 unseen objects in the test set.
By training an end-to-end 3D correspondences network, our method finds corresponding points between an unseen object and a partial view RGBD image accurately and efficiently.
arXiv Detail & Related papers (2022-06-23T16:29:53Z)
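Once a correspondence network of the kind described above predicts matching 3D points between the unseen object and the partial RGB-D view, the rigid pose is usually recovered in closed form with the SVD-based Kabsch/Umeyama alignment. A small self-contained sketch of that alignment step (the standard solver such correspondences would feed, not the paper's network):

```python
import numpy as np

def kabsch(src, dst):
    """Rigid transform (R, t) minimizing ||R @ src_i + t - dst_i||^2.

    src, dst: (N, 3) corresponding 3D points (e.g. model points and their
    predicted locations in the camera frame).
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known rotation/translation from noiseless correspondences.
rng = np.random.default_rng(1)
model_pts = rng.normal(size=(100, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.1, -0.2, 0.9])
scene_pts = model_pts @ R_true.T + t_true
R_est, t_est = kabsch(model_pts, scene_pts)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))   # True True
```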
- FS6D: Few-Shot 6D Pose Estimation of Novel Objects [116.34922994123973]
6D object pose estimation networks are limited in their capability to scale to large numbers of object instances.
In this work, we study a new open-set problem, few-shot 6D object pose estimation: estimating the 6D pose of an unknown object from a few support views without extra training.
arXiv Detail & Related papers (2022-03-28T10:31:29Z)
- OVE6D: Object Viewpoint Encoding for Depth-based 6D Object Pose Estimation [12.773040823634908]
We propose a universal framework, called OVE6D, for model-based 6D object pose estimation from a single depth image and a target object mask.
Our model is trained purely on synthetic data rendered from ShapeNet and, unlike most existing methods, generalizes well to new real-world objects without any fine-tuning.
We show that OVE6D outperforms some contemporary deep learning-based pose estimation methods specifically trained for individual objects or datasets with real-world training data.
arXiv Detail & Related papers (2022-03-02T12:51:33Z)
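OVE6D's viewpoint encoding can be read as a codebook lookup: embeddings of rendered depth views of the object model are precomputed, and at test time the embedding of the observed, mask-cropped depth patch retrieves the most similar stored viewpoint as an initial rotation. A hedged sketch with a random stand-in for the learned encoder (sizes and names are illustrative, not from the paper):

```python
import numpy as np

def retrieve_viewpoint(query_emb, codebook_embs, codebook_rots):
    """Return the codebook rotation whose embedding best matches the query.

    codebook_embs: (M, D) embeddings of rendered viewpoints of the object model.
    codebook_rots: (M, 3, 3) rotations used to render each viewpoint.
    """
    q = query_emb / np.linalg.norm(query_emb)
    c = codebook_embs / np.linalg.norm(codebook_embs, axis=1, keepdims=True)
    scores = c @ q                        # cosine similarity to every stored view
    best = int(np.argmax(scores))
    return codebook_rots[best], scores[best]

# Toy usage: random embeddings stand in for a learned depth-view encoder.
rng = np.random.default_rng(2)
codebook_embs = rng.normal(size=(4000, 64))        # e.g. viewpoints sampled over SO(3)
codebook_rots = np.tile(np.eye(3), (4000, 1, 1))   # placeholder rotations
query = codebook_embs[123] + 0.01 * rng.normal(size=64)
R_init, score = retrieve_viewpoint(query, codebook_embs, codebook_rots)
print(score)                                        # close to 1.0 for the matching view
```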
- Spatial Attention Improves Iterative 6D Object Pose Estimation [52.365075652976735]
We propose a new method for 6D pose estimation refinement from RGB images.
Our main insight is that after the initial pose estimate, it is important to pay attention to distinct spatial features of the object.
We experimentally show that this approach learns to attend to salient spatial features and learns to ignore occluded parts of the object, leading to better pose estimation across datasets.
arXiv Detail & Related papers (2021-01-05T17:18:52Z)
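The core idea, attending to distinct spatial features and ignoring occluded parts, is commonly realized with a lightweight gating block that predicts a per-location weight map over the feature map. A minimal PyTorch sketch of such a block (a generic formulation, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Predict a per-location weight map and use it to gate the feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)   # 1-channel attention logits

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W); attn in [0, 1] can down-weight e.g. occluded regions
        attn = torch.sigmoid(self.score(feats))
        return feats * attn

# Toy usage on a random feature map.
block = SpatialAttention(channels=64)
x = torch.randn(2, 64, 32, 32)
y = block(x)
print(y.shape)          # torch.Size([2, 64, 32, 32])
```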
- CosyPose: Consistent multi-view multi-object 6D pose estimation [48.097599674329004]
First, we present a single-view, single-object 6D pose estimation method, which we use to generate 6D object pose hypotheses.
Second, we develop a robust method for matching individual 6D object pose hypotheses across different input images.
Third, we develop a method for global scene refinement given multiple object hypotheses and their correspondences across views.
arXiv Detail & Related papers (2020-08-19T14:11:56Z)
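The cross-view matching step can be illustrated with a simple consistency check: when camera extrinsics are known, two single-view hypotheses of the same physical object should coincide once expressed in a shared world frame. A hedged NumPy sketch of that check (a simplification of CosyPose, which solves matching and scene refinement jointly):

```python
import numpy as np

def to_world(T_cam_obj, T_world_cam):
    """Compose a camera-frame object pose with the camera's world pose."""
    return T_world_cam @ T_cam_obj

def match_hypotheses(hyps_a, hyps_b, T_world_a, T_world_b, dist_thresh=0.02):
    """Pair hypotheses from two views whose world-frame translations agree.

    hyps_a, hyps_b: lists of 4x4 object poses, each in its own camera frame.
    Returns (i, j) index pairs whose object centers lie within dist_thresh meters.
    """
    pairs = []
    for i, Ta in enumerate(hyps_a):
        ca = to_world(Ta, T_world_a)[:3, 3]
        for j, Tb in enumerate(hyps_b):
            cb = to_world(Tb, T_world_b)[:3, 3]
            if np.linalg.norm(ca - cb) < dist_thresh:
                pairs.append((i, j))
    return pairs

# Toy usage: the same object seen from two cameras 0.5 m apart along x.
T_world_a = np.eye(4)
T_world_b = np.eye(4); T_world_b[0, 3] = 0.5
obj_world = np.eye(4); obj_world[:3, 3] = [0.2, 0.0, 0.6]
hyp_a = np.linalg.inv(T_world_a) @ obj_world       # object pose in camera a
hyp_b = np.linalg.inv(T_world_b) @ obj_world       # object pose in camera b
print(match_hypotheses([hyp_a], [hyp_b], T_world_a, T_world_b))   # [(0, 0)]
```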
- Single Shot 6D Object Pose Estimation [11.37625512264302]
We introduce a novel single shot approach for 6D object pose estimation of rigid objects based on depth images.
A fully convolutional neural network is employed, where the 3D input data is spatially discretized and pose estimation is considered as a regression task.
With 65 fps on a GPU, our Object Pose Network (OP-Net) is extremely fast, is optimized end-to-end, and estimates the 6D pose of multiple objects in the image simultaneously.
arXiv Detail & Related papers (2020-04-27T11:59:11Z)
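Treating pose estimation as regression over a spatially discretized input suggests a fully convolutional head that outputs, for every cell of the grid, an objectness score plus translation and rotation parameters. A hedged PyTorch sketch along those lines (layer sizes and output parameterization are illustrative, not OP-Net's actual architecture):

```python
import torch
import torch.nn as nn

class PoseRegressionHead(nn.Module):
    """Fully convolutional head: per-cell objectness + 6D pose parameters.

    Output channels per cell: 1 objectness logit, 3 translation offsets,
    4 quaternion components.
    """

    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(128, 1 + 3 + 4, kernel_size=1)

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        out = self.head(self.backbone(depth))       # (B, 8, H/8, W/8): one pose per cell
        obj, t, q = out[:, :1], out[:, 1:4], out[:, 4:]
        q = q / q.norm(dim=1, keepdim=True)         # normalize quaternions
        return torch.cat([obj, t, q], dim=1)

# Toy usage on a batch of discretized depth images.
net = PoseRegressionHead()
pred = net(torch.randn(2, 1, 128, 128))
print(pred.shape)       # torch.Size([2, 8, 16, 16])
```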
- CPS++: Improving Class-level 6D Pose and Shape Estimation From Monocular Images With Self-Supervised Learning [74.53664270194643]
Modern monocular 6D pose estimation methods can only cope with a handful of object instances.
We propose a novel method for class-level monocular 6D pose estimation, coupled with metric shape retrieval.
We experimentally demonstrate that we can retrieve precise 6D poses and metric shapes from a single RGB image.
arXiv Detail & Related papers (2020-03-12T15:28:13Z)