FS6D: Few-Shot 6D Pose Estimation of Novel Objects
- URL: http://arxiv.org/abs/2203.14628v1
- Date: Mon, 28 Mar 2022 10:31:29 GMT
- Title: FS6D: Few-Shot 6D Pose Estimation of Novel Objects
- Authors: Yisheng He, Yao Wang, Haoqiang Fan, Jian Sun, Qifeng Chen
- Abstract summary: 6D object pose estimation networks are limited in their capability to scale to large numbers of object instances.
In this work, we study a new open-set problem, few-shot 6D object pose estimation: estimating the 6D pose of an unknown object from a few support views without extra training.
- Score: 116.34922994123973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 6D object pose estimation networks are limited in their capability to scale
to large numbers of object instances due to the closed-set assumption and their
reliance on high-fidelity object CAD models. In this work, we study a new
open-set problem, few-shot 6D object pose estimation: estimating the 6D pose of
an unknown object from a few support views without extra training. To tackle the
problem, we point out the importance of fully exploring the appearance and
geometric relationship between the given support views and query scene patches
and propose a dense prototypes matching framework by extracting and matching
dense RGBD prototypes with transformers. Moreover, we show that the priors from
diverse appearances and shapes are crucial to the generalization capability
under the problem setting and thus propose a large-scale RGBD photorealistic
dataset (ShapeNet6D) for network pre-training. A simple and effective online
texture blending approach is also introduced to eliminate the domain gap from
the synthetic dataset, which enriches appearance diversity at low cost.
Finally, we discuss possible solutions to this problem and establish benchmarks
on popular datasets to facilitate future research. The project page is at
https://fs6d.github.io/.
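The abstract describes the matching mechanism only at a high level. As a minimal, hypothetical sketch (assuming PyTorch; `DensePrototypeMatcher` and all module names and dimensions are invented for illustration, not the authors' implementation), dense support-view prototypes could be matched against query-scene features with transformer cross-attention roughly as follows:

```python
# Hypothetical sketch (not the FS6D code): dense RGBD prototype matching
# with transformer cross-attention, as outlined at a high level in the abstract.
import torch
import torch.nn as nn

class DensePrototypeMatcher(nn.Module):
    """Matches dense query-scene features against support-view prototypes.

    All names and dimensions here are illustrative assumptions; the paper's
    actual architecture may differ.
    """

    def __init__(self, feat_dim: int = 256, num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        # Fuse appearance (RGB) and geometry (depth/point-cloud) features.
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)
        # Cross-attention layers: query patches attend to support prototypes.
        self.cross_attn = nn.ModuleList(
            [nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
             for _ in range(num_layers)]
        )
        self.norms = nn.ModuleList([nn.LayerNorm(feat_dim) for _ in range(num_layers)])

    def forward(self, query_rgb, query_geo, support_proto):
        # query_rgb, query_geo: (B, Nq, feat_dim) appearance / geometry features
        # support_proto:        (B, Ns, feat_dim) dense prototypes from support views
        q = self.fuse(torch.cat([query_rgb, query_geo], dim=-1))
        for attn, norm in zip(self.cross_attn, self.norms):
            upd, _ = attn(q, support_proto, support_proto)
            q = norm(q + upd)
        # Dense similarity between query points and prototypes; a 6D pose could then
        # be recovered from the resulting correspondences (e.g. RANSAC + least-squares).
        sim = torch.einsum("bqc,bsc->bqs", q, support_proto) / q.shape[-1] ** 0.5
        return sim.softmax(dim=-1)

# Toy usage with random features.
matcher = DensePrototypeMatcher()
B, Nq, Ns, C = 1, 512, 256, 256
match = matcher(torch.randn(B, Nq, C), torch.randn(B, Nq, C), torch.randn(B, Ns, C))
print(match.shape)  # torch.Size([1, 512, 256])
```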
Related papers
- Learning to Estimate 6DoF Pose from Limited Data: A Few-Shot,
Generalizable Approach using RGB Images [60.0898989456276]
We present a new framework named Cas6D for few-shot 6DoF pose estimation that is generalizable and uses only RGB images.
To address the false positives of target object detection in the extreme few-shot setting, our framework utilizes a self-supervised pre-trained ViT to learn robust feature representations.
Experimental results on the LINEMOD and GenMOP datasets demonstrate that Cas6D outperforms state-of-the-art methods by 9.2% and 3.8% accuracy (Proj-5) under the 32-shot setting.
arXiv Detail & Related papers (2023-06-13T07:45:42Z)
- MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare [84.80956484848505]
MegaPose is a method to estimate the 6D pose of novel objects, that is, objects unseen during training.
We present a 6D pose refiner based on a render&compare strategy which can be applied to novel objects.
We also introduce a novel approach for coarse pose estimation, which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner.
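As a rough illustration only (not MegaPose's actual pipeline; `render` and `refiner` are assumed placeholder components, and the real method parameterizes pose updates differently), a render-and-compare refiner can be viewed as an iterative loop:

```python
# Hypothetical sketch of a generic render-and-compare refinement loop.
import torch

def refine_pose(refiner, render, observed_img, obj_mesh, pose_init, num_iters=5):
    """Iteratively update a 6D pose so renderings match the observed image.

    refiner:      network predicting a pose update from (observed, rendered) images
    render:       renderer callable: (mesh, pose) -> image tensor
    observed_img: (B, 3, H, W) crop of the target object
    pose_init:    (B, 4, 4) initial object-to-camera transforms
    """
    pose = pose_init
    for _ in range(num_iters):
        rendered = render(obj_mesh, pose)                         # (B, 3, H, W)
        delta = refiner(torch.cat([observed_img, rendered], 1))   # (B, 4, 4) update
        pose = delta @ pose                                       # compose the update
    return pose
```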
arXiv Detail & Related papers (2022-12-13T19:30:03Z)
- PoET: Pose Estimation Transformer for Single-View, Multi-Object 6D Pose Estimation [6.860183454947986]
We present a transformer-based approach that takes an RGB image as input and predicts a 6D pose for each object in the image.
Besides the image, our network does not require any additional information such as depth maps or 3D object models.
We achieve state-of-the-art results for RGB-only approaches on the challenging YCB-V dataset.
arXiv Detail & Related papers (2022-11-25T14:07:14Z)
- MV6D: Multi-View 6D Pose Estimation on RGB-D Frames Using a Deep Point-wise Voting Network [14.754297065772676]
We present a novel multi-view 6D pose estimation method called MV6D.
We base our approach on the PVN3D network that uses a single RGB-D image to predict keypoints of the target objects.
In contrast to current multi-view pose detection networks such as CosyPose, our MV6D can learn the fusion of multiple perspectives in an end-to-end manner.
arXiv Detail & Related papers (2022-08-01T23:34:43Z)
- ShAPO: Implicit Representations for Multi-Object Shape, Appearance, and Pose Optimization [40.36229450208817]
We present ShAPO, a method for joint multi-object detection, 3D textured reconstruction, 6D object pose and size estimation.
Key to ShAPO is a single-shot pipeline to regress shape, appearance and pose latent codes along with the masks of each object instance.
Our method significantly outperforms all baselines on the NOCS dataset, with an 8% absolute improvement in mAP for 6D pose estimation.
arXiv Detail & Related papers (2022-07-27T17:59:31Z)
- Unseen Object 6D Pose Estimation: A Benchmark and Baselines [62.8809734237213]
We propose a new task that enables and facilitates algorithms to estimate the 6D pose of novel objects during testing.
We collect a dataset with both real and synthetic images and up to 48 unseen objects in the test set.
By training an end-to-end 3D correspondences network, our method finds corresponding points between an unseen object and a partial view RGBD image accurately and efficiently.
arXiv Detail & Related papers (2022-06-23T16:29:53Z)
- Occlusion-Aware Self-Supervised Monocular 6D Object Pose Estimation [88.8963330073454]
We propose a novel monocular 6D pose estimation approach by means of self-supervised learning.
We leverage current trends in noisy student training and differentiable rendering to further self-supervise the model.
Our proposed self-supervision outperforms all other methods relying on synthetic data.
arXiv Detail & Related papers (2022-03-19T15:12:06Z)
- CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation [19.284468553414918]
This paper studies the complex task of simultaneous multi-object 3D reconstruction, 6D pose and size estimation from a single-view RGB-D observation.
Existing approaches mainly follow a complex multi-stage pipeline which first localizes and detects each object instance in the image and then regresses to either their 3D meshes or 6D poses.
We present a simple one-stage approach to predict both the 3D shape and estimate the 6D pose and size jointly in a bounding-box free manner.
arXiv Detail & Related papers (2022-03-03T18:59:04Z)
- Shape Prior Deformation for Categorical 6D Object Pose and Size Estimation [62.618227434286]
We present a novel learning approach to recover the 6D poses and sizes of unseen object instances from an RGB-D image.
We propose a deep network to reconstruct the 3D object model by explicitly modeling the deformation from a pre-learned categorical shape prior.
arXiv Detail & Related papers (2020-07-16T16:45:05Z)