PFRL: Pose-Free Reinforcement Learning for 6D Pose Estimation
- URL: http://arxiv.org/abs/2102.12096v1
- Date: Wed, 24 Feb 2021 06:49:41 GMT
- Title: PFRL: Pose-Free Reinforcement Learning for 6D Pose Estimation
- Authors: Jianzhun Shao, Yuhang Jiang, Gu Wang, Zhigang Li, Xiangyang Ji
- Abstract summary: We formulate the 6D pose refinement as a Markov Decision Process.
We train a reinforcement learning policy using only 2D image annotations as weakly supervised 6D pose information.
Experiments on LINEMOD and T-LESS datasets demonstrate that our Pose-Free approach is able to achieve state-of-the-art performance.
- Score: 38.07629829885753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 6D pose estimation from a single RGB image is a challenging and vital task in
computer vision. The current mainstream deep model methods resort to 2D images
annotated with real-world ground-truth 6D object poses, whose collection is
fairly cumbersome and expensive, even unavailable in many cases. In this work,
to remove the burden of 6D annotations, we formulate 6D pose refinement
as a Markov Decision Process and train a reinforcement learning policy
with only 2D image annotations as weakly supervised 6D pose information, via a
careful reward definition and a composite reinforced optimization method for
efficient and effective policy training. Experiments on the LINEMOD and T-LESS
datasets demonstrate that our pose-free approach achieves
state-of-the-art performance among methods that do not use real-world
ground-truth 6D pose labels.
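The abstract's core idea, casting pose refinement as a Markov Decision Process whose reward comes only from 2D annotations, can be illustrated with a toy sketch. This is not the paper's method: the pose is reduced to a 2D translation, the "render" is a square mask, the learned policy is replaced by greedy action selection, and all names are illustrative. The point is only the MDP structure: state = current pose estimate, action = small pose increment, reward = overlap between the projected mask and an annotated 2D mask.

```python
import numpy as np

# Toy sketch of pose refinement as an MDP with a weakly supervised
# 2D reward. Illustrative only: pose = (x, y) translation, the
# "rendered" object is an 8x8 square, and a greedy search stands in
# for the learned policy.

def project_mask(pose, grid=32):
    """Crude stand-in for rendering: a square mask placed at the pose."""
    mask = np.zeros((grid, grid), dtype=bool)
    x, y = int(round(pose[0])), int(round(pose[1]))
    mask[max(0, y):y + 8, max(0, x):x + 8] = True
    return mask

def reward(pose, gt_mask):
    """Weakly supervised reward: IoU of projected vs. annotated 2D mask."""
    pred = project_mask(pose)
    inter = np.logical_and(pred, gt_mask).sum()
    union = np.logical_or(pred, gt_mask).sum()
    return inter / union if union else 0.0

def refine(init_pose, gt_mask, steps=50):
    """Greedy stand-in for the policy: at each step, take the pose
    increment (action) with the highest next-step reward."""
    actions = [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]
    pose = np.array(init_pose, dtype=float)
    for _ in range(steps):
        best = max(actions, key=lambda a: reward(pose + a, gt_mask))
        if best == (0, 0):          # no action improves the reward
            break
        pose += best
    return pose

gt = project_mask((12, 9))       # the annotated 2D mask (the only label)
refined = refine((8, 14), gt)    # refine a perturbed initial pose
print(refined, reward(refined, gt))
```

Note that the greedy stand-in only works when the initial pose already overlaps the annotation; the paper's reward definition and reinforced optimization exist precisely because real policy training is harder than this.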
Related papers
- Pseudo Flow Consistency for Self-Supervised 6D Object Pose Estimation [14.469317161361202]
We propose a 6D object pose estimation method that can be trained with pure RGB images without any auxiliary information.
We evaluate our method on three challenging datasets and demonstrate that it outperforms state-of-the-art self-supervised methods significantly.
arXiv Detail & Related papers (2023-08-19T13:52:18Z)
- Imitrob: Imitation Learning Dataset for Training and Evaluating 6D Object Pose Estimators [20.611000416051546]
This paper introduces a dataset for training and evaluating methods for 6D pose estimation of hand-held tools in task demonstrations captured by a standard RGB camera.
The dataset contains image sequences of nine different tools and twelve manipulation tasks with two camera viewpoints, four human subjects, and left/right hand.
arXiv Detail & Related papers (2022-09-16T14:43:46Z)
- Knowledge Distillation for 6D Pose Estimation by Keypoint Distribution Alignment [77.70208382044355]
We introduce the first knowledge distillation method for 6D pose estimation.
We observe that the compact student network struggles to predict precise 2D keypoint locations.
Our experiments on several benchmarks show that our distillation method yields state-of-the-art results.
arXiv Detail & Related papers (2022-05-30T10:17:17Z)
- Coupled Iterative Refinement for 6D Multi-Object Pose Estimation [64.7198752089041]
Given a set of known 3D objects and an RGB or RGB-D input image, we detect and estimate the 6D pose of each object.
Our approach iteratively refines both pose and correspondence in a tightly coupled manner, allowing us to dynamically remove outliers to improve accuracy.
arXiv Detail & Related papers (2022-04-26T18:00:08Z)
- OVE6D: Object Viewpoint Encoding for Depth-based 6D Object Pose Estimation [12.773040823634908]
We propose a universal framework, called OVE6D, for model-based 6D object pose estimation from a single depth image and a target object mask.
Our model is trained using purely synthetic data rendered from ShapeNet, and, unlike most of the existing methods, it generalizes well on new real-world objects without any fine-tuning.
We show that OVE6D outperforms some contemporary deep learning-based pose estimation methods specifically trained for individual objects or datasets with real-world training data.
arXiv Detail & Related papers (2022-03-02T12:51:33Z)
- SO-Pose: Exploiting Self-Occlusion for Direct 6D Pose Estimation [98.83762558394345]
SO-Pose is a framework for regressing all 6 degrees-of-freedom (6DoF) for the object pose in a cluttered environment from a single RGB image.
We introduce a novel reasoning about self-occlusion, in order to establish a two-layer representation for 3D objects.
By enforcing cross-layer consistencies that align correspondences, self-occlusion, and 6D pose, we can further improve accuracy and robustness.
arXiv Detail & Related papers (2021-08-18T19:49:29Z)
- Single Shot 6D Object Pose Estimation [11.37625512264302]
We introduce a novel single shot approach for 6D object pose estimation of rigid objects based on depth images.
A fully convolutional neural network is employed, where the 3D input data is spatially discretized and pose estimation is considered as a regression task.
With 65 fps on a GPU, our Object Pose Network (OP-Net) is extremely fast, is optimized end-to-end, and estimates the 6D pose of multiple objects in the image simultaneously.
arXiv Detail & Related papers (2020-04-27T11:59:11Z)
- Self6D: Self-Supervised Monocular 6D Object Pose Estimation [114.18496727590481]
We propose the idea of monocular 6D pose estimation by means of self-supervised learning.
We leverage recent advances in neural rendering to further self-supervise the model on unannotated real RGB-D data.
arXiv Detail & Related papers (2020-04-14T13:16:36Z)
- CPS++: Improving Class-level 6D Pose and Shape Estimation From Monocular Images With Self-Supervised Learning [74.53664270194643]
Modern monocular 6D pose estimation methods can only cope with a handful of object instances.
We propose a novel method for class-level monocular 6D pose estimation, coupled with metric shape retrieval.
We experimentally demonstrate that we can retrieve precise 6D poses and metric shapes from a single RGB image.
arXiv Detail & Related papers (2020-03-12T15:28:13Z)
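Several of the related papers above, most explicitly Coupled Iterative Refinement, alternate between fitting a pose from trusted correspondences and re-scoring correspondences against that pose so outliers get removed. A minimal sketch of that coupling, under heavy simplifying assumptions: the "pose" is just a 2D translation (so the fit is a closed-form mean rather than a real 6D solver), and the outlier test is a simple median-based residual threshold. All names are illustrative.

```python
import numpy as np

# Toy sketch of coupled iterative refinement: alternate between
# (1) re-scoring correspondences against the current pose, dropping
# outliers, and (2) re-fitting the pose from the surviving matches.
# "Pose" here is a 2D translation, purely for illustration.

rng = np.random.default_rng(0)
true_t = np.array([5.0, -3.0])
src = rng.normal(size=(40, 2))
dst = src + true_t + rng.normal(scale=0.05, size=src.shape)
dst[:5] += rng.normal(scale=20.0, size=(5, 2))   # corrupt 5 matches

t = np.zeros(2)                                   # initial pose estimate
for _ in range(5):
    resid = np.linalg.norm(dst - src - t, axis=1)   # re-check matches
    inliers = resid < 3.0 * np.median(resid)        # drop outliers
    t = (dst[inliers] - src[inliers]).mean(axis=0)  # refine pose

print(t)  # should end up close to true_t, i.e. roughly [5, -3]
```

The coupling is the key design point: a better pose exposes the bad correspondences, and discarding them in turn yields a better pose on the next iteration, which is why the accuracy claim in the summary above hinges on removing outliers dynamically rather than once up front.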
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.