REDE: End-to-end Object 6D Pose Robust Estimation Using Differentiable
Outliers Elimination
- URL: http://arxiv.org/abs/2010.12807v3
- Date: Wed, 24 Feb 2021 13:38:26 GMT
- Title: REDE: End-to-end Object 6D Pose Robust Estimation Using Differentiable
Outliers Elimination
- Authors: Weitong Hua, Zhongxiang Zhou, Jun Wu, Huang Huang, Yue Wang, Rong
Xiong
- Abstract summary: We propose REDE, a novel end-to-end object pose estimator using RGB-D data.
We also propose a differentiable outliers elimination method that regresses the candidate result and the confidence simultaneously.
The experimental results on three benchmark datasets show that REDE slightly outperforms the state-of-the-art approaches.
- Score: 15.736699709454857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object 6D pose estimation is a fundamental task in many applications.
Conventional methods solve the task by detecting and matching the keypoints,
then estimating the pose. Recent efforts that bring deep learning into the
problem mainly address the vulnerability of conventional methods to
environmental variation caused by hand-crafted feature design. However, these
methods cannot achieve end-to-end learning and good interpretability at the
same time. In this paper, we propose REDE, a novel end-to-end object pose
estimator using RGB-D data, which utilizes a network for keypoint regression and
a differentiable geometric pose estimator for pose error back-propagation.
In addition, to achieve better robustness when outlier keypoint predictions occur,
we further propose a differentiable outliers elimination method that regresses
the candidate result and its confidence simultaneously. Via confidence-weighted
aggregation of multiple candidates, we reduce the effect of outliers on the final
estimation (see the sketch below). Finally, following conventional methods, we apply a
learnable refinement process to further improve the estimation. The
experimental results on three benchmark datasets show that REDE slightly
outperforms the state-of-the-art approaches and is more robust to object
occlusion.
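The confidence-weighted aggregation described in the abstract can be made concrete with a minimal PyTorch sketch. This is not the authors' released code; the function name, tensor shapes, the softmax over confidences, and the SVD projection back onto SO(3) are illustrative assumptions. The point is that low-confidence (outlier) candidates receive small weights, and every step stays differentiable so pose error can be back-propagated end to end.

```python
# Minimal sketch (assumed interface, not REDE's implementation) of
# confidence-weighted aggregation of pose candidates in PyTorch.
import torch

def aggregate_pose_candidates(R_cands, t_cands, logits):
    """Differentiably fuse N candidate poses into a single pose.

    R_cands: (N, 3, 3) candidate rotation matrices
    t_cands: (N, 3)    candidate translations
    logits:  (N,)      unnormalized per-candidate confidences
    """
    w = torch.softmax(logits, dim=0)              # weights sum to 1; outliers get small w

    # Translation: confidence-weighted mean.
    t = (w[:, None] * t_cands).sum(dim=0)         # (3,)

    # Rotation: weighted chordal mean, projected back onto SO(3) via SVD.
    M = (w[:, None, None] * R_cands).sum(dim=0)   # (3, 3) weighted sum of rotations
    U, _, Vt = torch.linalg.svd(M)
    det = torch.det(U @ Vt)                       # fix possible reflection so det(R) = +1
    S = torch.diag(torch.stack([torch.ones_like(det), torch.ones_like(det), det]))
    R = U @ S @ Vt
    return R, t
```

Because the aggregation is differentiable, a pose loss computed on (R, t) sends gradients back into both the candidate poses and the confidence logits, so an outlier-suppression signal can be learned without a hard, non-differentiable rejection step.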
Related papers
- SEMPose: A Single End-to-end Network for Multi-object Pose Estimation [13.131534219937533]
SEMPose is an end-to-end multi-object pose estimation network.
It can perform inference at 32 FPS without requiring inputs other than the RGB image.
It can accurately estimate the poses of multiple objects in real time, with inference time unaffected by the number of target objects.
arXiv Detail & Related papers (2024-11-21T10:37:54Z)
- For A More Comprehensive Evaluation of 6DoF Object Pose Tracking [22.696375341994035]
We contribute a unified benchmark to address the above problems.
For more accurate annotation of YCBV, we propose a multi-view multi-object global pose refinement method.
In experiments, we validate the precision and reliability of the proposed global pose refinement method with a realistic semi-synthesized dataset.
arXiv Detail & Related papers (2023-09-14T15:35:08Z)
- Poseur: Direct Human Pose Regression with Transformers [119.79232258661995]
We propose a direct, regression-based approach to 2D human pose estimation from single images.
Our framework is end-to-end differentiable, and naturally learns to exploit the dependencies between keypoints.
Ours is the first regression-based approach to perform favorably compared to the best heatmap-based pose estimation methods.
arXiv Detail & Related papers (2022-01-19T04:31:57Z)
- Dynamic Iterative Refinement for Efficient 3D Hand Pose Estimation [87.54604263202941]
We propose a tiny deep neural network in which a subset of layers is iteratively reused to refine its previous estimations.
We employ learned gating criteria to decide whether to exit from the weight-sharing loop, allowing per-sample adaptation in our model.
Our method consistently outperforms state-of-the-art 2D/3D hand pose estimation approaches in terms of both accuracy and efficiency for widely used benchmarks.
arXiv Detail & Related papers (2021-11-11T23:31:34Z)
- Probabilistic Modeling for Human Mesh Recovery [73.11532990173441]
This paper focuses on the problem of 3D human reconstruction from 2D evidence.
We recast the problem as learning a mapping from the input to a distribution of plausible 3D poses.
arXiv Detail & Related papers (2021-08-26T17:55:11Z)
- TFPose: Direct Human Pose Estimation with Transformers [83.03424247905869]
We formulate the pose estimation task into a sequence prediction problem that can effectively be solved by transformers.
Our framework is simple and direct, bypassing the drawbacks of heatmap-based pose estimation.
Experiments on the MS-COCO and MPII datasets demonstrate that our method can significantly improve the state-of-the-art of regression-based pose estimation.
arXiv Detail & Related papers (2021-03-29T04:18:54Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Deep Keypoint-Based Camera Pose Estimation with Geometric Constraints [80.60538408386016]
Estimating relative camera poses from consecutive frames is a fundamental problem in visual odometry.
We propose an end-to-end trainable framework consisting of learnable modules for detection, feature extraction, matching and outlier rejection.
arXiv Detail & Related papers (2020-07-29T21:41:31Z)
- Pose Proposal Critic: Robust Pose Refinement by Learning Reprojection Errors [17.918364675642998]
We focus our attention on pose refinement, and show how to push the state-of-the-art further in the case of partial occlusions.
The proposed pose refinement method builds on a simplified learning task in which a CNN is trained to estimate the reprojection error between an observed and a rendered image (see the sketch below).
Current state-of-the-art results are outperformed for two out of three metrics on the Occlusion LINEMOD benchmark, while performing on-par for the final metric.
arXiv Detail & Related papers (2020-05-13T11:46:04Z)
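For the reprojection-error target mentioned in the Pose Proposal Critic entry above, a hedged NumPy sketch follows. It shows one common way such a target could be computed: project the object's 3D model points under the ground-truth pose and under a proposed pose, then average the pixel distances. The function and variable names are illustrative assumptions, not the paper's API.

```python
# Illustrative sketch (assumed names, not the paper's code) of a
# reprojection-error target between a ground-truth pose and a pose proposal.
import numpy as np

def reprojection_error(pts, K, R_gt, t_gt, R_prop, t_prop):
    """Mean 2D reprojection error in pixels between two poses.

    pts:  (N, 3) object model points
    K:    (3, 3) camera intrinsics
    R_*, t_*: rotation (3, 3) and translation (3,) of each pose
    """
    def project(R, t):
        cam = pts @ R.T + t               # transform points into the camera frame
        uv = cam @ K.T                    # apply the pinhole intrinsics
        return uv[:, :2] / uv[:, 2:3]     # perspective divide -> pixel coordinates

    return np.linalg.norm(project(R_gt, t_gt) - project(R_prop, t_prop), axis=1).mean()
```

A CNN trained to regress this scalar from the observed and rendered image pair can then score and rank pose proposals, which is the simplified learning task the summary refers to.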
This list is automatically generated from the titles and abstracts of the papers on this site.