Pose Proposal Critic: Robust Pose Refinement by Learning Reprojection Errors
- URL: http://arxiv.org/abs/2005.06262v2
- Date: Thu, 14 May 2020 10:41:36 GMT
- Title: Pose Proposal Critic: Robust Pose Refinement by Learning Reprojection Errors
- Authors: Lucas Brynte and Fredrik Kahl
- Abstract summary: We focus our attention on pose refinement, and show how to push the state-of-the-art further in the case of partial occlusions.
The proposed pose refinement method leverages a simplified learning task, where a CNN is trained to estimate the reprojection error between an observed and a rendered image.
Current state-of-the-art results are outperformed for two out of three metrics on the Occlusion LINEMOD benchmark, while performing on par for the final metric.
- Score: 17.918364675642998
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, considerable progress has been made on the task of rigid
object pose estimation from a single RGB image, but achieving robustness to
partial occlusions remains a challenging problem. Pose refinement via rendering
has shown promise for achieving improved results, in particular when data is
scarce.
In this paper we focus our attention on pose refinement, and show how to push
the state of the art further in the case of partial occlusions. The proposed
pose refinement method leverages a simplified learning task, where a CNN is
trained to estimate the reprojection error between an observed and a rendered
image. We experiment with training on purely synthetic data as well as a mixture
of synthetic and real data. Current state-of-the-art results are outperformed
for two out of three metrics on the Occlusion LINEMOD benchmark, while
performing on par for the final metric.
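As a concrete illustration of the quantity the critic is trained to regress, the sketch below computes the reprojection error between two pose hypotheses (mean 2D pixel distance between the projections of the object's model points) and selects, among a set of pose proposals, the one with the lowest error. This is a minimal NumPy sketch with hypothetical function names and a pinhole camera model; in the paper this score is predicted by a CNN from an observed/rendered image pair, whereas here an exact computation from a known reference pose stands in for the learned critic.

```python
import numpy as np

def project(points, R, t, K):
    """Project Nx3 model points with pose (R, t) and intrinsics K to pixels."""
    cam = points @ R.T + t            # model frame -> camera frame
    uv = cam @ K.T                    # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]     # perspective divide

def reprojection_error(points, pose_est, pose_ref, K):
    """Mean 2D distance (in pixels) between projections under the two poses."""
    R_e, t_e = pose_est
    R_r, t_r = pose_ref
    diff = project(points, R_e, t_e, K) - project(points, R_r, t_r, K)
    return float(np.linalg.norm(diff, axis=1).mean())

def select_proposal(points, proposals, pose_ref, K):
    """Pick the pose proposal with the lowest reprojection error.

    Stand-in for the learned critic: the CNN would predict this score from
    an observed/rendered image pair instead of using a reference pose.
    """
    errors = [reprojection_error(points, p, pose_ref, K) for p in proposals]
    return proposals[int(np.argmin(errors))], errors

# Toy example: a small cube 5 m in front of the camera, two proposals.
pts = 0.1 * np.array([[x, y, z] for x in (-1, 1)
                      for y in (-1, 1) for z in (-1, 1)], dtype=float)
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
true_pose = (np.eye(3), np.array([0., 0., 5.]))
off_pose = (np.eye(3), np.array([0.05, 0., 5.]))   # 5 cm translation error
best, errs = select_proposal(pts, [off_pose, true_pose], true_pose, K)
```

In this toy setup the 5 cm lateral offset at 5 m depth projects to roughly a 5-pixel error, so the exact proposal wins; a trained critic would instead rank proposals by rendering each candidate pose and scoring the rendered image against the observation.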
Related papers
- PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation [55.94900327396771]
We introduce neural texture learning for 6D object pose estimation from synthetic data.
We learn to predict realistic object textures from real image collections.
We learn pose estimation from pixel-perfect synthetic data.
arXiv Detail & Related papers (2022-12-25T13:36:32Z)
- DeepRM: Deep Recurrent Matching for 6D Pose Refinement [77.34726150561087]
DeepRM is a novel recurrent network architecture for 6D pose refinement.
The architecture incorporates LSTM units to propagate information through each refinement step.
DeepRM achieves state-of-the-art performance on two widely accepted challenging datasets.
arXiv Detail & Related papers (2022-05-28T16:18:08Z)
- RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects [68.85305626324694]
Ray-marching in Camera Space (RiCS) is a new method that represents the self-occlusions of 3D foreground objects as a 2D self-occlusion map.
We show that our representation not only enhances image quality but also models temporally coherent, complex shadow effects.
arXiv Detail & Related papers (2022-05-14T05:35:35Z)
- RNNPose: Recurrent 6-DoF Object Pose Refinement with Robust Correspondence Field Estimation and Pose Optimization [46.144194562841435]
We propose a framework based on a recurrent neural network (RNN) for object pose refinement.
The problem is formulated as a non-linear least squares problem based on the estimated correspondence field.
The correspondence field estimation and pose refinement are performed alternately in each iteration to recover accurate object poses.
arXiv Detail & Related papers (2022-03-24T06:24:55Z)
- Learning Skeletal Graph Neural Networks for Hard 3D Pose Estimation [14.413034040734477]
We present a novel skeletal GNN learning solution for hard poses with depth ambiguity, self-occlusion, and complex poses.
Experimental results on the Human3.6M dataset show that our solution achieves a 10.3% improvement in average prediction accuracy.
arXiv Detail & Related papers (2021-08-16T15:42:09Z)
- Secrets of 3D Implicit Object Shape Reconstruction in the Wild [92.5554695397653]
Reconstructing high-fidelity 3D objects from sparse, partial observation is crucial for various applications in computer vision, robotics, and graphics.
Recent neural implicit modeling methods show promising results on synthetic or dense datasets, but they perform poorly on real-world data that is sparse and noisy.
This paper analyzes the root cause of such deficient performance of a popular neural implicit model.
arXiv Detail & Related papers (2021-01-18T03:24:48Z)
- REDE: End-to-end Object 6D Pose Robust Estimation Using Differentiable Outliers Elimination [15.736699709454857]
We propose REDE, a novel end-to-end object pose estimator using RGB-D data.
We also propose a differentiable outlier elimination method that regresses the candidate result and the confidence simultaneously.
The experimental results on three benchmark datasets show that REDE slightly outperforms the state-of-the-art approaches.
arXiv Detail & Related papers (2020-10-24T06:45:39Z)
- Leveraging Photometric Consistency over Time for Sparsely Supervised Hand-Object Reconstruction [118.21363599332493]
We present a method to leverage photometric consistency across time when annotations are only available for a sparse subset of frames in a video.
Our model is trained end-to-end on color images to jointly reconstruct hands and objects in 3D by inferring their poses.
We achieve state-of-the-art results on 3D hand-object reconstruction benchmarks and demonstrate that our approach allows us to improve the pose estimation accuracy.
arXiv Detail & Related papers (2020-04-28T12:03:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.