L6DNet: Light 6 DoF Network for Robust and Precise Object Pose
Estimation with Small Datasets
- URL: http://arxiv.org/abs/2002.00911v6
- Date: Sun, 29 May 2022 20:51:19 GMT
- Title: L6DNet: Light 6 DoF Network for Robust and Precise Object Pose
Estimation with Small Datasets
- Authors: Mathieu Gonzalez, Amine Kacete, Albert Murienne, Eric Marchand
- Abstract summary: We propose a novel approach to perform 6 DoF object pose estimation from a single RGB-D image.
We adopt a hybrid pipeline in two stages: data-driven and geometric.
Our approach is more robust and accurate than state-of-the-art methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating the 3D pose of an object is a challenging task that
arises in augmented reality and robotic applications. In this paper, we
propose a novel approach to perform 6 DoF object pose estimation from a single
RGB-D image. We adopt a hybrid two-stage pipeline: a data-driven stage
followed by a geometric one. The data-driven stage consists of a
classification CNN that estimates the object's 2D location in the image from
local patches, followed by a regression CNN trained to predict the 3D
locations of a set of keypoints in the camera coordinate system. To extract
the pose, the geometric stage aligns these 3D points in the camera coordinate
system with the corresponding 3D points in the world coordinate system by
minimizing a registration error, thus computing the pose. Our experiments on
the standard LineMod dataset show that our approach is more robust and
accurate than state-of-the-art methods. The approach is also validated on a
6 DoF positioning task using visual servoing.
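The 3D-3D alignment in the geometric stage is the classical orthogonal
Procrustes problem, which admits a closed-form SVD solution (the
Kabsch/Umeyama algorithm). The sketch below illustrates only that step; the
function name, array layout, and use of plain NumPy are illustrative
assumptions, not the authors' implementation.

```python
# Minimal sketch of the geometric stage: rigid alignment of world-frame
# keypoints to their camera-frame counterparts via the Kabsch/Umeyama
# closed-form solution. Hypothetical helper, not the paper's code.
import numpy as np

def rigid_registration(pts_world: np.ndarray, pts_cam: np.ndarray):
    """Return R (3x3), t (3,) minimizing sum_i ||R @ w_i + t - c_i||^2.

    pts_world, pts_cam: (N, 3) arrays of corresponding 3D keypoints.
    """
    mu_w = pts_world.mean(axis=0)
    mu_c = pts_cam.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (pts_world - mu_w).T @ (pts_cam - mu_c)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: keep det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_c - R @ mu_w
    return R, t

# Example: recover a known pose from synthetic noiseless correspondences.
rng = np.random.default_rng(0)
pts_w = rng.normal(size=(9, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true *= np.sign(np.linalg.det(R_true))  # force a proper rotation
t_true = np.array([0.1, -0.2, 0.5])
pts_c = pts_w @ R_true.T + t_true
R_est, t_est = rigid_registration(pts_w, pts_c)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```

In the paper's pipeline, pts_cam would hold the keypoints predicted by the
regression CNN and pts_world the same keypoints in the object/world frame; a
robust wrapper (e.g., RANSAC over keypoint subsets) could absorb occasional
outlier predictions.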
Related papers
- 6D Object Pose Estimation from Approximate 3D Models for Orbital Robotics [19.64111218032901]
We present a novel technique to estimate the 6D pose of objects from single images.
We employ a dense 2D-to-3D correspondence predictor that regresses 3D model coordinates for every pixel.
Our method achieves state-of-the-art performance on the SPEED+ dataset and has won the SPEC2021 post-mortem competition.
arXiv Detail & Related papers (2023-03-23T13:18:05Z)
- Neural Correspondence Field for Object Pose Estimation [67.96767010122633]
We propose a method for estimating the 6DoF pose of a rigid object with an available 3D model from a single RGB image.
Unlike classical correspondence-based methods which predict 3D object coordinates at pixels of the input image, the proposed method predicts 3D object coordinates at 3D query points sampled in the camera frustum.
arXiv Detail & Related papers (2022-07-30T01:48:23Z)
- Coupled Iterative Refinement for 6D Multi-Object Pose Estimation [64.7198752089041]
Given a set of known 3D objects and an RGB or RGB-D input image, we detect and estimate the 6D pose of each object.
Our approach iteratively refines both pose and correspondence in a tightly coupled manner, allowing us to dynamically remove outliers to improve accuracy.
arXiv Detail & Related papers (2022-04-26T18:00:08Z)
- Weakly Supervised Learning of Keypoints for 6D Object Pose Estimation [73.40404343241782]
We propose a weakly supervised 6D object pose estimation approach based on 2D keypoint detection.
Our approach achieves comparable performance with state-of-the-art fully supervised approaches.
arXiv Detail & Related papers (2022-03-07T16:23:47Z)
- Pose Estimation of Specific Rigid Objects [0.7931904787652707]
We address the problem of estimating the 6D pose of rigid objects from a single RGB or RGB-D input image.
This problem is of great importance to many application fields such as robotic manipulation, augmented reality, and autonomous driving.
arXiv Detail & Related papers (2021-12-30T14:36:47Z)
- SO-Pose: Exploiting Self-Occlusion for Direct 6D Pose Estimation [98.83762558394345]
SO-Pose is a framework for regressing all six degrees of freedom (6DoF) of the object pose in a cluttered environment from a single RGB image.
We introduce novel reasoning about self-occlusion in order to establish a two-layer representation for 3D objects.
By enforcing cross-layer consistencies that align correspondences, self-occlusion and 6D pose, we further improve accuracy and robustness.
arXiv Detail & Related papers (2021-08-18T19:49:29Z)
- FS-Net: Fast Shape-based Network for Category-Level 6D Object Pose Estimation with Decoupled Rotation Mechanism [49.89268018642999]
We propose a fast shape-based network (FS-Net) with efficient category-level feature extraction for 6D pose estimation.
The proposed method achieves state-of-the-art performance in both category- and instance-level 6D object pose estimation.
arXiv Detail & Related papers (2021-03-12T03:07:24Z)
- 3D Point-to-Keypoint Voting Network for 6D Pose Estimation [8.801404171357916]
We propose a framework for 6D pose estimation from RGB-D data based on spatial structure characteristics of 3D keypoints.
The proposed method is verified on two benchmark datasets, LINEMOD and OCCLUSION LINEMOD.
arXiv Detail & Related papers (2020-12-22T11:43:15Z)
- Learning 2D-3D Correspondences To Solve The Blind Perspective-n-Point Problem [98.92148855291363]
This paper proposes a deep CNN model which simultaneously solves for both the 6-DoF absolute camera pose and the 2D-3D correspondences.
Tests on both real and simulated data have shown that our method substantially outperforms existing approaches.
arXiv Detail & Related papers (2020-03-15T04:17:30Z)
- One Point, One Object: Simultaneous 3D Object Segmentation and 6-DOF Pose Estimation [0.7252027234425334]
We propose a method for simultaneous 3D object segmentation and 6-DOF pose estimation in pure 3D point cloud scenes.
The key component of our method is a multi-task CNN architecture that can simultaneously predict 3D object segmentation and 6-DOF object poses in pure 3D point clouds.
For experimental evaluation, we generate expanded training data for two state-of-the-art 3D object datasets, including LINEMOD, by using Augmented Reality (AR).
arXiv Detail & Related papers (2019-12-27T13:48:03Z)