Spatial Feature Mapping for 6DoF Object Pose Estimation
- URL: http://arxiv.org/abs/2206.01831v1
- Date: Fri, 3 Jun 2022 21:44:10 GMT
- Title: Spatial Feature Mapping for 6DoF Object Pose Estimation
- Authors: Jianhan Mei, Xudong Jiang, Henghui Ding
- Abstract summary: This work aims to estimate the 6DoF object pose in background clutter.
Considering the strong occlusion and background noise, we propose to utilize the spatial structure for better tackling this challenging task.
- Score: 29.929911622127502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work aims to estimate the 6DoF object pose in background clutter.
Considering the strong occlusion and background noise, we propose to utilize
the spatial structure to better tackle this challenging task. Observing that
a 3D mesh can be naturally abstracted as a graph, we build the graph using 3D
points as vertices and mesh connections as edges. We construct the
corresponding mapping from 2D image features to 3D points to fill the graph
and fuse the 2D and 3D features. Afterward, a Graph Convolutional Network
(GCN) is applied to support feature exchange among the object's points in 3D
space. To address the rotation symmetry ambiguity of objects, a spherical
convolution is utilized and the spherical features are combined with the
convolutional features mapped to the graph. Predefined 3D keypoints are
voted on, and the 6DoF pose is obtained via fitting optimization. Two
inference scenarios, one with depth information and one without, are
discussed. Tested on the YCB-Video and LINEMOD datasets, the experiments
demonstrate the effectiveness of our proposed method.
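The graph-based stage of the pipeline (mesh vertices as graph nodes, image features mapped onto them, GCN message passing) can be sketched minimally. This is an illustrative assumption, not the paper's exact architecture: the symmetric normalization, layer width, and ReLU activation are standard GCN choices supplied here for concreteness.

```python
import numpy as np

def gcn_layer(vertex_feats, edges, weight):
    """One graph-convolution step over a mesh graph.

    vertex_feats: (N, F) features mapped from the 2D image onto 3D points.
    edges: list of (i, j) mesh connections (undirected).
    weight: (F, F_out) learnable projection (random here for illustration).
    """
    n = vertex_feats.shape[0]
    # Adjacency with self-loops, as in a standard GCN.
    adj = np.eye(n)
    for i, j in edges:
        adj[i, j] = adj[j, i] = 1.0
    # Symmetric degree normalization: D^{-1/2} A D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    adj_norm = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Aggregate neighbor features, project, apply ReLU.
    return np.maximum(adj_norm @ vertex_feats @ weight, 0.0)
```

Stacking a few such layers lets features propagate between mesh-connected vertices, which is how occluded regions can borrow evidence from visible neighbors.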
Related papers
- Neural Correspondence Field for Object Pose Estimation [67.96767010122633]
We propose a method for estimating the 6DoF pose of a rigid object with an available 3D model from a single RGB image.
Unlike classical correspondence-based methods which predict 3D object coordinates at pixels of the input image, the proposed method predicts 3D object coordinates at 3D query points sampled in the camera frustum.
arXiv Detail & Related papers (2022-07-30T01:48:23Z)
- Weakly Supervised Learning of Keypoints for 6D Object Pose Estimation [73.40404343241782]
We propose a weakly supervised 6D object pose estimation approach based on 2D keypoint detection.
Our approach achieves comparable performance with state-of-the-art fully supervised approaches.
arXiv Detail & Related papers (2022-03-07T16:23:47Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize 3D voxelization and a 3D convolution network.
We propose a new framework for the outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
- Joint Deep Multi-Graph Matching and 3D Geometry Learning from Inhomogeneous 2D Image Collections [57.60094385551773]
We propose a trainable framework for learning a deformable 3D geometry model from inhomogeneous image collections.
We in addition obtain the underlying 3D geometry of the objects depicted in the 2D images.
arXiv Detail & Related papers (2021-03-31T17:25:36Z)
- 3D Point-to-Keypoint Voting Network for 6D Pose Estimation [8.801404171357916]
We propose a framework for 6D pose estimation from RGB-D data based on spatial structure characteristics of 3D keypoints.
The proposed method is verified on two benchmark datasets, LINEMOD and OCCLUSION LINEMOD.
arXiv Detail & Related papers (2020-12-22T11:43:15Z)
- Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation [87.54570024320354]
State-of-the-art methods for large-scale driving-scene LiDAR semantic segmentation often project and process the point clouds in the 2D space.
A straightforward solution to tackle the issue of 3D-to-2D projection is to keep the 3D representation and process the points in the 3D space.
We develop a framework based on 3D cylinder partition and 3D cylinder convolution, termed Cylinder3D, which exploits the 3D topology relations and structures of driving-scene point clouds.
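The cylinder partition described above amounts to binning each point by radius, azimuth, and height instead of Cartesian voxels. A minimal sketch follows; the bin counts and range limits are illustrative assumptions, not Cylinder3D's actual configuration:

```python
import numpy as np

def cylindrical_voxel_index(points, rho_bins=480, phi_bins=360, z_bins=32,
                            rho_max=50.0, z_min=-4.0, z_max=2.0):
    """Assign each LiDAR point an index in a cylindrical voxel grid.

    points: (N, 3) array of (x, y, z) coordinates.
    Returns an (N, 3) integer array of (rho, phi, z) bin indices.
    All grid parameters are hypothetical defaults for illustration.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x ** 2 + y ** 2)   # radial distance from the sensor
    phi = np.arctan2(y, x)           # azimuth angle in [-pi, pi]
    rho_idx = np.clip((rho / rho_max * rho_bins).astype(int), 0, rho_bins - 1)
    phi_idx = np.clip(((phi + np.pi) / (2 * np.pi) * phi_bins).astype(int),
                      0, phi_bins - 1)
    z_idx = np.clip(((z - z_min) / (z_max - z_min) * z_bins).astype(int),
                    0, z_bins - 1)
    return np.stack([rho_idx, phi_idx, z_idx], axis=1)
```

Because LiDAR points thin out with distance, radial bins keep the per-voxel point density more uniform than a Cartesian grid.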
arXiv Detail & Related papers (2020-08-04T13:56:19Z)
- KAPLAN: A 3D Point Descriptor for Shape Completion [80.15764700137383]
KAPLAN is a 3D point descriptor that aggregates local shape information via a series of 2D convolutions.
In each of those planes, point properties like normals or point-to-plane distances are aggregated into a 2D grid and abstracted into a feature representation with an efficient 2D convolutional encoder.
Experiments on public datasets show that KAPLAN achieves state-of-the-art performance for 3D shape completion.
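The grid aggregation step described above (scattering per-point properties such as point-to-plane distances into a 2D grid before a convolutional encoder) can be sketched as follows; the grid size, extent, and mean-pooling are illustrative assumptions, not KAPLAN's exact scheme:

```python
import numpy as np

def aggregate_to_grid(points, values, grid_size=8, extent=1.0):
    """Scatter per-point values (e.g. point-to-plane distances) into a 2D grid.

    Points are projected onto the XY plane and binned; each cell stores the
    mean value of the points falling into it (0 for empty cells).
    Grid size and extent are hypothetical defaults for illustration.
    """
    # Map x, y in [-extent, extent] to integer cell indices.
    idx = ((points[:, :2] + extent) / (2 * extent) * grid_size).astype(int)
    idx = np.clip(idx, 0, grid_size - 1)
    grid = np.zeros((grid_size, grid_size))
    count = np.zeros((grid_size, grid_size))
    for (ix, iy), v in zip(idx, values):
        grid[ix, iy] += v
        count[ix, iy] += 1
    # Mean per occupied cell; empty cells stay 0.
    return np.where(count > 0, grid / np.maximum(count, 1), 0.0)
```

The resulting grid is a fixed-size image-like tensor, which is what lets an efficient 2D convolutional encoder operate on unordered point data.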
arXiv Detail & Related papers (2020-07-31T21:56:08Z) - 3D Shape Segmentation with Geometric Deep Learning [2.512827436728378]
We propose a neural-network-based approach that produces augmented 3D views of the shape, decomposing the whole segmentation task into sub-segmentation problems.
We validate our approach using 3D shapes of publicly available datasets and of real objects that are reconstructed using photogrammetry techniques.
arXiv Detail & Related papers (2020-02-02T14:11:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.