CosyPose: Consistent multi-view multi-object 6D pose estimation
- URL: http://arxiv.org/abs/2008.08465v1
- Date: Wed, 19 Aug 2020 14:11:56 GMT
- Title: CosyPose: Consistent multi-view multi-object 6D pose estimation
- Authors: Yann Labbé, Justin Carpentier, Mathieu Aubry, Josef Sivic
- Abstract summary: First, we present a single-view single-object 6D pose estimation method, which we use to generate 6D object pose hypotheses.
Second, we develop a robust method for matching individual 6D object pose hypotheses across different input images.
Third, we develop a method for global scene refinement given multiple object hypotheses and their correspondences across views.
- Score: 48.097599674329004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce an approach for recovering the 6D pose of multiple known objects
in a scene captured by a set of input images with unknown camera viewpoints.
First, we present a single-view single-object 6D pose estimation method, which
we use to generate 6D object pose hypotheses. Second, we develop a robust
method for matching individual 6D object pose hypotheses across different input
images in order to jointly estimate camera viewpoints and 6D poses of all
objects in a single consistent scene. Our approach explicitly handles object
symmetries, does not require depth measurements, is robust to missing or
incorrect object hypotheses, and automatically recovers the number of objects
in the scene. Third, we develop a method for global scene refinement given
multiple object hypotheses and their correspondences across views. This is
achieved by solving an object-level bundle adjustment problem that refines the
poses of cameras and objects to minimize the reprojection error in all views.
We demonstrate that the proposed method, dubbed CosyPose, outperforms current
state-of-the-art results for single-view and multi-view 6D object pose
estimation by a large margin on two challenging benchmarks: the YCB-Video and
T-LESS datasets. Code and pre-trained models are available on the project
webpage https://www.di.ens.fr/willow/research/cosypose/.
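To make the third step concrete, the sketch below illustrates object-level bundle adjustment in its simplest form: camera and object poses are refined jointly so that projected 3D model points match their 2D observations in every view. It is a minimal illustration under assumed data (one camera, one object, synthetic points and variable names), not the CosyPose implementation, which additionally handles symmetries and robust hypothesis matching.
```python
# Minimal object-level bundle adjustment sketch (illustration only, not the
# CosyPose code). Camera and object poses are parameterized as axis-angle
# rotation + translation and refined jointly to minimize reprojection error.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def project(K, cam_rvec, cam_t, obj_rvec, obj_t, pts_obj):
    # object frame -> world frame -> camera frame -> pixels (pinhole model)
    pts_world = R.from_rotvec(obj_rvec).apply(pts_obj) + obj_t
    pts_cam = R.from_rotvec(cam_rvec).apply(pts_world) + cam_t
    uvw = pts_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def residuals(params, K, pts_obj, obs, n_cams, n_objs):
    # Stack the reprojection error of every (camera, object) observation.
    cams = params[:6 * n_cams].reshape(n_cams, 6)
    objs = params[6 * n_cams:].reshape(n_objs, 6)
    errs = []
    for (c, o), uv_obs in obs.items():
        uv = project(K, cams[c, :3], cams[c, 3:], objs[o, :3], objs[o, 3:], pts_obj[o])
        errs.append((uv - uv_obs).ravel())
    return np.concatenate(errs)

# Tiny synthetic scene: one camera, one object with 8 model points (assumed data).
rng = np.random.default_rng(0)
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
pts_obj = [rng.uniform(-0.5, 0.5, size=(8, 3))]
gt = np.array([0, 0, 0, 0, 0, 0,               # camera pose (rvec, t)
               0.1, 0.2, 0.3, 0.0, 0.0, 2.0])  # object pose (rvec, t)
obs = {(0, 0): project(K, gt[:3], gt[3:6], gt[6:9], gt[9:12], pts_obj[0])}
x0 = gt + 0.05 * rng.standard_normal(gt.size)  # perturbed initial hypotheses
sol = least_squares(residuals, x0, args=(K, pts_obj, obs, 1, 1))
print("RMS reprojection error after refinement:", np.sqrt(np.mean(sol.fun ** 2)))
```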
Related papers
- LocaliseBot: Multi-view 3D object localisation with differentiable rendering for robot grasping [9.690844449175948]
We focus on object pose estimation.
Our approach relies on three pieces of information: multiple views of the object, the camera's parameters at those viewpoints, and 3D CAD models of objects.
We show that the estimated object pose results in 99.65% grasp accuracy with the ground truth grasp candidates.
arXiv Detail & Related papers (2023-11-14T14:27:53Z)
- ZS6D: Zero-shot 6D Object Pose Estimation using Vision Transformers [9.899633398596672]
We introduce ZS6D, for zero-shot novel object 6D pose estimation.
Visual descriptors, extracted using pre-trained Vision Transformers (ViT), are used for matching rendered templates.
Experiments are performed on LMO, YCBV, and TLESS datasets.
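As a rough sketch of the descriptor/template-matching idea mentioned above (not the ZS6D pipeline itself), the snippet below assumes a hypothetical extract_descriptor function standing in for a pre-trained ViT feature extractor, and matches a query crop to rendered templates by cosine similarity.
```python
# Illustrative descriptor/template matching sketch (not the ZS6D code).
# `extract_descriptor` is a hypothetical stand-in for a pre-trained ViT
# feature extractor; templates are renderings of the object with known poses.
import numpy as np

def extract_descriptor(image: np.ndarray) -> np.ndarray:
    # Hypothetical placeholder: in practice this would run a pre-trained ViT
    # and return a feature vector for the crop.
    raise NotImplementedError

def match_template(query_desc: np.ndarray, template_descs: np.ndarray, template_poses):
    # Cosine similarity between the query descriptor and every template descriptor.
    q = query_desc / np.linalg.norm(query_desc)
    t = template_descs / np.linalg.norm(template_descs, axis=1, keepdims=True)
    scores = t @ q
    best = int(np.argmax(scores))
    return template_poses[best], float(scores[best])

# Usage (with assumed precomputed arrays):
# pose, score = match_template(extract_descriptor(crop), template_descs, template_poses)
```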
arXiv Detail & Related papers (2023-09-21T11:53:01Z)
- Rigidity-Aware Detection for 6D Object Pose Estimation [60.88857851869196]
Most recent 6D object pose estimation methods first use object detection to obtain 2D bounding boxes before actually regressing the pose.
We propose a rigidity-aware detection method exploiting the fact that, in 6D pose estimation, the target objects are rigid.
Key to the success of our approach is a visibility map, which we propose to build using a minimum barrier distance between every pixel in the bounding box and the box boundary.
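For intuition only, the sketch below computes an approximate minimum barrier distance from the box boundary to every pixel inside the box, using a simple raster-scan scheme; it is a generic illustration of the distance itself, not the paper's visibility-map construction.
```python
# Approximate minimum barrier distance (MBD) from the bounding-box boundary,
# computed with alternating raster scans. Illustration only; a few passes give
# only an approximation of the exact MBD.
import numpy as np

def mbd_from_boundary(gray: np.ndarray, n_passes: int = 3) -> np.ndarray:
    h, w = gray.shape
    D = np.full((h, w), np.inf)          # barrier distance estimate per pixel
    U = gray.astype(float).copy()        # running max along the current best path
    L = gray.astype(float).copy()        # running min along the current best path
    D[0, :] = D[-1, :] = D[:, 0] = D[:, -1] = 0.0   # seeds: the box boundary

    def relax(x, y, nx, ny):
        cost = max(U[nx, ny], gray[x, y]) - min(L[nx, ny], gray[x, y])
        if cost < D[x, y]:
            D[x, y] = cost
            U[x, y] = max(U[nx, ny], gray[x, y])
            L[x, y] = min(L[nx, ny], gray[x, y])

    for it in range(n_passes):
        if it % 2 == 0:      # forward raster scan: relax from top and left neighbors
            for x in range(h):
                for y in range(w):
                    if x > 0: relax(x, y, x - 1, y)
                    if y > 0: relax(x, y, x, y - 1)
        else:                # backward raster scan: relax from bottom and right neighbors
            for x in range(h - 1, -1, -1):
                for y in range(w - 1, -1, -1):
                    if x < h - 1: relax(x, y, x + 1, y)
                    if y < w - 1: relax(x, y, x, y + 1)
    return D   # low near the boundary/background, high inside a contrasting object
```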
arXiv Detail & Related papers (2023-03-22T09:02:54Z)
- MV6D: Multi-View 6D Pose Estimation on RGB-D Frames Using a Deep Point-wise Voting Network [14.754297065772676]
We present a novel multi-view 6D pose estimation method called MV6D.
We base our approach on the PVN3D network that uses a single RGB-D image to predict keypoints of the target objects.
In contrast to current multi-view pose detection networks such as CosyPose, our MV6D can learn the fusion of multiple perspectives in an end-to-end manner.
arXiv Detail & Related papers (2022-08-01T23:34:43Z)
- Unseen Object 6D Pose Estimation: A Benchmark and Baselines [62.8809734237213]
We propose a new task that enables and facilitates algorithms to estimate the 6D pose of novel objects during testing.
We collect a dataset with both real and synthetic images and up to 48 unseen objects in the test set.
By training an end-to-end 3D correspondences network, our method finds corresponding points between an unseen object and a partial view RGBD image accurately and efficiently.
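Once 3D-3D correspondences like those described above are available, a rigid 6D pose can be recovered in closed form. The sketch below is a generic Kabsch/orthogonal-Procrustes alignment under that assumption; it is not the benchmark paper's network, and the function name is illustrative.
```python
# Closed-form rigid alignment (Kabsch) of matched 3D points: given
# correspondences between object-model points (src) and scene points (dst),
# recover rotation Rmat and translation t with Rmat @ p + t ≈ q.
# Generic illustration, not the paper's method.
import numpy as np

def rigid_transform_from_correspondences(src: np.ndarray, dst: np.ndarray):
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    Rmat = Vt.T @ S @ U.T
    t = dst_c - Rmat @ src_c
    return Rmat, t                               # maps model frame into the scene frame
```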
arXiv Detail & Related papers (2022-06-23T16:29:53Z)
- Coupled Iterative Refinement for 6D Multi-Object Pose Estimation [64.7198752089041]
Given a set of known 3D objects and an RGB or RGB-D input image, we detect and estimate the 6D pose of each object.
Our approach iteratively refines both pose and correspondence in a tightly coupled manner, allowing us to dynamically remove outliers to improve accuracy.
arXiv Detail & Related papers (2022-04-26T18:00:08Z)
- Weakly Supervised Learning of Keypoints for 6D Object Pose Estimation [73.40404343241782]
We propose a weakly supervised 6D object pose estimation approach based on 2D keypoint detection.
Our approach achieves comparable performance with state-of-the-art fully supervised approaches.
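The snippet above does not state how detected 2D keypoints are lifted to a 6D pose; a standard route is a PnP solver against known 3D keypoints on the object model, sketched below as an assumption rather than the paper's exact pipeline.
```python
# Lifting detected 2D keypoints to a 6D pose with a standard PnP solver.
# Generic illustration of the keypoint -> pose step; the 3D keypoints are
# assumed to be known on the object model, and lens distortion is ignored.
import numpy as np
import cv2

def pose_from_keypoints(kp_3d: np.ndarray, kp_2d: np.ndarray, K: np.ndarray):
    # kp_3d: (N, 3) model keypoints, kp_2d: (N, 2) detected image keypoints, N >= 4
    ok, rvec, tvec = cv2.solvePnP(
        kp_3d.astype(np.float64),
        kp_2d.astype(np.float64),
        K.astype(np.float64),
        None,                          # no distortion coefficients
        flags=cv2.SOLVEPNP_EPNP,
    )
    if not ok:
        raise RuntimeError("PnP failed")
    Rmat, _ = cv2.Rodrigues(rvec)      # axis-angle -> rotation matrix
    return Rmat, tvec.reshape(3)
```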
arXiv Detail & Related papers (2022-03-07T16:23:47Z)
- CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation [19.284468553414918]
This paper studies the complex task of simultaneous multi-object 3D reconstruction, 6D pose and size estimation from a single-view RGB-D observation.
Existing approaches mainly follow a complex multi-stage pipeline which first localizes and detects each object instance in the image and then regresses to either their 3D meshes or 6D poses.
We present a simple one-stage approach to predict both the 3D shape and estimate the 6D pose and size jointly in a bounding-box free manner.
arXiv Detail & Related papers (2022-03-03T18:59:04Z)
- Multi-View Multi-Person 3D Pose Estimation with Plane Sweep Stereo [71.59494156155309]
Existing approaches for multi-view 3D pose estimation explicitly establish cross-view correspondences to group 2D pose detections from multiple camera views.
We present our multi-view 3D pose estimation approach based on plane sweep stereo to jointly address the cross-view fusion and 3D pose reconstruction in a single shot.
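To illustrate the plane-sweep idea named above (not the paper's network), the sketch below builds a tiny two-view cost volume: for each hypothesized fronto-parallel depth plane, the second view is warped into the reference view through the plane-induced homography and compared photometrically. The calibration inputs (K1, K2, R, t) are assumed to be known.
```python
# Generic two-view plane-sweep sketch (illustration only).
# For each depth d, warp the source image into the reference view with the
# plane-induced homography H = K2 (R - t n^T / d) K1^{-1} and score the
# photometric agreement; low cost indicates a plausible depth at that pixel.
import numpy as np
import cv2

def plane_sweep_cost_volume(img_ref, img_src, K1, K2, R, t, depths):
    n = np.array([[0.0], [0.0], [1.0]])          # fronto-parallel plane normal (reference frame)
    h, w = img_ref.shape[:2]
    costs = []
    for d in depths:
        H_ref_to_src = K2 @ (R - (t.reshape(3, 1) @ n.T) / d) @ np.linalg.inv(K1)
        # cv2.warpPerspective expects the src->dst mapping, so pass the inverse.
        warped = cv2.warpPerspective(img_src, np.linalg.inv(H_ref_to_src), (w, h))
        costs.append(np.abs(img_ref.astype(float) - warped.astype(float)))
    return np.stack(costs, axis=0)               # (num_depths, H, W) for grayscale inputs
```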
arXiv Detail & Related papers (2021-04-06T03:49:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.