6D Robotic Assembly Based on RGB-only Object Pose Estimation
- URL: http://arxiv.org/abs/2208.12986v1
- Date: Sat, 27 Aug 2022 11:26:24 GMT
- Title: 6D Robotic Assembly Based on RGB-only Object Pose Estimation
- Authors: Bowen Fu, Sek Kun Leong, Xiaocong Lian and Xiangyang Ji
- Abstract summary: We propose an integrated 6D robotic system to perceive, grasp, manipulate and assemble blocks with tight tolerances.
Our system is built upon a monocular 6D object pose estimation network trained solely with synthetic images leveraging physically-based rendering.
- Score: 35.74647604582182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision-based robotic assembly is a crucial yet challenging task as the
interaction with multiple objects requires high levels of precision. In this
paper, we propose an integrated 6D robotic system to perceive, grasp,
manipulate and assemble blocks with tight tolerances. Aiming to provide an
off-the-shelf RGB-only solution, our system is built upon a monocular 6D object
pose estimation network trained solely with synthetic images leveraging
physically-based rendering. Subsequently, pose-guided 6D transformation along
with collision-free assembly is proposed to construct any designed structure
with arbitrary initial poses. Our novel 3-axis calibration operation further
enhances the precision and robustness by disentangling 6D pose estimation and
robotic assembly. Both quantitative and qualitative results demonstrate the
effectiveness of our proposed 6D robotic assembly system.
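For intuition on the pose-guided 6D transformation the abstract describes, a 6D pose (3D rotation plus 3D translation) is commonly represented as a 4x4 homogeneous transform, and a grasp target can be obtained by composing the estimated object pose with a grasp offset. The sketch below is illustrative only, not the authors' implementation; the frame names and the offset values are hypothetical.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Estimated object pose in the camera frame (identity rotation here for brevity).
T_cam_obj = pose_to_matrix(np.eye(3), np.array([0.1, 0.0, 0.5]))

# Hypothetical grasp offset in the object frame: approach 5 cm above the block.
T_obj_grasp = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 0.05]))

# The target gripper pose in the camera frame is the composition of the two.
T_cam_grasp = T_cam_obj @ T_obj_grasp
print(T_cam_grasp[:3, 3])  # translation of the grasp target
```

In a full pipeline this camera-frame target would additionally be mapped into the robot base frame via a hand-eye calibration transform before being sent to the motion planner.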
Related papers
- Advancing 6D Pose Estimation in Augmented Reality -- Overcoming Projection Ambiguity with Uncontrolled Imagery [0.0]
This study addresses the challenge of accurate 6D pose estimation in Augmented Reality (AR).
We propose a novel approach that strategically decomposes the estimation of z-axis translation and focal length.
This methodology not only streamlines the 6D pose estimation process but also significantly enhances the accuracy of 3D object overlaying in AR settings.
arXiv Detail & Related papers (2024-03-20T09:22:22Z)
- Multi-Modal Dataset Acquisition for Photometrically Challenging Object [56.30027922063559]
This paper addresses the limitations of current datasets for 3D vision tasks in terms of accuracy, size, realism, and suitable imaging modalities for photometrically challenging objects.
We propose a novel annotation and acquisition pipeline that enhances existing 3D perception and 6D object pose datasets.
arXiv Detail & Related papers (2023-08-21T10:38:32Z)
- SyMFM6D: Symmetry-aware Multi-directional Fusion for Multi-View 6D Object Pose Estimation [16.460390441848464]
We present a novel symmetry-aware multi-view 6D pose estimator called SyMFM6D.
Our approach efficiently fuses the RGB-D frames from multiple perspectives in a deep multi-directional fusion network.
We show that our approach is robust towards inaccurate camera calibration and dynamic camera setups.
arXiv Detail & Related papers (2023-07-01T11:28:53Z)
- T6D-Direct: Transformers for Multi-Object 6D Pose Direct Regression [40.90172673391803]
T6D-Direct is a real-time single-stage direct method with a transformer-based architecture built on DETR to perform 6D multi-object pose direct estimation.
Our method achieves the fastest inference time, and the pose estimation accuracy is comparable to state-of-the-art methods.
arXiv Detail & Related papers (2021-09-22T18:13:33Z)
- SO-Pose: Exploiting Self-Occlusion for Direct 6D Pose Estimation [98.83762558394345]
SO-Pose is a framework for regressing all 6 degrees-of-freedom (6DoF) for the object pose in a cluttered environment from a single RGB image.
We introduce a novel reasoning about self-occlusion, in order to establish a two-layer representation for 3D objects.
By enforcing cross-layer consistencies that align correspondences, self-occlusion and 6D pose, we further improve accuracy and robustness.
arXiv Detail & Related papers (2021-08-18T19:49:29Z)
- Spatial Attention Improves Iterative 6D Object Pose Estimation [52.365075652976735]
We propose a new method for 6D pose estimation refinement from RGB images.
Our main insight is that after the initial pose estimate, it is important to pay attention to distinct spatial features of the object.
We experimentally show that this approach learns to attend to salient spatial features and learns to ignore occluded parts of the object, leading to better pose estimation across datasets.
arXiv Detail & Related papers (2021-01-05T17:18:52Z)
- Nothing But Geometric Constraints: A Model-Free Method for Articulated Object Pose Estimation [89.82169646672872]
We propose an unsupervised vision-based system to estimate the joint configurations of the robot arm from a sequence of RGB or RGB-D images without knowing the model a priori.
We combine a classical geometric formulation with deep learning and extend the use of epipolar multi-rigid-body constraints to solve this task.
arXiv Detail & Related papers (2020-11-30T20:46:48Z)
- MoreFusion: Multi-object Reasoning for 6D Pose Estimation from Volumetric Fusion [19.034317851914725]
We present a system which can estimate the accurate poses of multiple known objects in contact and occlusion from real-time, embodied multi-view vision.
Our approach makes 3D object pose proposals from single RGB-D views, accumulates pose estimates and non-parametric occupancy information from multiple views as the camera moves.
We verify the accuracy and robustness of our approach experimentally on 2 object datasets: YCB-Video, and our own challenging Cluttered YCB-Video.
arXiv Detail & Related papers (2020-04-09T02:29:30Z)
- CPS++: Improving Class-level 6D Pose and Shape Estimation From Monocular Images With Self-Supervised Learning [74.53664270194643]
Modern monocular 6D pose estimation methods can only cope with a handful of object instances.
We propose a novel method for class-level monocular 6D pose estimation, coupled with metric shape retrieval.
We experimentally demonstrate that we can retrieve precise 6D poses and metric shapes from a single RGB image.
arXiv Detail & Related papers (2020-03-12T15:28:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.