ClearDepth: Enhanced Stereo Perception of Transparent Objects for Robotic Manipulation
- URL: http://arxiv.org/abs/2409.08926v1
- Date: Fri, 13 Sep 2024 15:44:38 GMT
- Title: ClearDepth: Enhanced Stereo Perception of Transparent Objects for Robotic Manipulation
- Authors: Kaixin Bai, Huajian Zeng, Lei Zhang, Yiwen Liu, Hongli Xu, Zhaopeng Chen, Jianwei Zhang
- Abstract summary: We develop a vision transformer-based algorithm for stereo depth recovery of transparent objects.
Our method incorporates a parameter-aligned, domain-adaptive, and physically realistic Sim2Real simulation for efficient data generation.
Our experimental results demonstrate the model's exceptional Sim2Real generalizability in real-world scenarios.
- Score: 18.140839442955485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transparent object depth perception poses a challenge in everyday life and logistics, primarily because standard 3D sensors cannot accurately capture depth on transparent or reflective surfaces. This limitation significantly affects applications that rely on depth maps and point clouds, especially robotic manipulation. We developed a vision transformer-based algorithm for stereo depth recovery of transparent objects. This approach is complemented by an innovative feature post-fusion module, which enhances the accuracy of depth recovery by exploiting structural features in images. To address the high cost of dataset collection for stereo camera-based perception of transparent objects, our method incorporates a parameter-aligned, domain-adaptive, and physically realistic Sim2Real simulation for efficient data generation, accelerated by an AI algorithm. Our experimental results demonstrate the model's exceptional Sim2Real generalizability in real-world scenarios, enabling precise depth mapping of transparent objects to assist in robotic manipulation. Project details are available at https://sites.google.com/view/cleardepth/.
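The paper does not include code here, but a minimal PyTorch sketch can illustrate what a feature post-fusion step of this kind might look like, with structural image features gating the stereo depth features; the module name, gating design, and channel sizes are assumptions for illustration, not the authors' implementation:

```python
# Minimal sketch of a feature post-fusion step: depth features from a
# stereo branch are re-weighted by structural (edge-like) image features
# before the final depth head. All names here are hypothetical.
import torch
import torch.nn as nn

class FeaturePostFusion(nn.Module):
    def __init__(self, depth_ch: int, struct_ch: int):
        super().__init__()
        # project structural features to a per-pixel, per-channel gate
        self.gate = nn.Sequential(
            nn.Conv2d(struct_ch, depth_ch, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.refine = nn.Conv2d(depth_ch, depth_ch, kernel_size=3, padding=1)

    def forward(self, depth_feat: torch.Tensor, struct_feat: torch.Tensor) -> torch.Tensor:
        # modulate depth features where image structure is informative,
        # keeping a residual path so ungated information survives
        fused = depth_feat * self.gate(struct_feat) + depth_feat
        return self.refine(fused)

# usage: fuse 64-channel depth features with 32-channel structural features
fusion = FeaturePostFusion(depth_ch=64, struct_ch=32)
out = fusion(torch.randn(1, 64, 60, 80), torch.randn(1, 32, 60, 80))
```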
Related papers
- Virtually Enriched NYU Depth V2 Dataset for Monocular Depth Estimation: Do We Need Artificial Augmentation? [61.234412062595155]
We present ANYU, a new virtually augmented version of the NYU depth v2 dataset, designed for monocular depth estimation.
In contrast to the well-known approach where full 3D scenes of a virtual world are utilized to generate artificial datasets, ANYU was created by incorporating RGB-D representations of virtual reality objects.
We show that ANYU improves the monocular depth estimation performance and generalization of deep neural networks with considerably different architectures.
arXiv Detail & Related papers (2024-04-15T05:44:03Z)
- SAID-NeRF: Segmentation-AIDed NeRF for Depth Completion of Transparent Objects [7.529049797077149]
Acquiring accurate depth information of transparent objects using off-the-shelf RGB-D cameras is a well-known challenge in Computer Vision and Robotics.
NeRFs are learning-free approaches and have demonstrated wide success in novel view synthesis and shape recovery.
Our proposed method, SAID-NeRF, achieves strong performance on depth completion datasets for transparent objects and in robotic grasping.
arXiv Detail & Related papers (2024-03-28T17:28:32Z)
- RFTrans: Leveraging Refractive Flow of Transparent Objects for Surface Normal Estimation and Manipulation [50.10282876199739]
This paper introduces RFTrans, an RGB-D-based method for surface normal estimation and manipulation of transparent objects.
It integrates the RFNet, which predicts refractive flow, object mask, and boundaries, followed by the F2Net, which estimates surface normal from the refractive flow.
A real-world robot grasping task achieves an 83% success rate, demonstrating that refractive flow helps enable direct sim-to-real transfer.
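As a rough illustration of the two-stage pipeline the summary describes (RFNet: RGB to refractive flow, mask, and boundaries; F2Net: flow to surface normals), here is a hedged PyTorch sketch; the stub networks only mark where the real architectures would go:

```python
# Hypothetical sketch of the two-stage flow described above; stand-in
# conv stacks replace the real RFNet/F2Net architectures.
import torch
import torch.nn as nn

class RFNetStub(nn.Module):
    """Stand-in for RFNet: RGB -> (refractive flow, mask, boundary)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, 3, padding=1)
        self.flow_head = nn.Conv2d(16, 2, 1)      # 2-channel refractive flow
        self.mask_head = nn.Conv2d(16, 1, 1)      # object mask logits
        self.boundary_head = nn.Conv2d(16, 1, 1)  # boundary logits

    def forward(self, rgb):
        f = torch.relu(self.backbone(rgb))
        return self.flow_head(f), self.mask_head(f), self.boundary_head(f)

class F2NetStub(nn.Module):
    """Stand-in for F2Net: refractive flow -> surface normals."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(2, 3, 3, padding=1)  # 3-channel normal map

    def forward(self, flow):
        n = self.net(flow)
        return n / (n.norm(dim=1, keepdim=True) + 1e-6)  # unit normals

rgb = torch.randn(1, 3, 120, 160)
flow, mask, boundary = RFNetStub()(rgb)
normals = F2NetStub()(flow)
```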
arXiv Detail & Related papers (2023-11-21T07:19:47Z)
- TransTouch: Learning Transparent Objects Depth Sensing Through Sparse Touches [23.87056600709768]
We propose a method to finetune a stereo network with sparse depth labels automatically collected using a probing system with tactile feedback.
We show that our method can significantly improve real-world depth sensing accuracy, especially for transparent objects.
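A minimal sketch of what such sparse-label finetuning might look like, assuming PyTorch: the loss is evaluated only at pixels the tactile probe actually touched (the function name and data layout below are hypothetical):

```python
# Sparse-supervision loss: an L1 penalty restricted to the handful of
# pixels where the probing system collected a tactile depth label.
import torch

def sparse_depth_loss(pred_depth, probe_depth, probe_mask):
    """pred_depth:  (B, 1, H, W) network prediction
    probe_depth: (B, 1, H, W) depth at probed pixels, zeros elsewhere
    probe_mask:  (B, 1, H, W) 1 where a touch sample exists"""
    diff = (pred_depth - probe_depth).abs() * probe_mask
    return diff.sum() / probe_mask.sum().clamp(min=1)

pred = torch.rand(2, 1, 60, 80, requires_grad=True)
mask = (torch.rand(2, 1, 60, 80) < 0.001).float()  # very sparse touches
labels = torch.rand(2, 1, 60, 80) * mask
loss = sparse_depth_loss(pred, labels, mask)
loss.backward()
```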
arXiv Detail & Related papers (2023-09-18T01:55:17Z)
- Multi-Modal Dataset Acquisition for Photometrically Challenging Object [56.30027922063559]
This paper addresses the limitations of current datasets for 3D vision tasks in terms of accuracy, size, realism, and suitable imaging modalities for photometrically challenging objects.
We propose a novel annotation and acquisition pipeline that enhances existing 3D perception and 6D object pose datasets.
arXiv Detail & Related papers (2023-08-21T10:38:32Z)
- MVTrans: Multi-View Perception of Transparent Objects [29.851395075937255]
We forgo the unreliable depth map from RGB-D sensors and extend the stereo-based method.
Our proposed method, MVTrans, is an end-to-end multi-view architecture with multiple perception capabilities.
We establish a novel procedural photo-realistic dataset generation pipeline and create a large-scale transparent object detection dataset.
arXiv Detail & Related papers (2023-02-22T22:45:28Z)
- TODE-Trans: Transparent Object Depth Estimation with Transformer [16.928131778902564]
We present a transformer-based transparent object depth estimation approach from a single RGB-D input.
To better enhance the fine-grained features, a feature fusion module (FFM) is designed to assist coherent prediction.
arXiv Detail & Related papers (2022-09-18T03:04:01Z)
- Joint Learning of Salient Object Detection, Depth Estimation and Contour Extraction [91.43066633305662]
We propose a novel multi-task and multi-modal filtered transformer (MMFT) network for RGB-D salient object detection (SOD).
Specifically, we unify three complementary tasks: depth estimation, salient object detection, and contour estimation. The multi-task mechanism encourages the model to learn task-aware features from the auxiliary tasks.
Experiments show that it not only significantly surpasses the depth-based RGB-D SOD methods on multiple datasets, but also precisely predicts a high-quality depth map and salient contour at the same time.
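A hedged PyTorch sketch of the shared-backbone, three-head interface this multi-task setup implies; the actual filtered-transformer fusion is not reproduced, and all names and channel sizes are placeholders:

```python
# Toy multi-task model: one shared feature extractor over RGB-D input,
# three prediction heads for saliency, depth, and contours.
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    def __init__(self, feat_ch: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, feat_ch, 3, padding=1),  # RGB (3) + depth (1) input
            nn.ReLU(),
        )
        self.saliency = nn.Conv2d(feat_ch, 1, 1)  # salient object map
        self.depth = nn.Conv2d(feat_ch, 1, 1)     # refined depth map
        self.contour = nn.Conv2d(feat_ch, 1, 1)   # salient contour map

    def forward(self, rgbd):
        f = self.backbone(rgbd)
        return self.saliency(f), self.depth(f), self.contour(f)

sal, dep, con = MultiTaskHeads()(torch.randn(1, 4, 96, 128))
```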
arXiv Detail & Related papers (2022-03-09T17:20:18Z)
- TransCG: A Large-Scale Real-World Dataset for Transparent Object Depth Completion and Grasping [46.6058840385155]
We contribute a large-scale real-world dataset for transparent object depth completion.
Our dataset contains 57,715 RGB-D images from 130 different scenes.
We propose an end-to-end depth completion network, which takes the RGB image and the inaccurate depth map as inputs and outputs a refined depth map.
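A minimal sketch of that depth-completion interface, assuming PyTorch: RGB plus the inaccurate sensor depth in, a refined depth map out; the toy residual network below is a placeholder, not the paper's architecture:

```python
# Depth completion interface: concatenate RGB and raw depth, predict a
# correction, and add it back to the sensor depth.
import torch
import torch.nn as nn

class DepthCompletionStub(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1),  # RGB (3) + raw depth (1)
            nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, rgb, raw_depth):
        x = torch.cat([rgb, raw_depth], dim=1)
        # predict a residual so the network only corrects the sensor
        return raw_depth + self.net(x)

rgb = torch.randn(1, 3, 120, 160)
raw = torch.rand(1, 1, 120, 160)  # inaccurate sensor depth
refined = DepthCompletionStub()(rgb, raw)
```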
arXiv Detail & Related papers (2022-02-17T06:50:20Z)
- Transferable Active Grasping and Real Embodied Dataset [48.887567134129306]
We show how to search for feasible grasping viewpoints using hand-mounted RGB-D cameras.
A practical three-stage transferable active grasping pipeline is developed that adapts to unseen cluttered scenes.
In our pipeline, we propose a novel mask-guided reward to overcome the sparse reward issue in grasping and ensure category-irrelevant behavior.
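A hypothetical sketch of what a mask-guided dense reward of this kind could look like, in Python with NumPy; the weighting and centering terms are assumptions, and the paper's exact formulation may differ:

```python
# Dense shaping reward from the target object's mask: instead of a
# sparse success/failure signal, reward views where the mask is large
# and centered. The coefficients here are illustrative only.
import numpy as np

def mask_guided_reward(mask: np.ndarray, success: bool) -> float:
    """mask: (H, W) binary mask of the target object in the current view."""
    if success:
        return 10.0                      # terminal grasp reward
    h, w = mask.shape
    coverage = mask.mean()               # fraction of view occupied
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return -0.1                      # object not visible: small penalty
    # centroid offset from image center, normalized to [0, 1]
    off = np.hypot(ys.mean() / h - 0.5, xs.mean() / w - 0.5) / np.hypot(0.5, 0.5)
    return coverage + 0.5 * (1.0 - off)  # dense shaping term

r = mask_guided_reward(np.zeros((64, 64), dtype=np.uint8), success=False)
```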
arXiv Detail & Related papers (2020-04-28T08:15:35Z)