TransTouch: Learning Transparent Objects Depth Sensing Through Sparse Touches
- URL: http://arxiv.org/abs/2309.09427v1
- Date: Mon, 18 Sep 2023 01:55:17 GMT
- Title: TransTouch: Learning Transparent Objects Depth Sensing Through Sparse Touches
- Authors: Liuyu Bian, Pengyang Shi, Weihang Chen, Jing Xu, Li Yi, Rui Chen
- Abstract summary: We propose a method to finetune a stereo network with sparse depth labels automatically collected using a probing system with tactile feedback.
We show that our method can significantly improve real-world depth sensing accuracy, especially for transparent objects.
- Score: 23.87056600709768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transparent objects are common in daily life. However, depth sensing for
transparent objects remains a challenging problem. While learning-based methods
can leverage shape priors to improve the sensing quality, the labor-intensive
data collection in the real world and the sim-to-real domain gap restrict these
methods' scalability. In this paper, we propose a method to finetune a stereo
network with sparse depth labels automatically collected using a probing system
with tactile feedback. We present a novel utility function to evaluate the
benefit of touches. By approximating and optimizing the utility function, we
can optimize the probing locations given a fixed touching budget to better
improve the network's performance on real objects. We further combine tactile
depth supervision with a confidence-based regularization to prevent
over-fitting during finetuning. To evaluate the effectiveness of our method, we
construct a real-world dataset including both diffuse and transparent objects.
Experimental results on this dataset show that our method can significantly
improve real-world depth sensing accuracy, especially for transparent objects.
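The abstract describes optimizing probing locations under a fixed touching budget by approximating a utility function over candidate touches. As a minimal illustration, a greedy selection over precomputed per-location utility scores can be sketched as below; the utility values and the `select_touches` helper are hypothetical stand-ins, not the paper's actual utility function or optimization procedure.

```python
# Hypothetical sketch: pick the best probing locations under a fixed touch
# budget, given per-location utility estimates. The paper approximates and
# optimizes a learned utility function; here the scores are placeholders.

def select_touches(utility, budget):
    """Greedily return indices of the `budget` highest-utility locations."""
    ranked = sorted(range(len(utility)), key=lambda i: utility[i], reverse=True)
    return sorted(ranked[:budget])

# Example: utility estimates for 6 candidate probe locations.
utility = [0.2, 0.9, 0.1, 0.7, 0.4, 0.8]
print(select_touches(utility, budget=3))  # -> [1, 3, 5]
```

Greedy top-k selection is only a baseline; the paper's formulation jointly optimizes the probing locations to maximize the expected improvement of the finetuned stereo network.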
Related papers
- SurANet: Surrounding-Aware Network for Concealed Object Detection via Highly-Efficient Interactive Contrastive Learning Strategy [55.570183323356964]
We propose a novel Surrounding-Aware Network, namely SurANet, for concealed object detection.
We enhance the semantics of feature maps using differential fusion of surrounding features to highlight concealed objects.
Next, a Surrounding-Aware Contrastive Loss is applied to identify the concealed object via learning surrounding feature maps contrastively.
arXiv Detail & Related papers (2024-10-09T13:02:50Z)
- ClearDepth: Enhanced Stereo Perception of Transparent Objects for Robotic Manipulation [18.140839442955485]
We develop a vision transformer-based algorithm for stereo depth recovery of transparent objects.
Our method incorporates a parameter-aligned, domain-adaptive, and physically realistic Sim2Real simulation for efficient data generation.
Our experimental results demonstrate the model's exceptional Sim2Real generalizability in real-world scenarios.
arXiv Detail & Related papers (2024-09-13T15:44:38Z)
- RFTrans: Leveraging Refractive Flow of Transparent Objects for Surface Normal Estimation and Manipulation [50.10282876199739]
This paper introduces RFTrans, an RGB-D-based method for surface normal estimation and manipulation of transparent objects.
It integrates the RFNet, which predicts refractive flow, object mask, and boundaries, followed by the F2Net, which estimates surface normal from the refractive flow.
A real-world robot grasping task achieves an 83% success rate, showing that refractive flow can enable direct sim-to-real transfer.
arXiv Detail & Related papers (2023-11-21T07:19:47Z)
- Transparent Object Tracking with Enhanced Fusion Module [56.403878717170784]
We propose a new tracker architecture that uses our fusion techniques to achieve superior results for transparent object tracking.
Our results and code will be made publicly available at https://github.com/kalyan05TOTEM.
arXiv Detail & Related papers (2023-09-13T03:52:09Z)
- AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z)
- High-resolution Iterative Feedback Network for Camouflaged Object Detection [128.893782016078]
Spotting camouflaged objects that are visually assimilated into the background is challenging for object detection algorithms.
We aim to extract the high-resolution texture details to avoid the detail degradation that causes blurred vision in edges and boundaries.
We introduce a novel HitNet to refine the low-resolution representations by high-resolution features in an iterative feedback manner.
arXiv Detail & Related papers (2022-03-22T11:20:21Z)
- TransCG: A Large-Scale Real-World Dataset for Transparent Object Depth Completion and Grasping [46.6058840385155]
We contribute a large-scale real-world dataset for transparent object depth completion.
Our dataset contains 57,715 RGB-D images from 130 different scenes.
We propose an end-to-end depth completion network, which takes the RGB image and the inaccurate depth map as inputs and outputs a refined depth map.
arXiv Detail & Related papers (2022-02-17T06:50:20Z)
- Self-Guided Instance-Aware Network for Depth Completion and Enhancement [6.319531161477912]
Existing methods directly interpolate the missing depth measurements based on pixel-wise image content and the corresponding neighboring depth values.
We propose a novel self-guided instance-aware network (SG-IANet) that utilizes a self-guided mechanism to extract the instance-level features needed for depth restoration.
arXiv Detail & Related papers (2021-05-25T19:41:38Z)
- FakeMix Augmentation Improves Transparent Object Detection [24.540569928274984]
We propose a novel content-dependent data augmentation method termed FakeMix to overcome the boundary-related imbalance problem.
We also present AdaptiveASPP, an enhanced version of ASPP, that can capture multi-scale and cross-modality features dynamically.
arXiv Detail & Related papers (2021-03-24T15:51:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.