Leveraging RGB-D Data with Cross-Modal Context Mining for Glass Surface Detection
- URL: http://arxiv.org/abs/2206.11250v2
- Date: Mon, 16 Dec 2024 15:58:51 GMT
- Title: Leveraging RGB-D Data with Cross-Modal Context Mining for Glass Surface Detection
- Authors: Jiaying Lin, Yuen-Hei Yeung, Shuquan Ye, Rynson W. H. Lau
- Abstract summary: Glass surfaces are becoming increasingly ubiquitous as modern buildings tend to use a lot of glass panels.
This poses substantial challenges to the operations of autonomous systems such as robots, self-driving cars, and drones.
We propose a novel glass surface detection framework combining RGB and depth information.
- Abstract: Glass surfaces are becoming increasingly ubiquitous as modern buildings tend to use a lot of glass panels. This, however, poses substantial challenges to the operations of autonomous systems such as robots, self-driving cars, and drones, as these glass panels can become transparent obstacles to navigation. Existing works attempt to exploit various cues, including glass boundary context or reflections, as priors. However, they are all based on input RGB images. We observe that the transmission of 3D depth sensor light through glass surfaces often produces blank regions in the depth maps, which can offer additional insights to complement the RGB image features for glass surface detection. In this work, we first propose a large-scale RGB-D glass surface detection dataset, \textit{RGB-D GSD}, for rigorous experiments and future research. It contains 3,009 images, paired with precise annotations, offering a wide range of real-world RGB-D glass surface categories. We then propose a novel glass surface detection framework combining RGB and depth information, with two novel modules: a cross-modal context mining (CCM) module to adaptively learn individual and mutual context features from RGB and depth information, and a depth-missing aware attention (DAA) module to explicitly exploit spatial locations where missing depths occur to help detect the presence of glass surfaces. Experimental results show that our proposed model outperforms state-of-the-art methods.
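The abstract's key observation is that depth sensors often return no reading where their light passes through glass, so the missing-depth pattern itself is a cue. A minimal sketch of that idea is below, assuming a binary missing-depth mask re-weights image features toward likely glass regions; the function names, the multiplicative attention form, and the `alpha` parameter are illustrative assumptions, not the paper's actual DAA implementation.

```python
# Hypothetical sketch of the depth-missing aware attention (DAA) idea:
# pixels where the depth sensor returns no reading form a binary mask,
# which is used to boost feature responses at likely glass locations.
# This formulation is an assumption for illustration, not the paper's code.
import numpy as np

def depth_missing_mask(depth: np.ndarray) -> np.ndarray:
    """1.0 where the depth reading is missing (zero/invalid), else 0.0."""
    return (depth <= 0).astype(np.float32)

def daa_attention(features: np.ndarray, depth: np.ndarray,
                  alpha: float = 0.5) -> np.ndarray:
    """Re-weight a (C, H, W) feature map using an (H, W) depth map:
    locations with missing depth are emphasized by a factor (1 + alpha)."""
    mask = depth_missing_mask(depth)      # (H, W), 1 at missing readings
    attn = 1.0 + alpha * mask             # emphasize missing-depth pixels
    return features * attn[None, :, :]    # broadcast over the channel axis

# Toy example: a 2x2 depth map with one missing reading at (0, 1).
depth = np.array([[1.5, 0.0],
                  [2.0, 3.1]], dtype=np.float32)
feats = np.ones((4, 2, 2), dtype=np.float32)
out = daa_attention(feats, depth)
print(out[0])  # the missing-depth pixel is scaled to 1.5, others stay 1.0
```

In a real network such a mask would modulate learned attention maps rather than apply a fixed scalar boost; the fixed `alpha` here only makes the mechanism easy to inspect.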
Related papers
- 3DRef: 3D Dataset and Benchmark for Reflection Detection in RGB and Lidar Data (2024-03-11)
This paper introduces the first large-scale 3D reflection detection dataset, containing more than 50,000 aligned samples of multi-return Lidar, RGB images, and 2D/3D semantic labels.
The proposed dataset advances reflection detection by providing a comprehensive testbed with precise global alignment, multi-modal data, and diverse reflective objects and materials.
- Large-Field Contextual Feature Learning for Glass Detection (2022-09-10)
We address the important problem of detecting glass surfaces from a single RGB image.
To address this problem, we construct the first large-scale glass detection dataset (GDD).
We propose a novel glass detection network, called GDNet-B, which explores abundant contextual cues in a large field-of-view.
- Mirror Complementary Transformer Network for RGB-thermal Salient Object Detection (2022-07-07)
RGB-thermal salient object detection (RGB-T SOD) aims to locate the common prominent objects of an aligned visible and thermal infrared image pair.
In this paper, we propose a novel mirror complementary Transformer network (MCNet) for RGB-T SOD.
Experiments on benchmark and VT723 datasets show that the proposed method outperforms state-of-the-art approaches.
- Beyond Visual Field of View: Perceiving 3D Environment with Echoes and Vision (2022-07-03)
This paper focuses on perceiving and navigating 3D environments using echoes and RGB images.
In particular, we perform depth estimation by fusing an RGB image with echoes received from multiple orientations.
We show that the echoes provide holistic and inexpensive information about the 3D structures, complementing the RGB image.
- Glass Segmentation with RGB-Thermal Image Pairs (2022-04-12)
We propose a new glass segmentation method utilizing paired RGB and thermal images.
Glass regions of a scene are more distinguishable with a pair of RGB and thermal images than with an RGB image alone.
- RGB2Hands: Real-Time Tracking of 3D Hand Interactions from Monocular RGB Video (2021-06-22)
We present the first real-time method for motion capture of skeletal pose and 3D surface geometry of hands from a single RGB camera.
To address the inherent depth ambiguities in RGB data, we propose a novel multi-task CNN.
We experimentally verify the individual components of our RGB two-hand tracking and 3D reconstruction pipeline.
- RGB-D Local Implicit Function for Depth Completion of Transparent Objects (2021-04-01)
The majority of perception methods in robotics require depth information provided by RGB-D cameras.
Standard 3D sensors fail to capture the depth of transparent objects due to refraction and absorption of light.
We present a novel framework that can complete missing depth given noisy RGB-D input.
- Enhanced Boundary Learning for Glass-like Object Segmentation (2021-03-29)
This paper aims to solve the glass-like object segmentation problem via enhanced boundary learning.
In particular, we first propose a novel refined differential module for generating finer boundary cues.
An edge-aware point-based graph convolution network module is proposed to model the global shape representation along the boundary.
- Accurate RGB-D Salient Object Detection via Collaborative Learning (2020-07-23)
RGB-D saliency detection shows impressive ability in some challenging scenarios.
We propose a novel collaborative learning framework where edge, depth, and saliency are leveraged in a more efficient way.
- Is Depth Really Necessary for Salient Object Detection? (2020-05-30)
We make the first attempt at realizing a unified depth-aware framework with only RGB information as input for inference.
It not only surpasses state-of-the-art performance on five public RGB SOD benchmarks, but also surpasses RGB-D-based methods on five benchmarks by a large margin.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.