DPANet: Depth Potentiality-Aware Gated Attention Network for RGB-D
Salient Object Detection
- URL: http://arxiv.org/abs/2003.08608v4
- Date: Tue, 29 Sep 2020 02:34:17 GMT
- Title: DPANet: Depth Potentiality-Aware Gated Attention Network for RGB-D
Salient Object Detection
- Authors: Zuyao Chen, Runmin Cong, Qianqian Xu, and Qingming Huang
- Abstract summary: We propose a novel network named DPANet to explicitly model the potentiality of the depth map and effectively integrate the cross-modal complementarity.
By introducing the depth potentiality perception, the network can perceive the potentiality of depth information in a learning-based manner.
- Score: 107.96418568008644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are two main issues in RGB-D salient object detection: (1) how to
effectively integrate the complementarity of the cross-modal RGB-D data; (2)
how to prevent the contamination effect of an unreliable depth map. In fact,
these two problems are linked and intertwined, but previous methods tend to
focus only on the first problem and ignore depth map quality, which may cause
the model to fall into a sub-optimal state. In this paper, we address these two
issues synergistically in a holistic model, and propose a novel network named
DPANet to explicitly model the potentiality of the depth map and effectively
integrate the cross-modal complementarity. By introducing depth potentiality
perception, the network can perceive the potentiality of the depth information
in a learning-based manner and guide the fusion of the two modalities to
prevent contamination. The gated multi-modality attention module in the fusion
process exploits an attention mechanism with a gate controller to capture
long-range dependencies from a cross-modal perspective. Experimental results
compared with 15 state-of-the-art methods on 8 datasets demonstrate the
validity of the proposed approach both quantitatively and qualitatively.
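The two mechanisms are only described in prose above, so here is a minimal PyTorch sketch of the ideas the abstract names: a learned scalar gate that scores how trustworthy the depth features look (a simplified stand-in for depth potentiality perception) and non-local cross-modal attention scaled by that gate. All module names, shapes, and the pooling-based gate are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of a gated cross-modal attention block.
# Names, shapes, and the gating heuristic are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedCrossModalAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        inner = channels // reduction
        # Non-local style projections: query from RGB, key/value from depth.
        self.query = nn.Conv2d(channels, inner, kernel_size=1)
        self.key = nn.Conv2d(channels, inner, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # "Depth potentiality" gate: a learned scalar in [0, 1] scoring how
        # trustworthy the depth features look (a simplified stand-in for the
        # paper's depth potentiality perception branch).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = rgb_feat.shape
        q = self.query(rgb_feat).flatten(2).transpose(1, 2)    # (B, HW, C')
        k = self.key(depth_feat).flatten(2)                    # (B, C', HW)
        v = self.value(depth_feat).flatten(2).transpose(1, 2)  # (B, HW, C)
        # Long-range cross-modal affinities between all spatial positions.
        attn = F.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)    # (B, HW, HW)
        fused = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        g = self.gate(depth_feat)                              # (B, 1, 1, 1)
        # An unreliable depth map drives g toward 0, so the RGB stream
        # passes through largely uncontaminated.
        return rgb_feat + g * fused

# Usage example with toy feature maps.
block = GatedCrossModalAttention(channels=64)
out = block(torch.rand(2, 64, 24, 24), torch.rand(2, 64, 24, 24))
```

In this sketch an unreliable depth feature pushes the gate toward zero, so the RGB stream passes through largely unchanged, mirroring the contamination-prevention behaviour the abstract describes.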
Related papers
- Unveiling the Depths: A Multi-Modal Fusion Framework for Challenging
Scenarios [103.72094710263656]
This paper presents a novel approach that identifies and integrates dominant cross-modality depth features with a learning-based framework.
We propose a novel confidence loss steering a confidence predictor network to yield a confidence map specifying latent potential depth areas.
With the resulting confidence map, we propose a multi-modal fusion network that fuses the final depth in an end-to-end manner.
arXiv Detail & Related papers (2024-02-19T04:39:16Z)
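As one concrete reading of the confidence-map mechanism summarized above, the sketch below blends two per-pixel depth estimates with a predicted confidence map. The tiny predictor, channel counts, and the convex blend are illustrative assumptions rather than the paper's network.

```python
# Minimal sketch of confidence-map-weighted depth fusion; all names and
# layer sizes are hypothetical placeholders.
import torch
import torch.nn as nn

class ConfidenceFusion(nn.Module):
    def __init__(self, feat_channels: int = 32):
        super().__init__()
        # Tiny confidence predictor: maps the stacked depth maps to a
        # per-pixel weight in [0, 1] favouring the first estimate.
        self.predictor = nn.Sequential(
            nn.Conv2d(2, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, 1, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, depth_a: torch.Tensor, depth_b: torch.Tensor):
        conf = self.predictor(torch.cat([depth_a, depth_b], dim=1))
        fused = conf * depth_a + (1.0 - conf) * depth_b  # convex per-pixel blend
        return fused, conf
```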
- RGB-D Grasp Detection via Depth Guided Learning with Cross-modal Attention [14.790193023912973]
This paper proposes a novel learning based approach to RGB-D grasp detection, namely Depth Guided Cross-modal Attention Network (DGCAN)
To better leverage the geometric information recorded in the depth channel, a complete 6-dimensional rectangle representation is adopted, with the grasp depth explicitly considered.
The prediction of the extra grasp depth substantially strengthens feature learning, thereby leading to more accurate results.
arXiv Detail & Related papers (2023-02-28T02:41:27Z)
- Robust RGB-D Fusion for Saliency Detection [13.705088021517568]
We propose a robust RGB-D fusion method that benefits from layer-wise and trident spatial attention mechanisms.
Our experiments on five benchmark datasets demonstrate that the proposed fusion method performs consistently better than the state-of-the-art fusion alternatives.
arXiv Detail & Related papers (2022-08-02T21:23:00Z)
- Dual Swin-Transformer based Mutual Interactive Network for RGB-D Salient Object Detection [67.33924278729903]
In this work, we propose a Dual Swin-Transformer based Mutual Interactive Network (DTMINet).
We adopt the Swin-Transformer as the feature extractor for both the RGB and depth modalities to model long-range dependencies in visual inputs.
Comprehensive experiments on five standard RGB-D SOD benchmark datasets demonstrate the superiority of the proposed DTMINet.
arXiv Detail & Related papers (2022-06-07T08:35:41Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the fusion of infrared and visible images, which appear markedly different, for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse in the common space through either iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls to a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- Depth-Cooperated Trimodal Network for Video Salient Object Detection [13.727763221832532]
We propose a depth-cooperated trimodal network called DCTNet for video salient object detection (VSOD).
To this end, we first generate depth from RGB frames, and then propose an approach to treat the three modalities unequally.
We also introduce a refinement fusion module (RFM) to suppress noises in each modality and select useful information dynamically for further feature refinement.
arXiv Detail & Related papers (2022-02-12T13:04:16Z)
- RGB-D Salient Object Detection with Ubiquitous Target Awareness [37.6726410843724]
We make the first attempt to solve the RGB-D salient object detection problem with a novel depth-awareness framework.
We propose a Ubiquitous Target Awareness (UTA) network to solve three important challenges in the RGB-D SOD task.
Our proposed UTA network is depth-free at inference and runs in real time at 43 FPS.
arXiv Detail & Related papers (2021-09-08T04:27:29Z)
- Learning Selective Mutual Attention and Contrast for RGB-D Saliency Detection [145.4919781325014]
How to effectively fuse cross-modal information is the key problem for RGB-D salient object detection.
Many models adopt a feature fusion strategy but are limited by low-order point-to-point fusion.
We propose a novel mutual attention model by fusing attention and contexts from different modalities.
arXiv Detail & Related papers (2020-10-12T08:50:10Z)
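To make the mutual-attention idea above concrete, the following sketch exchanges affinities between the RGB and depth streams, so each modality aggregates its context under the other modality's attention. The plain softmax affinities and the symmetric residual update are simplifying assumptions, not the paper's selective mechanism.

```python
# Rough sketch of mutual attention between RGB and depth features; the
# affinity exchange is an illustrative simplification.
import torch
import torch.nn.functional as F

def mutual_attention(rgb_feat: torch.Tensor, depth_feat: torch.Tensor):
    b, c, h, w = rgb_feat.shape
    r = rgb_feat.flatten(2)    # (B, C, HW)
    d = depth_feat.flatten(2)
    # Per-modality affinities over all spatial positions.
    affin_r = F.softmax(r.transpose(1, 2) @ r / c ** 0.5, dim=-1)  # (B, HW, HW)
    affin_d = F.softmax(d.transpose(1, 2) @ d / c ** 0.5, dim=-1)
    # Mutual step: each modality aggregates its context with the *other*
    # modality's affinity, so complementary structure is exchanged.
    rgb_out = (r @ affin_d.transpose(1, 2)).view(b, c, h, w) + rgb_feat
    depth_out = (d @ affin_r.transpose(1, 2)).view(b, c, h, w) + depth_feat
    return rgb_out, depth_out
```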
- A Single Stream Network for Robust and Real-time RGB-D Salient Object Detection [89.88222217065858]
We design a single stream network to use the depth map to guide early fusion and middle fusion between RGB and depth.
This model is 55.5% lighter than the current lightest model and runs at a real-time speed of 32 FPS when processing a $384 \times 384$ image.
arXiv Detail & Related papers (2020-07-14T04:40:14Z)
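For the early-fusion design described above, a toy single-stream layout looks like the following; the stem, channel widths, and head are placeholders rather than the paper's lightweight architecture.

```python
# Toy illustration of early fusion in a single-stream design: depth enters
# as a fourth input channel so one encoder serves both modalities.
import torch
import torch.nn as nn

class SingleStreamEarlyFusion(nn.Module):
    def __init__(self):
        super().__init__()
        # First conv accepts RGB (3) + depth (1) = 4 channels; everything
        # downstream is a single shared stream, which keeps the model light.
        self.stem = nn.Sequential(
            nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(64, 1, kernel_size=1)  # coarse saliency logits

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        x = self.stem(torch.cat([rgb, depth], dim=1))
        return self.head(x)

# Example: a 384x384 input, matching the resolution quoted above.
model = SingleStreamEarlyFusion()
logits = model(torch.rand(1, 3, 384, 384), torch.rand(1, 1, 384, 384))
```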