Depth Quality Aware Salient Object Detection
- URL: http://arxiv.org/abs/2008.04159v1
- Date: Fri, 7 Aug 2020 09:54:39 GMT
- Title: Depth Quality Aware Salient Object Detection
- Authors: Chenglizhao Chen, Jipeng Wei, Chong Peng, Hong Qin
- Abstract summary: This paper attempts to integrate a novel depth quality aware subnet into the classic bi-stream structure, aiming to assess the depth quality before conducting the selective RGB-D fusion.
Compared with the SOTA bi-stream methods, the major highlight of our method is its ability to lessen the importance of those low-quality, no-contribution, or even negative-contribution D regions during the RGB-D fusion.
- Score: 52.618404186447165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The existing fusion-based RGB-D salient object detection methods usually
adopt a bi-stream structure to strike a fusion trade-off between RGB and
depth (D). The D quality usually varies from scene to scene, yet the SOTA
bi-stream approaches are depth quality unaware, which makes it difficult to
reach a complementary fusion status between RGB and D and leads to poor
fusion results when facing low-quality D. Thus, this paper attempts to
integrate a novel depth quality aware subnet into the classic bi-stream
structure, aiming to assess the depth quality before conducting the selective
RGB-D fusion. Compared with the SOTA bi-stream methods, the major highlight of
our method is its ability to lessen the importance of those low-quality,
no-contribution, or even negative-contribution D regions during the RGB-D
fusion, achieving a much improved complementary status between RGB and D.
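To make the idea concrete, below is a minimal PyTorch-style sketch of how a depth quality aware subnet could gate the depth stream before fusion. It is not the authors' implementation: the module names, layer sizes, and the per-pixel sigmoid gating are illustrative assumptions that merely mirror the abstract's description of down-weighting low-quality D regions during selective RGB-D fusion.

```python
# Illustrative sketch of depth quality aware bi-stream fusion.
# Layer sizes, module names, and the gating form are assumptions, not the paper's code.
import torch
import torch.nn as nn


class DepthQualityAwareFusion(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Two parallel streams: one for RGB, one for depth (D).
        self.rgb_stream = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.depth_stream = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        # Quality subnet: predicts a per-pixel weight in [0, 1] that down-weights
        # low-quality or non-contributing D regions before fusion.
        self.quality_subnet = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        # Simple saliency head over the fused features.
        self.head = nn.Conv2d(channels, 1, 1)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        f_rgb = self.rgb_stream(rgb)
        f_d = self.depth_stream(depth)
        # Assess depth quality from both modalities, then gate the D features.
        q = self.quality_subnet(torch.cat([f_rgb, f_d], dim=1))
        fused = f_rgb + q * f_d   # selective RGB-D fusion
        return self.head(fused)   # saliency logits


# Example usage with a dummy RGB image and depth map.
model = DepthQualityAwareFusion()
saliency = model(torch.randn(1, 3, 224, 224), torch.randn(1, 1, 224, 224))
print(saliency.shape)  # torch.Size([1, 1, 224, 224])
```

In this sketch the gate q plays the role of the depth quality assessment: where q is near zero the fused features fall back to the RGB stream alone, which is the behaviour the abstract describes for low-quality or negative-contribution D regions.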
Related papers
- Symmetric Uncertainty-Aware Feature Transmission for Depth
Super-Resolution [52.582632746409665]
We propose a novel Symmetric Uncertainty-aware Feature Transmission (SUFT) for color-guided DSR.
Our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-06-01T06:35:59Z) - RGB-D Grasp Detection via Depth Guided Learning with Cross-modal
Attention [14.790193023912973]
This paper proposes a novel learning-based approach to RGB-D grasp detection, namely the Depth Guided Cross-modal Attention Network (DGCAN).
To better leverage the geometry information recorded in the depth channel, a complete 6-dimensional rectangle representation is adopted, with the grasp depth explicitly considered.
The prediction of the extra grasp depth substantially strengthens feature learning, thereby leading to more accurate results.
arXiv Detail & Related papers (2023-02-28T02:41:27Z) - Robust RGB-D Fusion for Saliency Detection [13.705088021517568]
We propose a robust RGB-D fusion method that benefits from layer-wise and trident spatial attention mechanisms.
Our experiments on five benchmark datasets demonstrate that the proposed fusion method performs consistently better than the state-of-the-art fusion alternatives.
arXiv Detail & Related papers (2022-08-02T21:23:00Z) - Cross-modality Discrepant Interaction Network for RGB-D Salient Object
Detection [78.47767202232298]
We propose a novel Cross-modality Discrepant Interaction Network (CDINet) for RGB-D SOD.
Two components are designed to implement the effective cross-modality interaction.
Our network outperforms 15 state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2021-08-04T11:24:42Z) - Deep RGB-D Saliency Detection with Depth-Sensitive Attention and
Automatic Multi-Modal Fusion [15.033234579900657]
RGB-D salient object detection (SOD) is usually formulated as a problem of classification or regression over two modalities, i.e., RGB and depth.
We propose a depth-sensitive RGB feature modeling scheme using the depth-wise geometric prior of salient objects.
Experiments on seven standard benchmarks demonstrate the effectiveness of the proposed approach against the state-of-the-art.
arXiv Detail & Related papers (2021-03-22T13:28:45Z) - Knowing Depth Quality In Advance: A Depth Quality Assessment Method For
RGB-D Salient Object Detection [53.603301314081826]
We propose a simple yet effective scheme to measure D quality in advance.
The proposed D quality measurement method achieves steady performance improvements of almost 2.0% in general.
arXiv Detail & Related papers (2020-08-07T10:52:52Z) - Data-Level Recombination and Lightweight Fusion Scheme for RGB-D Salient
Object Detection [73.31632581915201]
We propose a novel data-level recombination strategy to fuse RGB with D (depth) before deep feature extraction.
A newly designed lightweight triple-stream network is applied over these newly formulated data to achieve an optimal channel-wise complementary fusion status between RGB and D.
arXiv Detail & Related papers (2020-08-07T10:13:05Z) - DPANet: Depth Potentiality-Aware Gated Attention Network for RGB-D
Salient Object Detection [107.96418568008644]
We propose a novel network named DPANet to explicitly model the potentiality of the depth map and effectively integrate the cross-modal complementarity.
By introducing the depth potentiality perception, the network can perceive the potentiality of depth information in a learning-based manner.
arXiv Detail & Related papers (2020-03-19T07:27:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.