Knowing Depth Quality In Advance: A Depth Quality Assessment Method For
RGB-D Salient Object Detection
- URL: http://arxiv.org/abs/2008.04157v1
- Date: Fri, 7 Aug 2020 10:52:52 GMT
- Title: Knowing Depth Quality In Advance: A Depth Quality Assessment Method For
RGB-D Salient Object Detection
- Authors: Xuehao Wang, Shuai Li, Chenglizhao Chen, Aimin Hao, Hong Qin
- Abstract summary: We propose a simple yet effective scheme to measure depth (D) quality in advance.
The proposed D quality measurement method achieves steady performance improvements of nearly 2.0% on average.
- Score: 53.603301314081826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous RGB-D salient object detection (SOD) methods have widely adopted
deep learning tools to automatically strike a trade-off between RGB and D
(depth), whose key rationale is to take full advantage of their complementary
nature, aiming for much better SOD performance than using either modality
alone. However, such fully automatic fusion may not always be helpful
for the SOD task because the D quality itself usually varies from scene to
scene. It may easily lead to a suboptimal fusion result if the D quality is not
considered beforehand. Moreover, as an objective factor, the D quality has long
been overlooked by previous work. As a result, it is becoming a clear
performance bottleneck. Thus, we propose a simple yet effective scheme to
measure D quality in advance, the key idea of which is to devise a series of
features in accordance with the common attributes of high-quality D regions. To
be more concrete, we conduct D quality assessments for each image region,
following a multi-scale methodology that includes low-level edge consistency,
mid-level regional uncertainty and high-level model variance. All these
components will be computed independently and then be assembled with RGB and D
features, applied as implicit indicators, to guide the selective fusion.
Compared with the state-of-the-art fusion schemes, our method can achieve a
more reasonable fusion balance between RGB and D. Specifically, the proposed D
quality measurement method achieves steady performance improvements of nearly
2.0% on average.
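The abstract names three components computed per image region: low-level edge consistency, mid-level regional uncertainty, and high-level model variance. The paper does not specify its exact formulas here, so the sketch below is an illustrative interpretation, not the authors' implementation: edge consistency as IoU of RGB and depth edge maps, regional uncertainty as normalized depth-histogram entropy, model variance as the spread across multiple saliency predictions, and equal weights are all assumptions.

```python
import numpy as np

def edge_consistency(rgb_gray, depth, threshold=0.1):
    """Low-level cue: overlap (IoU) between RGB and depth edge maps.
    High-quality depth tends to share object boundaries with the RGB image."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)
    e_rgb = grad_mag(rgb_gray) > threshold
    e_d = grad_mag(depth) > threshold
    union = np.logical_or(e_rgb, e_d).sum()
    inter = np.logical_and(e_rgb, e_d).sum()
    return inter / union if union else 1.0  # no edges at all: treat as consistent

def regional_uncertainty(depth_region, bins=16):
    """Mid-level cue: normalized entropy of the depth histogram in a region.
    A crisp, unimodal depth distribution suggests reliable depth."""
    hist, _ = np.histogram(depth_region, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -(p * np.log2(p)).sum() / np.log2(bins)  # in [0, 1]

def model_variance(pred_maps):
    """High-level cue: mean per-pixel std. dev. across saliency predictions
    from several model variants; high disagreement suggests unreliable depth."""
    return float(np.stack(pred_maps).std(axis=0).mean())

def depth_quality(rgb_gray, depth, pred_maps, weights=(1/3, 1/3, 1/3)):
    """Assemble the three cues into a single quality score in [0, 1].
    Equal weighting is a placeholder; the paper learns the combination
    implicitly by feeding these cues alongside RGB and D features."""
    ec = edge_consistency(rgb_gray, depth)
    ru = 1.0 - regional_uncertainty(depth)   # low uncertainty -> high quality
    mv = 1.0 - model_variance(pred_maps)     # low variance -> high quality
    w1, w2, w3 = weights
    return w1 * ec + w2 * ru + w3 * mv
```

In the paper these cues are not thresholded into a hard score but assembled with the RGB and D features as implicit indicators guiding the selective fusion; the scalar combination above is only for illustration.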
Related papers
- Confidence-Aware RGB-D Face Recognition via Virtual Depth Synthesis [48.59382455101753]
2D face recognition encounters challenges in unconstrained environments due to varying illumination, occlusion, and pose.
Recent studies focus on RGB-D face recognition to improve robustness by incorporating depth information.
In this work, we first construct a diverse depth dataset generated by 3D Morphable Models for depth model pre-training.
Then, we propose a domain-independent pre-training framework that utilizes readily available pre-trained RGB and depth models to separately perform face recognition without needing additional paired data for retraining.
arXiv Detail & Related papers (2024-03-11T09:12:24Z) - RGB-based Category-level Object Pose Estimation via Decoupled Metric
Scale Recovery [72.13154206106259]
We propose a novel pipeline that decouples the 6D pose and size estimation to mitigate the influence of imperfect scales on rigid transformations.
Specifically, we leverage a pre-trained monocular estimator to extract local geometric information.
A separate branch is designed to directly recover the metric scale of the object based on category-level statistics.
arXiv Detail & Related papers (2023-09-19T02:20:26Z) - Robust RGB-D Fusion for Saliency Detection [13.705088021517568]
We propose a robust RGB-D fusion method that benefits from layer-wise and trident spatial attention mechanisms.
Our experiments on five benchmark datasets demonstrate that the proposed fusion method performs consistently better than the state-of-the-art fusion alternatives.
arXiv Detail & Related papers (2022-08-02T21:23:00Z) - Pyramidal Attention for Saliency Detection [30.554118525502115]
This paper exploits only RGB images, estimates depth from RGB, and leverages the intermediate depth features.
We employ a pyramidal attention structure to extract multi-level convolutional-transformer features to process initial stage representations.
We report significantly improved performance against 21 and 40 state-of-the-art SOD methods on eight RGB and RGB-D datasets.
arXiv Detail & Related papers (2022-04-14T06:57:46Z) - Deep RGB-D Saliency Detection with Depth-Sensitive Attention and
Automatic Multi-Modal Fusion [15.033234579900657]
RGB-D salient object detection (SOD) is usually formulated as a problem of classification or regression over two modalities, i.e., RGB and depth.
We propose a depth-sensitive RGB feature modeling scheme using the depth-wise geometric prior of salient objects.
Experiments on seven standard benchmarks demonstrate the effectiveness of the proposed approach against the state-of-the-art.
arXiv Detail & Related papers (2021-03-22T13:28:45Z) - Learning Selective Mutual Attention and Contrast for RGB-D Saliency
Detection [145.4919781325014]
How to effectively fuse cross-modal information is the key problem for RGB-D salient object detection.
Many models use the feature fusion strategy but are limited by the low-order point-to-point fusion methods.
We propose a novel mutual attention model by fusing attention and contexts from different modalities.
arXiv Detail & Related papers (2020-10-12T08:50:10Z) - Depth Quality Aware Salient Object Detection [52.618404186447165]
This paper attempts to integrate a novel depth-quality-aware component into the classic bi-stream structure, aiming to assess the depth quality before conducting the selective RGB-D fusion.
Compared with the SOTA bi-stream methods, the major highlight of our method is its ability to lessen the importance of those low-quality, no-contribution, or even negative-contribution D regions during the RGB-D fusion.
arXiv Detail & Related papers (2020-08-07T09:54:39Z) - RGB-D Salient Object Detection: A Survey [195.83586883670358]
We provide a comprehensive survey of RGB-D based SOD models from various perspectives.
We also review SOD models and popular benchmark datasets from this domain.
We discuss several challenges and open directions of RGB-D based SOD for future research.
arXiv Detail & Related papers (2020-08-01T10:01:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.