BridgeNet: A Joint Learning Network of Depth Map Super-Resolution and
Monocular Depth Estimation
- URL: http://arxiv.org/abs/2107.12541v1
- Date: Tue, 27 Jul 2021 01:28:23 GMT
- Authors: Qi Tang, Runmin Cong, Ronghui Sheng, Lingzhi He, Dan Zhang, Yao Zhao,
and Sam Kwong
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Depth map super-resolution is a task with high practical demand in
industry. Existing color-guided depth map super-resolution methods usually
require an extra branch to extract high-frequency detail information from the
RGB image to guide reconstruction of the low-resolution depth map. However,
because differences remain between the two modalities, direct information
transfer in the feature or edge-map dimension cannot achieve satisfactory
results, and may even trigger texture copying in regions where the structures
of the RGB-D pair are inconsistent. Inspired by multi-task learning, we
propose a joint learning network of depth map super-resolution (DSR) and
monocular depth estimation (MDE) that introduces no additional supervision
labels. For the interaction of the two subnetworks, we adopt a differentiated
guidance strategy and design two corresponding bridges. One is the
high-frequency attention bridge (HABdg), designed for the feature encoding
process, which learns high-frequency information from the MDE task to guide
the DSR task. The other is the content guidance bridge (CGBdg), designed for
the depth map reconstruction process, which provides content guidance learned
from the DSR task to the MDE task. The entire network architecture is highly
portable and provides a paradigm for associating the DSR and MDE tasks.
Extensive experiments on benchmark datasets demonstrate that our method
achieves competitive performance. Our code and models are available at
https://rmcong.github.io/proj_BridgeNet.html.
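The high-frequency attention idea behind HABdg can be sketched roughly as follows. This is an illustrative NumPy mock-up, not the paper's implementation (which uses learned convolutional layers); the function names, the fixed Laplacian filter, and the sigmoid gate are all assumptions for illustration:

```python
import numpy as np

def high_frequency_map(feat):
    """Extract a high-frequency response via a fixed Laplacian filter
    (illustrative stand-in for a learned high-frequency extractor)."""
    k = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=float)
    pad = np.pad(feat, 1, mode="edge")
    out = np.zeros_like(feat, dtype=float)
    h, w = feat.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * k)
    return np.abs(out)

def habdg(mde_feat, dsr_feat):
    """Hypothetical high-frequency attention bridge: gate the DSR
    features by the high-frequency response of the MDE features,
    added back residually so low-frequency content is preserved."""
    hf = high_frequency_map(mde_feat)
    attn = 1.0 / (1.0 + np.exp(-hf))  # sigmoid gate; hf >= 0, so attn in [0.5, 1)
    return dsr_feat * attn + dsr_feat  # residual modulation
```

A flat (constant) MDE feature map has no high-frequency content, so the gate stays at its baseline and the bridge reduces to a uniform residual scaling; edges and fine structures in the MDE features amplify the corresponding DSR features more strongly.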
Related papers
- DSR-Diff: Depth Map Super-Resolution with Diffusion Model [38.68563026759223]
We present a novel color-guided depth super-resolution (CDSR) paradigm that utilizes a diffusion model within the latent space to generate guidance for depth map super-resolution.
Our proposed method has shown superior performance in extensive experiments when compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-11-16T14:18:10Z)
- RGB-Depth Fusion GAN for Indoor Depth Completion [29.938869342958125]
In this paper, we design a novel two-branch end-to-end fusion network, which takes a pair of RGB and incomplete depth images as input to predict a dense and completed depth map.
In one branch, we propose an RGB-depth fusion GAN to transfer the RGB image to the fine-grained textured depth map.
In the other branch, we adopt adaptive fusion modules named W-AdaIN to propagate the features across the two branches.
arXiv Detail & Related papers (2022-03-21T10:26:38Z)
- Joint Learning of Salient Object Detection, Depth Estimation and Contour
Extraction [91.43066633305662]
We propose a novel multi-task and multi-modal filtered transformer (MMFT) network for RGB-D salient object detection (SOD).
Specifically, we unify three complementary tasks: depth estimation, salient object detection, and contour estimation. The multi-task mechanism encourages the model to learn task-aware features from the auxiliary tasks.
Experiments show that it not only significantly surpasses the depth-based RGB-D SOD methods on multiple datasets, but also precisely predicts a high-quality depth map and salient contour at the same time.
arXiv Detail & Related papers (2022-03-09T17:20:18Z)
- RGB-D Saliency Detection via Cascaded Mutual Information Minimization [122.8879596830581]
Existing RGB-D saliency detection models do not explicitly encourage RGB and depth to achieve effective multi-modal learning.
We introduce a novel multi-stage cascaded learning framework via mutual information minimization to "explicitly" model the multi-modal information between RGB image and depth data.
arXiv Detail & Related papers (2021-09-15T12:31:27Z)
- Cross-modality Discrepant Interaction Network for RGB-D Salient Object
Detection [78.47767202232298]
We propose a novel Cross-modality Discrepant Interaction Network (CDINet) for RGB-D SOD.
Two components are designed to implement the effective cross-modality interaction.
Our network outperforms 15 state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2021-08-04T11:24:42Z)
- High-resolution Depth Maps Imaging via Attention-based Hierarchical
Multi-modal Fusion [84.24973877109181]
We propose a novel attention-based hierarchical multi-modal fusion network for guided DSR.
We show that our approach outperforms state-of-the-art methods in terms of reconstruction accuracy, running speed and memory efficiency.
arXiv Detail & Related papers (2021-04-04T03:28:33Z)
- Learning Scene Structure Guidance via Cross-Task Knowledge Transfer for
Single Depth Super-Resolution [35.21324004883027]
Existing color-guided depth super-resolution (DSR) approaches require paired RGB-D data as training samples, where the RGB image serves as structural guidance to recover the degraded depth map owing to their geometrical similarity.
We explore, for the first time, learning cross-modality knowledge at the training stage, where both RGB and depth modalities are available, while testing on a target dataset where only the depth modality exists.
Specifically, we construct an auxiliary depth estimation (DE) task that takes an RGB image as input to estimate a depth map, and train the DSR and DE tasks collaboratively to boost the performance of DSR.
arXiv Detail & Related papers (2021-03-24T03:08:25Z)
- Accurate RGB-D Salient Object Detection via Collaborative Learning [101.82654054191443]
RGB-D saliency detection shows impressive ability in some challenging scenarios.
We propose a novel collaborative learning framework where edge, depth and saliency are leveraged in a more efficient way.
arXiv Detail & Related papers (2020-07-23T04:33:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.