Multi-Scale Progressive Fusion Learning for Depth Map Super-Resolution
- URL: http://arxiv.org/abs/2011.11865v1
- Date: Tue, 24 Nov 2020 03:03:07 GMT
- Title: Multi-Scale Progressive Fusion Learning for Depth Map Super-Resolution
- Authors: Chuhua Xian, Kun Qian, Zitian Zhang, and Charlie C.L. Wang
- Abstract summary: The resolution of a depth map collected by a depth camera is often lower than that of its associated RGB camera.
A major problem in depth map super-resolution is that results often exhibit obvious jagged edges and excessive loss of detail.
We propose a multi-scale progressive fusion network for depth map SR, which possesses a structure that integrates hierarchical features from different domains.
- Score: 11.072332820377612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Limited by cost and technology, the resolution of the depth map collected by a
depth camera is often lower than that of its associated RGB camera. Although
RGB image super-resolution (SR) has been studied extensively, depth map
super-resolution remains prone to obvious jagged edges and excessive loss of
detail. To tackle these difficulties, we propose a multi-scale progressive
fusion network for depth map SR, which possesses an asymptotic structure to
integrate hierarchical features from different domains. Given a low-resolution
(LR) depth map and its associated high-resolution (HR) color image, we use two
separate branches to perform multi-scale feature learning. Next, a step-wise
fusion strategy restores the HR depth map. Finally, a multi-dimensional loss is
introduced to enforce clear boundaries and fine details. Extensive experiments
show that the proposed method outperforms state-of-the-art methods both
qualitatively and quantitatively.
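As a concrete illustration of the pipeline described in the abstract, here is a minimal PyTorch sketch of a two-branch, step-wise fusion network. The module names, feature widths, three-level depth, and residual head are all illustrative assumptions, not the authors' implementation: a depth branch processes the upsampled LR depth map, a color branch processes the HR RGB image, and a fusion convolution merges the two at each level.

```python
# Minimal sketch of two-branch multi-scale progressive fusion for depth SR.
# All layer choices are hypothetical assumptions, not the paper's network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleProgressiveFusionSR(nn.Module):
    def __init__(self, feats=32, scale=4):
        super().__init__()
        self.scale = scale
        # Depth branch: features from the upsampled LR depth map.
        self.depth_branch = nn.ModuleList(
            [nn.Conv2d(1 if i == 0 else feats, feats, 3, padding=1) for i in range(3)])
        # Color branch: guidance features from the HR RGB image.
        self.color_branch = nn.ModuleList(
            [nn.Conv2d(3 if i == 0 else feats, feats, 3, padding=1) for i in range(3)])
        # Step-wise fusion: merge depth and color features at each level.
        self.fusions = nn.ModuleList(
            [nn.Conv2d(2 * feats, feats, 3, padding=1) for _ in range(3)])
        self.head = nn.Conv2d(feats, 1, 3, padding=1)

    def forward(self, lr_depth, hr_rgb):
        d = F.interpolate(lr_depth, scale_factor=self.scale, mode="bicubic",
                          align_corners=False)
        base = d  # keep the coarse estimate for a residual connection
        c = hr_rgb
        for conv_d, conv_c, fuse in zip(self.depth_branch, self.color_branch,
                                        self.fusions):
            d = F.relu(conv_d(d))
            c = F.relu(conv_c(c))
            d = F.relu(fuse(torch.cat([d, c], dim=1)))  # progressive fusion
        return base + self.head(d)  # predict the residual HR depth

net = MultiScaleProgressiveFusionSR()
hr = net(torch.rand(1, 1, 32, 32), torch.rand(1, 3, 128, 128))
print(hr.shape)  # torch.Size([1, 1, 128, 128])
```

The multi-dimensional loss mentioned in the abstract could, for instance, combine a pixel-wise reconstruction term with a gradient-based edge term on the output; the paper's exact loss terms are not reproduced here.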
Related papers
- Decoupling Fine Detail and Global Geometry for Compressed Depth Map Super-Resolution [55.9977636042469]
Bit-depth compression produces a uniform depth representation in regions with subtle variations, hindering the recovery of detailed information.
Meanwhile, densely distributed random noise reduces the accuracy of estimating the global geometric structure of the scene.
We propose a novel framework, termed geometry-decoupled network (GDNet), for compressed depth map super-resolution.
arXiv Detail & Related papers (2024-11-05T16:37:30Z)
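As a rough sketch of the decoupling idea (not GDNet itself), the code below splits an upsampled, compressed depth map into a smooth global-geometry component and a fine-detail residual, and restores each with its own small branch; the box-blur low-pass filter and both branches are assumptions for illustration.

```python
# Hypothetical sketch: decouple global geometry (low frequency) from fine
# detail (high frequency) and restore each part separately. Not GDNet.
import torch
import torch.nn as nn
import torch.nn.functional as F

def low_pass(x, k=9):
    # Box blur as a stand-in low-pass filter for the global geometry.
    w = torch.ones(1, 1, k, k, device=x.device) / (k * k)
    return F.conv2d(x, w, padding=k // 2)

class GeometryDetailSR(nn.Module):
    def __init__(self, feats=32, scale=4):
        super().__init__()
        self.scale = scale
        # One branch regularizes the smooth geometry (robust to noise);
        # the other recovers fine structure from the residual.
        self.geometry = nn.Sequential(
            nn.Conv2d(1, feats, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feats, 1, 3, padding=1))
        self.detail = nn.Sequential(
            nn.Conv2d(1, feats, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feats, 1, 3, padding=1))

    def forward(self, lr_depth):
        up = F.interpolate(lr_depth, scale_factor=self.scale, mode="bicubic",
                           align_corners=False)
        smooth = low_pass(up)
        return self.geometry(smooth) + self.detail(up - smooth)
```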
- Symmetric Uncertainty-Aware Feature Transmission for Depth Super-Resolution [52.582632746409665]
We propose a novel Symmetric Uncertainty-aware Feature Transmission (SUFT) for color-guided DSR.
Our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-06-01T06:35:59Z)
- Guided Depth Super-Resolution by Deep Anisotropic Diffusion [18.445649181582823]
We propose a novel approach which combines guided anisotropic diffusion with a deep convolutional network.
We achieve unprecedented results on three commonly used benchmarks for guided depth super-resolution.
arXiv Detail & Related papers (2022-11-21T15:48:13Z)
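The classical half of this approach can be sketched directly. Below is a minimal NumPy version of guided anisotropic diffusion in which the conductance is computed from the guide image's gradients, so smoothing stops at image edges; the deep-network half of the method is omitted, and the conductance function and constants are assumptions.

```python
# Guided anisotropic (Perona-Malik style) diffusion sketch: the depth map is
# smoothed iteratively, with diffusion gated by the HR guide's gradients.
# Assumed conductance g(x) = exp(-(x/k)^2); constants are illustrative.
import numpy as np

def guided_anisotropic_diffusion(depth, guide, iters=50, k=0.05, lam=0.2):
    d = depth.astype(np.float64).copy()
    g = guide.astype(np.float64)
    for _ in range(iters):
        # Differences of the evolving depth in the four grid directions.
        dn = np.roll(d, -1, 0) - d
        ds = np.roll(d, 1, 0) - d
        de = np.roll(d, -1, 1) - d
        dw = np.roll(d, 1, 1) - d
        # Conductance from the *guide* image's gradients (the anisotropy):
        # small where the guide has an edge, so depth edges are preserved.
        cn = np.exp(-((np.roll(g, -1, 0) - g) / k) ** 2)
        cs = np.exp(-((np.roll(g, 1, 0) - g) / k) ** 2)
        ce = np.exp(-((np.roll(g, -1, 1) - g) / k) ** 2)
        cw = np.exp(-((np.roll(g, 1, 1) - g) / k) ** 2)
        d += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return d

# Usage: upsample the LR depth first (e.g. bicubic), then diffuse it under a
# grayscale HR guide, both normalized to [0, 1].
depth0 = np.random.rand(64, 64)
guide = np.random.rand(64, 64)
refined = guided_anisotropic_diffusion(depth0, guide)
```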
- Weakly-Supervised Monocular Depth Estimation with Resolution-Mismatched Data [73.9872931307401]
We propose a novel weakly-supervised framework to train a monocular depth estimation network.
The proposed framework is composed of a shared-weight monocular depth estimation network and a depth reconstruction network used for distillation.
Experimental results demonstrate that our method outperforms unsupervised and semi-supervised learning-based schemes.
arXiv Detail & Related papers (2021-09-23T18:04:12Z)
- BridgeNet: A Joint Learning Network of Depth Map Super-Resolution and Monocular Depth Estimation [60.34562823470874]
We propose a joint learning network of depth map super-resolution (DSR) and monocular depth estimation (MDE) without introducing additional supervision labels.
One is the high-frequency attention bridge (HABdg), designed for the feature encoding process, which learns high-frequency information from the MDE task to guide the DSR task.
The other is the content guidance bridge (CGBdg), designed for the depth map reconstruction process, which provides content guidance learned from the DSR task to the MDE task.
arXiv Detail & Related papers (2021-07-27T01:28:23Z)
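A minimal sketch of the bridging idea: features from one task produce a sigmoid gate that modulates the other task's features. The module below is a generic attention bridge under assumed shapes, not the actual HABdg/CGBdg design.

```python
# Generic attention bridge between two tasks' feature maps. Layer names and
# channel counts are assumptions for illustration, not BridgeNet's modules.
import torch
import torch.nn as nn

class AttentionBridge(nn.Module):
    """Maps source-task features into a [0, 1] gate for target-task features."""
    def __init__(self, ch=32):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, target_feat, source_feat):
        # e.g. HABdg direction: MDE features highlight high-frequency regions
        # for DSR; CGBdg would run the opposite way during reconstruction.
        return target_feat * self.gate(source_feat)

bridge = AttentionBridge()
dsr_feat = torch.rand(1, 32, 64, 64)
mde_feat = torch.rand(1, 32, 64, 64)
guided = bridge(dsr_feat, mde_feat)
```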
- Towards Fast and Accurate Real-World Depth Super-Resolution: Benchmark Dataset and Baseline [48.69396457721544]
We build a large-scale dataset named "RGB-D-D" to promote the study of depth map super-resolution (SR).
We provide a fast depth map super-resolution (FDSR) baseline, in which a high-frequency component adaptively decomposed from the RGB image guides the depth map SR.
For real-world LR depth maps, our algorithm produces more accurate HR depth maps with clearer boundaries and, to some extent, corrects depth value errors.
arXiv Detail & Related papers (2021-04-13T13:27:26Z)
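The high-frequency guidance idea can be sketched as follows: subtract a blurred copy of the HR RGB image to isolate its high-frequency (edge) component, then concatenate it with the upsampled depth map for residual refinement. The fixed box blur stands in for the paper's adaptive decomposition; all layer choices are assumptions.

```python
# Sketch of high-frequency RGB guidance for depth SR. The fixed blur is a
# stand-in for FDSR's adaptive decomposition; everything here is assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

def high_frequency(rgb, k=11):
    # High-frequency residual = image minus a box-blurred copy (per channel).
    w = torch.ones(3, 1, k, k, device=rgb.device) / (k * k)
    return rgb - F.conv2d(rgb, w, padding=k // 2, groups=3)

class HFGuidedDepthSR(nn.Module):
    def __init__(self, feats=32, scale=4):
        super().__init__()
        self.scale = scale
        self.fuse = nn.Sequential(
            nn.Conv2d(1 + 3, feats, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feats, 1, 3, padding=1))

    def forward(self, lr_depth, hr_rgb):
        up = F.interpolate(lr_depth, scale_factor=self.scale, mode="bicubic",
                           align_corners=False)
        hf = high_frequency(hr_rgb)  # edges from the RGB image only
        return up + self.fuse(torch.cat([up, hf], dim=1))

net = HFGuidedDepthSR()
out = net(torch.rand(1, 1, 32, 32), torch.rand(1, 3, 128, 128))
```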
- High-resolution Depth Maps Imaging via Attention-based Hierarchical Multi-modal Fusion [84.24973877109181]
We propose a novel attention-based hierarchical multi-modal fusion network for guided DSR.
We show that our approach outperforms state-of-the-art methods in terms of reconstruction accuracy, running speed and memory efficiency.
arXiv Detail & Related papers (2021-04-04T03:28:33Z)
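As one plausible reading of attention-based multi-modal fusion, the sketch below re-weights concatenated depth and RGB feature channels with a squeeze-and-excitation style gate before merging them, so the network can favor whichever modality is reliable; this gating is an assumed stand-in, not the paper's hierarchical mechanism.

```python
# Assumed sketch of attention-weighted multi-modal feature fusion
# (squeeze-and-excitation style channel gating, then a 1x1 merge).
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    def __init__(self, ch=32, reduction=4):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * ch, 2 * ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(2 * ch // reduction, 2 * ch, 1), nn.Sigmoid())
        self.merge = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, depth_feat, rgb_feat):
        x = torch.cat([depth_feat, rgb_feat], dim=1)
        return self.merge(x * self.attn(x))  # gate channels, then merge

fusion = ChannelAttentionFusion()
fused = fusion(torch.rand(1, 32, 64, 64), torch.rand(1, 32, 64, 64))
```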
- Multimodal Deep Unfolding for Guided Image Super-Resolution [23.48305854574444]
Deep learning methods rely on training data to learn an end-to-end mapping from a low-resolution input to a high-resolution output.
We propose a multimodal deep learning design that incorporates sparse priors and allows the effective integration of information from another image modality into the network architecture.
Our solution relies on a novel deep unfolding operator, performing steps similar to an iterative algorithm for convolutional sparse coding with side information.
arXiv Detail & Related papers (2020-01-21T14:41:53Z)
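A deep unfolding operator of this kind can be sketched as a learned ISTA iteration: each step applies learned convolutions to the input, the current code estimate, and the side-information image, followed by soft-thresholding (the proximal operator of the l1 penalty). Shapes and the injection point of the side information are assumptions, not the paper's exact operator.

```python
# One unfolded ISTA-like step for convolutional sparse coding with side
# information. Layer shapes and the additive guide term are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnfoldedISTAStep(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.W = nn.Conv2d(1, ch, 3, padding=1)    # analysis of the input
        self.S = nn.Conv2d(ch, ch, 3, padding=1)   # learned recurrent term
        self.G = nn.Conv2d(3, ch, 3, padding=1)    # side-information term
        self.theta = nn.Parameter(torch.full((1, ch, 1, 1), 0.1))

    def forward(self, z, y, side):
        pre = self.W(y) + self.S(z) + self.G(side)
        # Soft-thresholding with a learned per-channel threshold.
        return torch.sign(pre) * F.relu(pre.abs() - self.theta)

step = UnfoldedISTAStep()
y = torch.rand(1, 1, 64, 64)      # LR input (pre-upsampled)
side = torch.rand(1, 3, 64, 64)   # HR guide image (side information)
z = torch.zeros(1, 32, 64, 64)
for _ in range(5):                # a few unfolded iterations
    z = step(z, y, side)
```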
This list is automatically generated from the titles and abstracts of the papers on this site.