SteeredMarigold: Steering Diffusion Towards Depth Completion of Largely Incomplete Depth Maps
- URL: http://arxiv.org/abs/2409.10202v1
- Date: Mon, 16 Sep 2024 11:52:13 GMT
- Title: SteeredMarigold: Steering Diffusion Towards Depth Completion of Largely Incomplete Depth Maps
- Authors: Jakub Gregorek, Lazaros Nalpantidis
- Abstract summary: SteeredMarigold is a training-free, zero-shot depth completion method.
It produces metric dense depth even for largely incomplete depth maps.
Our code will be publicly available.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although the depth maps captured by RGB-D sensors deployed in real environments are often characterized by large areas missing valid depth measurements, the vast majority of depth completion methods still assume depth values covering all areas of the scene. To address this limitation, we introduce SteeredMarigold, a training-free, zero-shot depth completion method capable of producing metric dense depth even for largely incomplete depth maps. SteeredMarigold achieves this by using the available sparse depth points as conditions to steer a denoising diffusion probabilistic model. Our method outperforms relevant top-performing methods on the NYUv2 dataset in tests where no depth was provided for a large area, achieving state-of-the-art performance and exhibiting remarkable robustness against depth map incompleteness. Our code will be publicly available.
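The method's key mechanism, as described above, is steering a denoising diffusion probabilistic model with the available sparse depth points. Below is a minimal, hypothetical sketch of one common way such steering can be realized: the measured depth values, noised to the current timestep, are re-injected after each denoising step so that sampling stays consistent with the observations. The `denoiser` callable, the DDIM-style update, and all names are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def steered_reverse_step(x_t, t, denoiser, alphas_cumprod,
                         sparse_depth, valid_mask, rng):
    """One guided reverse-diffusion step (illustrative sketch, t >= 1)."""
    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t - 1]

    # Standard deterministic (DDIM-style) update from the predicted noise.
    eps = denoiser(x_t, t)
    x0_pred = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
    x_prev = np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps

    # Steering: overwrite pixels that have a sparse measurement with the
    # measured depth diffused to step t-1, keeping the sampling trajectory
    # consistent with the observations.
    noise = rng.standard_normal(x_prev.shape)
    known = np.sqrt(a_prev) * sparse_depth + np.sqrt(1.0 - a_prev) * noise
    x_prev[valid_mask] = known[valid_mask]
    return x_prev
```

Running this step from t = T down to 1 with any pretrained depth-domain denoiser yields a dense sample that agrees with the sparse measurements wherever they exist.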
Related papers
- RDFC-GAN: RGB-Depth Fusion CycleGAN for Indoor Depth Completion [28.634851863097953]
We propose a novel two-branch end-to-end fusion network named RDFC-GAN.
It takes a pair of RGB and incomplete depth images as input to predict a dense and completed depth map.
The first branch employs an encoder-decoder structure that adheres to the Manhattan world assumption.
The other branch applies an RGB-depth fusion CycleGAN, adept at translating RGB imagery into detailed, textured depth maps.
arXiv Detail & Related papers (2023-06-06T11:03:05Z)
- SparseNeRF: Distilling Depth Ranking for Few-shot Novel View Synthesis [93.46963803030935]
We present a new Sparse-view NeRF (SparseNeRF) framework that exploits depth priors from real-world inaccurate observations.
We propose a simple yet effective local depth ranking constraint on NeRFs, such that the expected depth ranking of the NeRF is consistent with that of the coarse depth maps in local patches.
We also collect a new dataset NVS-RGBD that contains real-world depth maps from Azure Kinect, ZED 2, and iPhone 13 Pro.
arXiv Detail & Related papers (2023-03-28T17:58:05Z)
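The local depth ranking constraint above can be illustrated with a small, hypothetical pairwise loss: within randomly chosen local patches, the ordering of the rendered depths should agree with the ordering given by the coarse sensor depths. The sampling scheme, margin, and function name are assumptions for this sketch and do not reproduce the exact SparseNeRF loss.

```python
import torch

def local_depth_ranking_loss(pred_depth, coarse_depth,
                             num_pairs=1024, patch=8, margin=1e-4):
    """Hinge-style ranking penalty on pixel pairs sampled inside local patches."""
    h, w = pred_depth.shape
    ys = torch.randint(0, h - patch, (num_pairs,))
    xs = torch.randint(0, w - patch, (num_pairs,))
    dy = torch.randint(0, patch, (num_pairs, 2))
    dx = torch.randint(0, patch, (num_pairs, 2))

    p = pred_depth[ys[:, None] + dy, xs[:, None] + dx]    # rendered depths, (num_pairs, 2)
    c = coarse_depth[ys[:, None] + dy, xs[:, None] + dx]  # coarse sensor depths

    # Penalise pairs whose predicted ordering disagrees with the coarse ordering.
    sign = torch.sign(c[:, 0] - c[:, 1])
    return torch.clamp(margin - sign * (p[:, 0] - p[:, 1]), min=0).mean()
```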
- DesNet: Decomposed Scale-Consistent Network for Unsupervised Depth Completion [28.91716162403531]
Unsupervised depth completion aims to recover dense depth from sparse measurements without using ground-truth annotations.
We propose a decomposed scale-consistent learning (DSCL) strategy, which decomposes absolute depth into relative depth prediction and global scale estimation.
Our approach achieves state-of-the-art performance on the indoor NYUv2 dataset.
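As a rough illustration of this decomposition (not DesNet's actual network), metric depth can be recovered from a relative depth prediction once a single global scale is estimated, for example from whatever sparse metric measurements are available; the median-ratio estimator below is an assumption made for this sketch.

```python
import numpy as np

def recompose_metric_depth(relative_depth, sparse_metric_depth, valid_mask):
    """Combine a relative depth map with a single global scale estimate."""
    ratios = (sparse_metric_depth[valid_mask]
              / np.clip(relative_depth[valid_mask], 1e-6, None))
    scale = np.median(ratios)       # global scale from valid sparse points
    return scale * relative_depth   # metric-scale dense depth
```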
arXiv Detail & Related papers (2022-11-20T14:56:18Z)
- RGB-Depth Fusion GAN for Indoor Depth Completion [29.938869342958125]
In this paper, we design a novel two-branch end-to-end fusion network, which takes a pair of RGB and incomplete depth images as input to predict a dense and completed depth map.
In one branch, we propose an RGB-depth fusion GAN to transfer the RGB image to a fine-grained textured depth map.
In the other branch, we adopt adaptive fusion modules named W-AdaIN to propagate the features across the two branches.
arXiv Detail & Related papers (2022-03-21T10:26:38Z)
- Robust Depth Completion with Uncertainty-Driven Loss Functions [60.9237639890582]
We introduce uncertainty-driven loss functions to improve the robustness of depth completion and to handle its inherent uncertainty.
Our method has been tested on the KITTI Depth Completion Benchmark and achieved state-of-the-art robustness in terms of the MAE, IMAE, and IRMSE metrics.
arXiv Detail & Related papers (2021-12-15T05:22:34Z)
- DnD: Dense Depth Estimation in Crowded Dynamic Indoor Scenes [68.38952377590499]
We present a novel approach for estimating depth from a monocular camera as it moves through complex indoor environments.
Our approach predicts absolute scale depth maps over the entire scene consisting of a static background and multiple moving people.
arXiv Detail & Related papers (2021-08-12T09:12:39Z)
- Towards Fast and Accurate Real-World Depth Super-Resolution: Benchmark Dataset and Baseline [48.69396457721544]
We build a large-scale dataset named "RGB-D-D" to promote the study of depth map super-resolution (SR).
We provide a fast depth map super-resolution (FDSR) baseline, in which a high-frequency component is adaptively decomposed from the RGB image to guide the depth map SR.
For real-world LR depth maps, our algorithm can produce more accurate HR depth maps with clearer boundaries and, to some extent, correct depth value errors.
arXiv Detail & Related papers (2021-04-13T13:27:26Z)
- FCFR-Net: Feature Fusion based Coarse-to-Fine Residual Learning for Monocular Depth Completion [15.01291779855834]
Recent approaches mainly formulate depth completion as a one-stage end-to-end learning task.
We propose a novel end-to-end residual learning framework, which formulates depth completion as a two-stage learning task.
arXiv Detail & Related papers (2020-12-15T13:09:56Z)
- Efficient Depth Completion Using Learned Bases [94.0808155168311]
We propose a new global geometry constraint for depth completion.
By assuming that depth maps often lie on low-dimensional subspaces, a dense depth map can be approximated by a weighted sum of full-resolution principal depth bases.
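The low-dimensional-subspace assumption can be made concrete with a plain PCA sketch: fit principal depth bases on a stack of training depth maps and reconstruct a map as the mean plus a weighted sum of those bases. The paper learns the bases and weights with a network; the SVD-based version below is only an illustration.

```python
import numpy as np

def fit_depth_bases(training_depths, k=32):
    """Estimate k full-resolution principal depth bases from (N, H, W) maps."""
    X = training_depths.reshape(len(training_depths), -1)   # (N, H*W)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]                                      # (H*W,), (k, H*W)

def approximate_depth(depth, mean, bases):
    """Project a depth map onto the bases and rebuild it as a weighted sum."""
    w = bases @ (depth.ravel() - mean)                        # basis weights
    return (mean + bases.T @ w).reshape(depth.shape)
```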
arXiv Detail & Related papers (2020-12-02T11:57:37Z)
- Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z)