DenseFormer: Learning Dense Depth Map from Sparse Depth and Image via Conditional Diffusion Model
- URL: http://arxiv.org/abs/2503.23993v1
- Date: Mon, 31 Mar 2025 12:11:01 GMT
- Title: DenseFormer: Learning Dense Depth Map from Sparse Depth and Image via Conditional Diffusion Model
- Authors: Ming Yuan, Sichao Wang, Chuang Zhang, Lei He, Qing Xu, Jianqiang Wang
- Abstract summary: We propose DenseFormer, a novel method that integrates the diffusion model into the depth completion task. DenseFormer generates the dense depth map by progressively refining an initial random depth distribution through multiple iterations. This paper presents a depth refinement module that applies multi-step iterative refinement across various ranges to the dense depth results generated by the diffusion process.
- Score: 18.694510415777632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The depth completion task is a critical problem in autonomous driving, involving the generation of dense depth maps from sparse depth maps and RGB images. Most existing methods employ a spatial propagation network to iteratively refine the depth map after obtaining an initial dense depth. In this paper, we propose DenseFormer, a novel method that integrates the diffusion model into the depth completion task. By incorporating the denoising mechanism of the diffusion model, DenseFormer generates the dense depth map by progressively refining an initial random depth distribution through multiple iterations. We propose a feature extraction module that leverages a feature pyramid structure, along with multi-layer deformable attention, to effectively extract and integrate features from sparse depth maps and RGB images, which serve as the guiding condition for the diffusion process. Additionally, this paper presents a depth refinement module that applies multi-step iterative refinement across various ranges to the dense depth results generated by the diffusion process. The module utilizes image features enriched with multi-scale information and sparse depth input to further enhance the accuracy of the predicted depth map. Extensive experiments on the KITTI outdoor scene dataset demonstrate that DenseFormer outperforms classical depth completion methods.
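The abstract's core idea, refining an initial random depth distribution into a dense map through iterative denoising conditioned on sparse depth, can be illustrated with a minimal, hand-rolled sketch. This is not DenseFormer's actual network: the box filter below stands in for the learned, image-conditioned denoiser, and all step counts and noise scales are illustrative assumptions.

```python
import numpy as np

def denoise_step(depth, sparse_depth, mask, t, total_steps, rng):
    """One toy refinement step: blend the estimate toward the sparse
    observations, smooth, and shrink the injected noise over time."""
    # Pull the estimate toward observed sparse values where available.
    refined = np.where(mask, 0.5 * depth + 0.5 * sparse_depth, depth)
    # Smooth with a simple box filter (stand-in for the learned,
    # image-conditioned denoiser in a real diffusion model).
    pad = np.pad(refined, 1, mode="edge")
    smoothed = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                pad[1:-1, :-2] + pad[1:-1, 2:] + refined) / 5.0
    # Noise scale decays as t counts down from total_steps to 1.
    sigma = 0.1 * t / total_steps
    return smoothed + sigma * rng.standard_normal(depth.shape)

def complete_depth(sparse_depth, mask, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    depth = rng.standard_normal(sparse_depth.shape)  # random initial depth
    for t in range(steps, 0, -1):
        depth = denoise_step(depth, sparse_depth, mask, t, steps, rng)
    return depth

# Toy example: a 16x16 sloped plane observed at ~10% of pixels.
rng = np.random.default_rng(1)
gt = np.fromfunction(lambda i, j: 1.0 + 0.05 * (i + j), (16, 16))
mask = rng.random((16, 16)) < 0.1
dense = complete_depth(np.where(mask, gt, 0.0), mask)
```

The loop structure (start from noise, repeatedly denoise under a guiding condition) is the part shared with the diffusion formulation; everything learned in the paper is replaced here by fixed heuristics.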
Related papers
- TransDiff: Diffusion-Based Method for Manipulating Transparent Objects Using a Single RGB-D Image [9.242427101416226]
We propose a single-view RGB-D-based depth completion framework, TransDiff, to achieve material-agnostic object grasping in desktop scenes.
We leverage features extracted from RGB images, including semantic segmentation, edge maps, and normal maps, to condition the depth map generation process.
Our method learns an iterative denoising process that transforms a random depth distribution into a depth map, guided by initially refined depth information.
arXiv Detail & Related papers (2025-03-17T03:29:37Z)
- Marigold-DC: Zero-Shot Monocular Depth Completion with Guided Diffusion [51.69876947593144]
Existing methods for depth completion operate in tightly constrained settings. Inspired by advances in monocular depth estimation, we reframe depth completion as image-conditional depth map generation. Marigold-DC builds on a pretrained latent diffusion model for monocular depth estimation and injects the depth observations as test-time guidance.
arXiv Detail & Related papers (2024-12-18T00:06:41Z)
- Pixel-Aligned Multi-View Generation with Depth Guided Decoder [86.1813201212539]
We propose a novel method for pixel-level image-to-multi-view generation.
Unlike prior work, we incorporate attention layers across multi-view images in the VAE decoder of a latent video diffusion model.
Our model enables better pixel alignment across multi-view images.
arXiv Detail & Related papers (2024-08-26T04:56:41Z)
- Depth-guided Texture Diffusion for Image Semantic Segmentation [47.46257473475867]
We introduce a Depth-guided Texture Diffusion approach that effectively tackles the outlined challenge.
Our method extracts low-level features from edges and textures to create a texture image.
By integrating this enriched depth map with the original RGB image into a joint feature embedding, our method effectively bridges the disparity between the depth map and the image.
arXiv Detail & Related papers (2024-08-17T04:55:03Z)
- Progressive Depth Decoupling and Modulating for Flexible Depth Completion [28.693100885012008]
Image-guided depth completion aims at generating a dense depth map from sparse LiDAR data and RGB image.
Recent methods have shown promising performance by reformulating it as a classification problem with two sub-tasks: depth discretization and probability prediction.
We propose a progressive depth decoupling and modulating network, which incrementally decouples the depth range into bins and adaptively generates multi-scale dense depth maps.
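The classification reformulation mentioned above (depth discretization plus probability prediction) can be illustrated with a soft-argmax readout: discretize the depth range into bins, predict per-pixel bin probabilities, and recover continuous depth as the probability-weighted sum of bin centers. The bin count, depth range, and random logits below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Discretize the depth range [d_min, d_max] into n_bins bins.
d_min, d_max, n_bins = 1.0, 80.0, 64
edges = np.linspace(d_min, d_max, n_bins + 1)
centers = 0.5 * (edges[:-1] + edges[1:])

# Fake per-pixel logits for a 2x2 image (a network would predict these).
rng = np.random.default_rng(0)
logits = rng.standard_normal((2, 2, n_bins))

# Softmax over the bin axis gives per-pixel bin probabilities.
probs = np.exp(logits - logits.max(-1, keepdims=True))
probs /= probs.sum(-1, keepdims=True)

# Soft-argmax: expected depth per pixel, a convex combination of centers.
depth = (probs * centers).sum(-1)
```

Because the output is a convex combination of bin centers, the recovered depth is guaranteed to stay inside the discretized range, which is one practical appeal of this formulation.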
arXiv Detail & Related papers (2024-05-15T13:45:33Z)
- DiffusionDepth: Diffusion Denoising Approach for Monocular Depth Estimation [23.22005119986485]
DiffusionDepth is a new approach that reformulates monocular depth estimation as a denoising diffusion process.
It learns an iterative denoising process that refines a random depth distribution into a depth map under the guidance of monocular visual conditions.
Experimental results on KITTI and NYU-Depth-V2 datasets suggest that a simple yet efficient diffusion approach could reach state-of-the-art performance in both indoor and outdoor scenarios with acceptable inference time.
arXiv Detail & Related papers (2023-03-09T03:48:24Z)
- Towards Domain-agnostic Depth Completion [28.25756709062647]
Existing depth completion methods are often targeted at a specific sparse depth type and generalize poorly across task domains.
We present a method to complete sparse/semi-dense, noisy, and potentially low-resolution depth maps obtained by various range sensors.
Our method shows superior cross-domain generalization ability against state-of-the-art depth completion methods.
arXiv Detail & Related papers (2022-07-29T04:10:22Z)
- Joint Learning of Salient Object Detection, Depth Estimation and Contour Extraction [91.43066633305662]
We propose a novel multi-task and multi-modal filtered transformer (MMFT) network for RGB-D salient object detection (SOD).
Specifically, we unify three complementary tasks: depth estimation, salient object detection and contour estimation. The multi-task mechanism promotes the model to learn the task-aware features from the auxiliary tasks.
Experiments show that it not only significantly surpasses the depth-based RGB-D SOD methods on multiple datasets, but also precisely predicts a high-quality depth map and salient contour at the same time.
arXiv Detail & Related papers (2022-03-09T17:20:18Z)
- FCFR-Net: Feature Fusion based Coarse-to-Fine Residual Learning for Monocular Depth Completion [15.01291779855834]
Recent approaches mainly formulate the depth completion as a one-stage end-to-end learning task.
We propose a novel end-to-end residual learning framework, which formulates the depth completion as a two-stage learning task.
arXiv Detail & Related papers (2020-12-15T13:09:56Z)
- Depth Completion using Piecewise Planar Model [94.0808155168311]
A depth map can be represented by a set of learned bases and can be efficiently recovered in closed form.
However, one issue with this method is that it may create artifacts when colour boundaries are inconsistent with depth boundaries.
We enforce a stricter model in depth recovery: a piecewise planar model.
arXiv Detail & Related papers (2020-12-06T07:11:46Z)
- Efficient Depth Completion Using Learned Bases [94.0808155168311]
We propose a new global geometry constraint for depth completion.
By assuming depth maps often lay on low dimensional subspaces, a dense depth map can be approximated by a weighted sum of full-resolution principal depth bases.
arXiv Detail & Related papers (2020-12-02T11:57:37Z)
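The learned-bases idea in the last entry, approximating a dense depth map as a weighted sum of full-resolution principal depth bases, can be sketched with an SVD over example depth maps. The synthetic planar maps, basis count, and fitting procedure below are illustrative assumptions standing in for the paper's learned pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, N, k = 8, 8, 20, 3  # map size, training maps, bases to keep

# Synthetic training maps: random planes, i.e. a low-dimensional family.
i, j = np.mgrid[0:H, 0:W]
maps = np.stack([a * i + b * j + c for a, b, c in rng.random((N, 3))])
X = maps.reshape(N, -1)

# Top-k right singular vectors of the centered stack serve as
# full-resolution "principal depth bases".
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
bases = Vt[:k]                                   # shape (k, H*W)

# Approximate a new depth map: fit basis coefficients by least squares.
new = (0.3 * i + 0.7 * j + 1.0).ravel()
coef, *_ = np.linalg.lstsq(bases.T, new - X.mean(0), rcond=None)
recon = bases.T @ coef + X.mean(0)
err = np.abs(recon - new).max()
```

Since the synthetic maps span a 3-dimensional family, three bases reconstruct a new plane essentially exactly; real depth maps need more bases, which is where the low-dimensional-subspace assumption in the abstract becomes an approximation.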
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all listed content) and is not responsible for any consequences.