Revisiting Deformable Convolution for Depth Completion
- URL: http://arxiv.org/abs/2308.01905v1
- Date: Thu, 3 Aug 2023 17:59:06 GMT
- Title: Revisiting Deformable Convolution for Depth Completion
- Authors: Xinglong Sun, Jean Ponce, Yu-Xiong Wang
- Abstract summary: Depth completion aims to generate high-quality dense depth maps from sparse depth maps.
Previous work usually employs RGB images as guidance, and introduces iterative spatial propagation to refine estimated coarse depth maps.
We propose an effective architecture that leverages deformable kernel convolution as a single-pass refinement module.
- Score: 40.45231083385708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Depth completion, which aims to generate high-quality dense depth maps from
sparse depth maps, has attracted increasing attention in recent years. Previous
work usually employs RGB images as guidance, and introduces iterative spatial
propagation to refine estimated coarse depth maps. However, most of the
propagation refinement methods require several iterations and suffer from a
fixed receptive field, which may contain irrelevant and useless information
with very sparse input. In this paper, we address these two challenges
simultaneously by revisiting the idea of deformable convolution. We propose an
effective architecture that leverages deformable kernel convolution as a
single-pass refinement module, and empirically demonstrate its superiority. To
better understand the function of deformable convolution and exploit it for
depth completion, we further systematically investigate a variety of
representative strategies. Our study reveals that, different from prior work,
deformable convolution needs to be applied on an estimated depth map with a
relatively high density for better performance. We evaluate our model on the
large-scale KITTI dataset and achieve state-of-the-art level performance in
both accuracy and inference speed. Our code is available at
https://github.com/AlexSunNik/ReDC.
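Below is a minimal sketch of the single-pass deformable refinement idea described in the abstract, written with torchvision.ops.deform_conv2d. The module layout, channel sizes, and guidance pathway are illustrative assumptions, not the authors' ReDC implementation (see the linked repository for the actual code).

```python
# Minimal sketch: refine a relatively dense coarse depth estimate in a single
# deformable-convolution pass, with offsets/masks predicted from depth + guidance.
# Only torchvision.ops.deform_conv2d is a real API; everything else is illustrative.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DeformableDepthRefiner(nn.Module):
    def __init__(self, guide_ch=32, k=3):
        super().__init__()
        self.k = k
        # Per-pixel offsets (2 per kernel tap) and modulation masks (1 per tap),
        # predicted from the coarse depth concatenated with RGB-derived guidance.
        self.offset_head = nn.Conv2d(1 + guide_ch, 2 * k * k, 3, padding=1)
        self.mask_head = nn.Conv2d(1 + guide_ch, k * k, 3, padding=1)
        # Deformable kernel weights mapping depth -> residual depth correction.
        self.weight = nn.Parameter(torch.randn(1, 1, k, k) * 0.1)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, coarse_depth, guidance):
        # coarse_depth: (N, 1, H, W) dense estimate; guidance: (N, guide_ch, H, W)
        feat = torch.cat([coarse_depth, guidance], dim=1)
        offset = self.offset_head(feat)              # (N, 2*k*k, H, W)
        mask = torch.sigmoid(self.mask_head(feat))   # (N, k*k, H, W)
        residual = deform_conv2d(coarse_depth, offset, self.weight, self.bias,
                                 padding=self.k // 2, mask=mask)
        return coarse_depth + residual               # single refinement pass

# Usage sketch on a KITTI-sized input.
refiner = DeformableDepthRefiner(guide_ch=32)
depth = torch.rand(1, 1, 352, 1216)
guide = torch.rand(1, 32, 352, 1216)
refined = refiner(depth, guide)
```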
Related papers
- DepthSplat: Connecting Gaussian Splatting and Depth [90.06180236292866]
In this paper, we present DepthSplat to connect Gaussian splatting and depth estimation.
We first contribute a robust multi-view depth model by leveraging pre-trained monocular depth features.
We also show that Gaussian splatting can serve as an unsupervised pre-training objective.
arXiv Detail & Related papers (2024-10-17T17:59:58Z)
- Progressive Depth Decoupling and Modulating for Flexible Depth Completion [28.693100885012008]
Image-guided depth completion aims at generating a dense depth map from sparse LiDAR data and RGB image.
Recent methods have shown promising performance by reformulating it as a classification problem with two sub-tasks: depth discretization and probability prediction (a generic sketch of this formulation is given after this list).
We propose a progressive depth decoupling and modulating network, which incrementally decouples the depth range into bins and adaptively generates multi-scale dense depth maps.
arXiv Detail & Related papers (2024-05-15T13:45:33Z)
- LRRU: Long-short Range Recurrent Updating Networks for Depth Completion [45.48580252300282]
The Long-short Range Recurrent Updating (LRRU) network is proposed to accomplish depth completion more efficiently.
LRRU first roughly fills the sparse input to obtain an initial dense depth map, and then iteratively updates it through learned spatially-variant kernels.
Our initial depth map has coarse but complete scene depth information, which helps relieve the burden of directly regressing the dense depth from sparse ones.
arXiv Detail & Related papers (2023-10-13T09:04:52Z)
- Learning an Efficient Multimodal Depth Completion Model [11.740546882538142]
RGB image-guided sparse depth completion has attracted extensive attention recently, but still faces some problems.
The proposed method can outperform some state-of-the-art methods with a lightweight architecture.
The method also won first place in the MIPI2022 RGB+TOF depth completion challenge.
arXiv Detail & Related papers (2022-08-23T07:03:14Z)
- Towards Domain-agnostic Depth Completion [28.25756709062647]
Existing depth completion methods are often targeted at a specific sparse depth type and generalize poorly across task domains.
We present a method to complete sparse/semi-dense, noisy, and potentially low-resolution depth maps obtained by various range sensors.
Our method shows superior cross-domain generalization ability against state-of-the-art depth completion methods.
arXiv Detail & Related papers (2022-07-29T04:10:22Z)
- BridgeNet: A Joint Learning Network of Depth Map Super-Resolution and Monocular Depth Estimation [60.34562823470874]
We propose a joint learning network of depth map super-resolution (DSR) and monocular depth estimation (MDE) without introducing additional supervision labels.
One is the high-frequency attention bridge (HABdg) designed for the feature encoding process, which learns the high-frequency information of the MDE task to guide the DSR task.
The other is the content guidance bridge (CGBdg) designed for the depth map reconstruction process, which provides the content guidance learned from DSR task for MDE task.
arXiv Detail & Related papers (2021-07-27T01:28:23Z)
- Efficient Depth Completion Using Learned Bases [94.0808155168311]
We propose a new global geometry constraint for depth completion.
By assuming that depth maps often lie on low-dimensional subspaces, a dense depth map can be approximated by a weighted sum of full-resolution principal depth bases.
arXiv Detail & Related papers (2020-12-02T11:57:37Z)
- Adaptive Context-Aware Multi-Modal Network for Depth Completion [107.15344488719322]
We propose to adopt graph propagation to capture the observed spatial contexts.
We then apply an attention mechanism to the propagation, which encourages the network to model contextual information adaptively.
Finally, we introduce the symmetric gated fusion strategy to exploit the extracted multi-modal features effectively.
Our model, named Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves the state-of-the-art performance on two benchmarks.
arXiv Detail & Related papers (2020-08-25T06:00:06Z)
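For the classification-style formulation mentioned in the Progressive Depth Decoupling and Modulating entry above (depth discretization plus per-bin probability prediction), a generic sketch follows. The bin layout and the tiny prediction head are illustrative assumptions, not the cited network.

```python
# Generic sketch of "depth completion as classification": discretize the depth
# range into bins, predict per-pixel bin probabilities, and take the expected
# depth. Channel sizes and the linear bin spacing are illustrative assumptions.
import torch
import torch.nn as nn

class BinnedDepthHead(nn.Module):
    def __init__(self, in_ch=64, num_bins=64, d_min=0.5, d_max=80.0):
        super().__init__()
        # Fixed bin centers over the depth range (linear here; log-spaced is also common).
        self.register_buffer("bin_centers", torch.linspace(d_min, d_max, num_bins))
        self.logits = nn.Conv2d(in_ch, num_bins, 1)  # per-pixel bin scores

    def forward(self, features):
        # features: (N, in_ch, H, W) fused RGB + sparse-depth features
        prob = torch.softmax(self.logits(features), dim=1)   # (N, num_bins, H, W)
        centers = self.bin_centers.view(1, -1, 1, 1)
        return (prob * centers).sum(dim=1, keepdim=True)     # expected depth, (N, 1, H, W)

# Usage sketch
head = BinnedDepthHead(in_ch=64, num_bins=64)
depth = head(torch.rand(2, 64, 88, 304))   # (2, 1, 88, 304)
```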
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.