FusionDepth: Complement Self-Supervised Monocular Depth Estimation with
Cost Volume
- URL: http://arxiv.org/abs/2305.06036v1
- Date: Wed, 10 May 2023 10:38:38 GMT
- Title: FusionDepth: Complement Self-Supervised Monocular Depth Estimation with
Cost Volume
- Authors: Zhuofei Huang, Jianlin Liu, Shang Xu, Ying Chen, Yong Liu
- Abstract summary: We propose a multi-frame depth estimation framework in which monocular depth can be refined continuously by multi-frame sequential constraints.
Our method also enhances the interpretability when combining monocular estimation with multi-view cost volume.
- Score: 9.912304015239313
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-view stereo depth estimation based on cost volume usually works better
than self-supervised monocular depth estimation, except on moving objects and
low-textured surfaces. In this paper, we propose a multi-frame depth
estimation framework in which monocular depth can be refined continuously by
multi-frame sequential constraints, leveraging a Bayesian fusion layer over
several iterations. Both the monocular and multi-view networks can be trained
without depth supervision. Our method also enhances interpretability when
combining monocular estimation with a multi-view cost volume. Detailed
experiments show that our method surpasses state-of-the-art unsupervised
methods using single or multiple frames at test time on the KITTI benchmark.
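The abstract does not specify the form of the Bayesian fusion layer; a minimal sketch of one common choice, per-pixel precision-weighted Gaussian fusion of a monocular depth map with a cost-volume-derived depth map (all names and the Gaussian assumption are illustrative, not taken from the paper), might look like:

```python
import numpy as np

def fuse_depth(mono_depth, mono_var, mv_depth, mv_var):
    """Precision-weighted (Gaussian) fusion of two per-pixel depth maps.

    Each source is modeled as an independent Gaussian; the posterior mean
    weights each estimate by its inverse variance, so the more confident
    source dominates at every pixel.
    """
    mono_prec = 1.0 / mono_var          # precision of monocular estimate
    mv_prec = 1.0 / mv_var              # precision of multi-view estimate
    fused_var = 1.0 / (mono_prec + mv_prec)
    fused_depth = fused_var * (mono_prec * mono_depth + mv_prec * mv_depth)
    return fused_depth, fused_var
```

With equal variances this reduces to a plain average; in an iterative scheme like the one the abstract describes, the fused map could seed the next refinement pass.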
Related papers
- Exploring the Mutual Influence between Self-Supervised Single-Frame and
Multi-Frame Depth Estimation [10.872396009088595]
We propose a novel self-supervised training framework for single-frame and multi-frame depth estimation.
We first introduce a pixel-wise adaptive depth sampling module guided by single-frame depth to train the multi-frame model.
We then leverage the minimum reprojection based distillation loss to transfer the knowledge from the multi-frame depth network to the single-frame network.
arXiv Detail & Related papers (2023-04-25T09:39:30Z)
- Learning to Fuse Monocular and Multi-view Cues for Multi-frame Depth Estimation in Dynamic Scenes [51.20150148066458]
We propose a novel method to learn to fuse the multi-view and monocular cues encoded as volumes without needing heuristically crafted masks.
Experiments on real-world datasets demonstrate the significant effectiveness of the proposed method.
arXiv Detail & Related papers (2023-04-18T13:55:24Z)
- Multi-Camera Collaborative Depth Prediction via Consistent Structure Estimation [75.99435808648784]
We propose a novel multi-camera collaborative depth prediction method.
It does not require large overlapping areas while maintaining structure consistency between cameras.
Experimental results on DDAD and NuScenes datasets demonstrate the superior performance of our method.
arXiv Detail & Related papers (2022-10-05T03:44:34Z)
- SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation [101.55622133406446]
We propose a SurroundDepth method to incorporate the information from multiple surrounding views to predict depth maps across cameras.
Specifically, we employ a joint network to process all the surrounding views and propose a cross-view transformer to effectively fuse the information from multiple views.
In experiments, our method achieves state-of-the-art performance on challenging multi-camera depth estimation datasets.
arXiv Detail & Related papers (2022-04-07T17:58:47Z)
- Improving Monocular Visual Odometry Using Learned Depth [84.05081552443693]
We propose a framework to exploit monocular depth estimation for improving visual odometry (VO).
The core of our framework is a monocular depth estimation module with a strong generalization capability for diverse scenes.
Compared with current learning-based VO methods, our method demonstrates a stronger generalization ability to diverse scenes.
arXiv Detail & Related papers (2022-04-04T06:26:46Z)
- SelfTune: Metrically Scaled Monocular Depth Estimation through Self-Supervised Learning [53.78813049373321]
We propose a self-supervised learning method for pre-trained supervised monocular depth networks to enable metrically scaled depth estimation.
Our approach is useful for various applications such as mobile robot navigation and is applicable to diverse environments.
arXiv Detail & Related papers (2022-03-10T12:28:42Z)
- Multi-View Depth Estimation by Fusing Single-View Depth Probability with Multi-View Geometry [25.003116148843525]
We propose MaGNet, a framework for fusing single-view depth probability with multi-view geometry.
MaGNet achieves state-of-the-art performance on ScanNet, 7-Scenes and KITTI.
arXiv Detail & Related papers (2021-12-15T14:56:53Z)
- Scale-aware direct monocular odometry [4.111899441919165]
We present a framework for direct monocular odometry based on depth prediction from a deep neural network.
Our proposal largely outperforms classic monocular SLAM, being 5 to 9 times more precise, with accuracy closer to that of stereo systems.
arXiv Detail & Related papers (2021-09-21T10:30:15Z)
- The Temporal Opportunist: Self-Supervised Multi-Frame Monocular Depth [28.06671063873351]
ManyDepth is an adaptive approach to dense depth estimation.
We present a novel consistency loss that encourages the network to ignore the cost volume when it is deemed unreliable.
arXiv Detail & Related papers (2021-04-29T17:53:42Z)
- FIS-Nets: Full-image Supervised Networks for Monocular Depth Estimation [14.454378082294852]
We propose a semi-supervised architecture, which combines an unsupervised framework based on image consistency with a supervised framework for dense depth completion.
In the evaluation, we show that our proposed model outperforms other approaches on depth estimation.
arXiv Detail & Related papers (2020-01-19T06:04:26Z)
- Don't Forget The Past: Recurrent Depth Estimation from Monocular Video [92.84498980104424]
We put three different types of depth estimation into a common framework.
Our method produces a time series of depth maps.
It can be applied to monocular videos only or be combined with different types of sparse depth patterns.
arXiv Detail & Related papers (2020-01-08T16:50:51Z)
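Several entries above (e.g. ManyDepth and the single-/multi-frame distillation framework) build on a minimum-reprojection photometric loss. As a simplified, L1-only sketch (the published losses typically mix SSIM with L1, omitted here), a per-pixel minimum over warped source frames might look like:

```python
import numpy as np

def min_reprojection_loss(target, warped_sources):
    """Per-pixel minimum photometric (L1) error over warped source frames.

    Taking the per-pixel minimum lets occluded or out-of-view pixels fall
    back to whichever source frame matches best, instead of averaging in
    a large spurious error.
    """
    errors = [np.abs(target - src).mean(axis=-1)  # mean over color channels
              for src in warped_sources]
    per_pixel_min = np.min(np.stack(errors, axis=0), axis=0)
    return per_pixel_min.mean()
```

If any one source frame warps perfectly onto the target, the loss at that pixel is zero regardless of how badly the other sources match.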
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.