FIS-Nets: Full-image Supervised Networks for Monocular Depth Estimation
- URL: http://arxiv.org/abs/2001.11092v1
- Date: Sun, 19 Jan 2020 06:04:26 GMT
- Title: FIS-Nets: Full-image Supervised Networks for Monocular Depth Estimation
- Authors: Bei Wang and Jianping An
- Abstract summary: We propose a semi-supervised architecture that combines an unsupervised framework based on image consistency with a supervised framework for dense depth completion.
In the evaluation, we show that our proposed model outperforms other approaches on depth estimation.
- Score: 14.454378082294852
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the importance of full-image supervision for
monocular depth estimation. We propose a semi-supervised architecture that
combines an unsupervised framework based on image consistency with a
supervised framework for dense depth completion. The latter provides
full-image depth as supervision for the former. Ego-motion from the
navigation system is also embedded into the unsupervised framework as output
supervision for an inner temporal transform network, further improving
monocular depth estimation. In the evaluation, we show that our proposed
model outperforms other approaches on depth estimation.
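The abstract describes three supervision signals: photometric image consistency (unsupervised), full-image depth from a dense completion network (supervised), and navigation ego-motion supervising an inner temporal transform network. Below is a minimal sketch of how such a combined objective might be assembled, assuming PyTorch tensors; all names, loss forms, and weights are illustrative assumptions, not the authors' implementation.

```python
import torch.nn.functional as F

def full_image_supervised_loss(pred_depth,       # depth from the monocular network
                               completed_depth,  # dense depth from the completion network
                               photo_error,      # per-pixel photometric reprojection error
                               pred_pose,        # output of the temporal transform network
                               nav_pose,         # ego-motion from the navigation system
                               w_photo=1.0, w_depth=0.1, w_pose=0.01):
    # Unsupervised term: image (photometric) consistency between the target
    # frame and its reconstruction from neighbouring frames.
    loss_photo = photo_error.mean()

    # Supervised term: the dense depth-completion output provides
    # full-image depth supervision for the monocular prediction.
    loss_depth = F.l1_loss(pred_depth, completed_depth)

    # Pose term: navigation ego-motion supervises the output of the
    # inner temporal transform (pose) network.
    loss_pose = F.mse_loss(pred_pose, nav_pose)

    # Loss weights are assumed here; the paper does not specify them.
    return w_photo * loss_photo + w_depth * loss_depth + w_pose * loss_pose
```

In practice the photometric term would itself be built by warping neighbouring frames with the predicted depth and pose; the sketch takes that error as a precomputed input to keep the structure of the objective visible.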
Related papers
- FusionDepth: Complement Self-Supervised Monocular Depth Estimation with Cost Volume [9.912304015239313]
We propose a multi-frame depth estimation framework in which monocular depth can be refined continuously using multi-frame sequential constraints.
Our method also enhances interpretability when combining monocular estimation with a multi-view cost volume.
arXiv Detail & Related papers (2023-05-10T10:38:38Z)
- Multi-Camera Collaborative Depth Prediction via Consistent Structure Estimation [75.99435808648784]
We propose a novel multi-camera collaborative depth prediction method.
It does not require large overlapping areas while maintaining structure consistency between cameras.
Experimental results on DDAD and NuScenes datasets demonstrate the superior performance of our method.
arXiv Detail & Related papers (2022-10-05T03:44:34Z)
- Improving Monocular Visual Odometry Using Learned Depth [84.05081552443693]
We propose a framework to exploit monocular depth estimation for improving visual odometry (VO).
The core of our framework is a monocular depth estimation module with a strong generalization capability for diverse scenes.
Compared with current learning-based VO methods, our method demonstrates a stronger generalization ability to diverse scenes.
arXiv Detail & Related papers (2022-04-04T06:26:46Z)
- SelfTune: Metrically Scaled Monocular Depth Estimation through Self-Supervised Learning [53.78813049373321]
We propose a self-supervised learning method for pre-trained supervised monocular depth networks to enable metrically scaled depth estimation.
Our approach is useful for various applications such as mobile robot navigation and is applicable to diverse environments.
arXiv Detail & Related papers (2022-03-10T12:28:42Z)
- Pseudo Supervised Monocular Depth Estimation with Teacher-Student Network [90.20878165546361]
We propose a new unsupervised depth estimation method based on a pseudo-supervision mechanism.
It strategically integrates the advantages of supervised and unsupervised monocular depth estimation.
Our experimental results demonstrate that the proposed method outperforms the state-of-the-art on the KITTI benchmark.
arXiv Detail & Related papers (2021-10-22T01:08:36Z)
- Self-Supervised Monocular Depth Estimation with Internal Feature Fusion [12.874712571149725]
Self-supervised learning for depth estimation uses geometry in image sequences for supervision.
We propose a novel depth estimation network, DIFFNet, which can make use of semantic information in the downsampling and upsampling procedures.
arXiv Detail & Related papers (2021-10-18T17:31:11Z)
- Weakly-Supervised Monocular Depth Estimation with Resolution-Mismatched Data [73.9872931307401]
We propose a novel weakly-supervised framework to train a monocular depth estimation network.
The proposed framework is composed of a shared-weight monocular depth estimation network and a depth reconstruction network for distillation.
Experimental results demonstrate that our method outperforms unsupervised and semi-supervised learning based schemes.
arXiv Detail & Related papers (2021-09-23T18:04:12Z)
- Adaptive confidence thresholding for monocular depth estimation [83.06265443599521]
We propose a new approach that leverages pseudo ground-truth depth maps of stereo images generated by self-supervised stereo matching methods.
A confidence map of the pseudo ground-truth depth is estimated to mitigate the performance degradation caused by inaccurate pseudo depth maps (see the confidence-weighting sketch after this list).
Experimental results demonstrate superior performance to state-of-the-art monocular depth estimation methods.
arXiv Detail & Related papers (2020-09-27T13:26:16Z)
- Masked GANs for Unsupervised Depth and Pose Prediction with Scale Consistency [18.10657948047875]
This paper proposes a masked generative adversarial network (GAN) for unsupervised monocular depth and ego-motion estimation.
A MaskNet and a Boolean mask scheme are designed in this framework to eliminate the effects of occlusions and visual field changes on the reconstruction and adversarial losses (see the masking sketch after this list).
arXiv Detail & Related papers (2020-04-09T03:12:52Z)
- The Edge of Depth: Explicit Constraints between Segmentation and Depth [25.232436455640716]
We study the mutual benefits of two common computer vision tasks, self-supervised depth estimation and semantic segmentation from images.
We propose to explicitly measure the border consistency between segmentation and depth and to minimize the discrepancy between the two.
Through extensive experiments, our proposed approach advances the state of the art in unsupervised monocular depth estimation on the KITTI benchmark.
arXiv Detail & Related papers (2020-04-01T00:03:20Z)
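The adaptive confidence thresholding entry above weights pseudo ground-truth depth by an estimated confidence map. A minimal sketch of that idea follows, assuming PyTorch tensors; the function name, the threshold tau, and the weighting scheme are assumptions rather than the paper's implementation.

```python
import torch

def confidence_weighted_depth_loss(pred_depth, pseudo_depth, confidence, tau=0.5):
    # Discard pseudo ground-truth pixels whose estimated confidence falls
    # below the threshold tau; these are likely inaccurate pseudo depths.
    mask = (confidence > tau).float()
    # The confidence map acts as a per-pixel weight on the surviving pixels,
    # so less reliable supervision contributes less to the loss.
    weighted_error = confidence * mask * torch.abs(pred_depth - pseudo_depth)
    # Normalize by the number of valid pixels (clamped to avoid division by zero).
    return weighted_error.sum() / mask.sum().clamp(min=1.0)
```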
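Likewise, the Masked GANs entry excludes occluded or out-of-view pixels from the reconstruction loss via a Boolean mask. A minimal sketch under the same assumptions, with a simple L1 photometric error standing in for the paper's full reconstruction and adversarial terms:

```python
import torch

def masked_reconstruction_loss(target, reconstructed, valid_mask):
    # valid_mask is a Boolean map that is False at pixels affected by
    # occlusion or by content leaving the field of view between frames;
    # those pixels are excluded from the photometric reconstruction loss.
    mask = valid_mask.float()
    error = torch.abs(target - reconstructed) * mask
    # Average the error over valid pixels only.
    return error.sum() / mask.sum().clamp(min=1.0)
```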