Pseudo Supervised Monocular Depth Estimation with Teacher-Student
Network
- URL: http://arxiv.org/abs/2110.11545v1
- Date: Fri, 22 Oct 2021 01:08:36 GMT
- Title: Pseudo Supervised Monocular Depth Estimation with Teacher-Student
Network
- Authors: Huan Liu, Junsong Yuan, Chen Wang, Jun Chen
- Abstract summary: We propose a new unsupervised depth estimation method based on pseudo supervision mechanism.
It strategically integrates the advantages of supervised and unsupervised monocular depth estimation.
Our experimental results demonstrate that the proposed method outperforms the state-of-the-art on the KITTI benchmark.
- Score: 90.20878165546361
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite recent improvements in supervised monocular depth estimation, the lack
of high-quality pixel-wise ground truth annotations has become a major hurdle
for further progress. In this work, we propose a new unsupervised depth
estimation method based on pseudo supervision mechanism by training a
teacher-student network with knowledge distillation. It strategically
integrates the advantages of supervised and unsupervised monocular depth
estimation, as well as unsupervised binocular depth estimation. Specifically,
the teacher network takes advantage of the effectiveness of binocular depth
estimation to produce accurate disparity maps, which are then used as the
pseudo ground truth to train the student network for monocular depth
estimation. This effectively converts the problem of unsupervised learning to
supervised learning. Our extensive experimental results demonstrate that the
proposed method outperforms the state-of-the-art on the KITTI benchmark.
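The pseudo supervision mechanism described in the abstract can be sketched as a toy loss computation: the teacher's binocular disparity map is treated as a dense label for the monocular student, which turns the unsupervised problem into supervised regression. The arrays and the plain L1 formulation below are illustrative assumptions, not the paper's exact training objective.

```python
import numpy as np

def pseudo_supervised_loss(student_disp, teacher_disp):
    """L1 loss between the student's monocular disparity prediction and
    the teacher's binocular disparity, used here as pseudo ground truth."""
    return np.abs(student_disp - teacher_disp).mean()

# Toy H x W disparity maps standing in for real network outputs.
rng = np.random.default_rng(0)
teacher = rng.uniform(0.0, 1.0, size=(4, 5))   # pseudo ground truth from the stereo teacher
student = teacher + 0.1                         # student slightly off everywhere
loss = pseudo_supervised_loss(student, teacher)
print(round(loss, 6))  # constant offset of 0.1 -> mean L1 loss of 0.1
```

In the actual pipeline the pseudo labels would come from a trained binocular network rather than a random array, but the supervised form of the loss is the point of the mechanism.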
Related papers
- Sparse Depth-Guided Attention for Accurate Depth Completion: A
Stereo-Assisted Monitored Distillation Approach [7.902840502973506]
We introduce a stereo-based model as a teacher model to improve the accuracy of the student model for depth completion.
To provide self-supervised information, we also employ multi-view depth consistency and multi-scale minimum reprojection.
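The multi-scale minimum reprojection mentioned above takes, at each pixel, the minimum photometric error over the warped source views, which discounts pixels that are occluded in some views. The toy arrays and function name below are our own illustration of that per-pixel minimum, not the paper's implementation.

```python
import numpy as np

def min_reprojection_error(target, warped_sources):
    """Per-pixel photometric error taken as the minimum over source views,
    so a pixel occluded in one view can still be explained by another."""
    errors = [np.abs(target - w) for w in warped_sources]   # one error map per view
    return np.minimum.reduce(errors)                         # per-pixel minimum

target = np.array([[0.5, 0.5], [0.5, 0.5]])
warped_a = np.array([[0.5, 0.9], [0.5, 0.5]])   # occlusion artifact at (0, 1)
warped_b = np.array([[0.7, 0.5], [0.5, 0.5]])   # occlusion artifact at (0, 0)
err = min_reprojection_error(target, [warped_a, warped_b])
print(err)  # zero everywhere: each pixel is explained by at least one view
```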
arXiv Detail & Related papers (2023-03-28T09:23:19Z)
- Learning Occlusion-Aware Coarse-to-Fine Depth Map for Self-supervised Monocular Depth Estimation [11.929584800629673]
We propose a novel network to learn an Occlusion-aware Coarse-to-Fine Depth map for self-supervised monocular depth estimation.
The proposed OCFD-Net employs not only a discrete depth constraint for learning a coarse-level depth map, but also a continuous depth constraint for learning a scene depth residual.
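One plausible reading of combining a discrete depth constraint with a continuous residual is to take an expected depth over a set of discrete bins and refine it with a per-pixel correction. The sketch below is our interpretation under that assumption, not the OCFD-Net implementation; the bin values are made up.

```python
import numpy as np

def coarse_to_fine_depth(logits, bin_centers, residual):
    """Coarse depth as the expectation of a discrete distribution over depth
    bins, refined by a continuous per-pixel residual."""
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)  # softmax over bins
    coarse = (probs * bin_centers).sum(axis=-1)                          # expected bin depth
    return coarse + residual

bin_centers = np.array([1.0, 2.0, 4.0, 8.0])   # hypothetical depth bins (metres)
logits = np.zeros((2, 2, 4))                    # uniform distribution over the bins
residual = np.full((2, 2), -0.25)               # small continuous correction
depth = coarse_to_fine_depth(logits, bin_centers, residual)
print(depth)  # uniform bins average to 3.75; the residual shifts it to 3.5
```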
arXiv Detail & Related papers (2022-03-21T12:43:42Z)
- SelfTune: Metrically Scaled Monocular Depth Estimation through Self-Supervised Learning [53.78813049373321]
We propose a self-supervised learning method for the pre-trained supervised monocular depth networks to enable metrically scaled depth estimation.
Our approach is useful for various applications such as mobile robot navigation and is applicable to diverse environments.
arXiv Detail & Related papers (2022-03-10T12:28:42Z)
- X-Distill: Improving Self-Supervised Monocular Depth via Cross-Task Distillation [69.9604394044652]
We propose a novel method to improve the self-supervised training of monocular depth via cross-task knowledge distillation.
During training, we utilize a pretrained semantic segmentation teacher network and transfer its semantic knowledge to the depth network.
We extensively evaluate the efficacy of our proposed approach on the KITTI benchmark and compare it with the latest state of the art.
arXiv Detail & Related papers (2021-10-24T19:47:14Z)
- Weakly-Supervised Monocular Depth Estimation with Resolution-Mismatched Data [73.9872931307401]
We propose a novel weakly-supervised framework to train a monocular depth estimation network.
The proposed framework is composed of a weight-sharing monocular depth estimation network and a depth reconstruction network for distillation.
Experimental results demonstrate that our method outperforms unsupervised and semi-supervised learning-based schemes.
arXiv Detail & Related papers (2021-09-23T18:04:12Z)
- Unsupervised Scale-consistent Depth Learning from Video [131.3074342883371]
We propose a monocular depth estimator SC-Depth, which requires only unlabelled videos for training.
Thanks to the capability of scale-consistent prediction, we show that our monocular-trained deep networks are readily integrated into the ORB-SLAM2 system.
The proposed hybrid Pseudo-RGBD SLAM shows compelling results in KITTI, and it generalizes well to the KAIST dataset without additional training.
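Scale-consistent prediction of the kind SC-Depth relies on is typically enforced with a normalised depth-difference penalty between a frame's depth and an adjacent frame's depth warped into the same view. The sketch below assumes that form and omits the actual warping; the arrays and function name are illustrative.

```python
import numpy as np

def geometry_consistency_loss(depth_a, depth_b_warped):
    """Normalised depth difference between a frame's depth and an adjacent
    frame's depth warped into the same view; penalises scale drift over time."""
    diff = np.abs(depth_a - depth_b_warped) / (depth_a + depth_b_warped)
    return diff.mean()

consistent = geometry_consistency_loss(np.full((3, 3), 2.0), np.full((3, 3), 2.0))
drifted = geometry_consistency_loss(np.full((3, 3), 2.0), np.full((3, 3), 4.0))
print(consistent, drifted)  # 0.0 vs ~0.333: drift in scale is penalised
```

Keeping per-frame scale consistent in this way is what lets the monocular depths plug into a SLAM system such as ORB-SLAM2 without per-frame rescaling.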
arXiv Detail & Related papers (2021-05-25T02:17:56Z)
- Adaptive Confidence Thresholding for Monocular Depth Estimation [83.06265443599521]
We propose a new approach to leverage pseudo ground truth depth maps of stereo images generated from self-supervised stereo matching methods.
The confidence map of the pseudo ground truth depth map is estimated to mitigate performance degeneration by inaccurate pseudo depth maps.
Experimental results demonstrate superior performance to state-of-the-art monocular depth estimation methods.
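The confidence-weighted pseudo supervision described above can be sketched as a masked L1 loss: pixels with unreliable pseudo depth contribute less (or nothing) to training. The hand-made confidence map and the simple weighting scheme below are one instance of the idea, not the paper's estimator.

```python
import numpy as np

def confidence_weighted_loss(pred, pseudo_gt, confidence):
    """Pseudo-supervised L1 loss weighted by a per-pixel confidence map, so
    inaccurate pseudo depth values are down-weighted during training."""
    weighted = confidence * np.abs(pred - pseudo_gt)
    return weighted.sum() / (confidence.sum() + 1e-8)

pseudo = np.array([[2.0, 2.0], [2.0, 9.0]])   # last pixel is a bad stereo match
pred   = np.array([[2.1, 1.9], [2.0, 2.0]])
conf   = np.array([[1.0, 1.0], [1.0, 0.0]])   # zero confidence masks the outlier
loss = confidence_weighted_loss(pred, pseudo, conf)
print(round(loss, 6))  # the 7.0 error at the masked pixel is ignored
```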
arXiv Detail & Related papers (2020-09-27T13:26:16Z)
- Monocular Depth Estimation Based On Deep Learning: An Overview [16.2543991384566]
Inferring depth information from a single image (monocular depth estimation) is an ill-posed problem.
Deep learning has been widely studied recently and achieved promising performance in accuracy.
To improve the accuracy of depth estimation, various network architectures, loss functions, and training strategies have been proposed.
arXiv Detail & Related papers (2020-03-14T12:35:34Z)
- FIS-Nets: Full-image Supervised Networks for Monocular Depth Estimation [14.454378082294852]
We propose a semi-supervised architecture that combines an unsupervised framework based on image consistency with a supervised framework for dense depth completion.
In the evaluation, we show that our proposed model outperforms other approaches on depth estimation.
arXiv Detail & Related papers (2020-01-19T06:04:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.