Deep feature fusion for self-supervised monocular depth prediction
- URL: http://arxiv.org/abs/2005.07922v1
- Date: Sat, 16 May 2020 09:42:36 GMT
- Title: Deep feature fusion for self-supervised monocular depth prediction
- Authors: Vinay Kaushik, Brejesh Lall
- Abstract summary: We propose a deep feature fusion method for learning self-supervised depth from scratch.
Our fusion network selects features from both upper and lower levels at every level in the encoder network.
We also propose a refinement module learning higher scale residual depth from a combination of higher level deep features and lower level residual depth.
- Score: 7.779007880126907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in end-to-end unsupervised learning have significantly
improved the performance of monocular depth prediction and alleviated the
requirement for ground-truth depth. Although a plethora of work has been done
in enforcing various structural constraints by incorporating multiple losses
utilising smoothness, left-right consistency, regularisation and matching
surface normals, few of them take into consideration the multi-scale structures
present in real-world images. Most works utilise a VGG16 or ResNet50 model
pre-trained on ImageNet for predicting depth. We propose a deep feature
fusion method utilising features at multiple scales for learning
self-supervised depth from scratch. Our fusion network selects features from
both upper and lower levels at every level in the encoder network, thereby
creating multiple feature pyramid sub-networks that are fed to the decoder
after applying the CoordConv solution. We also propose a refinement module
learning higher scale residual depth from a combination of higher level deep
features and lower level residual depth using a pixel shuffling framework that
super-resolves lower level residual depth. We select the KITTI dataset for
evaluation and show that our proposed architecture can produce better or
comparable results in depth prediction.
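To make the fusion and refinement ideas concrete, here is a minimal PyTorch sketch. It is an illustrative reconstruction from the abstract, not the authors' code: the module names, channel widths and the single refinement level are assumptions. The CoordConv layer appends normalised pixel coordinates before convolving, and the refinement block super-resolves a lower-scale residual depth via pixel shuffling before adding a residual predicted from higher-level features.

```python
import torch
import torch.nn as nn

class CoordConv(nn.Module):
    """Convolution with normalised (x, y) coordinate channels appended
    to the input (the CoordConv solution mentioned in the abstract)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, 3, padding=1)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device)
        xs = torch.linspace(-1, 1, w, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack((gx, gy)).expand(b, -1, -1, -1)
        return self.conv(torch.cat((x, coords), dim=1))

class RefineBlock(nn.Module):
    """Super-resolves a lower-scale residual depth with pixel shuffling,
    then predicts a higher-scale residual from it plus deep features."""
    def __init__(self, feat_ch):
        super().__init__()
        self.expand = nn.Conv2d(1, 4, 3, padding=1)   # 4 = 2x2 shuffle
        self.shuffle = nn.PixelShuffle(2)             # (4,H,W) -> (1,2H,2W)
        self.predict = nn.Conv2d(feat_ch + 1, 1, 3, padding=1)

    def forward(self, low_res_depth, high_feats):
        up = self.shuffle(self.expand(low_res_depth))
        return up + self.predict(torch.cat((high_feats, up), dim=1))

feats   = torch.randn(1, 32, 64, 64)        # higher-level decoder features
fused   = CoordConv(32, 32)(feats)          # coordinates injected
coarse  = torch.randn(1, 1, 32, 32)         # lower-scale residual depth
refined = RefineBlock(32)(coarse, fused)    # -> (1, 1, 64, 64)
```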
Related papers
- Scale Propagation Network for Generalizable Depth Completion [16.733495588009184]
We propose a novel scale propagation normalization (SP-Norm) method to propagate scales from input to output.
We also develop a new network architecture based on SP-Norm and the ConvNeXt V2 backbone.
Our model consistently achieves the best accuracy with faster speed and lower memory when compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-10-24T03:53:06Z)
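The summary above does not define SP-Norm; as a loose, hypothetical sketch of "propagating scale from input to output", one could imagine normalising features to unit scale and re-injecting a scale statistic of the input (a guess at the idea, not the paper's formulation):

```python
import torch
import torch.nn as nn

class ScalePropNorm(nn.Module):
    """Hypothetical scale-propagating normalisation: features are
    normalised to unit scale, then multiplied by a scale statistic of
    the network input, so absolute scale survives the normalisation.
    This is a guess at the idea, not SP-Norm itself."""
    def __init__(self, num_ch, eps=1e-6):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(1, num_ch, 1, 1))

    def forward(self, feats, net_input):
        # per-sample scale of the input (e.g. mean of sparse depth)
        scale = net_input.abs().mean(dim=(1, 2, 3), keepdim=True)
        # scale-free normalisation of the features
        norm = feats / (feats.abs().mean(dim=(1, 2, 3), keepdim=True)
                        + self.eps)
        return self.gamma * norm * scale
```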
- DepthSplat: Connecting Gaussian Splatting and Depth [90.06180236292866]
We present DepthSplat to connect Gaussian splatting and depth estimation.
We first contribute a robust multi-view depth model by leveraging pre-trained monocular depth features.
We also show that Gaussian splatting can serve as an unsupervised pre-training objective.
arXiv Detail & Related papers (2024-10-17T17:59:58Z)
- Single Image Depth Prediction Made Better: A Multivariate Gaussian Take [163.14849753700682]
We introduce an approach that performs continuous modeling of per-pixel depth.
Our method (named MG) is among the most accurate on the KITTI depth-prediction benchmark leaderboard.
arXiv Detail & Related papers (2023-03-31T16:01:03Z)
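Continuous per-pixel depth modelling is often implemented as a likelihood head; the sketch below collapses the paper's multivariate Gaussian to an independent per-pixel Gaussian for brevity (the head and loss names are illustrative assumptions):

```python
import torch
import torch.nn as nn

class GaussianDepthHead(nn.Module):
    """Predicts a per-pixel depth mean and log-variance; a simplified,
    univariate stand-in for the paper's multivariate Gaussian model."""
    def __init__(self, feat_ch):
        super().__init__()
        self.out = nn.Conv2d(feat_ch, 2, 3, padding=1)

    def forward(self, feats):
        mu, log_var = self.out(feats).chunk(2, dim=1)
        return mu, log_var

def gaussian_nll(mu, log_var, target):
    """Per-pixel Gaussian negative log-likelihood."""
    return (0.5 * (log_var + (target - mu) ** 2 / log_var.exp())).mean()
```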
- SwinDepth: Unsupervised Depth Estimation using Monocular Sequences via Swin Transformer and Densely Cascaded Network [29.798579906253696]
It is challenging to acquire dense ground-truth depth labels for supervised training, so unsupervised depth estimation using monocular sequences emerges as a promising alternative.
In this paper, we employ a convolution-free Swin Transformer as an image feature extractor so that the network can capture both local geometric features and global semantic features for depth estimation.
Also, we propose a Densely Cascaded Multi-scale Network (DCMNet) that connects every feature map directly with another from different scales via a top-down cascade pathway.
arXiv Detail & Related papers (2023-01-17T06:01:46Z)
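A densely cascaded top-down pathway can be sketched as every finer scale summing upsampled contributions from all coarser scales; the exact DCMNet wiring is not given in the summary, so the following is an assumption-laden approximation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseTopDown(nn.Module):
    """Hypothetical densely cascaded pathway: each scale receives
    upsampled contributions from every coarser scale, not just the
    adjacent one."""
    def __init__(self, channels):
        super().__init__()
        # project every scale to a common width before mixing
        self.proj = nn.ModuleList([nn.Conv2d(c, 64, 1) for c in channels])

    def forward(self, feats):          # feats ordered fine -> coarse
        outs = [p(f) for p, f in zip(self.proj, feats)]
        fused = []
        for i, f in enumerate(outs):
            agg = f
            for coarser in outs[i + 1:]:
                agg = agg + F.interpolate(coarser, size=f.shape[-2:],
                                          mode="bilinear",
                                          align_corners=False)
            fused.append(agg)
        return fused
```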
- Lightweight Monocular Depth Estimation with an Edge Guided Network [34.03711454383413]
We present a novel lightweight Edge Guided Depth Estimation Network (EGD-Net).
In particular, we start out with a lightweight encoder-decoder architecture and embed an edge guidance branch.
In order to aggregate the context information and edge attention features, we design a transformer-based feature aggregation module.
arXiv Detail & Related papers (2022-09-29T14:45:47Z)
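One simple stand-in for an edge guidance branch is to gate decoder features with a Sobel edge map of the input image; EGD-Net's actual branch and its transformer-based aggregation module are not reproduced here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sobel_edges(img):
    """Grayscale Sobel edge magnitude; a generic stand-in for the
    paper's edge guidance signal."""
    gray = img.mean(dim=1, keepdim=True)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, kx.transpose(2, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

class EdgeGuidance(nn.Module):
    """Gates decoder features with an edge-derived attention map."""
    def __init__(self, feat_ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(1, feat_ch, 3, padding=1),
                                  nn.Sigmoid())

    def forward(self, feats, image):
        edges = F.interpolate(sobel_edges(image), size=feats.shape[-2:],
                              mode="bilinear", align_corners=False)
        return feats * self.gate(edges)
```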
- 3DVNet: Multi-View Depth Prediction and Volumetric Refinement [68.68537312256144]
3DVNet is a novel multi-view stereo (MVS) depth-prediction method.
Our key idea is the use of a 3D scene-modeling network that iteratively updates a set of coarse depth predictions.
We show that our method exceeds state-of-the-art accuracy in both depth prediction and 3D reconstruction metrics.
arXiv Detail & Related papers (2021-12-01T00:52:42Z)
- PLADE-Net: Towards Pixel-Level Accuracy for Self-Supervised Single-View Depth Estimation with Neural Positional Encoding and Distilled Matting Loss [49.66736599668501]
We propose a self-supervised single-view pixel-level accurate depth estimation network, called PLADE-Net.
Our method shows unprecedented accuracy levels, exceeding 95% in terms of the $\delta_1$ metric on the KITTI dataset.
arXiv Detail & Related papers (2021-03-12T15:54:46Z)
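For reference, the $\delta_1$ metric quoted above is the standard depth-accuracy threshold measure: the fraction of pixels whose predicted depth is within a factor of 1.25 of the ground truth.

```python
import torch

def delta1(pred, gt, mask=None):
    """Fraction of valid pixels with max(pred/gt, gt/pred) < 1.25."""
    if mask is None:
        mask = gt > 0
    ratio = torch.max(pred[mask] / gt[mask], gt[mask] / pred[mask])
    return (ratio < 1.25).float().mean()
```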
- ADAADepth: Adapting Data Augmentation and Attention for Self-Supervised Monocular Depth Estimation [8.827921242078881]
We propose ADAA, utilising depth augmentation as depth supervision for learning accurate and robust depth.
We propose a relational self-attention module that learns rich contextual features and further enhances depth results.
We evaluate our predicted depth on the KITTI driving dataset and achieve state-of-the-art results.
arXiv Detail & Related papers (2021-03-01T09:06:55Z)
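As a generic stand-in for the relational self-attention module (whose exact design the summary does not specify), plain non-local self-attention over spatial positions looks like this:

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Non-local self-attention over all spatial positions; a generic
    stand-in, not ADAA's actual relational self-attention module."""
    def __init__(self, ch, key_ch=None):
        super().__init__()
        key_ch = key_ch or max(ch // 8, 1)
        self.q = nn.Conv2d(ch, key_ch, 1)
        self.k = nn.Conv2d(ch, key_ch, 1)
        self.v = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)        # (b, hw, k)
        k = self.k(x).flatten(2)                         # (b, k, hw)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)
        v = self.v(x).flatten(2).transpose(1, 2)         # (b, hw, c)
        return x + (attn @ v).transpose(1, 2).reshape(b, c, h, w)
```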
- Towards Better Generalization: Joint Depth-Pose Learning without PoseNet [36.414471128890284]
We tackle the essential problem of scale inconsistency for self-supervised joint depth-pose learning.
Most existing methods assume that a consistent scale of depth and pose can be learned across all input samples.
We propose a novel system that explicitly disentangles scale from the network estimation.
arXiv Detail & Related papers (2020-04-03T00:28:09Z)
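A common way to make a depth network scale-agnostic, in the spirit of disentangling scale from the estimate, is to normalise each predicted depth map by its own mean and carry the scale as a separate quantity (a sketch of the general idea, not necessarily the paper's exact mechanism):

```python
import torch

def normalise_depth_scale(depth, eps=1e-7):
    """Divides each depth map (b, 1, h, w) by its own mean so the
    losses see scale-free depth; the per-sample scale is returned
    separately and can be handled explicitly."""
    scale = depth.mean(dim=(1, 2, 3), keepdim=True)
    return depth / (scale + eps), scale
```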
- Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z)
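Normal constraints of this kind typically start from normals computed directly from the depth map; a generic finite-difference version (without the paper's adaptive weighting) is:

```python
import torch
import torch.nn.functional as F

def normals_from_depth(depth):
    """Treats depth (b, h, w) as a height field and returns unit
    normals n proportional to (-dz/du, -dz/dv, 1); an exact
    camera-space version would backproject with the intrinsics."""
    dzdu = F.pad(depth[:, :, 1:] - depth[:, :, :-1], (0, 1, 0, 0))
    dzdv = F.pad(depth[:, 1:, :] - depth[:, :-1, :], (0, 0, 0, 1))
    n = torch.stack((-dzdu, -dzdv, torch.ones_like(depth)), dim=1)
    return n / n.norm(dim=1, keepdim=True)
```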
- Don't Forget The Past: Recurrent Depth Estimation from Monocular Video [92.84498980104424]
We put three different types of depth estimation into a common framework.
Our method produces a time series of depth maps.
It can be applied to monocular videos only or be combined with different types of sparse depth patterns.
arXiv Detail & Related papers (2020-01-08T16:50:51Z)
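Carrying depth evidence across frames is typically done with a convolutional recurrent unit; below is a minimal convolutional GRU cell as a generic example (the paper's actual recurrent design may differ):

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Minimal convolutional GRU; one generic way to propagate depth
    information across video frames, not the authors' exact unit."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.zr = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, 3, padding=1)
        self.h = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.zr(torch.cat((x, h), 1))).chunk(2, 1)
        h_new = torch.tanh(self.h(torch.cat((x, r * h), 1)))
        return (1 - z) * h + z * h_new
```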
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.