An Adaptive Framework for Learning Unsupervised Depth Completion
- URL: http://arxiv.org/abs/2106.03010v1
- Date: Sun, 6 Jun 2021 02:27:55 GMT
- Title: An Adaptive Framework for Learning Unsupervised Depth Completion
- Authors: Alex Wong, Xiaohan Fei, Byung-Woo Hong, and Stefano Soatto
- Abstract summary: We present a method to infer a dense depth map from a color image and associated sparse depth measurements.
We show that regularization and co-visibility are related via the fitness of the model to data and can be unified into a single framework.
- Score: 59.17364202590475
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a method to infer a dense depth map from a color image and
associated sparse depth measurements. Our main contribution lies in the design
of an annealing process for determining co-visibility (occlusions,
disocclusions) and the degree of regularization to impose on the model. We show
that regularization and co-visibility are related via the fitness (residual) of
model to data and both can be unified into a single framework to improve the
learning process. Our method is an adaptive weighting scheme that guides
optimization by measuring the residual at each pixel location over each
training step for (i) estimating a soft visibility mask and (ii) determining
the amount of regularization. We demonstrate the effectiveness of our method by
applying it to several recent unsupervised depth completion methods and
improving their performance on public benchmark datasets, without incurring
additional trainable parameters or an increase in inference time. Code available
at: https://github.com/alexklwong/adaframe-depth-completion.
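The abstract's adaptive weighting idea — mapping the per-pixel residual to (i) a soft visibility mask and (ii) a regularization weight — can be sketched as follows. This is a hypothetical illustration assuming an exponential residual-to-weight mapping with an assumed `temperature` hyperparameter; it is not the authors' exact formulation (see their repository for that).

```python
import numpy as np

def adaptive_weights(residual, temperature=0.1):
    """Sketch of residual-driven adaptive weighting (assumed form).

    Low residual -> model fits data -> pixel likely co-visible, so
    trust the data term; high residual -> likely occluded/disoccluded,
    so down-weight the data term and lean on regularization instead.
    """
    # soft visibility mask: close to 1 where the residual is small
    visibility = np.exp(-residual / temperature)
    # regularization weight: complementary, large where the fit is poor
    reg_weight = 1.0 - visibility
    return visibility, reg_weight

# toy example: a 2x2 map of per-pixel photometric residuals
r = np.array([[0.01, 0.5],
              [0.05, 1.0]])
vis, reg = adaptive_weights(r)
```

Under this sketch, the two weight maps would multiply the per-pixel data and smoothness losses respectively, and re-computing them at every training step yields the annealing behavior the abstract describes.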
Related papers
- Revisiting Disparity from Dual-Pixel Images: Physics-Informed Lightweight Depth Estimation [3.6337378417255177]
We propose a lightweight disparity estimation method based on a completion-based network.
By modeling the DP-specific disparity error parametrically and using it for sampling during training, the network acquires the unique properties of DP.
As a result, the proposed method achieved state-of-the-art results while reducing the overall system size to 1/5 of that of the conventional method.
arXiv Detail & Related papers (2024-11-06T09:03:53Z)
- UnCLe: Unsupervised Continual Learning of Depth Completion [5.677777151863184]
UnCLe is a standardized benchmark for Unsupervised Continual Learning of a multimodal depth estimation task.
We benchmark depth completion models under the practical scenario of unsupervised learning over continuous streams of data.
arXiv Detail & Related papers (2024-10-23T17:56:33Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Unsupervised Scale-consistent Depth Learning from Video [131.3074342883371]
We propose a monocular depth estimator SC-Depth, which requires only unlabelled videos for training.
Thanks to the capability of scale-consistent prediction, we show that our monocular-trained deep networks are readily integrated into the ORB-SLAM2 system.
The proposed hybrid Pseudo-RGBD SLAM shows compelling results in KITTI, and it generalizes well to the KAIST dataset without additional training.
arXiv Detail & Related papers (2021-05-25T02:17:56Z)
- Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [84.34227665232281]
Domain adaptation for semantic segmentation aims to improve the model performance in the presence of a distribution shift between source and target domain.
We leverage the guidance from self-supervised depth estimation, which is available on both domains, to bridge the domain gap.
We demonstrate the effectiveness of our proposed approach on the benchmark tasks SYNTHIA-to-Cityscapes and GTA-to-Cityscapes.
arXiv Detail & Related papers (2021-04-28T07:47:36Z)
- Distribution Alignment: A Unified Framework for Long-tail Visual Recognition [52.36728157779307]
We propose a unified distribution alignment strategy for long-tail visual recognition.
We then introduce a generalized re-weight method in the two-stage learning to balance the class prior.
Our approach achieves the state-of-the-art results across all four recognition tasks with a simple and unified framework.
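Re-weighting by the class prior, as mentioned above, is commonly instantiated as inverse-frequency weighting. The sketch below shows that generic baseline form (the function name and normalization are illustrative assumptions, not the paper's exact "generalized re-weight" method):

```python
import numpy as np

def class_prior_reweight(class_counts):
    """Generic inverse-frequency class re-weighting (assumed baseline).

    Classes with fewer training samples receive proportionally larger
    loss weights, counteracting the skewed class prior of a long-tail
    distribution. Weights are normalized to average 1 over classes.
    """
    counts = np.asarray(class_counts, dtype=float)
    weights = counts.sum() / (len(counts) * counts)
    return weights

# toy long-tail distribution: head, medium, and tail classes
w = class_prior_reweight([1000, 100, 10])
```

In a two-stage scheme like the one described, such weights would typically be applied only in the second (classifier re-balancing) stage, leaving representation learning unaffected.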
arXiv Detail & Related papers (2021-03-30T14:09:53Z)
- Deep Optimized Priors for 3D Shape Modeling and Reconstruction [38.79018852887249]
We introduce a new learning framework for 3D modeling and reconstruction.
We show that the proposed strategy effectively breaks the barriers constrained by the pre-trained priors.
arXiv Detail & Related papers (2020-12-14T03:56:31Z)
- Overcoming Catastrophic Forgetting via Direction-Constrained Optimization [43.53836230865248]
We study a new design of the optimization algorithm for training deep learning models with a fixed architecture of the classification network in a continual learning framework.
We present our direction-constrained optimization (DCO) method, where for each task we introduce a linear autoencoder to approximate its corresponding top forbidden principal directions.
We demonstrate that our algorithm performs favorably compared to other state-of-the-art regularization-based continual learning methods.
arXiv Detail & Related papers (2020-11-25T08:45:21Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
- Towards Better Generalization: Joint Depth-Pose Learning without PoseNet [36.414471128890284]
We tackle the essential problem of scale inconsistency for self-supervised joint depth-pose learning.
Most existing methods assume that a consistent scale of depth and pose can be learned across all input samples.
We propose a novel system that explicitly disentangles scale from the network estimation.
arXiv Detail & Related papers (2020-04-03T00:28:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.