Domain Adaptive Monocular Depth Estimation With Semantic Information
- URL: http://arxiv.org/abs/2104.05764v1
- Date: Mon, 12 Apr 2021 18:50:41 GMT
- Title: Domain Adaptive Monocular Depth Estimation With Semantic Information
- Authors: Fei Lu, Hyeonwoo Yu, Jean Oh
- Abstract summary: We propose an adversarial training model that leverages semantic information to narrow the domain gap.
The proposed compact model achieves performance comparable to that of complex state-of-the-art models.
- Score: 13.387521845596149
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advent of deep learning has brought impressive advances to monocular
depth estimation; for example, supervised monocular depth estimation has been
thoroughly investigated. However, large-scale RGB-to-depth datasets may not
always be available, since collecting accurate depth ground truth for RGB
images is a time-consuming and expensive task. Although a network can be
trained on an alternative dataset to overcome the dataset-scale problem, the
trained model is hard to generalize to the target domain due to the domain
discrepancy. Adversarial domain alignment has demonstrated its efficacy in
mitigating domain shift on simple image classification tasks in previous
works. However, traditional approaches can hardly handle conditional
alignment, as they consider only the feature map of the network. In this
paper, we propose an adversarial training model that leverages semantic
information to narrow the domain gap. In experiments on standard monocular
depth estimation datasets, including KITTI and Cityscapes, the proposed
compact model achieves performance comparable to complex state-of-the-art
models and shows favorable results on boundaries and on objects at far
distances.
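The core idea the abstract describes, a domain discriminator conditioned on semantic information rather than on features alone, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the toy discriminator, the concatenation-based conditioning, and all shapes are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator(features, semantics, w):
    """Toy domain discriminator scoring [feature || semantic] vectors.

    Conditioning on the semantic prediction (here by concatenation)
    lets the discriminator align class-conditional feature
    distributions, not just the marginal feature distribution.
    """
    x = np.concatenate([features, semantics], axis=1)
    return sigmoid(x @ w)

def adversarial_losses(src_feat, src_sem, tgt_feat, tgt_sem, w, eps=1e-8):
    """Binary cross-entropy: the discriminator labels source=1, target=0.

    The depth network is trained to fool the discriminator (generator
    loss), while the discriminator minimizes its own loss.
    """
    p_src = discriminator(src_feat, src_sem, w)
    p_tgt = discriminator(tgt_feat, tgt_sem, w)
    d_loss = -np.mean(np.log(p_src + eps)) - np.mean(np.log(1 - p_tgt + eps))
    g_loss = -np.mean(np.log(p_tgt + eps))  # non-saturating generator loss
    return d_loss, g_loss

# Hypothetical 16-dim features and 8-class semantic predictions per image.
src_feat, tgt_feat = rng.normal(size=(4, 16)), rng.normal(size=(4, 16))
src_sem, tgt_sem = rng.dirichlet(np.ones(8), 4), rng.dirichlet(np.ones(8), 4)
w = rng.normal(size=24)

d_loss, g_loss = adversarial_losses(src_feat, src_sem, tgt_feat, tgt_sem, w)
print(d_loss > 0 and g_loss > 0)  # BCE terms are strictly positive
```

In a real pipeline, both losses would be minimized by gradient descent in alternation (or via a gradient-reversal layer); the sketch only evaluates them once.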
Related papers
- TanDepth: Leveraging Global DEMs for Metric Monocular Depth Estimation in UAVs [5.6168844664788855]
This work presents TanDepth, a practical, online scale recovery method for obtaining metric depth results from relative estimations at inference-time.
Tailored for Unmanned Aerial Vehicle (UAV) applications, our method leverages sparse measurements from Global Digital Elevation Models (GDEM) by projecting them to the camera view.
An adaptation to the Cloth Simulation Filter is presented, which allows selecting ground points from the estimated depth map to then correlate with the projected reference points.
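A simplified version of this inference-time scale recovery, correlating sparse projected reference depths with the relative estimate, might look like the following. This is a hypothetical sketch using a robust median-ratio scale; the paper's actual method selects ground points with a Cloth Simulation Filter adaptation.

```python
import numpy as np

def recover_metric_scale(relative_depth, ref_points):
    """Estimate a global metric scale from sparse reference depths.

    ref_points: iterable of (row, col, metric_depth) tuples, e.g. GDEM
    samples projected into the camera view. The median of the per-point
    ratios is robust to outliers among the sparse correspondences.
    """
    ratios = [d / relative_depth[r, c] for r, c, d in ref_points]
    return float(np.median(ratios))

# Toy relative depth map whose true metric scale factor is 3.0.
metric = np.array([[10.0, 20.0], [30.0, 40.0]])
relative = metric / 3.0
refs = [(0, 0, 10.0), (1, 1, 40.0)]

scale = recover_metric_scale(relative, refs)
print(round(scale, 6))
```

Multiplying the full relative map by the recovered scale yields metric depth everywhere, which is the essence of online scale recovery from sparse references.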
arXiv Detail & Related papers (2024-09-08T15:54:43Z)
- Consistency Regularisation for Unsupervised Domain Adaptation in Monocular Depth Estimation [15.285720572043678]
We formulate unsupervised domain adaptation for monocular depth estimation as a consistency-based semi-supervised learning problem.
We introduce a pairwise loss function that regularises predictions on the source domain while enforcing consistency across multiple augmented views.
In our experiments, we rely on the standard depth estimation benchmarks KITTI and NYUv2 to demonstrate state-of-the-art results.
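The consistency term described above, which penalizes disagreement between depth predictions on multiple augmented views of the same image, can be illustrated with a minimal sketch. The mean-prediction formulation below is an assumption for illustration, not the paper's exact pairwise loss.

```python
import numpy as np

def consistency_loss(preds):
    """Mean squared deviation of each view's prediction from the mean.

    preds: array of shape (n_views, H, W) containing depth predictions
    for several augmented views of the same target-domain image.
    """
    mean_pred = preds.mean(axis=0, keepdims=True)
    return float(((preds - mean_pred) ** 2).mean())

# Two hypothetical augmented-view predictions of a 2x2 depth map.
view_a = np.array([[1.0, 2.0], [3.0, 4.0]])
view_b = np.array([[1.2, 1.8], [3.2, 3.8]])

loss = consistency_loss(np.stack([view_a, view_b]))
print(round(loss, 6))
```

Minimizing such a loss on unlabeled target images pushes the network toward augmentation-invariant predictions, the standard semi-supervised consistency signal.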
arXiv Detail & Related papers (2024-05-27T23:32:06Z)
- Do More With What You Have: Transferring Depth-Scale from Labeled to Unlabeled Domains [43.16293941978469]
Self-supervised depth estimators produce up-to-scale predictions that are linearly correlated with the absolute depth values across the domain.
We show that aligning the field-of-view of two datasets prior to training results in a common linear relationship for both domains.
We use this observed property to transfer the depth-scale from source datasets that have absolute depth labels to new target datasets that lack these measurements.
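The transfer described above, fitting a linear map from up-to-scale predictions to absolute depth on the labeled source and reusing it on the target, can be sketched as follows. This is a simplified illustration under synthetic data; the least-squares fit and variable names are assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_depth_scale(pred_up_to_scale, depth_abs):
    """Least-squares fit of depth_abs ~= a * pred + b on the source domain."""
    A = np.stack([pred_up_to_scale, np.ones_like(pred_up_to_scale)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, depth_abs, rcond=None)
    return a, b

# Source domain: up-to-scale predictions with absolute depth labels.
src_pred = rng.uniform(0.1, 1.0, size=1000)
src_depth = 25.0 * src_pred + 2.0  # hidden linear relation, for the demo

a, b = fit_depth_scale(src_pred, src_depth)

# Target domain (no labels): apply the linear map learned on the source.
tgt_pred = rng.uniform(0.1, 1.0, size=5)
tgt_metric = a * tgt_pred + b
print(np.allclose(a, 25.0) and np.allclose(b, 2.0))
```

The paper's key observation is that this linear relationship becomes shared across domains once the two datasets' fields of view are aligned before training.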
arXiv Detail & Related papers (2023-03-14T07:07:34Z)
- SC-DepthV3: Robust Self-supervised Monocular Depth Estimation for Dynamic Scenes [58.89295356901823]
Self-supervised monocular depth estimation has shown impressive results in static scenes.
It relies on the multi-view consistency assumption to train networks; however, this assumption is violated in dynamic object regions.
We introduce an external pretrained monocular depth estimation model for generating single-image depth prior.
Our model can predict sharp and accurate depth maps, even when training from monocular videos of highly-dynamic scenes.
arXiv Detail & Related papers (2022-11-07T16:17:47Z)
- Occlusion-Aware Self-Supervised Monocular 6D Object Pose Estimation [88.8963330073454]
We propose a novel monocular 6D pose estimation approach by means of self-supervised learning.
We leverage current trends in noisy student training and differentiable rendering to further self-supervise the model.
Our proposed self-supervision outperforms all other methods relying on synthetic data.
arXiv Detail & Related papers (2022-03-19T15:12:06Z)
- Occlusion-aware Unsupervised Learning of Depth from 4-D Light Fields [50.435129905215284]
We present an unsupervised learning-based depth estimation method for 4-D light field processing and analysis.
Based on the basic knowledge of the unique geometry structure of light field data, we explore the angular coherence among subsets of the light field views to estimate depth maps.
Our method significantly shrinks the performance gap between previous unsupervised methods and supervised ones, producing depth maps with accuracy comparable to traditional methods at substantially reduced computational cost.
arXiv Detail & Related papers (2021-06-06T06:19:50Z)
- Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [84.34227665232281]
Domain adaptation for semantic segmentation aims to improve the model performance in the presence of a distribution shift between source and target domain.
We leverage the guidance from self-supervised depth estimation, which is available on both domains, to bridge the domain gap.
We demonstrate the effectiveness of our proposed approach on the benchmark tasks SYNTHIA-to-Cityscapes and GTA-to-Cityscapes.
arXiv Detail & Related papers (2021-04-28T07:47:36Z)
- Learning a Domain-Agnostic Visual Representation for Autonomous Driving via Contrastive Loss [25.798361683744684]
Domain-Agnostic Contrastive Learning (DACL) is a two-stage unsupervised domain adaptation framework with cyclic adversarial training and contrastive loss.
Our proposed approach achieves better performance in the monocular depth estimation task compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-10T07:06:03Z)
- DESC: Domain Adaptation for Depth Estimation via Semantic Consistency [24.13837264978472]
We propose a domain adaptation approach to train a monocular depth estimation model.
We bridge the domain gap by leveraging semantic predictions and low-level edge features.
Our approach is evaluated on standard domain adaptation benchmarks for monocular depth estimation.
arXiv Detail & Related papers (2020-09-03T10:54:05Z)
- DeFeat-Net: General Monocular Depth via Simultaneous Unsupervised Representation Learning [65.94499390875046]
DeFeat-Net is an approach to simultaneously learn a cross-domain dense feature representation.
Our technique is able to outperform the current state-of-the-art with around 10% reduction in all error measures.
arXiv Detail & Related papers (2020-03-30T13:10:32Z)
- Single Image Depth Estimation Trained via Depth from Defocus Cues [105.67073923825842]
Estimating depth from a single RGB image is a fundamental task in computer vision.
In this work, instead of relying on different views, we rely on depth-from-focus cues.
We present results that are on par with supervised methods on KITTI and Make3D datasets and outperform unsupervised learning approaches.
arXiv Detail & Related papers (2020-01-14T20:22:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.