Learning Feature Decomposition for Domain Adaptive Monocular Depth
Estimation
- URL: http://arxiv.org/abs/2208.00160v1
- Date: Sat, 30 Jul 2022 08:05:35 GMT
- Title: Learning Feature Decomposition for Domain Adaptive Monocular Depth
Estimation
- Authors: Shao-Yuan Lo, Wei Wang, Jim Thomas, Jingjing Zheng, Vishal M. Patel,
Cheng-Hao Kuo
- Abstract summary: Supervised approaches have led to great success with the advance of deep learning, but they rely on large quantities of ground-truth depth annotations.
Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, so as to relax the constraint of supervised learning.
We propose a novel UDA method for MDE, referred to as Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components.
- Score: 51.15061013818216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monocular depth estimation (MDE) has attracted intense study due to its low
cost and critical functions for robotic tasks such as localization, mapping and
obstacle detection. Supervised approaches have led to great success with the
advance of deep learning, but they rely on large quantities of ground-truth
depth annotations that are expensive to acquire. Unsupervised domain adaptation
(UDA) transfers knowledge from labeled source data to unlabeled target data, so
as to relax the constraint of supervised learning. However, existing UDA
approaches may not completely close the domain gap across different datasets
because of the domain shift problem. We believe better domain alignment can be
achieved via well-designed feature decomposition. In this paper, we propose a
novel UDA method for MDE, referred to as Learning Feature Decomposition for
Adaptation (LFDA), which learns to decompose the feature space into content and
style components. LFDA only attempts to align the content component since it
has a smaller domain gap. Meanwhile, it excludes the style component which is
specific to the source domain from training the primary task. Furthermore, LFDA
uses separate feature distribution estimations to further bridge the domain
gap. Extensive experiments on three domain adaptive MDE scenarios show that
the proposed method achieves superior accuracy and lower computational cost
compared to the state-of-the-art approaches.
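
The abstract describes the mechanism only at a high level. Below is a minimal PyTorch-style sketch of the idea; the module names, channel sizes, adversarial critic, and the per-domain BatchNorm standing in for "separate feature distribution estimations" are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of content/style feature decomposition for UDA.
# All names and hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn

class FeatureDecomposer(nn.Module):
    """Splits backbone features into a content part and a style part."""
    def __init__(self, in_ch=256, content_ch=192, style_ch=64):
        super().__init__()
        self.content_head = nn.Conv2d(in_ch, content_ch, kernel_size=1)
        self.style_head = nn.Conv2d(in_ch, style_ch, kernel_size=1)

    def forward(self, feats):
        return self.content_head(feats), self.style_head(feats)

class DomainDiscriminator(nn.Module):
    """Adversarial critic applied only to the content component."""
    def __init__(self, content_ch=192):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(content_ch, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

    def forward(self, content):
        return self.net(content)

class DomainSpecificBN(nn.Module):
    """Keeps one BatchNorm per domain so source and target feature statistics
    are estimated separately (a common UDA device; the paper's exact mechanism
    may differ)."""
    def __init__(self, ch=192):
        super().__init__()
        self.bn = nn.ModuleDict({"source": nn.BatchNorm2d(ch),
                                 "target": nn.BatchNorm2d(ch)})

    def forward(self, x, domain):
        return self.bn[domain](x)

if __name__ == "__main__":
    backbone_feats = torch.randn(2, 256, 32, 64)   # stand-in encoder output
    decomposer = FeatureDecomposer()
    critic = DomainDiscriminator()
    dsbn = DomainSpecificBN()

    content, style = decomposer(backbone_feats)    # decompose feature space
    content = dsbn(content, domain="target")       # per-domain statistics
    domain_logit = critic(content)                 # align content only
    # A depth decoder (not shown) would consume `content`; `style` is kept
    # out of the primary depth objective, as the abstract describes.
    print(content.shape, style.shape, domain_logit.shape)
```

In this reading, only the content branch is exposed to domain alignment, while the style branch is excluded from the depth task, mirroring the decomposition described in the abstract.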
Related papers
- Subject-Based Domain Adaptation for Facial Expression Recognition [51.10374151948157]
Adapting a deep learning model to a specific target individual is a challenging task in facial expression recognition (FER).
This paper introduces a new MSDA method for subject-based domain adaptation in FER.
It efficiently leverages information from multiple source subjects to adapt a deep FER model to a single target individual.
arXiv Detail & Related papers (2023-12-09T18:40:37Z) - Source-Free Domain Adaptation for Medical Image Segmentation via
Prototype-Anchored Feature Alignment and Contrastive Learning [57.43322536718131]
We present a two-stage source-free domain adaptation (SFDA) framework for medical image segmentation.
In the prototype-anchored feature alignment stage, we first utilize the weights of the pre-trained pixel-wise classifier as source prototypes.
Then, we introduce a bi-directional transport to align the target features with the class prototypes by minimizing the expected transport cost.
arXiv Detail & Related papers (2023-07-19T06:07:12Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Multi-Source Unsupervised Domain Adaptation via Pseudo Target Domain [0.0]
Multi-source domain adaptation (MDA) aims to transfer knowledge from multiple source domains to an unlabeled target domain.
We propose a novel MDA approach, termed Pseudo Target for MDA (PTMDA).
PTMDA maps each group of source and target domains into a group-specific subspace using adversarial learning with a metric constraint.
We show that PTMDA as a whole can reduce the target error bound and lead to a better approximation of the target risk in MDA settings.
arXiv Detail & Related papers (2022-02-22T08:37:16Z) - Improving Transferability of Domain Adaptation Networks Through Domain
Alignment Layers [1.3766148734487902]
Multi-source unsupervised domain adaptation (MSDA) aims at learning a predictor for an unlabeled domain by assigning weak knowledge from a bag of source models.
We propose to embed Multi-Source version of DomaIn Alignment Layers (MS-DIAL) at different levels of the predictor.
Our approach can improve state-of-the-art MSDA methods, yielding relative gains of up to +30.64% on their classification accuracies.
arXiv Detail & Related papers (2021-09-06T18:41:19Z) - Adapting Off-the-Shelf Source Segmenter for Target Medical Image
Segmentation [12.703234995718372]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a labeled source domain to an unlabeled and unseen target domain.
Access to the source domain data at the adaptation stage is often limited, due to data storage or privacy issues.
We propose to adapt an "off-the-shelf" segmentation model pre-trained in the source domain to the target domain.
arXiv Detail & Related papers (2021-06-23T16:16:55Z) - Multi-source Domain Adaptation in the Deep Learning Era: A Systematic
Survey [53.656086832255944]
Multi-source domain adaptation (MDA) is a powerful extension in which the labeled data may be collected from multiple sources.
MDA has attracted increasing attention in both academia and industry.
arXiv Detail & Related papers (2020-02-26T08:07:58Z) - Do We Really Need to Access the Source Data? Source Hypothesis Transfer
for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and how we can effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)