Self-Supervised Learning of Domain Invariant Features for Depth
Estimation
- URL: http://arxiv.org/abs/2106.02594v3
- Date: Tue, 8 Jun 2021 09:02:07 GMT
- Title: Self-Supervised Learning of Domain Invariant Features for Depth
Estimation
- Authors: Hiroyasu Akada, Shariq Farooq Bhat, Ibraheem Alhashim, Peter Wonka
- Abstract summary: We tackle the problem of unsupervised synthetic-to-realistic domain adaptation for single image depth estimation.
An essential building block of single image depth estimation is an encoder-decoder task network that takes RGB images as input and produces depth maps as output.
We propose a novel training strategy to force the task network to learn domain invariant representations in a self-supervised manner.
- Score: 35.74969527929284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We tackle the problem of unsupervised synthetic-to-realistic domain
adaptation for single image depth estimation. An essential building block of
single image depth estimation is an encoder-decoder task network that takes RGB
images as input and produces depth maps as output. In this paper, we propose a
novel training strategy to force the task network to learn domain invariant
representations in a self-supervised manner. Specifically, we extend
self-supervised learning from traditional representation learning, which works
on images from a single domain, to domain invariant representation learning,
which works on images from two different domains by utilizing an image-to-image
translation network. Firstly, we use our bidirectional image-to-image
translation network to transfer domain-specific styles between synthetic and
real domains. This style transfer operation allows us to obtain similar images
from the different domains. Secondly, we jointly train our task network and
Siamese network with the same images from the different domains to obtain
domain invariance for the task network. Finally, we fine-tune the task network
using labeled synthetic and unlabeled real-world data. Our training strategy
yields improved generalization capability in the real-world domain. We carry
out an extensive evaluation on two popular datasets for depth estimation, KITTI
and Make3D. The results demonstrate that our proposed method outperforms the
state-of-the-art both qualitatively and quantitatively. The source code and
model weights will be made available.
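The three steps above (style transfer between domains, joint Siamese-style training on the resulting image pairs, then fine-tuning) can be illustrated with a small sketch. The PyTorch-style code below covers only the second step and is built on assumed details: the toy TaskNet, the feature-consistency loss, and the precomputed style-transferred batch stand in for the authors' actual architecture and losses.

```python
# Minimal sketch of Siamese-style domain-invariant training (assumed details).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskNet(nn.Module):
    """Toy encoder-decoder standing in for the depth task network."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        feats = self.encoder(x)      # features we want to be domain invariant
        depth = self.decoder(feats)  # predicted depth map
        return feats, depth

def training_step(net, syn_img, syn_depth, syn_as_real_img, optimizer, lam=0.1):
    """One step: supervised depth loss on synthetic images plus a Siamese
    feature-consistency loss between a synthetic image and its real-styled
    translation (produced beforehand by an image-to-image translation network)."""
    feats_syn, pred_syn = net(syn_img)
    feats_trans, _ = net(syn_as_real_img)
    depth_loss = F.l1_loss(pred_syn, syn_depth)
    invariance_loss = F.mse_loss(feats_syn, feats_trans)
    loss = depth_loss + lam * invariance_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    net = TaskNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    syn = torch.rand(2, 3, 64, 64)          # synthetic RGB batch
    syn_depth = torch.rand(2, 1, 64, 64)    # ground-truth synthetic depth
    syn_as_real = torch.rand(2, 3, 64, 64)  # same images after style transfer
    print(training_step(net, syn, syn_depth, syn_as_real, opt))
```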
Related papers
- Domain-Controlled Prompt Learning [49.45309818782329]
Existing prompt learning methods often lack domain-awareness or domain-transfer mechanisms.
We propose Domain-Controlled Prompt Learning for specific domains.
Our method achieves state-of-the-art performance on domain-specific image recognition datasets.
arXiv Detail & Related papers (2023-09-30T02:59:49Z)
- Domain-Scalable Unpaired Image Translation via Latent Space Anchoring [88.7642967393508]
Unpaired image-to-image translation (UNIT) aims to map images between two visual domains without paired training data.
We propose a new domain-scalable UNIT method, termed as latent space anchoring.
Our method anchors images of different domains to the same latent space of frozen GANs by learning lightweight encoder and regressor models.
In the inference phase, the learned encoders and decoders of different domains can be arbitrarily combined to translate images between any two domains without fine-tuning.
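As a rough illustration of the anchoring idea, the sketch below uses a frozen toy generator plus lightweight per-domain encoders and regressors; all module names and shapes are assumptions rather than the paper's implementation, but it shows how an A-domain encoder and a B-domain regressor can be combined at inference without fine-tuning.

```python
# Sketch of the "latent space anchoring" idea (assumed architecture details).
import torch
import torch.nn as nn

latent_dim = 128

# A frozen, pretrained generator standing in for the fixed GAN.
frozen_gan = nn.Sequential(nn.Linear(latent_dim, 3 * 32 * 32), nn.Tanh())
for p in frozen_gan.parameters():
    p.requires_grad = False

class DomainEncoder(nn.Module):
    """Lightweight encoder anchoring one domain's images into the shared latent space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, latent_dim))
    def forward(self, x):
        return self.net(x)

class DomainRegressor(nn.Module):
    """Lightweight regressor mapping the shared GAN output back to a domain's image space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(3 * 32 * 32, 3 * 32 * 32)
    def forward(self, g):
        return self.net(g).view(-1, 3, 32, 32)

enc_a, enc_b = DomainEncoder(), DomainEncoder()
dec_a, dec_b = DomainRegressor(), DomainRegressor()

def translate_a_to_b(x_a):
    """Translate a domain-A image to domain B by combining A's encoder with B's regressor."""
    z = enc_a(x_a)        # anchor into the shared latent space
    g = frozen_gan(z)     # shared, frozen GAN output
    return dec_b(g)       # render in domain B

x_a = torch.rand(4, 3, 32, 32)
print(translate_a_to_b(x_a).shape)  # torch.Size([4, 3, 32, 32])
```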
arXiv Detail & Related papers (2023-06-26T17:50:02Z)
- ACE: Zero-Shot Image to Image Translation via Pretrained Auto-Contrastive-Encoder [2.1874189959020427]
We propose a new approach to extract image features by learning the similarities and differences of samples within the same data distribution.
The design of ACE enables us, for the first time, to achieve zero-shot image-to-image translation without any training on image translation tasks.
Our model achieves competitive results on multimodal image translation tasks with zero-shot learning as well.
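The summary does not spell out ACE's exact objective; as a hedge, the snippet below shows a generic InfoNCE-style contrastive loss, which is the family of objectives that "learning the similarities and differences of samples" refers to.

```python
# Generic contrastive objective sketch (not ACE's exact design; assumptions only).
import torch
import torch.nn.functional as F

def info_nce(feat_a, feat_b, temperature=0.1):
    """Pull features of two views of the same sample together and push apart
    features of different samples within the batch."""
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    logits = a @ b.t() / temperature    # pairwise similarities
    targets = torch.arange(a.size(0))   # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

feat_a = torch.randn(8, 128)  # features of original samples
feat_b = torch.randn(8, 128)  # features of augmented / paired views
print(info_nce(feat_a, feat_b).item())
```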
arXiv Detail & Related papers (2023-02-22T23:52:23Z)
- Domain Invariant Masked Autoencoders for Self-supervised Learning from Multi-domains [73.54897096088149]
We propose a Domain-invariant Masked AutoEncoder (DiMAE) for self-supervised learning from multi-domains.
The core idea is to augment the input image with style noise from different domains and then reconstruct the image from the embedding of the augmented image.
Experiments on PACS and DomainNet illustrate that DiMAE achieves considerable gains compared with recent state-of-the-art methods.
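A toy version of this recipe, with masking omitted and style noise simulated by a random per-channel scale and shift (both simplifying assumptions), might look like:

```python
# Minimal sketch of the DiMAE idea: reconstruct the clean image from a
# style-perturbed input (style noise simulated by channel-wise scale/shift).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, x):
        return self.dec(self.enc(x))

def add_style_noise(x):
    """Cheap stand-in for cross-domain style noise: random per-channel shift and scale."""
    scale = 1.0 + 0.2 * torch.randn(x.size(0), 3, 1, 1)
    shift = 0.2 * torch.randn(x.size(0), 3, 1, 1)
    return x * scale + shift

model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

clean = torch.rand(4, 3, 32, 32)
noisy = add_style_noise(clean)
recon = model(noisy)
loss = F.mse_loss(recon, clean)  # reconstruct the original, style-free content
loss.backward()
opt.step()
print(loss.item())
```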
arXiv Detail & Related papers (2022-05-10T09:49:40Z)
- Leveraging in-domain supervision for unsupervised image-to-image translation tasks via multi-stream generators [4.726777092009554]
We introduce two techniques to incorporate this invaluable in-domain prior knowledge for the benefit of translation quality.
We propose splitting the input data according to semantic masks, explicitly guiding the network to different behavior for the different regions of the image.
In addition, we propose training a semantic segmentation network along with the translation task, and to leverage this output as a loss term that improves robustness.
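A minimal sketch of both ideas, with illustrative two-stream routing by a binary mask and an auxiliary segmentation loss (module names and the two-stream split are assumptions):

```python
# Sketch of mask-guided multi-stream processing with an auxiliary segmentation loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamTranslator(nn.Module):
    def __init__(self):
        super().__init__()
        self.stream_fg = nn.Conv2d(3, 3, 3, padding=1)  # handles masked foreground
        self.stream_bg = nn.Conv2d(3, 3, 3, padding=1)  # handles background
        self.seg_head = nn.Conv2d(3, 2, 1)              # auxiliary segmentation logits

    def forward(self, x, mask):
        fg = self.stream_fg(x * mask)                   # explicit region routing
        bg = self.stream_bg(x * (1 - mask))
        out = fg * mask + bg * (1 - mask)
        seg_logits = self.seg_head(out)
        return out, seg_logits

model = TwoStreamTranslator()
x = torch.rand(2, 3, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()         # binary semantic mask

out, seg_logits = model(x, mask)
seg_target = mask.squeeze(1).long()                     # per-pixel class labels
seg_loss = F.cross_entropy(seg_logits, seg_target)      # robustness-improving loss term
print(out.shape, seg_loss.item())
```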
arXiv Detail & Related papers (2021-12-30T15:29:36Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we prove that dynamically adapting network architectures tailored to each domain task, along with weight finetuning, benefits both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- DoFE: Domain-oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets [96.92018649136217]
We present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains.
Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains.
Our framework generates satisfying segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
arXiv Detail & Related papers (2020-10-13T07:28:39Z)
- Crossing-Domain Generative Adversarial Networks for Unsupervised Multi-Domain Image-to-Image Translation [12.692904507625036]
We propose a general framework for unsupervised image-to-image translation across multiple domains.
Our proposed framework consists of a pair of encoders and a pair of GANs that learn high-level features across different domains in order to generate diverse and realistic samples.
arXiv Detail & Related papers (2020-08-27T01:54:07Z)
- Domain Adaptation with Morphologic Segmentation [8.0698976170854]
We present a novel domain adaptation framework that uses morphologic segmentation to translate images from arbitrary input domains (real and synthetic) into a uniform output domain.
Our goal is to establish a preprocessing step that unifies data from multiple sources into a common representation.
We showcase the effectiveness of our approach by qualitatively and quantitatively evaluating our method on four data sets of simulated and real data of urban scenes.
arXiv Detail & Related papers (2020-06-16T17:06:02Z)
- SharinGAN: Combining Synthetic and Real Data for Unsupervised Geometry Estimation [18.29202999419042]
We propose a novel method for combining synthetic and real images when training networks.
We suggest a method for mapping both image types into a single, shared domain.
Our experiments demonstrate significant improvements over the state-of-the-art in two important domains.
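A compact sketch of the shared-domain idea, assuming a single mapper applied to both synthetic and real batches and a simple statistics-matching proxy in place of the method's actual alignment losses:

```python
# Sketch of mapping both synthetic and real images into one shared domain
# before the task network (names and losses are illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

shared_mapper = nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the shared-domain generator
task_net = nn.Conv2d(3, 1, 3, padding=1)       # stand-in for the geometry/depth network

syn = torch.rand(2, 3, 64, 64)
real = torch.rand(2, 3, 64, 64)
syn_depth = torch.rand(2, 1, 64, 64)           # labels exist only for synthetic data

shared_syn = shared_mapper(syn)                # both domains pass through the same mapper
shared_real = shared_mapper(real)

task_loss = F.l1_loss(task_net(shared_syn), syn_depth)
# The full method would align the two shared-domain distributions adversarially;
# a simple channel-statistics-matching proxy is used here instead.
align_loss = F.mse_loss(shared_syn.mean(dim=(0, 2, 3)), shared_real.mean(dim=(0, 2, 3)))
loss = task_loss + 0.1 * align_loss
loss.backward()
print(loss.item())
```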
arXiv Detail & Related papers (2020-06-07T02:45:33Z)
- CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)