Inferring Latent Domains for Unsupervised Deep Domain Adaptation
- URL: http://arxiv.org/abs/2103.13873v1
- Date: Thu, 25 Mar 2021 14:33:33 GMT
- Title: Inferring Latent Domains for Unsupervised Deep Domain Adaptation
- Authors: Massimiliano Mancini, Lorenzo Porzi, Samuel Rota Bulò, Barbara
Caputo and Elisa Ricci
- Abstract summary: Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
- Score: 54.963823285456925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation (UDA) refers to the problem of learning a
model in a target domain where labeled data are not available by leveraging
information from annotated data in a source domain. Most deep UDA approaches
operate in a single-source, single-target scenario, i.e. they assume that the
source and the target samples arise from a single distribution. However, in
practice most datasets can be regarded as mixtures of multiple domains. In
these cases, exploiting traditional single-source, single-target methods for
learning classification models may lead to poor results. Furthermore, it is
often difficult to provide the domain labels for all data points, i.e. latent
domains should be automatically discovered. This paper introduces a novel deep
architecture which addresses the problem of UDA by automatically discovering
latent domains in visual datasets and exploiting this information to learn
robust target classifiers. Specifically, our architecture is based on two main
components, i.e. a side branch that automatically computes the assignment of
each sample to its latent domain and novel layers that exploit domain
membership information to appropriately align the distribution of the CNN
internal feature representations to a reference distribution. We evaluate our
approach on publicly available benchmarks, showing that it outperforms
state-of-the-art domain adaptation methods.
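To make the two components in the abstract concrete, the PyTorch-style sketch below shows one plausible realization under stated assumptions: a softmax side branch that produces soft latent-domain assignments, and a batch-normalization-style layer that aligns features using per-domain statistics weighted by those assignments. The class names (`LatentDomainBranch`, `MultiDomainAlignLayer`) and all architectural details are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentDomainBranch(nn.Module):
    """Side branch: predicts a soft assignment of each sample to K latent domains."""

    def __init__(self, in_channels: int, num_latent_domains: int):
        super().__init__()
        self.fc = nn.Linear(in_channels, num_latent_domains)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) -> global average pooling -> (B, K) soft domain weights
        pooled = feats.mean(dim=(2, 3))
        return F.softmax(self.fc(pooled), dim=1)


class MultiDomainAlignLayer(nn.Module):
    """BN-style layer: normalizes features with per-latent-domain statistics,
    weighted by the soft assignments, mapping all domains toward a shared
    reference distribution."""

    def __init__(self, num_channels: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_channels))
        self.beta = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); w: (B, K) soft assignments from the side branch
        B, C, H, W = x.shape
        K = w.shape[1]
        out = torch.zeros_like(x)
        for d in range(K):
            wd = w[:, d].view(B, 1, 1, 1)                    # per-sample weight for domain d
            count = wd.sum() * H * W + self.eps              # effective number of elements
            mean = (wd * x).sum(dim=(0, 2, 3), keepdim=True) / count
            var = (wd * (x - mean) ** 2).sum(dim=(0, 2, 3), keepdim=True) / count
            x_hat = (x - mean) / torch.sqrt(var + self.eps)  # normalized with domain-d stats
            out = out + wd * x_hat                           # mix by soft domain membership
        return self.gamma.view(1, C, 1, 1) * out + self.beta.view(1, C, 1, 1)


# Example wiring (illustrative):
#   w = LatentDomainBranch(in_channels=256, num_latent_domains=3)(feats)
#   aligned = MultiDomainAlignLayer(num_channels=256)(feats, w)
```

In this sketch each sample contributes to every latent domain's batch statistics in proportion to its predicted membership, so no hard domain labels are needed; at inference time such a layer would typically rely on accumulated running statistics rather than batch estimates.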
Related papers
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- UMAD: Universal Model Adaptation under Domain and Category Shift [138.12678159620248]
The Universal Model ADaptation (UMAD) framework handles both open-set and open-partial-set UDA scenarios without access to the source data.
We develop an informative consistency score to help distinguish unknown samples from known samples.
Experiments on open-set and open-partial-set UDA scenarios demonstrate that UMAD exhibits comparable, if not superior, performance to state-of-the-art data-dependent methods.
arXiv Detail & Related papers (2021-12-16T01:22:59Z)
- Learning causal representations for robust domain adaptation [31.261956776418618]
In many real-world applications, target domain data may not always be available.
In this paper, we study the cases where at the training phase the target domain data is unavailable.
We propose a novel Causal AutoEncoder (CAE), which integrates deep autoencoder and causal structure learning into a unified model.
arXiv Detail & Related papers (2020-11-12T11:24:03Z)
- Unsupervised Model Adaptation for Continual Semantic Segmentation [15.820660013260584]
We develop an algorithm for adapting a semantic segmentation model that is trained using a labeled source domain to generalize well in an unlabeled target domain.
We provide theoretical analysis and explain conditions under which our algorithm is effective.
Experiments on benchmark adaptation tasks demonstrate that our method achieves competitive performance, even compared with joint UDA approaches.
arXiv Detail & Related papers (2020-09-26T04:55:50Z)
- Learning to Cluster under Domain Shift [20.00056591000625]
In this work we address the problem of transferring knowledge from a source to a target domain when both source and target data have no annotations.
Inspired by recent works on deep clustering, our approach leverages information from data gathered from multiple source domains.
We show that our method is able to automatically discover relevant semantic information even in presence of few target samples.
arXiv Detail & Related papers (2020-08-11T12:03:01Z)
- Deep Domain-Adversarial Image Generation for Domain Generalisation [115.21519842245752]
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution.
To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains.
We propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG).
arXiv Detail & Related papers (2020-03-12T23:17:47Z)
- Multi-source Domain Adaptation for Visual Sentiment Classification [92.53780541232773]
We propose a novel multi-source domain adaptation (MDA) method, termed Multi-source Sentiment Generative Adversarial Network (MSGAN).
To handle data from multiple source domains, MSGAN learns to find a unified sentiment latent space where data from both the source and target domains share a similar distribution.
Extensive experiments conducted on four benchmark datasets demonstrate that MSGAN significantly outperforms the state-of-the-art MDA approaches for visual sentiment classification.
arXiv Detail & Related papers (2020-01-12T08:37:42Z)