Unified Multi-Domain Learning and Data Imputation using Adversarial
Autoencoder
- URL: http://arxiv.org/abs/2003.07779v1
- Date: Sun, 15 Mar 2020 19:55:07 GMT
- Title: Unified Multi-Domain Learning and Data Imputation using Adversarial
Autoencoder
- Authors: Andre Mendes, Julian Togelius, Leandro dos Santos Coelho
- Abstract summary: We present a novel framework that can combine multi-domain learning (MDL), data imputation (DI), and multi-task learning (MTL).
The core of our method is an adversarial autoencoder that can: (1) learn to produce domain-invariant embeddings to reduce the difference between domains; (2) learn the data distribution for each domain and correctly perform data imputation on missing data.
- Score: 5.933303832684138
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel framework that can combine multi-domain learning (MDL),
data imputation (DI) and multi-task learning (MTL) to improve performance for
classification and regression tasks in different domains. The core of our
method is an adversarial autoencoder that can: (1) learn to produce
domain-invariant embeddings to reduce the difference between domains; (2) learn
the data distribution for each domain and correctly perform data imputation on
missing data. For MDL, we use the Maximum Mean Discrepancy (MMD) measure to
align the domain distributions. For DI, we use an adversarial approach in which a
generator fills in missing values and a discriminator tries to distinguish
between real and imputed values. Finally, using the universal feature
representation in the embeddings, we train a classifier with MTL that, given
input from any domain, can predict labels for all domains. We demonstrate the
superior performance of our approach compared to other state-of-the-art methods
in three distinct settings: DG-DI in image recognition with unstructured data,
MTL-DI in grade estimation with structured data, and MDMTL-DI in a selection
process using mixed data.
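For the MDL component, the abstract relies on Maximum Mean Discrepancy to align domain distributions. Below is a minimal PyTorch sketch of a biased RBF-kernel MMD loss between embeddings from two domains; the function name, kernel choice, and bandwidth are illustrative assumptions, not the authors' code.

```python
import torch

def rbf_mmd(x, y, bandwidth=1.0):
    """Biased estimate of squared MMD between two embedding batches.

    x: (n, d) embeddings from one domain; y: (m, d) from another.
    """
    def kernel(a, b):
        # Pairwise squared Euclidean distances under a Gaussian kernel.
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * bandwidth ** 2))

    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Hypothetical usage: penalize domain discrepancy alongside the task loss.
# loss = task_loss + lambda_mmd * rbf_mmd(emb_domain_a, emb_domain_b)
```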
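For the DI component, the generator/discriminator game described above is close in spirit to GAIN-style imputation. The sketch below shows the core losses under assumed tensor shapes; the placeholder networks, the hint-free discriminator, and the loss weight are simplifications, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def imputation_losses(x, mask, generator, discriminator):
    """Adversarial data-imputation losses (GAIN-like sketch).

    x:    (n, d) data with missing entries zero-filled.
    mask: (n, d) float mask; 1 where a value is observed, 0 where missing.
    generator maps (n, 2d) -> (n, d); discriminator maps (n, d) -> (n, d) logits.
    """
    noise = torch.rand_like(x)
    # G sees observed values plus noise in the missing slots, plus the mask.
    x_hat = generator(torch.cat([x * mask + noise * (1 - mask), mask], dim=1))
    x_imputed = x * mask + x_hat * (1 - mask)

    # D classifies each entry as observed (1) or imputed (0).
    d_loss = F.binary_cross_entropy(
        torch.sigmoid(discriminator(x_imputed.detach())), mask)

    # G tries to fool D on missing entries and reconstruct observed ones.
    d_prob = torch.sigmoid(discriminator(x_imputed))
    g_adv = -(torch.log(d_prob + 1e-8) * (1 - mask)).mean()
    g_rec = ((x - x_hat) ** 2 * mask).mean()
    return d_loss, g_adv + 10.0 * g_rec  # 10.0 is an illustrative weight
```

In practice the two losses are minimized in alternating generator and discriminator steps, as in standard adversarial training.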
Related papers
- Virtual Classification: Modulating Domain-Specific Knowledge for
Multidomain Crowd Counting [67.38137379297717]
Multidomain crowd counting aims to learn a general model for multiple diverse datasets.
Deep networks prefer modeling distributions of the dominant domains instead of all domains, which is known as domain bias.
We propose a Modulating Domain-specific Knowledge Network (MDKNet) to handle the domain bias issue in multidomain crowd counting.
arXiv Detail & Related papers (2024-02-06T06:49:04Z)
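The MDKNet summary above gives no implementation detail; per-domain feature modulation is often realized with learned scale-and-shift (FiLM-style) parameters, so the sketch below illustrates only that generic idea, not MDKNet itself.

```python
import torch.nn as nn

class DomainModulation(nn.Module):
    """FiLM-style modulation: each domain learns a scale and a shift
    applied to shared backbone features (a generic stand-in for
    modulating domain-specific knowledge)."""

    def __init__(self, num_domains, feat_dim):
        super().__init__()
        self.scale = nn.Embedding(num_domains, feat_dim)
        self.shift = nn.Embedding(num_domains, feat_dim)
        nn.init.ones_(self.scale.weight)
        nn.init.zeros_(self.shift.weight)

    def forward(self, feats, domain_ids):
        # feats: (n, feat_dim); domain_ids: (n,) integer domain labels.
        return feats * self.scale(domain_ids) + self.shift(domain_ids)
```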
- Adapting Self-Supervised Representations to Multi-Domain Setups [47.03992469282679]
Current state-of-the-art self-supervised approaches are effective when trained on individual domains but show limited generalization on unseen domains.
We propose a general-purpose, lightweight Domain Disentanglement Module that can be plugged into any self-supervised encoder.
arXiv Detail & Related papers (2023-09-07T20:05:39Z)
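A plug-in disentanglement module of this kind is commonly built by splitting encoder features into a domain-invariant part for the downstream task and a domain-specific part supervised with domain labels. A minimal sketch under that assumption, not the paper's exact module:

```python
import torch.nn as nn

class DisentangleHead(nn.Module):
    """Lightweight head splitting encoder features into a domain-invariant
    part (used downstream) and a domain-specific part (trained to predict
    the domain). Illustrative only."""

    def __init__(self, feat_dim, num_domains):
        super().__init__()
        self.invariant = nn.Linear(feat_dim, feat_dim // 2)
        self.specific = nn.Linear(feat_dim, feat_dim // 2)
        self.domain_clf = nn.Linear(feat_dim // 2, num_domains)

    def forward(self, feats):
        z_inv = self.invariant(feats)    # fed to the downstream task
        z_spec = self.specific(feats)    # supervised with domain labels
        return z_inv, self.domain_clf(z_spec)
```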
- Maximal Domain Independent Representations Improve Transfer Learning [10.716812429325984]
Domain adaptation (DA) involves the decomposition of data representation into a domain-independent representation (DIRep) and a domain-dependent representation (DDRep).
We develop a new algorithm that imposes a stronger constraint, minimizing the DDRep with a KL divergence loss, in order to create a maximal DIRep that enhances transfer learning performance.
We demonstrate the equal-or-better performance of our approach against state-of-the-art algorithms by using several standard benchmark image datasets including Office.
arXiv Detail & Related papers (2023-06-01T00:46:40Z)
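The KL constraint on the DDRep can be read as pushing the domain-dependent code toward an uninformative prior, so that the DIRep must absorb the shared content. A minimal sketch, assuming a diagonal-Gaussian DDRep as in a VAE:

```python
import torch

def ddrep_kl_loss(mu, logvar):
    """KL(q(DDRep|x) || N(0, I)) for a diagonal-Gaussian DDRep.
    Driving this toward zero shrinks the domain-dependent code so the
    domain-independent representation must carry the content."""
    return 0.5 * torch.mean(
        torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=1))

# Hypothetical usage:
# loss = task_loss + recon_loss + beta * ddrep_kl_loss(mu, logvar)
```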
- Unsupervised Multi-Source Domain Adaptation for Person Re-Identification [39.817734080890695]
Unsupervised domain adaptation (UDA) methods for person re-identification (re-ID) aim at transferring re-ID knowledge from labeled source data to unlabeled target data.
We introduce the multi-source concept into UDA person re-ID field, where multiple source datasets are used during training.
The proposed method outperforms state-of-the-art UDA person re-ID methods by a large margin, and even achieves comparable performance to the supervised approaches without any post-processing techniques.
arXiv Detail & Related papers (2021-04-27T03:33:35Z)
- Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation [78.28390172958643]
We identify two key aspects that can help alleviate multiple domain shifts in multi-target domain adaptation (MTDA).
We propose Curriculum Graph Co-Teaching (CGCT), which uses dual classifier heads, one of them a graph convolutional network (GCN) that aggregates features from similar samples across the domains.
When the domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones.
arXiv Detail & Related papers (2021-04-01T23:41:41Z)
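Domain-aware Curriculum Learning adapts to the easier target domains first. A sketch of that sequencing, using mean prediction entropy as an assumed difficulty proxy (the paper's actual ranking criterion may differ):

```python
import torch

@torch.no_grad()
def order_targets_by_difficulty(model, target_loaders):
    """Rank unlabeled target domains easy-to-hard by the mean prediction
    entropy of the current model. Each loader yields (inputs, ...) batches."""
    scores = []
    for name, loader in target_loaders.items():
        entropies = []
        for x, *_ in loader:
            p = torch.softmax(model(x), dim=1)
            entropies.append(-(p * p.clamp_min(1e-8).log()).sum(dim=1).mean())
        scores.append((torch.stack(entropies).mean().item(), name))
    return [name for _, name in sorted(scores)]  # low entropy first = easier

# Hypothetical usage: adapt sequentially, easy to hard.
# for domain in order_targets_by_difficulty(model, target_loaders):
#     adapt(model, target_loaders[domain])
```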
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Discrepancy Minimization in Domain Generalization with Generative Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches learn domain-invariant representations across the source domains, but such representations fail to guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method, which provides a theoretical guarantee that the target error is upper bounded by the error in the labeling process of the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)
- Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware Parameterization [78.93669377251396]
Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
One existing approach solves the problem by conducting multi-domain learning, using shared parameters for joint training across domains.
We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters.
arXiv Detail & Related papers (2020-04-30T15:15:40Z)
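Domain- and task-aware parameterization is commonly realized as a shared encoder combined with separate per-domain and per-task parameters. The sketch below assumes that layout; all names and dimensions are illustrative:

```python
import torch.nn as nn

class DomainTaskModel(nn.Module):
    """Shared encoder plus separate parameters per domain and per task
    (an illustrative layout, not the paper's exact parameterization)."""

    def __init__(self, in_dim, hid_dim, domains, task_dims):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.domain_layers = nn.ModuleDict(
            {d: nn.Linear(hid_dim, hid_dim) for d in domains})
        self.task_heads = nn.ModuleDict(
            {t: nn.Linear(hid_dim, n) for t, n in task_dims.items()})

    def forward(self, x, domain, task):
        h = self.domain_layers[domain](self.shared(x))
        return self.task_heads[task](h)
```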
- Deep Domain-Adversarial Image Generation for Domain Generalisation [115.21519842245752]
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset with a different distribution.
To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains.
We propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG).
arXiv Detail & Related papers (2020-03-12T23:17:47Z)
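DDAIG trains a transformation network that perturbs inputs so the domain classifier is fooled while the label classifier still predicts correctly. A minimal sketch of the combined objective, with every network left as a placeholder:

```python
import torch.nn.functional as F

def ddaig_transform_loss(x, y, d, label_clf, domain_clf, transform_net, eps=0.3):
    """Objective for the transformation network (sketch).

    x: input images, y: class labels, d: domain labels; eps scales the
    learned perturbation. All networks are assumed placeholders.
    """
    x_fake = x + eps * transform_net(x)
    # Keep the label prediction correct on the perturbed input...
    loss_label = F.cross_entropy(label_clf(x_fake), y)
    # ...while making the domain classifier fail on it.
    loss_domain = -F.cross_entropy(domain_clf(x_fake), d)
    return loss_label + loss_domain
```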
- MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.