MiddleGAN: Generate Domain Agnostic Samples for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2211.03144v1
- Date: Sun, 6 Nov 2022 15:09:36 GMT
- Title: MiddleGAN: Generate Domain Agnostic Samples for Unsupervised Domain Adaptation
- Authors: Ye Gao, Zhendong Chu, Hongning Wang, John Stankovic
- Abstract summary: We propose to let the classifier that performs the final classification task on the target domain implicitly learn the invariant features needed for classification.
This is achieved by feeding the classifier, during training, generated fake samples that are similar to samples from both the source and target domains.
We propose a novel variation of generative adversarial networks (GAN), called the MiddleGAN, that generates fake samples that are similar to samples from both the source and target domains.
- Score: 35.00283311401667
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, machine learning has achieved impressive results across
different application areas. However, machine learning algorithms do not
necessarily perform well on a new domain whose distribution differs from that
of their training set. Domain Adaptation (DA) is used to mitigate this problem.
One approach taken by existing DA algorithms is to find domain-invariant
features whose distribution in the source domain is the same as their
distribution in the target domain. In this paper, we propose to let the
classifier that performs the final classification task on the target domain
implicitly learn the invariant features. This is achieved by feeding the
classifier, during training, generated fake samples that are similar to
samples from both the source and target domains. We call these generated
samples domain-agnostic samples. To accomplish this, we propose a novel
variation of
generative adversarial networks (GAN), called the MiddleGAN, that generates
fake samples that are similar to samples from both the source and target
domains, using two discriminators and one generator. We extend the theory of
GAN to show that there exist optimal solutions for the parameters of the two
discriminators and one generator in MiddleGAN, and empirically show that the
samples generated by the MiddleGAN are similar to both samples from the source
domain and samples from the target domain. We conducted extensive evaluations
using 24 benchmarks; on these benchmarks, we compare MiddleGAN against various
state-of-the-art algorithms and outperform the state of the art by up to 20.1%
on certain benchmarks.
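The two-discriminators-one-generator training signal described in the abstract can be sketched as follows. This is an illustrative NumPy sketch, not the paper's exact loss: `middlegan_losses` is a hypothetical helper, and the standard non-saturating binary cross-entropy form is assumed.

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy for discriminator outputs in (0, 1)."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean())

def middlegan_losses(d_src_real, d_src_fake, d_tgt_real, d_tgt_fake):
    """Arguments are arrays of discriminator outputs in (0, 1): d_src_* from
    the source-domain discriminator, d_tgt_* from the target-domain
    discriminator; *_fake are their scores on generated samples."""
    # Each discriminator learns to separate its own domain's real samples
    # from the generator's fakes.
    loss_d_src = bce(d_src_real, 1.0) + bce(d_src_fake, 0.0)
    loss_d_tgt = bce(d_tgt_real, 1.0) + bce(d_tgt_fake, 0.0)
    # The single generator tries to fool BOTH discriminators at once, which
    # pushes its samples toward a domain-agnostic "middle" distribution.
    loss_g = bce(d_src_fake, 1.0) + bce(d_tgt_fake, 1.0)
    return loss_d_src, loss_d_tgt, loss_g
```

The key difference from a standard GAN is the last line: the generator's loss sums the fooling terms for both discriminators, so fakes that resemble only one domain are still penalized.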
Related papers
- ProtoGMM: Multi-prototype Gaussian-Mixture-based Domain Adaptation Model for Semantic Segmentation [0.8213829427624407]
Domain adaptive semantic segmentation aims to generate accurate and dense predictions for an unlabeled target domain.
We propose the ProtoGMM model, which incorporates the GMM into contrastive losses to perform guided contrastive learning.
To achieve increased intra-class semantic similarity, decreased inter-class similarity, and domain alignment between the source and target domains, we employ multi-prototype contrastive learning.
arXiv Detail & Related papers (2024-06-27T14:50:50Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Generation, augmentation, and alignment: A pseudo-source domain based method for source-free domain adaptation [2.774526723254576]
Conventional DA methods need to access both labeled source samples and unlabeled target samples simultaneously to train the model.
In this paper, inspired by this observation, we propose a novel method based on the pseudo-source domain.
The results on three real-world datasets verify the effectiveness of the proposed method.
arXiv Detail & Related papers (2021-09-09T03:21:58Z)
- Cross-Domain Gradient Discrepancy Minimization for Unsupervised Domain Adaptation [22.852237073492894]
Unsupervised Domain Adaptation (UDA) aims to generalize the knowledge learned from a well-labeled source domain to an unlabeled target domain.
We propose a Cross-domain Gradient Discrepancy Minimization (CGDM) method that explicitly minimizes the discrepancy between the gradients generated by source samples and target samples.
To compute the gradient signal of target samples, we obtain target pseudo-labels through clustering-based self-supervised learning.
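A gradient-discrepancy term of this kind can be sketched as below. `gradient_discrepancy` is a hypothetical helper, and the cosine-distance form between flattened gradients is one common choice, not necessarily the paper's exact formulation.

```python
import numpy as np

def gradient_discrepancy(grad_src, grad_tgt):
    """Cosine distance between the flattened loss gradients computed on
    source samples and on pseudo-labeled target samples; minimizing it
    encourages both domains to push the model in the same direction."""
    gs, gt = np.ravel(grad_src), np.ravel(grad_tgt)
    denom = np.linalg.norm(gs) * np.linalg.norm(gt) + 1e-12
    return float(1.0 - gs @ gt / denom)
```

The value is 0 when the two gradients point the same way and approaches 2 when they are opposed, so adding it to the training loss penalizes updates that help one domain at the other's expense.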
arXiv Detail & Related papers (2021-06-08T07:35:40Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Curriculum CycleGAN for Textual Sentiment Domain Adaptation with Multiple Sources [68.31273535702256]
We propose a novel instance-level MDA framework, named curriculum cycle-consistent generative adversarial network (C-CycleGAN)
C-CycleGAN consists of three components: (1) pre-trained text encoder which encodes textual input from different domains into a continuous representation space, (2) intermediate domain generator with curriculum instance-level adaptation which bridges the gap across source and target domains, and (3) task classifier trained on the intermediate domain for final sentiment classification.
We conduct extensive experiments on three benchmark datasets and achieve substantial gains over state-of-the-art DA approaches.
arXiv Detail & Related papers (2020-11-17T14:50:55Z)
- Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation [67.83872616307008]
Unsupervised Domain Adaptation (UDA) attempts to recognize unlabeled target samples by building a learning model from a differently-distributed labeled source domain.
In this paper, we propose a novel Adversarial Dual Distinct Classifiers Network (AD$^2$CN) to align the source and target domain data distributions while matching task-specific category boundaries.
To be specific, a domain-invariant feature generator is exploited to embed the source and target data into a latent common space with the guidance of discriminative cross-domain alignment.
arXiv Detail & Related papers (2020-08-27T01:29:10Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims at adapting a model trained on the well-labeled source domain to the unlabeled target domain lying in a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.