Discriminator-free Unsupervised Domain Adaptation for Multi-label Image Classification
- URL: http://arxiv.org/abs/2301.10611v3
- Date: Wed, 8 Nov 2023 13:29:57 GMT
- Title: Discriminator-free Unsupervised Domain Adaptation for Multi-label Image Classification
- Authors: Inder Pal Singh, Enjie Ghorbel, Anis Kacem, Arunkumar Rathinam and Djamila Aouada
- Abstract summary: A discriminator-free adversarial-based Unsupervised Domain Adaptation (UDA) method for Multi-Label Image Classification (MLIC) is proposed.
The proposed method is evaluated on several multi-label image datasets covering three different types of domain shift.
- Score: 11.825795835537324
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, a discriminator-free adversarial-based Unsupervised Domain
Adaptation (UDA) for Multi-Label Image Classification (MLIC) referred to as
DDA-MLIC is proposed. Recently, some attempts have been made to introduce
adversarial-based UDA methods in the context of MLIC. However, these methods,
which rely on an additional discriminator subnet, present one major shortcoming.
The learning of domain-invariant features may harm their task-specific
discriminative power, since the classification and discrimination tasks are
decoupled. Herein, we propose to overcome this issue by introducing a novel
adversarial critic that is directly deduced from the task-specific classifier.
Specifically, a two-component Gaussian Mixture Model (GMM) is fitted on the
source and target predictions in order to distinguish between two clusters.
This allows a Gaussian distribution to be extracted for each component. The
resulting Gaussian distributions are then used to formulate an adversarial
loss based on the Fréchet distance. The proposed method is evaluated on several
multi-label image datasets covering three different types of domain shift. The
obtained results demonstrate that DDA-MLIC outperforms existing
state-of-the-art methods in terms of precision while requiring a lower number
of parameters. The code is publicly available at github.com/cvi2snt/DDA-MLIC.
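The core mechanism described in the abstract can be illustrated with a minimal sketch: fit a two-component GMM on pooled prediction scores, recover the two Gaussian components, and measure the Fréchet (2-Wasserstein) distance between them, which in one dimension has a closed form. The synthetic scores and the 1-D setting below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def frechet_1d(mu1, sigma1, mu2, sigma2):
    # Fréchet (2-Wasserstein) distance between two 1-D Gaussians:
    # d^2 = (mu1 - mu2)^2 + (sigma1 - sigma2)^2
    return np.sqrt((mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2)

rng = np.random.default_rng(0)
# Hypothetical classifier prediction scores pooled from source and target,
# forming two clusters (stand-ins for the two GMM components in the paper).
scores = np.concatenate([rng.normal(0.2, 0.05, 500),
                         rng.normal(0.8, 0.05, 500)])

# Fit a two-component GMM and extract a Gaussian per component.
gmm = GaussianMixture(n_components=2, random_state=0).fit(scores.reshape(-1, 1))
mus = gmm.means_.ravel()
sigmas = np.sqrt(gmm.covariances_.ravel())

# The distance between the two fitted Gaussians can then drive an
# adversarial loss (here just computed, not backpropagated).
d = frechet_1d(mus[0], sigmas[0], mus[1], sigmas[1])
```

In the actual method this distance is turned into an adversarial critic deduced from the task-specific classifier itself, avoiding a separate discriminator subnet.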
Related papers
- ProtoGMM: Multi-prototype Gaussian-Mixture-based Domain Adaptation Model for Semantic Segmentation [0.8213829427624407]
Domain adaptive semantic segmentation aims to generate accurate and dense predictions for an unlabeled target domain.
We propose the ProtoGMM model, which incorporates the GMM into contrastive losses to perform guided contrastive learning.
To achieve increased intra-class semantic similarity, decreased inter-class similarity, and domain alignment between the source and target domains, we employ multi-prototype contrastive learning.
arXiv Detail & Related papers (2024-06-27T14:50:50Z)
- Generative Model Based Noise Robust Training for Unsupervised Domain Adaptation [108.11783463263328]
This paper proposes a Generative model-based Noise-Robust Training method (GeNRT).
It eliminates domain shift while mitigating label noise.
Experiments on Office-Home, PACS, and Digit-Five show that our GeNRT achieves comparable performance to state-of-the-art methods.
arXiv Detail & Related papers (2023-03-10T06:43:55Z)
- Robust Domain Adaptive Object Detection with Unified Multi-Granularity Alignment [59.831917206058435]
Domain adaptive detection aims to improve the generalization of detectors on the target domain.
Recent approaches achieve domain adaptation through feature alignment at different granularities via adversarial learning.
We introduce a unified multi-granularity alignment (MGA)-based detection framework for domain-invariant feature learning.
arXiv Detail & Related papers (2023-01-01T08:38:07Z)
- Reusing the Task-specific Classifier as a Discriminator: Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA).
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z)
- Cross-Domain Gradient Discrepancy Minimization for Unsupervised Domain Adaptation [22.852237073492894]
Unsupervised Domain Adaptation (UDA) aims to generalize the knowledge learned from a well-labeled source domain to an unlabeled target domain.
We propose a cross-domain gradient discrepancy minimization (CGDM) method which explicitly minimizes the discrepancy between gradients generated by source samples and target samples.
In order to compute the gradient signal of target samples, we further obtain target pseudo labels through clustering-based self-supervised learning.
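The gradient-discrepancy idea can be sketched in a toy form: compute the loss gradient on a source batch and on a pseudo-labeled target batch, then penalize the angle between the two gradient vectors. The linear model, the thresholded pseudo labels, and all names below are illustrative assumptions standing in for CGDM's deep network and clustering step.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logistic(w, X, y):
    # Gradient of binary cross-entropy w.r.t. the weights of a linear model.
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def gradient_discrepancy(g_src, g_tgt):
    # Cosine distance between source and target gradient directions;
    # minimizing it aligns the two update directions.
    cos = g_src @ g_tgt / (np.linalg.norm(g_src) * np.linalg.norm(g_tgt) + 1e-12)
    return 1.0 - cos

rng = np.random.default_rng(0)
w = rng.normal(size=3)
X_src, y_src = rng.normal(size=(64, 3)), rng.integers(0, 2, 64)
X_tgt = rng.normal(size=(64, 3))
# Thresholded predictions as a crude stand-in for clustering-based pseudo labels.
y_pseudo = (sigmoid(X_tgt @ w) > 0.5).astype(float)

d = gradient_discrepancy(grad_logistic(w, X_src, y_src),
                         grad_logistic(w, X_tgt, y_pseudo))
```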
arXiv Detail & Related papers (2021-06-08T07:35:40Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
- Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation [67.83872616307008]
Unsupervised Domain Adaptation (UDA) attempts to recognize unlabeled target samples by building a learning model from a differently-distributed labeled source domain.
In this paper, we propose a novel Adversarial Dual Distinct Classifiers Network (AD$^2$CN) to align the source and target domain data distributions simultaneously while matching task-specific category boundaries.
To be specific, a domain-invariant feature generator is exploited to embed the source and target data into a latent common space with the guidance of discriminative cross-domain alignment.
arXiv Detail & Related papers (2020-08-27T01:29:10Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
- Target-Independent Domain Adaptation for WBC Classification using Generative Latent Search [20.199195698983715]
Unsupervised Domain Adaptation (UDA) techniques presuppose the existence of a sufficient amount of unlabelled target data.
We propose a method for UDA that is devoid of the need for target data.
We prove the existence of such a clone, given that an infinite number of data points can be sampled from the source distribution.
arXiv Detail & Related papers (2020-05-11T20:58:23Z)
- MiniMax Entropy Network: Learning Category-Invariant Features for Domain Adaptation [29.43532067090422]
We propose an easy-to-implement method dubbed MiniMax Entropy Networks (MMEN) based on adversarial learning.
Unlike most existing approaches which employ a generator to deal with domain difference, MMEN focuses on learning the categorical information from unlabeled target samples.
arXiv Detail & Related papers (2019-04-21T13:39:29Z)
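The entropy-based adversarial game behind approaches like MMEN rests on a simple quantity: the Shannon entropy of the per-sample class probabilities, which one branch maximizes and the other minimizes on unlabeled target samples. The sketch below only computes that quantity on random logits; the minimax training loop itself and all names here are illustrative assumptions, not MMEN's implementation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable row-wise softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(probs, eps=1e-12):
    # Shannon entropy of each row of class probabilities; high entropy
    # means an uncertain prediction on that (target) sample.
    return -(probs * np.log(probs + eps)).sum(axis=1)

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 5))  # 8 hypothetical target samples, 5 classes
H = entropy(softmax(logits))
# In a minimax scheme, one player would maximize the mean of H on target
# samples while the other minimizes it, driving category-invariant features.
```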
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.