Domain Adaptation for Multi-label Image Classification: a Discriminator-free Approach
- URL: http://arxiv.org/abs/2505.14333v1
- Date: Tue, 20 May 2025 13:18:20 GMT
- Title: Domain Adaptation for Multi-label Image Classification: a Discriminator-free Approach
- Authors: Inder Pal Singh, Enjie Ghorbel, Anis Kacem, Djamila Aouada
- Abstract summary: This paper introduces a discriminator-free adversarial-based approach termed DDA-MLIC for Unsupervised Domain Adaptation (UDA) in Multi-Label Image Classification. We employ a two-component Gaussian Mixture Model (GMM) to model both source and target predictions, distinguishing between two distinct clusters. The proposed framework is therefore not only fully differentiable but also cost-effective, as it avoids the expensive iterative process usually induced by the standard EM method.
- Score: 11.075823608370909
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a discriminator-free adversarial-based approach termed DDA-MLIC for Unsupervised Domain Adaptation (UDA) in the context of Multi-Label Image Classification (MLIC). While recent efforts have explored adversarial-based UDA methods for MLIC, they typically include an additional discriminator subnet. Nevertheless, decoupling the classification and the discrimination tasks may harm their task-specific discriminative power. Herein, we address this challenge by presenting a novel adversarial critic directly derived from the task-specific classifier. Specifically, we employ a two-component Gaussian Mixture Model (GMM) to model both source and target predictions, distinguishing between two distinct clusters. Instead of using the traditional Expectation Maximization (EM) algorithm, our approach utilizes a Deep Neural Network (DNN) to estimate the parameters of each GMM component. Subsequently, the source and target GMM parameters are leveraged to formulate an adversarial loss using the Fréchet distance. The proposed framework is therefore not only fully differentiable but is also cost-effective as it avoids the expensive iterative process usually induced by the standard EM method. The proposed method is evaluated on several multi-label image datasets covering three different types of domain shift. The obtained results demonstrate that DDA-MLIC outperforms existing state-of-the-art methods in terms of precision while requiring a lower number of parameters. The code is made publicly available at github.com/cvi2snt/DDA-MLIC.
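The adversarial loss described above rests on a closed-form quantity: the Fréchet (2-Wasserstein) distance between Gaussians, which for univariate components reduces to (mu1 - mu2)^2 + (sigma1 - sigma2)^2 under the square root. The sketch below illustrates that computation on hypothetical per-component GMM parameters; it is a minimal illustration of the distance itself, not the authors' implementation, and the component values shown are invented for the example.

```python
import math

def frechet_gaussian(mu1, sigma1, mu2, sigma2):
    """Closed-form Fréchet (2-Wasserstein) distance between two 1-D Gaussians."""
    return math.sqrt((mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2)

# Hypothetical (mean, std) pairs for the two GMM components fitted to
# source and target prediction scores (e.g., "negative" and "positive" clusters).
src = [(0.15, 0.05), (0.85, 0.07)]
tgt = [(0.30, 0.10), (0.70, 0.12)]

# A simple component-wise adversarial objective: sum the Fréchet distances
# between matched source/target components.
loss = sum(frechet_gaussian(ms, ss, mt, st)
           for (ms, ss), (mt, st) in zip(src, tgt))
```

Because the distance is an elementary function of the predicted means and standard deviations, it is differentiable end-to-end, which is what lets the paper drop the iterative EM procedure.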
Related papers
- ProtoGMM: Multi-prototype Gaussian-Mixture-based Domain Adaptation Model for Semantic Segmentation [0.8213829427624407]
Domain adaptive semantic segmentation aims to generate accurate and dense predictions for an unlabeled target domain.
We propose the ProtoGMM model, which incorporates the GMM into contrastive losses to perform guided contrastive learning.
To achieve increased intra-class semantic similarity, decreased inter-class similarity, and domain alignment between the source and target domains, we employ multi-prototype contrastive learning.
arXiv Detail & Related papers (2024-06-27T14:50:50Z) - Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degenerate when training data are different from testing data.
We propose a novel adversarial information network (AIN) to address it.
arXiv Detail & Related papers (2023-05-23T02:14:11Z) - Algorithm-Dependent Bounds for Representation Learning of Multi-Source
Domain Adaptation [7.6249291891777915]
We use information-theoretic tools to derive a novel analysis of Multi-source Domain Adaptation (MDA) from the representation learning perspective.
We propose a novel deep MDA algorithm, implicitly addressing the target shift through joint alignment.
The proposed algorithm has comparable performance to the state-of-the-art on target-shifted MDA benchmark with improved memory efficiency.
arXiv Detail & Related papers (2023-04-04T18:32:20Z) - Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
arXiv Detail & Related papers (2023-03-10T14:11:09Z) - Discriminator-free Unsupervised Domain Adaptation for Multi-label Image
Classification [11.825795835537324]
A discriminator-free adversarial-based Unsupervised Domain Adaptation (UDA) method for Multi-Label Image Classification (MLIC) is proposed.
The proposed method is evaluated on several multi-label image datasets covering three different types of domain shift.
arXiv Detail & Related papers (2023-01-25T14:45:13Z) - Robust Domain Adaptive Object Detection with Unified Multi-Granularity Alignment [59.831917206058435]
Domain adaptive detection aims to improve the generalization of detectors on target domain.
Recent approaches achieve domain adaption through feature alignment in different granularities via adversarial learning.
We introduce a unified multi-granularity alignment (MGA)-based detection framework for domain-invariant feature learning.
arXiv Detail & Related papers (2023-01-01T08:38:07Z) - ADPS: Asymmetric Distillation Post-Segmentation for Image Anomaly
Detection [75.68023968735523]
Knowledge Distillation-based Anomaly Detection (KDAD) methods rely on the teacher-student paradigm to detect and segment anomalous regions.
We propose an innovative approach called Asymmetric Distillation Post-Segmentation (ADPS)
Our ADPS employs an asymmetric distillation paradigm that takes distinct forms of the same image as the input of the teacher-student networks.
We show that ADPS significantly improves Average Precision (AP) metric by 9% and 20% on the MVTec AD and KolektorSDD2 datasets.
arXiv Detail & Related papers (2022-10-19T12:04:47Z) - Reusing the Task-specific Classifier as a Discriminator:
Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA)
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z) - Uncertainty-aware Clustering for Unsupervised Domain Adaptive Object
Re-identification [123.75412386783904]
State-of-the-art object Re-ID approaches adopt clustering algorithms to generate pseudo-labels for the unlabeled target domain.
We propose an uncertainty-aware clustering framework (UCF) for UDA tasks.
Our UCF method consistently achieves state-of-the-art performance in multiple UDA tasks for object Re-ID.
arXiv Detail & Related papers (2021-08-22T09:57:14Z) - Cross-Domain Gradient Discrepancy Minimization for Unsupervised Domain
Adaptation [22.852237073492894]
Unsupervised Domain Adaptation (UDA) aims to generalize the knowledge learned from a well-labeled source domain to an unlabeled target domain.
We propose a cross-domain discrepancy minimization (CGDM) method which explicitly minimizes the discrepancy of gradients generated by source samples and target samples.
In order to compute the gradient signal of target samples, we further obtain target pseudo labels through a clustering-based self-supervised learning.
arXiv Detail & Related papers (2021-06-08T07:35:40Z) - Generalizable Representation Learning for Mixture Domain Face
Anti-Spoofing [53.82826073959756]
Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention due to their robustness in unseen scenarios.
We propose domain dynamic adjustment meta-learning (D2AM) without using domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.