Domain Adversarial Active Learning for Domain Generalization
Classification
- URL: http://arxiv.org/abs/2403.06174v1
- Date: Sun, 10 Mar 2024 10:59:22 GMT
- Title: Domain Adversarial Active Learning for Domain Generalization
Classification
- Authors: Jianting Chen, Ling Ding, Yunxiao Yang, Zaiyuan Di, and Yang Xiang
- Abstract summary: Domain generalization models aim to learn cross-domain knowledge from source domain data, to improve performance on unknown target domains.
Recent research has demonstrated that diverse and rich source domain samples can enhance domain generalization capability.
We propose a domain-adversarial active learning (DAAL) algorithm for classification tasks in domain generalization.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization models aim to learn cross-domain knowledge from source
domain data, to improve performance on unknown target domains. Recent research
has demonstrated that diverse and rich source domain samples can enhance domain
generalization capability. This paper argues that the impact of each sample on
the model's generalization ability varies. Despite its small scale, a
high-quality dataset can still attain a certain level of generalization
ability. Motivated by this, we propose a domain-adversarial active learning
(DAAL) algorithm for classification tasks in domain generalization. First, we
observe that the objective of such tasks is to maximize the inter-class distance
within the same domain and minimize the intra-class distance across different
domains. To achieve this objective, we design a domain adversarial selection
method that prioritizes challenging samples. Second, we posit that even in a
converged model, there are subsets of features that lack discriminatory power
within each domain. We attempt to identify these feature subsets and optimize
them by a constraint loss. We validate and analyze our DAAL algorithm on
multiple domain generalization datasets, comparing it with various domain
generalization algorithms and active learning algorithms. Our results
demonstrate that the DAAL algorithm can achieve strong generalization ability
with fewer data resources, thereby reducing data annotation costs in domain
generalization tasks.
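The domain-adversarial selection method described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the scoring rule (cross-domain intra-class distance minus within-domain inter-class margin), the use of class centroids, and the function names are all assumptions made for illustration.

```python
import numpy as np

def select_hard_samples(feats, labels, domains, k):
    """Hypothetical sketch of domain-adversarial selection: a sample is
    'challenging' if it lies close to other classes within its own domain
    (small within-domain inter-class distance) and far from its own class
    in other domains (large cross-domain intra-class distance)."""
    # Class centroids per (domain, class) pair in feature space.
    centroids = {}
    for d in np.unique(domains):
        for c in np.unique(labels):
            mask = (domains == d) & (labels == c)
            if mask.any():
                centroids[(d, c)] = feats[mask].mean(axis=0)

    scores = np.zeros(len(feats))
    for i, (x, y, d) in enumerate(zip(feats, labels, domains)):
        other_cls = [np.linalg.norm(x - m)
                     for (dd, cc), m in centroids.items()
                     if dd == d and cc != y]
        same_cls_other_dom = [np.linalg.norm(x - m)
                              for (dd, cc), m in centroids.items()
                              if dd != d and cc == y]
        inter = min(other_cls) if other_cls else 0.0
        intra = max(same_cls_other_dom) if same_cls_other_dom else 0.0
        scores[i] = intra - inter  # higher score = harder sample
    return np.argsort(-scores)[:k]
```

In an active-learning loop, the `k` highest-scoring unlabeled samples would be sent for annotation before retraining.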
Related papers
- Revisiting the Domain Shift and Sample Uncertainty in Multi-source
Active Domain Transfer [69.82229895838577]
Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a new target domain by actively selecting a limited number of target data to annotate.
This setting neglects the more practical scenario where training data are collected from multiple sources.
This motivates us to target a new and challenging setting of knowledge transfer that extends ADA from a single source domain to multiple source domains.
arXiv Detail & Related papers (2023-11-21T13:12:21Z)
- Domain-aware Triplet loss in Domain Generalization [0.0]
Domain shift is caused by discrepancies in the distributions of the testing and training data.
We design a domain-aware triplet loss for domain generalization to help the model cluster similar semantic features.
Our algorithm is designed to disperse domain information in the embedding space.
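A triplet loss of this kind can be sketched as below. This is a generic sketch under assumptions: the summary does not give the exact formulation, so here the positive is taken to share the anchor's class but come from a different domain, which pulls same-class features together across domains while pushing other classes away.

```python
import numpy as np

def domain_aware_triplet_loss(anchor, positive, negative, margin=1.0):
    """Illustrative triplet loss: 'positive' is assumed to be a same-class
    sample from a different domain, 'negative' a different-class sample.
    Minimizing this clusters semantic features while spreading domains."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```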
arXiv Detail & Related papers (2023-03-01T14:02:01Z)
- Domain Adaptation Principal Component Analysis: base linear method for
learning with out-of-distribution data [55.41644538483948]
Domain adaptation is a popular paradigm in modern machine learning.
We present a method called Domain Adaptation Principal Component Analysis (DAPCA)
DAPCA finds a linear reduced data representation useful for solving the domain adaptation task.
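The base linear method underlying DAPCA is ordinary PCA, which can be sketched as below. This sketch shows only plain PCA; the domain-adaptation-specific sample-pair weighting that distinguishes DAPCA is omitted, and the function name is an illustrative assumption.

```python
import numpy as np

def pca_projection(X, n_components):
    """Plain PCA via eigendecomposition of the covariance matrix.
    DAPCA (per the summary) extends this base method with
    domain-adaptation-aware weights, not shown here."""
    Xc = X - X.mean(axis=0)                    # center the data
    cov = Xc.T @ Xc / (len(X) - 1)             # sample covariance
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues ascending
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return Xc @ top                            # reduced representation
```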
arXiv Detail & Related papers (2022-08-28T21:10:56Z)
- Improving Multi-Domain Generalization through Domain Re-labeling [31.636953426159224]
We study the important link between pre-specified domain labels and the generalization performance.
We introduce a general approach for multi-domain generalization, MulDEns, that uses an ERM-based deep ensembling backbone.
We show that MulDEns does not require tailoring the augmentation strategy or the training process specific to a dataset.
arXiv Detail & Related papers (2021-12-17T23:21:50Z)
- Failure Modes of Domain Generalization Algorithms [29.772370301145543]
We propose an evaluation framework for domain generalization algorithms.
We show that the largest contributor to the generalization error varies across methods, datasets, regularization strengths and even training lengths.
arXiv Detail & Related papers (2021-11-26T20:04:19Z)
- Adaptive Domain-Specific Normalization for Generalizable Person
Re-Identification [81.30327016286009]
We propose a novel adaptive domain-specific normalization approach (AdsNorm) for generalizable person Re-ID.
arXiv Detail & Related papers (2021-05-07T02:54:55Z)
- Open Domain Generalization with Domain-Augmented Meta-Learning [83.59952915761141]
We study a novel and practical problem of Open Domain Generalization (OpenDG)
We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations.
Experiment results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
arXiv Detail & Related papers (2021-04-08T09:12:24Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- Discrepancy Minimization in Domain Generalization with Generative
Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches attempt to solve domain generalization by learning domain-invariant representations across the source domains, but these fail to guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method which provides a theoretical guarantee that is upper bounded by the error in the labeling process of the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.