DICS: Find Domain-Invariant and Class-Specific Features for Out-of-Distribution Generalization
- URL: http://arxiv.org/abs/2409.08557v1
- Date: Fri, 13 Sep 2024 06:20:21 GMT
- Title: DICS: Find Domain-Invariant and Class-Specific Features for Out-of-Distribution Generalization
- Authors: Qiaowei Miao, Yawei Luo, Yi Yang
- Abstract summary: In vision tasks, both domain-related and class-shared features act as confounders that hinder generalization.
We propose a DICS model to extract Domain-Invariant and Class-Specific features.
DICS effectively identifies the key features of each class in target domains.
- Score: 26.382349137191547
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While deep neural networks have made remarkable progress in various vision tasks, their performance typically deteriorates when tested in out-of-distribution (OOD) scenarios. Many OOD methods focus on extracting domain-invariant features but neglect whether these features are unique to each class. Even if some features are domain-invariant, they cannot serve as key classification criteria if shared across different classes. In OOD tasks, both domain-related and class-shared features act as confounders that hinder generalization. In this paper, we propose a DICS model to extract Domain-Invariant and Class-Specific features, including Domain Invariance Testing (DIT) and Class Specificity Testing (CST), which mitigate the effects of spurious correlations introduced by confounders. DIT learns domain-related features of each source domain and removes them from inputs to isolate domain-invariant class-related features. DIT ensures domain invariance by aligning same-class features across different domains. Then, CST calculates soft labels for those features by comparing them with features learned in previous steps. We optimize the cross-entropy between the soft labels and their true labels, which enhances same-class similarity and different-class distinctiveness, thereby reinforcing class specificity. Extensive experiments on widely-used benchmarks demonstrate the effectiveness of our proposed algorithm. Additional visualizations further demonstrate that DICS effectively identifies the key features of each class in target domains.
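The abstract describes DIT and CST only at a high level, so the following is a minimal sketch of how the two tests might be composed, assuming a shared feature extractor, a learnable bank of per-domain features, and a memory of per-class features accumulated over previous steps. All names, shapes, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dit_loss(feats, domain_ids, class_ids, domain_bank):
    """Domain Invariance Testing (sketch).

    Subtracts a learned per-domain feature from each input feature to
    isolate domain-invariant, class-related content, then aligns
    same-class features drawn from different domains.
    """
    # Remove learned domain-related features from the inputs.
    invariant = feats - domain_bank[domain_ids]        # (B, D)

    # Align same-class features across different domains.
    align = feats.new_zeros(())
    pairs = 0
    B = feats.size(0)
    for i in range(B):
        for j in range(i + 1, B):
            if class_ids[i] == class_ids[j] and domain_ids[i] != domain_ids[j]:
                align = align + (1 - F.cosine_similarity(
                    invariant[i], invariant[j], dim=0))
                pairs += 1
    return invariant, align / max(pairs, 1)

def cst_loss(invariant, class_ids, class_memory, tau=0.1):
    """Class Specificity Testing (sketch).

    Compares current features with class features stored from previous
    steps to form soft labels, then takes cross-entropy against the true
    labels, pulling same-class features together and pushing different
    classes apart.
    """
    sims = F.cosine_similarity(
        invariant.unsqueeze(1), class_memory.unsqueeze(0), dim=-1)  # (B, C)
    soft_labels = sims / tau     # temperature-scaled similarity logits
    return F.cross_entropy(soft_labels, class_ids)
```

In training, these two losses would presumably be added to a standard classification objective; the abstract confirms only the same-class alignment across domains and the cross-entropy between soft labels and true labels, so everything else here is a guess at the glue code.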
Related papers
- EIANet: A Novel Domain Adaptation Approach to Maximize Class Distinction with Neural Collapse Principles [15.19374752514876]
Source-free domain adaptation (SFDA) aims to transfer knowledge from a labelled source domain to an unlabelled target domain.
A major challenge in SFDA is deriving accurate categorical information for the target domain.
We introduce a novel ETF-Informed Attention Network (EIANet) to separate class prototypes.
arXiv Detail & Related papers (2024-07-23T05:31:05Z)
- Distributional Shift Adaptation using Domain-Specific Features [41.91388601229745]
In open-world scenarios, streaming big data can be Out-Of-Distribution (OOD).
We propose a simple yet effective approach that relies on feature correlations in general, regardless of whether the features are invariant or not.
Our approach uses the most confidently predicted samples identified by an OOD base model to train a new model that effectively adapts to the target domain; a minimal sketch of this selection step appears after this list.
arXiv Detail & Related papers (2022-11-09T04:16:21Z)
- Unsupervised Domain Adaptation via Style-Aware Self-intermediate Domain [52.783709712318405]
Unsupervised domain adaptation (UDA), which transfers knowledge from a label-rich source domain to a related but unlabeled target domain, has attracted considerable attention.
We propose a novel style-aware feature fusion method (SAFF) to bridge the large domain gap and transfer knowledge while alleviating the loss of class-discriminative information.
arXiv Detail & Related papers (2022-09-05T10:06:03Z)
- Domain-invariant Feature Exploration for Domain Generalization [35.99082628524934]
We argue that domain-invariant features should originate from both internal and mutual sides.
We propose DIFEX for Domain-Invariant Feature EXploration.
Experiments on both time-series and visual benchmarks demonstrate that the proposed DIFEX achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-07-25T09:55:55Z)
- Domain Attention Consistency for Multi-Source Domain Adaptation [100.25573559447551]
The key design is a feature channel attention module that aims to identify transferable features (attributes).
Experiments on three MSDA benchmarks show that our DAC-Net achieves new state-of-the-art performance on all of them.
arXiv Detail & Related papers (2021-11-06T15:56:53Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets (a generic cross-domain contrastive loss is sketched after this list).
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation [67.83872616307008]
Unsupervised domain adaptation (UDA) attempts to recognize unlabeled target samples by building a learning model from a differently-distributed labeled source domain.
In this paper, we propose a novel Adversarial Dual Distinct Classifiers Network (AD$^2$CN) to align the source and target domain data distributions while simultaneously matching task-specific category boundaries.
To be specific, a domain-invariant feature generator is exploited to embed the source and target data into a latent common space with the guidance of discriminative cross-domain alignment.
arXiv Detail & Related papers (2020-08-27T01:29:10Z)
- Domain Generalization via Optimal Transport with Metric Similarity Learning [16.54463315552112]
Generalizing knowledge to unseen domains, where data and labels are unavailable, is crucial for machine learning models.
We tackle the domain generalization problem to learn from multiple source domains and generalize to a target domain with unknown statistics.
arXiv Detail & Related papers (2020-07-21T02:56:05Z)
- Self-Challenging Improves Cross-Domain Generalization [81.99554996975372]
Convolutional Neural Networks (CNNs) conduct image classification by activating dominant features that correlate with labels.
We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNNs to out-of-domain data.
RSC iteratively challenges the dominant features activated on the training data and forces the network to activate the remaining features that correlate with labels; a sketch of this feature-muting idea appears after this list.
arXiv Detail & Related papers (2020-07-05T21:42:26Z)
- Learning Class Regularized Features for Action Recognition [68.90994813947405]
We introduce a novel method named Class Regularization that performs class-based regularization of layer activations.
We show that using Class Regularization blocks in state-of-the-art CNN architectures for action recognition yields consistent gains of 1.8%, 1.2%, and 1.4% on the Kinetics, UCF-101, and HMDB-51 datasets, respectively.
arXiv Detail & Related papers (2020-02-07T07:27:49Z)
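For the "Distributional Shift Adaptation using Domain-Specific Features" entry above, the adaptation recipe is concrete enough to sketch: an OOD base model labels unlabeled target data, only its most confident predictions are kept, and a new model is trained on those pseudo-labels. The threshold value and all names below are illustrative assumptions, not the authors' code.

```python
import torch

@torch.no_grad()
def select_confident(base_model, target_loader, threshold=0.9):
    """Keep target samples whose max softmax probability exceeds the
    threshold, paired with the base model's predicted pseudo-labels."""
    base_model.eval()
    keep_x, keep_y = [], []
    for x in target_loader:            # unlabeled target-domain batches
        probs = base_model(x).softmax(dim=-1)
        conf, pseudo = probs.max(dim=-1)
        mask = conf > threshold
        keep_x.append(x[mask])
        keep_y.append(pseudo[mask])
    return torch.cat(keep_x), torch.cat(keep_y)

# The selected (sample, pseudo-label) pairs then train a fresh model with
# a standard cross-entropy objective, adapting it to the target domain.
```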
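The "Cross-domain Contrastive Learning for Unsupervised Domain Adaptation" entry referenced above aligns source and target features contrastively; since the paper's exact objective is not given here, this is a generic cross-domain InfoNCE stand-in in which a target feature is attracted to same-class source features (class assignments for the target would be pseudo-labels in the unsupervised setting) and repelled from the rest.

```python
import torch
import torch.nn.functional as F

def cross_domain_info_nce(src_feats, src_labels, tgt_feats, tgt_labels, tau=0.07):
    """Generic cross-domain InfoNCE (sketch): each target anchor is pulled
    toward source features sharing its (pseudo-)label and pushed away from
    the other source features in the batch."""
    src = F.normalize(src_feats, dim=-1)
    tgt = F.normalize(tgt_feats, dim=-1)
    logits = tgt @ src.t() / tau                    # (Bt, Bs) similarities
    pos = (tgt_labels.unsqueeze(1) == src_labels.unsqueeze(0)).float()
    log_prob = logits.log_softmax(dim=-1)
    # Average log-likelihood of the positive source samples per anchor;
    # anchors with no positive in the batch contribute zero loss.
    loss = -(pos * log_prob).sum(dim=-1) / pos.sum(dim=-1).clamp(min=1)
    return loss.mean()
```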
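Finally, a simplified sketch of the feature muting behind "Self-Challenging Improves Cross-Domain Generalization" (RSC): the feature units that contribute most to the correct-class score are located via their gradients and zeroed, forcing the network to classify from the remaining ones. The percentile, the per-sample granularity, and the omission of RSC's batch-level sampling are simplifications of ours.

```python
import torch

def rsc_mask(features, logits, labels, drop_pct=33.0):
    """Mute the features most responsible for the correct-class score.

    `features` must be an intermediate tensor in the graph that produced
    `logits` (e.g. backbone output before the classifier head); the caller
    re-runs the head on the returned, muted features.
    """
    # Gradient of the ground-truth class scores w.r.t. the features.
    gt_scores = logits.gather(1, labels.unsqueeze(1)).sum()
    grads = torch.autograd.grad(gt_scores, features, retain_graph=True)[0]

    # Zero out the top `drop_pct` percent most dominant units per sample.
    flat = grads.flatten(1)
    k = max(1, int(flat.size(1) * drop_pct / 100))
    thresh = flat.topk(k, dim=1).values[:, -1:]     # per-sample cutoff
    mask = (flat < thresh).float().view_as(features)
    return features * mask
```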