Cross-Domain Feature Augmentation for Domain Generalization
- URL: http://arxiv.org/abs/2405.08586v1
- Date: Tue, 14 May 2024 13:24:19 GMT
- Title: Cross-Domain Feature Augmentation for Domain Generalization
- Authors: Yingnan Liu, Yingtian Zou, Rui Qiao, Fusheng Liu, Mong Li Lee, Wynne Hsu
- Abstract summary: We propose a cross-domain feature augmentation method named XDomainMix.
Experiments on widely used benchmark datasets demonstrate that our method achieves state-of-the-art performance.
- Score: 16.174824932970004
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain generalization aims to develop models that are robust to distribution shifts. Existing methods focus on learning invariance across domains to enhance model robustness, and data augmentation has been widely used to learn invariant predictors, with most methods performing augmentation in the input space. However, augmentation in the input space has limited diversity, whereas augmentation in the feature space is more versatile and has shown promising results. Nonetheless, feature semantics is seldom considered, and existing feature augmentation methods suffer from a limited variety of augmented features. We decompose features into class-generic, class-specific, domain-generic, and domain-specific components. We propose a cross-domain feature augmentation method named XDomainMix that increases sample diversity while emphasizing the learning of invariant representations to achieve domain generalization. Experiments on widely used benchmark datasets demonstrate that our proposed method achieves state-of-the-art performance. Quantitative analysis indicates that our feature augmentation approach facilitates the learning of effective models that are invariant across different domains.
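The core augmentation step can be illustrated with a minimal sketch. Note that this is hypothetical code, not the authors' implementation: XDomainMix learns the decomposition into class-generic, class-specific, domain-generic, and domain-specific components, whereas the sketch below stands in for the learned decomposition with a fixed boolean mask, and the function name and shapes are invented for illustration.

```python
# Hypothetical sketch of cross-domain feature mixing in the spirit of
# XDomainMix -- NOT the authors' implementation. The learned decomposition
# is idealized here as a fixed mask over feature dimensions.
import torch

def cross_domain_mix(feat_a, feat_b, domain_mask, lam=0.5):
    """Interpolate only the (assumed) domain-specific dimensions of feat_a
    toward feat_b, a same-class sample drawn from a different domain, so the
    class semantics of feat_a are preserved while its domain style changes.

    feat_a, feat_b: (batch, dim) feature vectors
    domain_mask:    (dim,) boolean mask marking domain-specific dimensions
    lam:            mixing coefficient for the domain-specific part
    """
    mixed = feat_a.clone()
    mixed[:, domain_mask] = (lam * feat_a[:, domain_mask]
                             + (1 - lam) * feat_b[:, domain_mask])
    return mixed

# Toy usage: 4 samples, 8-dim features, last 3 dims treated as domain-specific.
feats_src = torch.randn(4, 8)    # features from domain A
feats_aux = torch.randn(4, 8)    # same-class features from domain B
mask = torch.zeros(8, dtype=torch.bool)
mask[5:] = True
augmented = cross_domain_mix(feats_src, feats_aux, mask, lam=0.7)
```

Training on such augmented features alongside the originals is what encourages the predictor to rely on the class-relevant components that stay stable across domains.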
Related papers
- Boundless Across Domains: A New Paradigm of Adaptive Feature and Cross-Attention for Domain Generalization in Medical Image Segmentation [1.93061220186624]
Domain-invariant representation learning is a powerful method for domain generalization.
Previous approaches face challenges such as high computational demands, training instability, and limited effectiveness with high-dimensional data.
We propose an Adaptive Feature Blending (AFB) method that generates out-of-distribution samples while exploring the in-distribution space.
arXiv Detail & Related papers (2024-11-22T12:06:24Z)
- Feature-Space Semantic Invariance: Enhanced OOD Detection for Open-Set Domain Generalization [10.38552112657656]
We propose a unified framework for open-set domain generalization by introducing Feature-space Semantic Invariance (FSI).
FSI maintains semantic consistency across different domains within the feature space, enabling more accurate detection of OOD instances in unseen domains.
We also adopt a generative model to produce synthetic data with novel domain styles or class labels, enhancing model robustness.
arXiv Detail & Related papers (2024-11-11T21:51:45Z)
- Generalize or Detect? Towards Robust Semantic Segmentation Under Multiple Distribution Shifts [56.57141696245328]
In open-world scenarios, where both novel classes and domains may exist, an ideal segmentation model should detect anomaly classes for safety.
Existing methods often struggle to distinguish between domain-level and semantic-level distribution shifts.
arXiv Detail & Related papers (2024-11-06T11:03:02Z)
- Domain Expansion and Boundary Growth for Open-Set Single-Source Domain Generalization [70.02187124865627]
Open-set single-source domain generalization aims to use a single source domain to learn a robust model that generalizes to unknown target domains.
We propose a novel learning approach based on domain expansion and boundary growth to expand the scarce source samples.
Our approach achieves significant improvements and state-of-the-art performance on several cross-domain image classification datasets.
arXiv Detail & Related papers (2024-11-05T09:08:46Z)
- Causality-inspired Latent Feature Augmentation for Single Domain Generalization [13.735443005394773]
Single domain generalization (Single-DG) aims to develop a generalizable model from only a single training domain that performs well on unknown target domains.
In this data-scarce setting, expanding the coverage of the source domain and finding intrinsic causal features across different distributions are key to enhancing a model's generalization ability.
We propose a novel causality-inspired latent feature augmentation method for Single-DG by learning the meta-knowledge of feature-level transformation based on causal learning and interventions.
arXiv Detail & Related papers (2024-06-10T02:42:25Z)
- Diverse Intra- and Inter-Domain Activity Style Fusion for Cross-Person Generalization in Activity Recognition [8.850516669999292]
Existing domain generalization methods often face challenges in capturing intra- and inter-domain style diversity.
We propose a process, conceptualized as domain padding, to enrich domain diversity.
We introduce a style-fused sampling strategy to enhance data generation diversity.
Our approach outperforms state-of-the-art DG methods in all human activity recognition tasks.
arXiv Detail & Related papers (2024-06-07T03:37:30Z)
- MetaDefa: Meta-learning based on Domain Enhancement and Feature Alignment for Single Domain Generalization [12.095382249996032]
A novel meta-learning method based on domain enhancement and feature alignment (MetaDefa) is proposed to improve model generalization performance.
Domain-invariant features are fully explored by focusing on similar target regions in the feature spaces of the source and augmented domains.
Extensive experiments on two publicly available datasets show that MetaDefa has significant generalization performance advantages on multiple unknown target domains.
arXiv Detail & Related papers (2023-11-27T15:13:02Z)
- A Novel Cross-Perturbation for Single Domain Generalization [54.612933105967606]
Single domain generalization aims to enhance the ability of the model to generalize to unknown domains when trained on a single source domain.
The limited diversity in the training data hampers the learning of domain-invariant features, resulting in compromised generalization performance.
We propose CPerb, a simple yet effective cross-perturbation method to enhance the diversity of the training data.
arXiv Detail & Related papers (2023-08-02T03:16:12Z)
- Improving Diversity with Adversarially Learned Transformations for Domain Generalization [81.26960899663601]
We present a novel framework, adversarially learned transformations (ALT), which uses a neural network to model plausible yet hard image transformations.
We show that ALT can naturally work with existing diversity modules to produce highly distinct and large transformations of the source domain, leading to state-of-the-art performance.
arXiv Detail & Related papers (2022-06-15T18:05:24Z)
- A Novel Mix-normalization Method for Generalizable Multi-source Person Re-identification [49.548815417844786]
Person re-identification (Re-ID) has achieved great success in the supervised scenario.
It is difficult to directly transfer a supervised model to arbitrary unseen domains because the model overfits the seen source domains.
We propose MixNorm, which consists of domain-aware mix-normalization (DMN) and domain-aware center regularization (DCR); see the illustrative statistics-mixing sketch after this list.
arXiv Detail & Related papers (2022-01-24T18:09:38Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
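The mix-normalization idea in the MixNorm entry above can be loosely illustrated by blending channel-wise feature statistics across domains, as in MixStyle-type augmentation. The sketch below is a hypothetical illustration under that reading, not the MixNorm implementation; the function name, shapes, and mixing scheme are assumptions.

```python
# Hypothetical sketch of cross-domain statistics mixing (MixStyle-style) --
# NOT the MixNorm implementation, just an illustration of the general idea.
import torch

def mix_feature_stats(x_a, x_b, lam=0.5, eps=1e-6):
    """Strip x_a's channel-wise style (mean/std), then re-style it with a
    convex mix of its own statistics and those of x_b from another domain.

    x_a, x_b: (batch, channels, H, W) feature maps
    """
    mu_a = x_a.mean(dim=(2, 3), keepdim=True)
    sig_a = x_a.std(dim=(2, 3), keepdim=True)
    mu_b = x_b.mean(dim=(2, 3), keepdim=True)
    sig_b = x_b.std(dim=(2, 3), keepdim=True)
    x_norm = (x_a - mu_a) / (sig_a + eps)        # remove x_a's own style
    mu_mix = lam * mu_a + (1 - lam) * mu_b       # blended cross-domain style
    sig_mix = lam * sig_a + (1 - lam) * sig_b
    return x_norm * sig_mix + mu_mix

# Toy usage: feature maps from two domains with matching shapes.
feats_a = torch.randn(8, 64, 14, 14)
feats_b = torch.randn(8, 64, 14, 14)
styled = mix_feature_stats(feats_a, feats_b, lam=0.7)
```

Because only first- and second-order statistics change, the content of the feature map is preserved while its domain style is perturbed, which is the common rationale behind normalization-based augmentation.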