Domain Generalization with Correlated Style Uncertainty
- URL: http://arxiv.org/abs/2212.09950v3
- Date: Mon, 28 Aug 2023 15:09:46 GMT
- Title: Domain Generalization with Correlated Style Uncertainty
- Authors: Zheyuan Zhang, Bin Wang, Debesh Jha, Ugur Demir, Ulas Bagci
- Abstract summary: Style augmentation is a strong DG method taking advantage of instance-specific feature statistics.
We introduce Correlated Style Uncertainty (CSU), surpassing the limitations of linear interpolation in style statistic space.
Our method's efficacy is established through extensive experimentation on diverse cross-domain computer vision and medical imaging classification tasks.
- Score: 4.844240089234632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization (DG) approaches intend to extract domain invariant
features that can lead to a more robust deep learning model. In this regard,
style augmentation is a strong DG method taking advantage of instance-specific
feature statistics containing informative style characteristics to synthesize
novel domains. While it is one of the state-of-the-art methods, prior works on
style augmentation have either disregarded the interdependence amongst distinct
feature channels or have solely constrained style augmentation to linear
interpolation. To address these research gaps, in this work, we introduce a
novel augmentation approach, named Correlated Style Uncertainty (CSU),
surpassing the limitations of linear interpolation in style statistic space and
simultaneously preserving vital correlation information. Our method's efficacy
is established through extensive experimentation on diverse cross-domain
computer vision and medical imaging classification tasks: PACS, Office-Home,
and Camelyon17 datasets, and the Duke-Market1501 instance retrieval task. The
results showcase a remarkable improvement margin over existing state-of-the-art
techniques. The source code is available at https://github.com/freshman97/CSU.
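The core idea — perturbing instance-level style statistics while preserving their cross-channel correlation — can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation (see the repository above): it jitters per-instance channel means and standard deviations with Gaussian noise drawn from the batch-level covariance of those statistics, so channels are not perturbed independently.

```python
import numpy as np

def correlated_style_augment(feats, scale=1.0, eps=1e-6, jitter=1e-5, rng=None):
    """Sketch of correlated style perturbation (hypothetical, CSU-inspired).

    feats: float array of shape (N, C, H, W).
    Per-instance channel statistics (mean, std) are perturbed with noise
    sampled from the batch covariance of those statistics, preserving
    channel interdependence rather than jittering each channel on its own.
    """
    rng = np.random.default_rng() if rng is None else rng
    N, C, H, W = feats.shape
    mu = feats.mean(axis=(2, 3))                   # (N, C) channel means
    sigma = feats.std(axis=(2, 3)) + eps           # (N, C) channel stds
    style = np.concatenate([mu, sigma], axis=1)    # (N, 2C) style vectors

    # Covariance across the batch captures cross-channel correlation.
    cov = np.cov(style, rowvar=False) + jitter * np.eye(2 * C)
    chol = np.linalg.cholesky(cov)

    # Correlated Gaussian perturbation of the style vectors.
    noise = (rng.standard_normal((N, 2 * C)) @ chol.T) * scale
    mu_new = mu + noise[:, :C]
    sigma_new = np.maximum(sigma + noise[:, C:], eps)

    # Normalize with original stats, then re-style with perturbed ones.
    norm = (feats - mu[:, :, None, None]) / sigma[:, :, None, None]
    return norm * sigma_new[:, :, None, None] + mu_new[:, :, None, None]
```

The Cholesky factor of the style covariance is what makes the sampled noise correlated across channels; replacing it with a diagonal matrix would reduce the sketch to independent per-channel jitter.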
Related papers
- Domain Expansion and Boundary Growth for Open-Set Single-Source Domain Generalization [70.02187124865627]
Open-set single-source domain generalization aims to use a single-source domain to learn a robust model that can be generalized to unknown target domains.
We propose a novel learning approach based on domain expansion and boundary growth to expand the scarce source samples.
Our approach can achieve significant improvements and reach state-of-the-art performance on several cross-domain image classification datasets.
arXiv Detail & Related papers (2024-11-05T09:08:46Z)
- Diverse Intra- and Inter-Domain Activity Style Fusion for Cross-Person Generalization in Activity Recognition [8.850516669999292]
Existing domain generalization methods often face challenges in capturing intra- and inter-domain style diversity.
We propose a process conceptualized as domain padding to enrich the domain diversity.
We introduce a style-fused sampling strategy to enhance data generation diversity.
Our approach outperforms state-of-the-art DG methods in all human activity recognition tasks.
arXiv Detail & Related papers (2024-06-07T03:37:30Z)
- Complex Style Image Transformations for Domain Generalization in Medical Images [6.635679521775917]
Domain generalization techniques aim to generalize to unknown domains from a single data source.
In this paper we introduce a novel framework, named CompStyle, which leverages style transfer and adversarial training.
We provide results from experiments on semantic segmentation on prostate data and corruption robustness on cardiac data.
arXiv Detail & Related papers (2024-06-01T04:57:31Z)
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z)
- HCVP: Leveraging Hierarchical Contrastive Visual Prompt for Domain Generalization [69.33162366130887]
Domain Generalization (DG) endeavors to create machine learning models that excel in unseen scenarios by learning invariant features.
We introduce a novel method designed to supplement the model with domain-level and task-specific characteristics.
This approach aims to guide the model in more effectively separating invariant features from specific characteristics, thereby boosting the generalization.
arXiv Detail & Related papers (2024-01-18T04:23:21Z)
- A Novel Cross-Perturbation for Single Domain Generalization [54.612933105967606]
Single domain generalization aims to enhance the ability of the model to generalize to unknown domains when trained on a single source domain.
The limited diversity in the training data hampers the learning of domain-invariant features, resulting in compromised generalization performance.
We propose CPerb, a simple yet effective cross-perturbation method to enhance the diversity of the training data.
arXiv Detail & Related papers (2023-08-02T03:16:12Z)
- Style Interleaved Learning for Generalizable Person Re-identification [69.03539634477637]
We propose a novel style interleaved learning (IL) framework for DG ReID training.
Unlike conventional learning strategies, IL incorporates two forward propagations and one backward propagation for each iteration.
We show that our model consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID.
arXiv Detail & Related papers (2022-07-07T07:41:32Z)
- Feature-based Style Randomization for Domain Generalization [27.15070576861912]
Domain generalization (DG) aims to first learn a generic model on multiple source domains and then directly generalize to an arbitrary unseen target domain without any additional adaptation.
This paper develops a simple yet effective feature-based style randomization module to achieve feature-level augmentation.
Compared with existing image-level augmentation, our feature-level augmentation favors a more goal-oriented and sample-diverse way.
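For contrast with correlated approaches, feature-level style randomization can be sketched as independent per-channel jitter of instance statistics. This is a hypothetical illustration in the spirit of the entry above, not the paper's exact formulation.

```python
import numpy as np

def independent_style_randomize(feats, alpha=0.1, eps=1e-6, rng=None):
    """Sketch of feature-level style randomization (hypothetical).

    feats: float array of shape (N, C, H, W).
    Each instance's channel mean/std is jittered with independent Gaussian
    noise, ignoring cross-channel correlation.
    """
    rng = np.random.default_rng() if rng is None else rng
    N, C, _, _ = feats.shape
    mu = feats.mean(axis=(2, 3), keepdims=True)          # (N, C, 1, 1)
    sigma = feats.std(axis=(2, 3), keepdims=True) + eps  # (N, C, 1, 1)

    # Independent multiplicative/additive jitter per instance and channel.
    gamma = 1.0 + alpha * rng.standard_normal((N, C, 1, 1))
    beta = alpha * rng.standard_normal((N, C, 1, 1))

    norm = (feats - mu) / sigma
    return norm * (sigma * gamma) + (mu + beta)
```

Because the noise here is sampled per channel with no covariance structure, this kind of augmentation is exactly what correlation-aware methods aim to improve upon.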
arXiv Detail & Related papers (2021-06-06T16:34:44Z)
- Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation [56.694330303488435]
We propose a Learning to Combine for Multi-Source Domain Adaptation (LtC-MSDA) framework.
In a nutshell, a knowledge graph is constructed on the prototypes of various domains to realize information propagation among semantically adjacent representations.
Our approach outperforms existing methods by a remarkable margin.
arXiv Detail & Related papers (2020-07-17T07:52:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.