ConStyX: Content Style Augmentation for Generalizable Medical Image Segmentation
- URL: http://arxiv.org/abs/2506.10675v1
- Date: Thu, 12 Jun 2025 13:04:32 GMT
- Title: ConStyX: Content Style Augmentation for Generalizable Medical Image Segmentation
- Authors: Xi Chen, Zhiqiang Shen, Peng Cao, Jinzhu Yang, Osmar R. Zaiane
- Abstract summary: Domain Generalization (DG) aims to train a robust model with strong generalizability. We propose a novel domain randomization-based DG method, called content style augmentation (ConStyX). ConStyX augments the content and style of training data, allowing the augmented training data to better cover a wider range of data domains.
- Score: 26.01939587264357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical images are usually collected from multiple domains, leading to domain shifts that impair the performance of medical image segmentation models. Domain Generalization (DG) aims to address this issue by training a robust model with strong generalizability. Recently, numerous domain randomization-based DG methods have been proposed. However, these methods suffer from the following limitations: 1) constrained efficiency of domain randomization due to their exclusive dependence on image style perturbation, and 2) neglect of the adverse effects of over-augmented images on model training. To address these issues, we propose a novel domain randomization-based DG method, called content style augmentation (ConStyX), for generalizable medical image segmentation. Specifically, ConStyX 1) augments the content and style of training data, allowing the augmented training data to better cover a wider range of data domains, and 2) leverages well-augmented features while mitigating the negative effects of over-augmented features during model training. Extensive experiments across multiple domains demonstrate that our ConStyX achieves superior generalization performance. The code is available at https://github.com/jwxsp1/ConStyX.
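The two-pronged augmentation the abstract describes can be illustrated with a minimal feature-level sketch: style augmentation as a perturbation of per-channel statistics (AdaIN-style), and content augmentation as a mild mix of two feature maps. This is a generic illustration under assumed conventions, not ConStyX's actual implementation; all function and parameter names (`style_perturb`, `content_mix`, `eps`, `alpha`) are hypothetical.

```python
import numpy as np

def style_perturb(feat, rng, eps=0.5):
    """Style augmentation sketch: jitter per-channel mean/std.

    feat: (C, H, W) feature map. Illustrative only, not the
    authors' exact method.
    """
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True) + 1e-6
    normalized = (feat - mu) / sigma
    # Randomly perturb the statistics to simulate unseen styles.
    new_mu = mu * (1 + eps * rng.uniform(-1, 1, mu.shape))
    new_sigma = sigma * (1 + eps * rng.uniform(-1, 1, sigma.shape))
    return normalized * new_sigma + new_mu

def content_mix(feat_a, feat_b, rng, alpha=0.3):
    """Content augmentation sketch: lightly mix two feature maps."""
    lam = rng.uniform(0, alpha)
    return (1 - lam) * feat_a + lam * feat_b
```

Because statistics and content are varied independently, the augmented samples can cover combinations of appearance and anatomy not present in the source domains, which is the coverage argument the abstract makes.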
Related papers
- Grounding Stylistic Domain Generalization with Quantitative Domain Shift Measures and Synthetic Scene Images [63.58800688320182]
Domain Generalization is a challenging task in machine learning.
Current methodology lacks a quantitative understanding of shifts in the stylistic domain.
We introduce a new DG paradigm to address these risks.
arXiv Detail & Related papers (2024-05-24T22:13:31Z)
- Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation [72.70876977882882]
Domain shift is a common problem in clinical applications, where the training images (source domain) and the test images (target domain) are under different distributions.
We propose a novel method for Few-Shot Unsupervised Domain Adaptation (FSUDA), where only a limited number of unlabeled target domain samples are available for training.
arXiv Detail & Related papers (2023-09-03T16:02:01Z)
- Adversarial Bayesian Augmentation for Single-Source Domain Generalization [47.11368295629681]
We present Adversarial Bayesian Augmentation (ABA), a novel algorithm that learns to generate image augmentations in the challenging single-source domain generalization setting.
ABA draws on the strengths of adversarial learning and Bayesian neural networks to guide the generation of diverse data augmentations.
We demonstrate the strength of ABA on several types of domain shift including style shift, subpopulation shift, and shift in the medical imaging setting.
arXiv Detail & Related papers (2023-07-18T18:01:30Z)
- Frequency-mixed Single-source Domain Generalization for Medical Image Segmentation [29.566769388674473]
The scarcity of medical image segmentation data poses challenges in collecting sufficient training data for deep learning models.
We propose a novel approach called the Frequency-mixed Single-source Domain Generalization method (FreeSDG).
Experimental results on five datasets of three modalities demonstrate the effectiveness of the proposed algorithm.
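Frequency-domain mixing of this kind is commonly implemented by swapping the low-frequency amplitude spectrum between images while keeping the phase, as in Fourier-based domain adaptation. The sketch below shows that general recipe under assumed conventions; it is not FreeSDG's exact algorithm, and `beta` (the band size) is an illustrative parameter.

```python
import numpy as np

def freq_mix(src, ref, beta=0.1):
    """Mix the low-frequency amplitude of `ref` into `src`.

    src, ref: 2-D grayscale images of the same shape. Generic
    frequency-mixing sketch, not the paper's implementation.
    """
    fft_src = np.fft.fftshift(np.fft.fft2(src))
    fft_ref = np.fft.fftshift(np.fft.fft2(ref))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_ref = np.abs(fft_ref)
    h, w = src.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    # Replace the central (low-frequency) amplitude band, which
    # carries most of the image's appearance/style information.
    amp_src[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1] = \
        amp_ref[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1]
    mixed = amp_src * np.exp(1j * pha_src)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))
```

The phase spectrum, which encodes structure, is left untouched, so the anatomy relevant to segmentation is preserved while the appearance shifts toward the reference image.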
arXiv Detail & Related papers (2023-07-18T06:44:45Z)
- Treasure in Distribution: A Domain Randomization based Multi-Source Domain Generalization for 2D Medical Image Segmentation [20.97329150274455]
We propose a multi-source domain generalization method called Treasure in Distribution (TriD).
TriD constructs an unprecedented search space to obtain a model with strong robustness by randomly sampling from a uniform distribution.
Experiments on two medical segmentation tasks demonstrate that our TriD achieves superior generalization performance on unseen target-domain data.
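Sampling feature statistics from a uniform distribution, rather than perturbing the observed ones, is the idea the TriD summary describes: the restyled features are no longer tied to the source domains. A minimal sketch under assumed conventions (function name and the U(0, 1) ranges are illustrative):

```python
import numpy as np

def uniform_restyle(feat, rng):
    """Re-style a (C, H, W) feature map with statistics drawn
    from U(0, 1), loosely following the TriD idea. Sketch only.
    """
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True) + 1e-6
    normalized = (feat - mu) / sigma
    c = feat.shape[0]
    # Replacement statistics are sampled from a uniform distribution,
    # decoupling the new style from the source-domain statistics.
    new_mu = rng.uniform(0.0, 1.0, (c, 1, 1))
    new_sigma = rng.uniform(0.0, 1.0, (c, 1, 1))
    return normalized * new_sigma + new_mu
```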
arXiv Detail & Related papers (2023-05-31T15:33:57Z)
- Domain Adaptive and Generalizable Network Architectures and Training Strategies for Semantic Image Segmentation [108.33885637197614]
Unsupervised domain adaptation (UDA) and domain generalization (DG) enable machine learning models trained on a source domain to perform well on unlabeled or unseen target domains.
We propose HRDA, a multi-resolution framework for UDA&DG, that combines the strengths of small high-resolution crops to preserve fine segmentation details and large low-resolution crops to capture long-range context dependencies with a learned scale attention.
arXiv Detail & Related papers (2023-04-26T15:18:45Z)
- Domain Generalization with Adversarial Intensity Attack for Medical Image Segmentation [27.49427483473792]
In real-world scenarios, models commonly encounter data from new domains to which they were not exposed during training.
Domain generalization (DG) is a promising direction, as it enables models to handle data from previously unseen domains.
We introduce a novel DG method called Adversarial Intensity Attack (AdverIN), which leverages adversarial training to generate training data with an infinite number of styles.
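An adversarial intensity perturbation can be sketched as a gradient-ascent step on a global brightness offset, moving the image in the direction that increases the model's loss. This is a loose, hypothetical illustration of the general idea, not AdverIN's actual algorithm; `grad_fn`, `step`, and the single-offset parameterization are all assumptions.

```python
import numpy as np

def intensity_attack(img, grad_fn, step=0.05):
    """One ascent step on a uniform intensity offset (sketch).

    grad_fn(img) should return d(loss)/d(img) for the model under
    attack. Hypothetical illustration, not the paper's method.
    """
    g = grad_fn(img)
    # The gradient of the loss w.r.t. a uniform offset added to every
    # pixel is the sum of the per-pixel gradients; take its sign.
    direction = np.sign(g.sum())
    return np.clip(img + step * direction, 0.0, 1.0)
```

Iterating such steps with varied step sizes is one way to realize the "infinite number of styles" framing: each perturbed intensity profile acts as a new training style.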
arXiv Detail & Related papers (2023-04-05T19:40:51Z)
- Single-domain Generalization in Medical Image Segmentation via Test-time Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to address this problem in medical image segmentation, which extracts and integrates the semantic shape prior information of segmentation that is invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z)
- Contrastive Domain Disentanglement for Generalizable Medical Image Segmentation [12.863227646939563]
We propose Contrastive Disentangle Domain (CDD) network for generalizable medical image segmentation.
We first introduce a disentangle network to decompose medical images into an anatomical representation factor and a modality representation factor.
We then propose a domain augmentation strategy that can randomly generate new domains for model generalization training.
arXiv Detail & Related papers (2022-05-13T10:32:41Z)
- Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains [80.11169390071869]
Adversarial examples have posed a severe threat to deep neural networks due to their transferable nature.
We propose a Beyond ImageNet Attack (BIA) to investigate the transferability towards black-box domains.
Our methods outperform state-of-the-art approaches by up to 7.71% (towards coarse-grained domains) and 25.91% (towards fine-grained domains) on average.
arXiv Detail & Related papers (2022-01-27T14:04:27Z)
- Causality-inspired Single-source Domain Generalization for Medical Image Segmentation [12.697945585457441]
We propose a simple data augmentation approach to expose a segmentation model to synthesized domain-shifted training examples.
Specifically, 1) to make the deep model robust to discrepancies in image intensities and textures, we employ a family of randomly-weighted shallow networks.
2) We further remove spurious correlations among objects in an image that the network might exploit as domain-specific clues for prediction, since such correlations may break on unseen domains.
arXiv Detail & Related papers (2021-11-24T14:45:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.