HCDG: A Hierarchical Consistency Framework for Domain Generalization on
Medical Image Segmentation
- URL: http://arxiv.org/abs/2109.05742v4
- Date: Thu, 24 Aug 2023 07:31:59 GMT
- Title: HCDG: A Hierarchical Consistency Framework for Domain Generalization on
Medical Image Segmentation
- Authors: Yijun Yang, Shujun Wang, Lei Zhu, Lequan Yu
- Abstract summary: We present a novel Hierarchical Consistency framework for Domain Generalization (HCDG)
For the Extrinsic Consistency, we leverage the knowledge across multiple source domains to enforce data-level consistency.
For the Intrinsic Consistency, we perform task-level consistency for the same instance under the dual-task scenario.
- Score: 33.623948922908184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern deep neural networks struggle to transfer knowledge and generalize
across diverse domains when deployed to real-world applications. Currently,
domain generalization (DG) is introduced to learn a universal representation
from multiple domains to improve the network generalization ability on unseen
domains. However, previous DG methods only focus on the data-level consistency
scheme without considering the synergistic regularization among different
consistency schemes. In this paper, we present a novel Hierarchical Consistency
framework for Domain Generalization (HCDG) by integrating Extrinsic Consistency
and Intrinsic Consistency synergistically. Particularly, for the Extrinsic
Consistency, we leverage the knowledge across multiple source domains to
enforce data-level consistency. To better enhance such consistency, we
incorporate a novel Amplitude Gaussian-mixing strategy into a Fourier-based
data augmentation scheme called DomainUp. For the Intrinsic Consistency, we perform task-level
consistency for the same instance under the dual-task scenario. We evaluate the
proposed HCDG framework on two medical image segmentation tasks, i.e., optic
cup/disc segmentation on fundus images and prostate MRI segmentation. Extensive
experimental results demonstrate the effectiveness and versatility of our HCDG
framework.
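The abstract describes the two consistency ingredients only at a high level, so the following is a minimal, hypothetical sketch of what they could look like in practice: a cross-domain Fourier amplitude mix with a Gaussian-sampled blending weight (in the spirit of DomainUp; the crop size, sampling parameters, and function names are assumptions, not the paper's exact recipe), and a simple task-level consistency term between two task heads on the same instance (the auxiliary task and its mapping back to a probability map are likewise assumptions).

```python
import numpy as np

def amplitude_gaussian_mix(img, ref, beta=0.1, sigma=0.2):
    """Cross-domain Fourier augmentation sketch: keep the phase of `img` and
    blend its low-frequency amplitude spectrum with that of `ref`, an image
    drawn from a different source domain. The Gaussian-sampled blending
    weight stands in for the paper's Amplitude Gaussian-mixing and is an
    assumption, not DomainUp's exact recipe."""
    fft_img = np.fft.fft2(img, axes=(0, 1))
    fft_ref = np.fft.fft2(ref, axes=(0, 1))
    amp, pha = np.abs(fft_img), np.angle(fft_img)
    amp_ref = np.abs(fft_ref)

    # Move low frequencies to the centre so a central crop selects them.
    amp = np.fft.fftshift(amp, axes=(0, 1))
    amp_ref = np.fft.fftshift(amp_ref, axes=(0, 1))

    # Gaussian-sampled mixing weight, clipped to [0, 1].
    lam = float(np.clip(np.random.normal(0.5, sigma), 0.0, 1.0))

    # Blend only a central low-frequency patch of relative size `beta`.
    h, w = img.shape[:2]
    bh, bw = max(1, int(h * beta)), max(1, int(w * beta))
    ch, cw = h // 2, w // 2
    region = (slice(ch - bh, ch + bh), slice(cw - bw, cw + bw))
    amp[region] = lam * amp_ref[region] + (1.0 - lam) * amp[region]

    # Undo the shift and recombine with the original phase.
    amp = np.fft.ifftshift(amp, axes=(0, 1))
    mixed = np.fft.ifft2(amp * np.exp(1j * pha), axes=(0, 1))
    return np.real(mixed)

def task_consistency(seg_prob, aux_seg_prob):
    """Task-level (intrinsic) consistency sketch: penalize disagreement
    between the segmentation head and the segmentation map implied by an
    auxiliary head for the same instance (mean squared error here; the
    actual loss and auxiliary task in HCDG may differ)."""
    return float(np.mean((seg_prob - aux_seg_prob) ** 2))
```

For instance, an augmented training image could be obtained as `aug = amplitude_gaussian_mix(img, ref)` with `img` and `ref` drawn from two different source domains in the same batch, and the extrinsic, data-level consistency could then be enforced between the predictions on `img` and `aug`.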
Related papers
- FedGCA: Global Consistent Augmentation Based Single-Source Federated Domain Generalization [29.989092118578103]
Federated Domain Generalization (FedDG) aims to train a global model that generalizes to unseen domains from multi-domain training samples.
Clients in federated learning networks are often confined to a single, non-IID domain due to inherent sampling and temporal limitations.
We introduce the Federated Global Consistent Augmentation (FedGCA) method, which incorporates a style-complement module to augment data samples with diverse domain styles.
arXiv Detail & Related papers (2024-09-23T02:24:46Z)
- PointDGMamba: Domain Generalization of Point Cloud Classification via Generalized State Space Model [77.00221501105788]
Domain Generalization (DG) has been recently explored to improve the generalizability of point cloud classification (PCC) models toward unseen domains.
We present the first work that studies the generalizability of state space models (SSMs) in DG PCC.
We propose a novel framework, PointDGMamba, that excels in strong generalizability toward unseen domains.
arXiv Detail & Related papers (2024-08-24T12:53:48Z)
- Disentangling Masked Autoencoders for Unsupervised Domain Generalization [57.56744870106124]
Unsupervised domain generalization is fast gaining attention but remains far from well studied.
Disentangled Masked Autoencoders (DisMAE) aims to discover disentangled representations that faithfully reveal the intrinsic features.
DisMAE co-trains the asymmetric dual-branch architecture with semantic and lightweight variation encoders.
arXiv Detail & Related papers (2024-07-10T11:11:36Z)
- Grounding Stylistic Domain Generalization with Quantitative Domain Shift Measures and Synthetic Scene Images [63.58800688320182]
Domain Generalization is a challenging task in machine learning.
Current methodology lacks a quantitative understanding of stylistic domain shifts.
We introduce a new DG paradigm to address these risks.
arXiv Detail & Related papers (2024-05-24T22:13:31Z)
- Prompt-driven Latent Domain Generalization for Medical Image Classification [23.914889221925552]
We propose a novel framework for medical image classification without relying on domain labels.
PLDG consists of unsupervised domain discovery and prompt learning.
Our method achieves performance comparable to, or even better than, conventional DG algorithms.
arXiv Detail & Related papers (2024-01-05T05:24:07Z)
- Single-domain Generalization in Medical Image Segmentation via Test-time Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to address this problem in medical image segmentation, which extracts and integrates semantic shape prior information for segmentation that is invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- TAL: Two-stream Adaptive Learning for Generalizable Person Re-identification [115.31432027711202]
We argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-id models.
We propose two-stream adaptive learning (TAL) to simultaneously model these two kinds of information.
Our framework can be applied to both single-source and multi-source domain generalization tasks.
arXiv Detail & Related papers (2021-11-29T01:27:42Z)
- Reappraising Domain Generalization in Neural Networks [8.06370138649329]
Domain generalization (DG) of machine learning algorithms is defined as their ability to learn a domain-agnostic hypothesis from multiple training distributions.
We find that a straightforward Empirical Risk Minimization (ERM) baseline consistently outperforms existing DG methods.
We propose a classwise-DG formulation, where for each class, we randomly select one of the domains and keep it aside for testing.
arXiv Detail & Related papers (2021-10-15T10:06:40Z)
- Better Pseudo-label: Joint Domain-aware Label and Dual-classifier for Semi-supervised Domain Generalization [26.255457629490135]
We propose a novel framework via joint domain-aware labels and dual-classifier to produce high-quality pseudo-labels.
To predict accurate pseudo-labels under domain shift, a domain-aware pseudo-labeling module is developed.
Also, considering inconsistent goals between generalization and pseudo-labeling, we employ a dual-classifier to independently perform pseudo-labeling and domain generalization in the training process.
arXiv Detail & Related papers (2021-10-10T15:17:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.