Gradient-Map-Guided Adaptive Domain Generalization for Cross Modality MRI Segmentation
- URL: http://arxiv.org/abs/2311.09737v1
- Date: Thu, 16 Nov 2023 10:07:27 GMT
- Title: Gradient-Map-Guided Adaptive Domain Generalization for Cross Modality MRI Segmentation
- Authors: Bingnan Li, Zhitong Gao, Xuming He
- Abstract summary: Cross-modal MRI segmentation is of great value for computer-aided medical diagnosis, enabling flexible data acquisition and model generalization.
Most existing methods have difficulty in handling local variations in domain shift and typically require a significant amount of data for training.
We propose a novel adaptive domain generalization framework, which integrates a learning-free cross-domain representation based on image gradient maps.
- Score: 14.209197648189203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-modal MRI segmentation is of great value for computer-aided medical
diagnosis, enabling flexible data acquisition and model generalization.
However, most existing methods have difficulty in handling local variations in
domain shift and typically require a significant amount of data for training,
which hinders their usage in practice. To address these problems, we propose a
novel adaptive domain generalization framework, which integrates a
learning-free cross-domain representation based on image gradient maps and a
class prior-informed test-time adaptation strategy for mitigating local domain
shift. We validate our approach on two multi-modal MRI datasets with six
cross-modal segmentation tasks. Across all the task settings, our method
consistently outperforms competing approaches and shows a stable performance
even with limited training data.
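The gradient-map representation itself is simple to illustrate. Below is a minimal sketch, assuming a Sobel-based gradient magnitude with per-image normalization; the function name and exact normalization are illustrative assumptions, not the paper's released code.

```python
import numpy as np
from scipy import ndimage

def gradient_map(slice_2d: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Learning-free gradient-magnitude map of a 2D MRI slice.

    Hypothetical sketch of a gradient-based cross-modal representation;
    the paper's exact formulation may differ.
    """
    # Standardize intensities so modality-specific scaling drops out.
    x = (slice_2d - slice_2d.mean()) / (slice_2d.std() + eps)
    # Sobel derivatives along the two image axes.
    gx = ndimage.sobel(x, axis=0)
    gy = ndimage.sobel(x, axis=1)
    # Gradient magnitude, rescaled to [0, 1] as a modality-agnostic input.
    mag = np.hypot(gx, gy)
    return (mag - mag.min()) / (mag.max() - mag.min() + eps)
```

A segmentation network trained on such maps rather than raw intensities sees edge structure instead of modality-specific intensity statistics, which is the intuition behind a learning-free cross-domain representation; the paper additionally adapts the model at test time using a class prior.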
Related papers
- Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training (see the sketch after this list).
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
arXiv Detail & Related papers (2023-09-25T15:56:01Z)
- Single-domain Generalization in Medical Image Segmentation via Test-time Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to address this problem in medical image segmentation, which extracts and integrates semantic shape prior information of segmentation that is invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z)
- Self-semantic contour adaptation for cross modality brain tumor segmentation [13.260109561599904]
We propose exploiting low-level edge information to facilitate the adaptation as a precursor task.
The precise contour then provides spatial information to guide the semantic adaptation.
We evaluate our framework on the BraTS2018 database for cross-modality segmentation of brain tumors.
arXiv Detail & Related papers (2022-01-13T15:16:55Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Semi-supervised Meta-learning with Disentanglement for Domain-generalised Medical Image Segmentation [15.351113774542839]
Generalising models to new data from new centres (termed here domains) remains a challenge.
We propose a novel semi-supervised meta-learning framework with disentanglement.
We show that the proposed method is robust on different segmentation tasks and achieves state-of-the-art generalisation performance on two public benchmarks.
arXiv Detail & Related papers (2021-06-24T19:50:07Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to Unseen Domains [68.73614619875814]
We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.
Experimental results show that our approach outperforms many state-of-the-art generalization methods consistently across all six settings of unseen domains.
arXiv Detail & Related papers (2020-07-04T07:56:02Z)
- Cross-Domain Segmentation with Adversarial Loss and Covariate Shift for Biomedical Imaging [2.1204495827342438]
This manuscript aims to implement a novel model that can learn robust representations from cross-domain data by encapsulating distinct and shared patterns from different modalities.
Tests on CT and MRI liver data acquired in routine clinical trials show that the proposed model outperforms all other baselines by a large margin.
arXiv Detail & Related papers (2020-06-08T07:35:55Z)
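For the Informative Data Mining entry above, here is a minimal sketch of what an uncertainty-based selection criterion can look like, assuming a softmax segmentation model and a set of unlabeled target images; the entropy ranking and function name are illustrative assumptions, not IDM's published criterion.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_most_informative(model, target_images, k=5):
    """Rank unlabeled target images by mean prediction entropy and return the
    indices of the top-k most uncertain (hence most informative) samples.
    Illustrative sketch only; IDM's actual criterion may differ.
    """
    scores = []
    for img in target_images:                           # img: (C, H, W) tensor
        probs = F.softmax(model(img.unsqueeze(0)), dim=1)  # (1, classes, H, W)
        # Per-pixel entropy, averaged over the image.
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
        scores.append(entropy.mean().item())
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return order[:k]
```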