Invariant Content Synergistic Learning for Domain Generalization of
Medical Image Segmentation
- URL: http://arxiv.org/abs/2205.02845v1
- Date: Thu, 5 May 2022 08:13:17 GMT
- Title: Invariant Content Synergistic Learning for Domain Generalization of
Medical Image Segmentation
- Authors: Yuxin Kang, Hansheng Li, Xuan Zhao, Dongqing Hu, Feihong Liu, Lei Cui,
Jun Feng and Lin Yang
- Abstract summary: Deep convolution neural networks (DCNNs) often fail to maintain their robustness when confronting test data with a novel distribution.
In this paper, we propose a method, named Invariant Content Synergistic Learning (ICSL), to improve the generalization ability of DCNNs.
- Score: 13.708239594165061
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: While achieving remarkable success for medical image segmentation, deep
convolution neural networks (DCNNs) often fail to maintain their robustness
when confronting test data drawn from a novel distribution. To address this
drawback, the inductive bias of DCNNs has recently been closely examined.
Specifically, DCNNs exhibit an inductive bias towards image style (e.g.,
superficial texture) rather than invariant content (e.g., object shapes). In
this paper, we propose a method, named Invariant Content Synergistic Learning
(ICSL), to improve the generalization ability of DCNNs on unseen datasets by
controlling the inductive bias. First, ICSL mixes the style of training
instances to perturb the training distribution; that is, more diverse
domains or styles become available for training DCNNs. Based on the
perturbed distribution, we carefully design a dual-branch invariant content
synergistic learning strategy to prevent style-biased predictions and focus
more on the invariant content. Extensive experimental results on two typical
medical image segmentation tasks show that our approach performs better than
state-of-the-art domain generalization methods.
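The style-mixing step described in the abstract resembles feature-statistic mixing (in the spirit of MixStyle): the channel-wise mean and standard deviation of a feature map act as its "style", and interpolating these statistics between two training instances synthesizes a novel style while leaving the content intact. The sketch below illustrates that idea in NumPy for (C, H, W) feature maps; the function name, shapes, and mixing coefficient are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def mix_style(x, y, alpha=0.5, eps=1e-6):
    """Re-style feature map x with statistics interpolated between
    x's and y's channel-wise mean/std (a MixStyle-like operation;
    illustrative, not the paper's exact formulation).

    x, y: arrays of shape (C, H, W). Returns an array shaped like x.
    """
    mu_x = x.mean(axis=(1, 2), keepdims=True)
    sig_x = x.std(axis=(1, 2), keepdims=True) + eps
    mu_y = y.mean(axis=(1, 2), keepdims=True)
    sig_y = y.std(axis=(1, 2), keepdims=True) + eps
    # Interpolate the statistics to create a novel "style".
    mu_mix = alpha * mu_x + (1 - alpha) * mu_y
    sig_mix = alpha * sig_x + (1 - alpha) * sig_y
    # Normalize x, then re-style it with the mixed statistics.
    return (x - mu_x) / sig_x * sig_mix + mu_mix
```

With alpha=1 the input is returned unchanged; with alpha=0 the feature map fully adopts the second instance's style while keeping its own content.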
Related papers
- DCNN: Dual Cross-current Neural Networks Realized Using An Interactive Deep Learning Discriminator for Fine-grained Objects [48.65846477275723]
This study proposes novel dual cross-current neural networks (DCNN) to improve the accuracy of fine-grained image classification.
The main novel design features for constructing a weakly supervised learning backbone model DCNN include (a) extracting heterogeneous data, (b) keeping the feature map resolution unchanged, (c) expanding the receptive field, and (d) fusing global representations and local features.
arXiv Detail & Related papers (2024-05-07T07:51:28Z) - Label Deconvolution for Node Representation Learning on Large-scale
Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z) - On the Fly Neural Style Smoothing for Risk-Averse Domain Generalization [25.618051317035164]
State-of-the-art domain generalization (DG) classifiers have shown impressive performance across various tasks.
However, they exhibit a bias towards domain-dependent information, such as image style, rather than domain-invariant information, such as image content.
This bias renders them unreliable for deployment in risk-sensitive scenarios such as autonomous driving.
We propose a novel inference procedure, Test-Time Neural Style Smoothing (TT-NSS), that uses a "style-smoothed" version of the DG classifier for prediction at test time.
arXiv Detail & Related papers (2023-07-17T15:31:58Z) - Treasure in Distribution: A Domain Randomization based Multi-Source
Domain Generalization for 2D Medical Image Segmentation [20.97329150274455]
We propose a multi-source domain generalization method called Treasure in Distribution (TriD).
TriD constructs an unprecedented search space to obtain a model with strong robustness by randomly sampling from a uniform distribution.
Experiments on two medical segmentation tasks demonstrate that our TriD achieves superior generalization performance on unseen target-domain data.
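TriD's random sampling from a uniform distribution can be illustrated by a statistic-randomization step: normalize a feature map, then re-style it with a mean and standard deviation drawn uniformly at random, so each forward pass sees a freshly randomized "domain". This is a hedged sketch of the general idea, not the authors' code; the (C, H, W) shape and the [low, high] range are assumptions:

```python
import numpy as np

def randomize_stats(x, low=0.0, high=1.0, rng=None, eps=1e-6):
    """Re-style a (C, H, W) feature map with channel-wise mean/std drawn
    from a uniform distribution (domain randomization; illustrative)."""
    if rng is None:
        rng = np.random.default_rng()
    c = x.shape[0]
    mu = x.mean(axis=(1, 2), keepdims=True)
    sig = x.std(axis=(1, 2), keepdims=True) + eps
    # Draw a novel "style" uniformly; the sampling range is an assumption.
    new_mu = rng.uniform(low, high, (c, 1, 1))
    new_sig = rng.uniform(low, high, (c, 1, 1))
    # Whiten the map, then apply the randomized statistics.
    return (x - mu) / sig * new_sig + new_mu
```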
arXiv Detail & Related papers (2023-05-31T15:33:57Z) - Domain Generalization with Adversarial Intensity Attack for Medical
Image Segmentation [27.49427483473792]
In real-world scenarios, models commonly encounter data from new and different domains to which they were not exposed during training.
Domain generalization (DG) is a promising direction as it enables models to handle data from previously unseen domains.
We introduce a novel DG method called Adversarial Intensity Attack (AdverIN), which leverages adversarial training to generate training data with an infinite number of styles.
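The intensity-attack idea can be illustrated with a brute-force stand-in: among a small set of linear intensity transforms a*x + b, pick the one that maximizes the model's loss, yielding a worst-case-styled training sample. AdverIN itself uses adversarial training rather than grid search, so the function below and its parameter grids are hypothetical:

```python
import numpy as np

def intensity_attack(img, loss_fn,
                     scales=(0.8, 1.0, 1.2),
                     shifts=(-0.1, 0.0, 0.1)):
    """Return the linearly intensity-transformed copy of img that
    maximizes loss_fn (a grid-search stand-in for a gradient-based
    adversarial intensity attack; illustrative only).

    img: array with values in [0, 1]; loss_fn: maps an image to a scalar.
    """
    best, best_loss = img, -np.inf
    for a in scales:
        for b in shifts:
            cand = np.clip(a * img + b, 0.0, 1.0)  # keep a valid image
            loss = loss_fn(cand)
            if loss > best_loss:
                best, best_loss = cand, loss
    return best
```

Training on such worst-case intensity variants pushes the model toward predictions that are stable under appearance shifts.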
arXiv Detail & Related papers (2023-04-05T19:40:51Z) - Unsupervised Domain Adaptation Using Feature Disentanglement And GCNs
For Medical Image Classification [5.6512908295414]
We propose an unsupervised domain adaptation approach that uses graph neural networks and disentangled semantic and domain-invariant structural features.
We test the proposed method for classification on two challenging medical image datasets with distribution shifts.
Experiments show our method achieves state-of-the-art results compared to other domain adaptation methods.
arXiv Detail & Related papers (2022-06-27T09:02:16Z) - Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z) - Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z) - Margin Preserving Self-paced Contrastive Learning Towards Domain
Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive Learning model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z) - Neuron Coverage-Guided Domain Generalization [37.77033512313927]
This paper focuses on the domain generalization task where domain knowledge is unavailable, and even worse, only samples from a single domain can be utilized during training.
Our motivation originates from recent progress in deep neural network (DNN) testing, which has shown that maximizing the neuron coverage of a DNN can help to explore its possible defects.
arXiv Detail & Related papers (2021-02-27T14:26:53Z) - S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural
Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills knowledge from real-valued networks into binary networks over the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.