MSCDA: Multi-level Semantic-guided Contrast Improves Unsupervised Domain
Adaptation for Breast MRI Segmentation in Small Datasets
- URL: http://arxiv.org/abs/2301.02554v2
- Date: Thu, 8 Jun 2023 09:25:14 GMT
- Title: MSCDA: Multi-level Semantic-guided Contrast Improves Unsupervised Domain
Adaptation for Breast MRI Segmentation in Small Datasets
- Authors: Sheng Kuang, Henry C. Woodruff, Renee Granzier, Thiemo J.A. van
Nijnatten, Marc B.I. Lobbes, Marjolein L. Smidt, Philippe Lambin, Siamak
Mehrkanoon
- Abstract summary: We propose a novel Multi-level Semantic-guided Contrastive Domain Adaptation framework.
Our approach incorporates self-training with contrastive learning to align feature representations between domains.
In particular, we extend the contrastive loss by incorporating pixel-to-pixel, pixel-to-centroid, and centroid-to-centroid contrasts.
- Score: 5.272836235045653
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep learning (DL) applied to breast tissue segmentation in magnetic
resonance imaging (MRI) has received increased attention in the last decade;
however, the domain shift that arises from different vendors, acquisition
protocols, and biological heterogeneity remains an important but challenging
obstacle on the path towards clinical implementation. In this paper, we propose
a novel Multi-level Semantic-guided Contrastive Domain Adaptation (MSCDA)
framework to address this issue in an unsupervised manner. Our approach
incorporates self-training with contrastive learning to align feature
representations between domains. In particular, we extend the contrastive loss
by incorporating pixel-to-pixel, pixel-to-centroid, and centroid-to-centroid
contrasts to better exploit the underlying semantic information of the image at
different levels. To resolve the data imbalance problem, we utilize a
category-wise cross-domain sampling strategy to sample anchors from target
images and build a hybrid memory bank to store samples from source images. We
have validated MSCDA on the challenging task of cross-domain breast MRI
segmentation between datasets of healthy volunteers and invasive breast cancer
patients. Extensive experiments show that MSCDA effectively improves the
model's feature alignment capabilities between domains, outperforming
state-of-the-art methods. Furthermore, the framework is shown to be
label-efficient, achieving good performance with a smaller source dataset. The
code is publicly available at https://github.com/ShengKuangCN/MSCDA.
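To make the loss design in the abstract concrete, the sketch below is a minimal, illustrative PyTorch version of the three contrast levels (pixel-to-pixel, pixel-to-centroid, centroid-to-centroid) driven by a class-wise memory bank of source features and category-wise sampling of target anchors. All function names, tensor shapes, the temperature, and the equal term weights are assumptions made for illustration, not the released MSCDA implementation (see the repository above for the authors' code).

```python
# Illustrative sketch of a multi-level semantic-guided contrastive loss.
# Shapes, sampling details and equal term weights are assumptions, not the
# authors' released implementation.
import torch
import torch.nn.functional as F


def class_centroids(feats, labels, num_classes):
    """Mean pixel embedding per class. feats: (N, D), labels: (N,)."""
    centroids = torch.zeros(num_classes, feats.size(1), device=feats.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centroids[c] = feats[mask].mean(dim=0)
    return F.normalize(centroids, dim=1)


def info_nce(anchors, positives, negatives, tau=0.07):
    """InfoNCE with one positive per anchor and a shared negative set."""
    anchors, positives = F.normalize(anchors, dim=1), F.normalize(positives, dim=1)
    negatives = F.normalize(negatives, dim=1)
    pos = (anchors * positives).sum(dim=1, keepdim=True) / tau   # (A, 1)
    neg = anchors @ negatives.t() / tau                          # (A, K)
    logits = torch.cat([pos, neg], dim=1)
    targets = torch.zeros(anchors.size(0), dtype=torch.long, device=anchors.device)
    return F.cross_entropy(logits, targets)


def multi_level_contrast(tgt_feats, tgt_pseudo, bank_pixels, bank_centroids,
                         num_classes, anchors_per_class=16, tau=0.07):
    """tgt_feats: (N, D) target pixel embeddings; tgt_pseudo: (N,) pseudo-labels;
    bank_pixels: dict {class: (M_c, D)} source pixel features from a memory bank;
    bank_centroids: (C, D) source class centroids."""
    tgt_centroids = class_centroids(tgt_feats, tgt_pseudo, num_classes)
    loss, terms = 0.0, 0
    for c in range(num_classes):
        idx = (tgt_pseudo == c).nonzero(as_tuple=True)[0]
        if idx.numel() == 0 or bank_pixels[c].numel() == 0:
            continue
        # Category-wise sampling: a fixed number of target anchors per class.
        sel = idx[torch.randperm(idx.numel())[:anchors_per_class]]
        anchors = tgt_feats[sel]
        pos_pix = bank_pixels[c][torch.randint(bank_pixels[c].size(0), (sel.numel(),))]
        neg_pix = torch.cat([bank_pixels[k] for k in range(num_classes) if k != c])
        neg_cen = bank_centroids[[k for k in range(num_classes) if k != c]]
        # Pixel-to-pixel, pixel-to-centroid and centroid-to-centroid contrasts.
        loss = loss + info_nce(anchors, pos_pix, neg_pix, tau)
        loss = loss + info_nce(anchors, bank_centroids[c].expand_as(anchors), neg_cen, tau)
        loss = loss + info_nce(tgt_centroids[c:c + 1], bank_centroids[c:c + 1], neg_cen, tau)
        terms += 3
    return loss / max(terms, 1)
```

In a training loop of this kind, bank_pixels and bank_centroids would be refreshed from labelled source batches (e.g. with momentum updates), while tgt_pseudo would come from the self-training branch described in the abstract.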
Related papers
- Unsupervised Modality Adaptation with Text-to-Image Diffusion Models for Semantic Segmentation [54.96563068182733]
We propose Modality Adaptation with text-to-image Diffusion Models (MADM) for semantic segmentation task.
MADM utilizes text-to-image diffusion models pre-trained on extensive image-text pairs to enhance the model's cross-modality capabilities.
We show that MADM achieves state-of-the-art adaptation performance across various modality tasks, including images to depth, infrared, and event modalities.
arXiv Detail & Related papers (2024-10-29T03:49:40Z)
- Improving Anomaly Segmentation with Multi-Granularity Cross-Domain Alignment [17.086123737443714]
Anomaly segmentation plays a pivotal role in identifying atypical objects in images, which is crucial for hazard detection in autonomous driving systems.
While existing methods demonstrate noteworthy results on synthetic data, they often fail to consider the disparity between synthetic and real-world data domains.
We introduce the Multi-Granularity Cross-Domain Alignment framework, tailored to harmonize features across domains at both the scene and individual sample levels.
arXiv Detail & Related papers (2023-08-16T22:54:49Z)
- SMC-UDA: Structure-Modal Constraint for Unsupervised Cross-Domain Renal Segmentation [100.86339246424541]
We propose a novel Structure-Modal Constrained (SMC) UDA framework based on a discriminative paradigm and introduce edge structure as a bridge between domains.
With structure-constrained self-learning and a progressive ROI, our method segments the kidney by locating the 3D spatial structure of the edge.
Experiments show that the proposed SMC-UDA generalizes strongly and outperforms generative UDA methods.
arXiv Detail & Related papers (2023-06-14T02:57:23Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Margin Preserving Self-paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin-preserving self-paced contrastive learning (MPSCL) model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin-preserving contrastive loss is proposed to boost the discriminability of the embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z)
- Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy Minimisation for Multi-modal Cardiac Image Segmentation [10.417009344120917]
We present a novel UDA method for multi-modal cardiac image segmentation.
The proposed method is based on adversarial learning and adapts network features between source and target domain in different spaces.
We validated our method on two cardiac datasets by adapting from the annotated source domain to the unannotated target domain.
arXiv Detail & Related papers (2021-03-15T08:59:44Z)
- Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain Adaptation [9.659642285903418]
Cross-modality synthesis of medical images can reduce the costly annotation burden on radiologists.
We present a novel approach for image-to-image translation in medical images, capable of supervised or unsupervised (unpaired image data) setups.
arXiv Detail & Related papers (2021-03-05T16:22:31Z)
- Unsupervised Instance Segmentation in Microscopy Images via Panoptic Domain Adaptation and Task Re-weighting [86.33696045574692]
We propose a Cycle Consistency Panoptic Domain Adaptive Mask R-CNN (CyC-PDAM) architecture for unsupervised nuclei segmentation in histopathology images.
We first propose a nuclei inpainting mechanism to remove the auxiliary generated objects in the synthesized images.
Secondly, a semantic branch with a domain discriminator is designed to achieve panoptic-level domain adaptation.
arXiv Detail & Related papers (2020-05-05T11:08:26Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
- Domain Adaptive Medical Image Segmentation via Adversarial Learning of Disease-Specific Spatial Patterns [6.298270929323396]
We propose an unsupervised domain adaptation framework for boosting image segmentation performance across multiple domains.
We make the architecture adaptive to new data by rejecting improbable segmentation patterns and implicitly learning through semantic and boundary information.
We demonstrate that recalibrating the deep networks on a few unlabeled images from the target domain improves the segmentation accuracy significantly.
arXiv Detail & Related papers (2020-01-25T13:48:02Z)