Image Translation-Based Unsupervised Cross-Modality Domain Adaptation for Medical Image Segmentation
- URL: http://arxiv.org/abs/2502.15193v2
- Date: Mon, 24 Feb 2025 05:36:07 GMT
- Title: Image Translation-Based Unsupervised Cross-Modality Domain Adaptation for Medical Image Segmentation
- Authors: Tao Yang, Lisheng Wang
- Abstract summary: Supervised deep learning usually faces more challenges in medical images than in natural images. In this paper, we propose an unsupervised cross-modality domain adaptation method based on image translation. The subtle differences between translated pseudo images and real images are overcome by self-training methods.
- Score: 7.064122118459271
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised deep learning usually faces more challenges in medical images than in natural images, since annotations of medical images require the expertise of doctors and are more time-consuming and expensive to obtain. Thus, some researchers turn to unsupervised learning methods, which usually suffer inevitable performance drops. In addition, medical images may have been acquired at different medical centers, with different scanners, and under different image acquisition protocols, so the modalities of the medical images are often inconsistent. This modality difference (domain shift) also reduces the applicability of deep learning methods. In this regard, we propose an unsupervised cross-modality domain adaptation method based on image translation: annotated source-modality images are transformed into the unannotated target modality, and their annotations are used to achieve supervised learning in the target modality. In addition, the subtle differences between translated pseudo images and real images are overcome by self-training, which further improves the task performance of deep learning. The proposed method showed a mean Dice Similarity Coefficient (DSC) and Average Symmetric Surface Distance (ASSD) of $0.8351 \pm 0.1152$ and $1.6712 \pm 2.1948$ for vestibular schwannoma (VS), and $0.8098 \pm 0.0233$ and $0.2317 \pm 0.1577$ for the cochlea, on the VS and cochlea segmentation task of the Cross-Modality Domain Adaptation (crossMoDA 2022) challenge validation-phase leaderboard.
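The DSC figures reported above quantify volumetric overlap between a predicted and a reference segmentation mask. As a point of reference, a minimal NumPy sketch of the metric (the function name and example masks are illustrative, not taken from the paper) is:

```python
import numpy as np

def dice_similarity_coefficient(pred, gt):
    """Dice Similarity Coefficient (DSC) between two binary masks.

    DSC = 2 * |pred ∩ gt| / (|pred| + |gt|), ranging from 0 (no overlap)
    to 1 (perfect overlap).
    """
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    if denom == 0:
        # Both masks empty: treat as perfect agreement by convention.
        return 1.0
    return 2.0 * intersection / denom

# Two toy 2x3 binary masks overlapping in two of three foreground pixels each.
pred = np.array([[0, 1, 1], [0, 1, 0]])
gt   = np.array([[0, 1, 0], [0, 1, 1]])
print(dice_similarity_coefficient(pred, gt))  # prints 0.6666666666666666
```

The companion ASSD metric instead measures the average distance between the surfaces of the two masks, so the two numbers capture complementary aspects of segmentation quality (overlap vs. boundary accuracy).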
Related papers
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z) - CodaMal: Contrastive Domain Adaptation for Malaria Detection in Low-Cost Microscopes [51.5625352379093]
Malaria is a major health issue worldwide, and its diagnosis requires scalable solutions that can work effectively with low-cost microscopes (LCM).
Deep learning-based methods have shown success in computer-aided diagnosis from microscopic images.
These methods need annotated images that show cells affected by malaria parasites and their life stages.
Annotating images from LCM significantly increases the burden on medical experts compared to annotating images from high-cost microscopes (HCM).
arXiv Detail & Related papers (2024-02-16T06:57:03Z) - Fine-Grained Self-Supervised Learning with Jigsaw Puzzles for Medical Image Classification [11.320414512937946]
Classifying fine-grained lesions is challenging due to minor and subtle differences in medical images.
We introduce Fine-Grained Self-Supervised Learning(FG-SSL) method for classifying subtle lesions in medical images.
We evaluate the proposed fine-grained self-supervised learning method on comprehensive experiments using various medical image recognition datasets.
arXiv Detail & Related papers (2023-08-10T02:08:15Z) - Diffusion Models for Counterfactual Generation and Anomaly Detection in Brain Images [39.94162291765236]
We present a weakly supervised method to generate a healthy version of a diseased image and then use it to obtain a pixel-wise anomaly map.
We employ a diffusion model trained on healthy samples and combine the Denoising Diffusion Probabilistic Model (DDPM) and Denoising Diffusion Implicit Model (DDIM) at each step of the sampling process.
arXiv Detail & Related papers (2023-08-03T21:56:50Z) - Trustworthy Deep Learning for Medical Image Segmentation [1.0152838128195467]
A major limitation of deep learning-based segmentation methods is their lack of robustness to variability in the image acquisition protocol.
In most cases, the manual segmentation of medical images requires highly skilled raters and is time-consuming.
This thesis introduces new mathematical and optimization methods to mitigate those limitations.
arXiv Detail & Related papers (2023-05-27T12:12:53Z) - GraVIS: Grouping Augmented Views from Independent Sources for Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z) - Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z) - COSMOS: Cross-Modality Unsupervised Domain Adaptation for 3D Medical Image Segmentation based on Target-aware Domain Translation and Iterative Self-Training [6.513315990156929]
We propose a self-training based unsupervised domain adaptation framework for 3D medical image segmentation named COSMOS.
Our target-aware contrast conversion network translates source domain annotated T1 MRI to pseudo T2 MRI to enable segmentation training on target domain.
COSMOS won 1st place in the Cross-Modality Domain Adaptation (crossMoDA) challenge held in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021).
arXiv Detail & Related papers (2022-03-30T18:00:07Z) - Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain Adaptation [9.659642285903418]
Cross-modality synthesis of medical images can reduce the costly annotation burden on radiologists.
We present a novel approach for image-to-image translation in medical images, capable of supervised or unsupervised (unpaired image data) setups.
arXiv Detail & Related papers (2021-03-05T16:22:31Z) - Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z) - Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Discriminative Cross-Modal Data Augmentation for Medical Imaging Applications [24.06277026586584]
Deep learning methods have shown great success in medical image analysis, but they require large numbers of medical images for training.
Due to data privacy concerns and unavailability of medical annotators, it is oftentimes very difficult to obtain a lot of labeled medical images for model training.
We propose a discriminative unpaired image-to-image translation model which translates images in source modality into images in target modality.
arXiv Detail & Related papers (2020-10-07T15:07:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.