M-GenSeg: Domain Adaptation For Target Modality Tumor Segmentation With
Annotation-Efficient Supervision
- URL: http://arxiv.org/abs/2212.07276v2
- Date: Sun, 30 Jul 2023 12:03:24 GMT
- Title: M-GenSeg: Domain Adaptation For Target Modality Tumor Segmentation With
Annotation-Efficient Supervision
- Authors: Malo Alefsen de Boisredon d'Assier and Eugene Vorontsov and Samuel
Kadoury
- Abstract summary: M-GenSeg is a new semi-supervised generative training strategy for cross-modality tumor segmentation.
We evaluate the performance on a brain tumor segmentation dataset composed of four different contrast sequences.
Unlike the prior art, M-GenSeg also introduces the ability to train with a partially annotated source modality.
- Score: 4.023899199756184
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated medical image segmentation using deep neural networks typically
requires substantial supervised training. However, these models fail to
generalize well across different imaging modalities. This shortcoming,
amplified by the limited availability of expert annotated data, has been
hampering the deployment of such methods at a larger scale across modalities.
To address these issues, we propose M-GenSeg, a new semi-supervised generative
training strategy for cross-modality tumor segmentation on unpaired bi-modal
datasets. With the addition of known healthy images, an unsupervised objective
encourages the model to disentangle tumors from the background, which
parallels the segmentation task. Then, by teaching the model to convert images
across modalities, we leverage available pixel-level annotations from the
source modality to enable segmentation in the unannotated target modality. We
evaluated the performance on a brain tumor segmentation dataset composed of
four different contrast sequences from the public BraTS 2020 challenge data. We
report consistent improvement in Dice scores over state-of-the-art
domain-adaptive baselines on the unannotated target modality. Unlike the prior
art, M-GenSeg also introduces the ability to train with a partially annotated
source modality.
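For intuition, below is a minimal sketch of how such a training step could be wired: unpaired modality translation with cycle consistency, plus supervised segmentation on source images translated into the target modality so that source annotations can be reused. All function and variable names are hypothetical, and the healthy/diseased disentanglement and adversarial objectives described above are omitted; this is not the authors' implementation.

```python
# Hypothetical sketch of an M-GenSeg-style training step (names are
# illustrative, not the authors' code). Assumes:
#   gen_s2t, gen_t2s : source<->target modality translators
#   seg              : segmentation network applied in the target modality
#   x_s, y_s         : source-modality image and its pixel-level annotation
#   x_t              : unannotated target-modality image
import torch.nn.functional as F

def training_step(gen_s2t, gen_t2s, seg, x_s, y_s, x_t):
    # 1) Unpaired modality translation with cycle consistency
    fake_t = gen_s2t(x_s)                      # source -> target
    fake_s = gen_t2s(x_t)                      # target -> source
    loss_cyc = (F.l1_loss(gen_t2s(fake_t), x_s)
                + F.l1_loss(gen_s2t(fake_s), x_t))

    # 2) Supervised segmentation on translated source images:
    #    the source annotation y_s is reused for the synthetic target image
    pred = seg(fake_t)
    loss_seg = F.cross_entropy(pred, y_s)

    # (The paper additionally uses an unsupervised healthy/diseased
    # disentanglement objective and adversarial losses, omitted here.)
    return loss_cyc + loss_seg
```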
Related papers
- Generalizable Single-Source Cross-modality Medical Image Segmentation via Invariant Causal Mechanisms [16.699205051836657]
Single-source domain generalization aims to learn a model from a single source domain that can generalize well on unseen target domains.
This is an important task in computer vision, particularly relevant to medical imaging where domain shifts are common.
We combine causality-inspired theoretical insights on learning domain-invariant representations with recent advancements in diffusion-based augmentation to improve generalization across diverse imaging modalities.
arXiv Detail & Related papers (2024-11-07T22:35:17Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
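As a rough illustration of the consistency idea, here is a generic dual-scale regularizer (not DEC-Seg's exact losses): predictions on an image and on a downscaled copy of it are pushed to agree.

```python
# Generic dual-scale consistency regularizer in the spirit of DEC-Seg
# (illustrative only; the paper's formulation differs). Predictions on a
# full-resolution image and on a downscaled copy are encouraged to agree.
import torch.nn.functional as F

def dual_scale_consistency(seg, x):
    p_full = seg(x).softmax(dim=1)                    # (N, C, H, W)
    x_half = F.interpolate(x, scale_factor=0.5, mode="bilinear",
                           align_corners=False)
    p_half = seg(x_half).softmax(dim=1)
    p_half_up = F.interpolate(p_half, size=p_full.shape[-2:],
                              mode="bilinear", align_corners=False)
    return F.mse_loss(p_full, p_half_up)
```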
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Image-level supervision and self-training for transformer-based cross-modality tumor segmentation [2.29206349318258]
We propose a new semi-supervised training strategy called MoDATTS.
MoDATTS is designed for accurate cross-modality 3D tumor segmentation on unpaired bi-modal datasets.
We report that 99% and 100% of the maximum (fully target-supervised) performance can be attained if 20% and 50% of the target data is annotated.
arXiv Detail & Related papers (2023-09-17T11:50:12Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Cross-modal tumor segmentation using generative blending augmentation and self training [1.6440045168835438]
We propose a cross-modal segmentation method based on conventional image synthesis boosted by a new data augmentation technique.
Generative Blending Augmentation (GBA) learns representative generative features from a single training image to realistically diversify tumor appearances.
The proposed solution ranked first for vestibular schwannoma (VS) segmentation during the validation and test phases of the MICCAI CrossMoDA 2022 challenge.
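A drastically simplified analogue of the blending idea is sketched below: GBA learns generative appearance features, whereas this sketch merely perturbs intensity and alpha-blends a tumor region into a host scan. All names and parameters are illustrative.

```python
# Simplified blending augmentation in the spirit of GBA (illustrative:
# GBA learns generative appearance features; here we only apply a random
# intensity gain and alpha-blend a tumor region into a host scan).
import numpy as np

def blend_tumor(host, tumor, mask, alpha=0.8, gain_range=(0.8, 1.2)):
    """host, tumor: 2D float arrays; mask: binary tumor mask."""
    gain = np.random.uniform(*gain_range)      # crude appearance change
    tumor_aug = tumor * gain
    out = host.copy()
    out[mask > 0] = (alpha * tumor_aug[mask > 0]
                     + (1.0 - alpha) * host[mask > 0])
    return out
```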
arXiv Detail & Related papers (2023-04-04T11:01:46Z)
- Learning Multi-Modal Brain Tumor Segmentation from Privileged Semi-Paired MRI Images with Curriculum Disentanglement Learning [4.43142018105102]
We present a novel two-step (intra-modality and inter-modality) curriculum disentanglement learning framework for brain tumor segmentation.
In the first step, we propose to conduct reconstruction and segmentation with augmented intra-modality style-consistent images.
In the second step, the model jointly performs reconstruction, unsupervised/supervised translation, and segmentation for both unpaired and paired inter-modality images.
arXiv Detail & Related papers (2022-08-26T16:52:43Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and across sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain Adaptation [9.659642285903418]
Cross-modality synthesis of medical images can reduce the costly annotation burden on radiologists.
We present a novel approach for image-to-image translation in medical images, capable of supervised or unsupervised (unpaired image data) setups.
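The underlying mechanism is a SPADE-style spatially adaptive normalization, sketched below with the paper's self-attention component omitted; layer and argument names are illustrative.

```python
# A SPADE-style spatially adaptive normalization layer, the mechanism this
# family of methods builds on (the paper adds self-attention, omitted here).
import torch.nn as nn
import torch.nn.functional as F

class SpatialAdaptiveNorm(nn.Module):
    def __init__(self, num_features, guide_channels, hidden=64):
        super().__init__()
        self.norm = nn.BatchNorm2d(num_features, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(guide_channels, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, num_features, 3, padding=1)
        self.beta = nn.Conv2d(hidden, num_features, 3, padding=1)

    def forward(self, x, guide):
        # Modulate normalized activations with per-pixel scale and shift
        # predicted from a guidance map (e.g., a segmentation mask).
        guide = F.interpolate(guide, size=x.shape[-2:], mode="nearest")
        h = self.shared(guide)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)
```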
arXiv Detail & Related papers (2021-03-05T16:22:31Z)
- Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to Unseen Domains [68.73614619875814]
We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.
Experimental results show that our approach outperforms many state-of-the-art generalization methods consistently across all six settings of unseen domains.
arXiv Detail & Related papers (2020-07-04T07:56:02Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose each input modality into a modality-specific appearance code and a modality-invariant content code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
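A minimal reading of the gated-fusion idea (not the paper's exact module) is sketched below: per-modality features are combined with per-pixel gates, so a missing or uninformative modality can be down-weighted.

```python
# Minimal gated fusion block for combining per-modality features
# (an illustrative reading of the idea, not the paper's exact module).
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels, num_modalities):
        super().__init__()
        # Predict one gate map per modality from the concatenated features
        self.gate = nn.Conv2d(channels * num_modalities,
                              num_modalities, kernel_size=1)

    def forward(self, feats):
        """feats: list of (N, C, H, W) feature maps, one per modality."""
        weights = torch.softmax(self.gate(torch.cat(feats, dim=1)), dim=1)
        # Weighted sum over modalities; a zeroed (absent) modality can
        # simply be down-weighted by its gate.
        return sum(f * weights[:, m:m + 1] for m, f in enumerate(feats))
```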