Cross-modal tumor segmentation using generative blending augmentation and self training
- URL: http://arxiv.org/abs/2304.01705v2
- Date: Fri, 29 Mar 2024 13:53:33 GMT
- Title: Cross-modal tumor segmentation using generative blending augmentation and self training
- Authors: Guillaume Sallé, Pierre-Henri Conze, Julien Bert, Nicolas Boussion, Dimitris Visvikis, Vincent Jaouen
- Abstract summary: We propose a cross-modal segmentation method based on conventional image synthesis boosted by a new data augmentation technique.
Generative Blending Augmentation (GBA) learns representative generative features from a single training image to realistically diversify tumor appearances.
The proposed solution ranked first for vestibular schwannoma (VS) segmentation during the validation and test phases of the MICCAI CrossMoDA 2022 challenge.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: \textit{Objectives}: Data scarcity and domain shifts lead to biased training sets that do not accurately represent deployment conditions. A related practical problem is cross-modal image segmentation, where the objective is to segment unlabelled images using previously labelled datasets from other imaging modalities. \textit{Methods}: We propose a cross-modal segmentation method based on conventional image synthesis boosted by a new data augmentation technique called Generative Blending Augmentation (GBA). GBA leverages a SinGAN model to learn representative generative features from a single training image in order to realistically diversify tumor appearances. This way, we compensate for image synthesis errors, subsequently improving the generalization power of a downstream segmentation model. The proposed augmentation is further combined with an iterative self-training procedure that leverages pseudo labels at each pass. \textit{Results}: The proposed solution ranked first for vestibular schwannoma (VS) segmentation during the validation and test phases of the MICCAI CrossMoDA 2022 challenge, with the best mean Dice similarity and average symmetric surface distance measures. \textit{Conclusion and significance}: Local contrast alteration of tumor appearances and iterative self-training with pseudo labels are likely to lead to performance improvements in a variety of segmentation contexts.
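The local contrast alteration at the heart of GBA can be illustrated with a minimal NumPy sketch: rescale intensities inside a tumor mask and alpha-blend the altered copy back into the image through a softened mask boundary. This is a hypothetical simplification for illustration only (the paper uses a SinGAN model to generate the altered appearance, and the function name, box-blur softening, and `contrast_scale` parameter are assumptions, not the authors' implementation):

```python
import numpy as np

def blend_tumor_augmentation(image, tumor_mask, contrast_scale=1.3):
    """Alpha-blend a contrast-altered copy of the image into the original,
    restricted to the tumor region, with softened mask edges."""
    alpha = tumor_mask.astype(float)
    # Soften mask edges with a separable 5-tap box blur (a simple stand-in
    # for a Gaussian, avoiding a SciPy dependency in this sketch).
    k = 5
    kernel = np.ones(k) / k
    for axis in (0, 1):
        alpha = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, alpha
        )
    altered = image * contrast_scale  # globally contrast-altered copy
    # Blend: altered appearance inside the (softened) mask, original outside.
    return alpha * altered + (1 - alpha) * image
```

The soft boundary is what makes the pasted appearance blend realistically rather than showing a hard seam; a generative model can replace the simple intensity rescaling to produce more diverse tumor appearances.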
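The iterative self-training procedure mentioned in the abstract can be sketched as a generic pseudo-labeling loop: train on the labeled set, predict on the unlabeled pool, keep only confident predictions as pseudo labels, and repeat. The `model` interface, `threshold` parameter, and confidence filter below are assumptions for illustration, not the authors' training pipeline:

```python
def self_train(model, labeled_x, labeled_y, unlabeled_x, rounds=3, threshold=0.9):
    """Iterative self-training: each round fits the model on the labeled data
    plus confidently pseudo-labeled samples, then regenerates pseudo labels."""
    x, y = list(labeled_x), list(labeled_y)
    for _ in range(rounds):
        model.fit(x, y)
        # Rebuild the training set from scratch so stale pseudo labels
        # from earlier rounds are replaced by the current model's predictions.
        x, y = list(labeled_x), list(labeled_y)
        for u in unlabeled_x:
            label, confidence = model.predict(u)
            if confidence >= threshold:  # keep only confident pseudo labels
                x.append(u)
                y.append(label)
    return model
```

In a segmentation setting, `predict` would return a per-voxel label map with a confidence score, and the filter would keep high-confidence pseudo segmentations for the next training pass.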
Related papers
- Comprehensive Generative Replay for Task-Incremental Segmentation with Concurrent Appearance and Semantic Forgetting [49.87694319431288]
Generalist segmentation models are increasingly favored for diverse tasks involving various objects from different image sources.
We propose a Comprehensive Generative (CGR) framework that restores appearance and semantic knowledge by synthesizing image-mask pairs.
Experiments on incremental tasks (cardiac, fundus and prostate segmentation) show its clear advantage for alleviating concurrent appearance and semantic forgetting.
arXiv Detail & Related papers (2024-06-28T10:05:58Z)
- SemFlow: Binding Semantic Segmentation and Image Synthesis via Rectified Flow [94.90853153808987]
We propose a unified diffusion-based framework (SemFlow) for semantic segmentation and semantic image synthesis.
As the training objective is symmetric, samples belonging to the two distributions, images and semantic masks, can be effortlessly transferred reversibly.
Experiments show that our SemFlow achieves competitive results on semantic segmentation and semantic image synthesis tasks.
arXiv Detail & Related papers (2024-05-30T17:34:40Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Polyp Segmentation [52.06525450636897]
Automatic polyp segmentation plays a crucial role in the early diagnosis and treatment of colorectal cancer.
Existing methods rely heavily on fully supervised training, which requires a large amount of labeled data with time-consuming pixel-wise annotations.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised polyp segmentation (DEC-Seg) from colonoscopy images.
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- A Simple and Robust Framework for Cross-Modality Medical Image Segmentation applied to Vision Transformers [0.0]
We propose a simple framework to achieve fair image segmentation of multiple modalities using a single conditional model.
We show that our framework outperforms other cross-modality segmentation methods on the Multi-Modality Whole Heart Conditional Challenge.
arXiv Detail & Related papers (2023-10-09T09:51:44Z)
- Enhanced Sharp-GAN For Histopathology Image Synthesis [63.845552349914186]
Histopathology image synthesis aims to address the data shortage issue in training deep learning approaches for accurate cancer detection.
We propose a novel approach that enhances the quality of synthetic images by using nuclei topology and contour regularization.
The proposed approach outperforms Sharp-GAN in all four image quality metrics on two datasets.
arXiv Detail & Related papers (2023-01-24T17:54:01Z)
- M-GenSeg: Domain Adaptation For Target Modality Tumor Segmentation With Annotation-Efficient Supervision [4.023899199756184]
M-GenSeg is a new semi-supervised generative training strategy for cross-modality tumor segmentation.
We evaluate the performance on a brain tumor segmentation dataset composed of four different contrast sequences.
Unlike the prior art, M-GenSeg also introduces the ability to train with a partially annotated source modality.
arXiv Detail & Related papers (2022-12-14T15:19:06Z)
- Robust One-shot Segmentation of Brain Tissues via Image-aligned Style Transformation [13.430851964063534]
We propose a novel image-aligned style transformation to reinforce the dual-model iterative learning for one-shot segmentation of brain tissues.
Experimental results on two public datasets demonstrate 1) segmentation performance competitive with the fully-supervised method, and 2) superior performance over other state-of-the-art methods, with an increase in average Dice of up to 4.67%.
arXiv Detail & Related papers (2022-11-26T09:14:01Z)
- Image Segmentation with Adaptive Spatial Priors from Joint Registration [10.51970325349652]
In thigh muscle images, different muscles are packed together and there are often no clear boundaries between them.
We present a segmentation model with adaptive spatial priors from joint registration.
We evaluate our proposed model on synthetic and thigh muscle MR images.
arXiv Detail & Related papers (2022-03-29T13:29:59Z)
- Segmentation-Renormalized Deep Feature Modulation for Unpaired Image Harmonization [0.43012765978447565]
Cycle-consistent Generative Adversarial Networks have been used to harmonize image sets between a source and target domain.
These methods are prone to instability, contrast inversion, intractable manipulation of pathology, and steganographic mappings which limit their reliable adoption in real-world medical imaging.
We propose a segmentation-renormalized image translation framework to reduce inter-scanner heterogeneity while preserving anatomical layout.
arXiv Detail & Related papers (2021-02-11T23:53:51Z)
- Adversarial Semantic Data Augmentation for Human Pose Estimation [96.75411357541438]
We propose Semantic Data Augmentation (SDA), a method that augments images by pasting segmented body parts with various semantic granularity.
We also propose Adversarial Semantic Data Augmentation (ASDA), which exploits a generative network to dynamically predict tailored pasting configurations.
State-of-the-art results are achieved on challenging benchmarks.
arXiv Detail & Related papers (2020-08-03T07:56:04Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)