Improved HER2 Tumor Segmentation with Subtype Balancing using Deep
Generative Networks
- URL: http://arxiv.org/abs/2211.06150v1
- Date: Fri, 11 Nov 2022 12:05:15 GMT
- Title: Improved HER2 Tumor Segmentation with Subtype Balancing using Deep
Generative Networks
- Authors: Mathias Öttl, Jana Mönius, Matthias Rübner, Carol I. Geppert,
Jingna Qiu, Frauke Wilm, Arndt Hartmann, Matthias W. Beckmann, Peter A.
Fasching, Andreas Maier, Ramona Erber, Katharina Breininger
- Abstract summary: We propose to create synthetic images with semantically-conditioned deep generative networks.
We show the suitability of Generative Adversarial Networks (GANs) and especially diffusion models to create realistic images.
- Score: 5.44130112878356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tumor segmentation in histopathology images is often complicated by its
composition of different histological subtypes and class imbalance.
Oversampling subtypes with low prevalence features is not a satisfactory
solution since it eventually leads to overfitting. We propose to create
synthetic images with semantically-conditioned deep generative networks and to
combine subtype-balanced synthetic images with the original dataset to achieve
better segmentation performance. We show the suitability of Generative
Adversarial Networks (GANs) and especially diffusion models to create realistic
images based on subtype-conditioning for the use case of HER2-stained
histopathology. Additionally, we show the capability of diffusion models to
conditionally inpaint HER2 tumor areas with modified subtypes. Combining the
original dataset with the same amount of diffusion-generated images increased
the tumor Dice score from 0.833 to 0.854 and almost halved the variance between
the HER2 subtype recalls. These results create the basis for more reliable
automatic HER2 analysis with lower performance variance between individual HER2
subtypes.
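The abstract reports two quantities: the tumor Dice score (0.833 to 0.854) and the variance between per-subtype recalls, which was almost halved. A minimal numpy sketch of these two metrics, assuming binary masks; the subtype recall values below are hypothetical placeholders, not numbers from the paper:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def recall_variance(recalls: dict) -> float:
    """Variance of per-subtype recalls; lower means more balanced performance."""
    return float(np.var(list(recalls.values())))

# Toy masks: prediction covers 2 of the target's 3 foreground pixels.
target = np.array([[1, 1, 0], [1, 0, 0]])
pred   = np.array([[1, 1, 0], [0, 0, 0]])
print(round(dice_score(pred, target), 3))  # → 0.8

# Hypothetical per-HER2-subtype recalls, for illustration only.
recalls = {"0": 0.70, "1+": 0.85, "2+": 0.90, "3+": 0.95}
print(recall_variance(recalls))
```

A balancing intervention like the paper's subtype-conditioned synthetic images would aim to raise the weakest subtype's recall, shrinking this variance.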
Related papers
- Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network via the combination of convolution neural network (CNN) and transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z)
- An Organism Starts with a Single Pix-Cell: A Neural Cellular Diffusion for High-Resolution Image Synthesis [8.01395073111961]
We introduce a novel family of models termed Generative Cellular Automata (GeCA)
GeCAs are evaluated as an effective augmentation tool for retinal disease classification across two imaging modalities: Fundus and Optical Coherence Tomography (OCT).
In the context of OCT imaging, where data is scarce and the distribution of classes is inherently skewed, GeCA significantly boosts classification performance across 11 different ophthalmological conditions.
arXiv Detail & Related papers (2024-07-03T11:26:09Z) - Segmentation of Non-Small Cell Lung Carcinomas: Introducing DRU-Net and Multi-Lens Distortion [0.1935997508026988]
We propose a segmentation model (DRU-Net) that can provide a delineation of human non-small cell lung carcinomas.
We have used two datasets (Norwegian Lung Cancer Biobank and Haukeland University Hospital lung cancer cohort) to create our proposed model.
The proposed spatial augmentation method (multi-lens distortion) improved the network performance by 3%.
arXiv Detail & Related papers (2024-06-20T13:14:00Z) - Semantic Image Synthesis for Abdominal CT [14.808000433125523]
In this work, we explore semantic image synthesis for abdominal CT using conditional diffusion models.
Experimental results demonstrated that diffusion models were able to synthesize abdominal CT images with better quality.
arXiv Detail & Related papers (2023-12-11T15:39:41Z) - Adaptive Input-image Normalization for Solving the Mode Collapse Problem in GAN-based X-ray Images [0.08192907805418582]
This work contributes an empirical demonstration of the benefits of integrating adaptive input-image normalization with the Deep Convolutional GAN (DCGAN) and Auxiliary Classifier GAN (ACGAN) to alleviate the mode collapse problem.
Results demonstrate that the DCGAN and the ACGAN with adaptive input-image normalization outperform the DCGAN and ACGAN with un-normalized X-ray images.
arXiv Detail & Related papers (2023-09-21T16:43:29Z) - Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion
Generative Models [75.52575380824051]
We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI.
We use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process.
Our method requires as few as five training images to learn effective sampling patterns.
arXiv Detail & Related papers (2023-06-05T22:09:06Z) - Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z) - Your Diffusion Model is Secretly a Zero-Shot Classifier [90.40799216880342]
We show that density estimates from large-scale text-to-image diffusion models can be leveraged to perform zero-shot classification.
Our generative approach to classification attains strong results on a variety of benchmarks.
Our results are a step toward using generative over discriminative models for downstream tasks.
arXiv Detail & Related papers (2023-03-28T17:59:56Z) - SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z) - Unifying Diffusion Models' Latent Space, with Applications to
CycleDiffusion and Guidance [95.12230117950232]
We show that a common latent space emerges from two diffusion models trained independently on related domains.
Applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors.
arXiv Detail & Related papers (2022-10-11T15:53:52Z) - Spatiotemporal Feature Learning Based on Two-Step LSTM and Transformer
for CT Scans [2.3682456328966115]
We propose a novel, effective, two-step-wise approach to tackle this issue for COVID-19 symptom classification thoroughly.
First, the semantic feature embedding of each slice for a CT scan is extracted by conventional backbone networks.
Then, we propose a long short-term memory (LSTM) and Transformer-based sub-network to deal with temporal feature learning.
arXiv Detail & Related papers (2022-07-04T16:59:05Z)
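The ambiguous-segmentation entry above describes drawing multiple plausible masks from the stochastic sampling process of a diffusion model and treating them as a distribution. A minimal numpy sketch of summarizing such a mask ensemble into a per-pixel uncertainty map; the noisy-threshold `sample_mask` below is a hypothetical stand-in for an actual reverse-diffusion sampler, used only so the example runs:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mask(image: np.ndarray, rng) -> np.ndarray:
    """Stand-in for one stochastic segmentation sample: a noisy
    threshold of the image intensities (illustration only)."""
    noise = rng.normal(scale=0.1, size=image.shape)
    return (image + noise > 0.5).astype(np.uint8)

def mask_ensemble(image: np.ndarray, n_samples: int = 8):
    """Draw several plausible masks and summarize their agreement."""
    masks = np.stack([sample_mask(image, rng) for _ in range(n_samples)])
    mean = masks.mean(axis=0)           # per-pixel foreground frequency
    uncertainty = mean * (1.0 - mean)   # high where the samples disagree
    return masks, mean, uncertainty

image = rng.random((4, 4))
masks, mean, unc = mask_ensemble(image)
print(masks.shape)  # → (8, 4, 4)
```

The `mean * (1 - mean)` term is just the per-pixel Bernoulli variance of the ensemble: it is zero where all samples agree and peaks at 0.25 where the samples split evenly.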
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.