Improving CT Image Segmentation Accuracy Using StyleGAN Driven Data
Augmentation
- URL: http://arxiv.org/abs/2302.03285v1
- Date: Tue, 7 Feb 2023 06:34:10 GMT
- Title: Improving CT Image Segmentation Accuracy Using StyleGAN Driven Data
Augmentation
- Authors: Soham Bhosale, Arjun Krishna, Ge Wang, Klaus Mueller
- Abstract summary: This paper presents a StyleGAN-driven approach for segmenting large, publicly available medical datasets.
Style transfer is used to augment the training dataset and generate new anatomically sound images.
The augmented dataset is then used to train a U-Net segmentation network, which shows a significant improvement in segmentation accuracy.
- Score: 42.034896915716374
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Medical image segmentation is a core task in medical image analysis,
including the detection of diseases and abnormalities in imaging modalities such
as MRI and CT. Deep learning has proven promising for this task but often
achieves low accuracy because appropriate publicly available annotated or
segmented medical datasets are scarce. In addition, the datasets that are
available may have a different texture than the images that need to be
segmented because of different dosage values or scanner properties. This paper
presents a StyleGAN-driven approach for segmenting large, publicly available
medical datasets by using readily available, extremely small annotated datasets
in similar modalities. The approach augments the small segmented dataset while
eliminating texture differences between the two datasets. The dataset is
augmented by passing it through six different StyleGANs, each trained on a
different style image taken from the large non-annotated dataset we want to
segment. Specifically, style transfer is used to augment the training dataset:
the annotations of the training dataset are combined with the textures of the
non-annotated dataset to generate new, anatomically sound images. The augmented
dataset is then used to train a U-Net segmentation network, which shows a
significant improvement in accuracy when segmenting the large non-annotated
dataset.
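As a rough illustration of the pipeline described above, the sketch below augments a small annotated CT dataset by passing each slice through several style generators (one per style image drawn from the non-annotated target dataset), reuses the original masks for the stylized copies, and trains a small U-Net on the combined data. The `style_generators` callables and the `TinyUNet` architecture are placeholders for illustration, not the authors' released code.

```python
# Hypothetical sketch of the StyleGAN-driven augmentation pipeline; the style
# generators and the toy U-Net below are stand-ins, not the authors' code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


class TinyUNet(nn.Module):
    """Minimal single-skip U-Net, used here purely for illustration."""

    def __init__(self, in_ch=1, out_ch=2, base=16):
        super().__init__()
        self.enc1 = self._block(in_ch, base)
        self.enc2 = self._block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = self._block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)

    @staticmethod
    def _block(cin, cout):
        return nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)


def augment_with_styles(images, masks, style_generators):
    """Re-texture every annotated slice with each style generator.

    Style transfer is assumed to preserve anatomy, so the original mask is
    reused for every stylized copy of a slice.
    """
    aug_imgs, aug_masks = [images], [masks]
    with torch.no_grad():
        for g in style_generators:          # e.g. six generators, one per style image
            aug_imgs.append(g(images))      # slices with the target dataset's texture
            aug_masks.append(masks)         # annotations carry over unchanged
    return torch.cat(aug_imgs), torch.cat(aug_masks)


def train_segmenter(images, masks, style_generators, epochs=50, lr=1e-4):
    """Train a U-Net on the original plus stylized image-mask pairs."""
    x, y = augment_with_styles(images, masks, style_generators)
    loader = DataLoader(TensorDataset(x, y), batch_size=8, shuffle=True)
    model = TinyUNet()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb.long())
            loss.backward()
            opt.step()
    return model
```

In the paper the six generators are StyleGANs, each trained on one style image from the non-annotated target dataset; here they are treated as black-box callables mapping a batch of slices to re-textured slices.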
Related papers
- Pseudo Label-Guided Data Fusion and Output Consistency for
Semi-Supervised Medical Image Segmentation [9.93871075239635]
We propose the PLGDF framework, which builds upon the mean teacher network to segment medical images with fewer annotations.
We propose a novel pseudo-label utilization scheme, which combines labeled and unlabeled data to augment the dataset effectively.
Our framework yields superior performance compared to six state-of-the-art semi-supervised learning methods.
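For context, the snippet below is a generic mean-teacher setup of the kind this summary refers to: a student is trained with a supervised loss on labeled data plus a pseudo-label loss from an EMA teacher on unlabeled data. The specific pseudo-label fusion scheme of PLGDF is not reproduced; names such as `cons_w` are illustrative.

```python
# Generic mean-teacher pseudo-labeling step (illustrative; not the PLGDF code).
# The teacher is usually initialized as a deep copy of the student.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    """Teacher weights are an exponential moving average of the student's."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1 - decay)

def semi_supervised_step(student, teacher, opt, labeled, unlabeled, cons_w=0.1):
    (x_l, y_l), x_u = labeled, unlabeled
    sup_loss = F.cross_entropy(student(x_l), y_l)          # labeled supervision

    with torch.no_grad():
        pseudo = teacher(x_u).argmax(dim=1)                 # teacher pseudo-labels
    pseudo_loss = F.cross_entropy(student(x_u), pseudo)     # unlabeled supervision

    loss = sup_loss + cons_w * pseudo_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(teacher, student)
    return loss.item()
```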
arXiv Detail & Related papers (2023-11-17T06:36:43Z)
- DatasetDM: Synthesizing Data with Perception Annotations Using Diffusion
Models [61.906934570771256]
We present a generic dataset generation model that can produce diverse synthetic images and perception annotations.
Our method builds upon the pre-trained diffusion model and extends text-guided image synthesis to perception data generation.
We show that the rich latent code of the diffusion model can be effectively decoded as accurate perception annotations using a decoder module.
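A rough sketch of that idea: a lightweight decoder maps (pre-extracted) diffusion-model features to segmentation logits, so that synthesized images come with annotations. Feature extraction from an actual diffusion model is omitted; `feat_dim` and the decoder layout are assumptions, not the paper's architecture.

```python
# Illustrative decoder head over frozen diffusion features (not DatasetDM's code).
import torch
import torch.nn as nn

class PerceptionDecoder(nn.Module):
    """Maps diffusion U-Net feature maps to per-pixel class logits."""

    def __init__(self, feat_dim=512, num_classes=2):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Conv2d(feat_dim, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, num_classes, 1),
        )

    def forward(self, feats, out_size):
        # feats: [B, feat_dim, h, w] intermediate features of the diffusion
        # model, assumed to be extracted elsewhere while synthesizing an image.
        logits = self.decode(feats)
        return nn.functional.interpolate(
            logits, size=out_size, mode="bilinear", align_corners=False)

# Usage sketch: train the decoder on a few annotated images, then every
# synthesized image plus its decoded mask becomes a new training pair.
```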
arXiv Detail & Related papers (2023-08-11T14:38:11Z)
- Unsupervised Domain Adaptation for Medical Image Segmentation via
Feature-space Density Matching [0.0]
This paper presents an unsupervised domain adaptation approach for semantic segmentation.
We match the target data distribution to the source in the feature space, particularly when the number of target samples is limited.
We demonstrate the efficacy of our proposed approach on two datasets: multi-site prostate MRI and histopathology images.
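The sketch below illustrates the general idea of aligning target features to the source distribution in feature space; a simple first- and second-moment matching penalty is used here as a stand-in for the paper's density-matching objective, which is not reproduced.

```python
# Moment-matching stand-in for feature-space density matching (illustrative only).
import torch

def moment_matching_loss(feat_src, feat_tgt):
    """Penalize differences in mean and covariance of source/target features.

    feat_src, feat_tgt: [N, D] feature vectors from a shared encoder.
    """
    mu_s, mu_t = feat_src.mean(0), feat_tgt.mean(0)
    cov_s = (feat_src - mu_s).T @ (feat_src - mu_s) / max(feat_src.shape[0] - 1, 1)
    cov_t = (feat_tgt - mu_t).T @ (feat_tgt - mu_t) / max(feat_tgt.shape[0] - 1, 1)
    return (mu_s - mu_t).pow(2).sum() + (cov_s - cov_t).pow(2).sum()

# Training sketch: total_loss = seg_loss_on_source
#                  + lambda_align * moment_matching_loss(enc(x_src), enc(x_tgt))
```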
arXiv Detail & Related papers (2023-05-09T22:24:46Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
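A minimal version of the dual-task design mentioned above: one shared encoder feeds two independent decoders, one for segmentation and one for inpainting the masked-out lesion region. The layer sizes and loss weighting are assumptions, not the paper's model.

```python
# Minimal shared-encoder / dual-decoder layout (illustrative only).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class DualTaskNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(in_ch, 32), conv_block(32, 64))
        self.seg_decoder = nn.Sequential(conv_block(64, 32),
                                         nn.Conv2d(32, num_classes, 1))
        self.inpaint_decoder = nn.Sequential(conv_block(64, 32),
                                             nn.Conv2d(32, in_ch, 1))

    def forward(self, x):
        z = self.encoder(x)                      # shared representation
        return self.seg_decoder(z), self.inpaint_decoder(z)

# Training sketch: supervised Dice/CE on the segmentation head for labeled data,
# plus a reconstruction (inpainting) loss on images with the lesion region
# masked out, usable for both labeled and unlabeled data.
```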
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- A new dataset for measuring the performance of blood vessel segmentation methods under distribution shifts [0.0]
VessMAP is a heterogeneous blood vessel segmentation dataset acquired by carefully sampling relevant images from a larger non-annotated dataset.
A methodology was developed to select both prototypical and atypical samples from the base dataset.
To demonstrate the potential of the new dataset, we show that the validation performance of a neural network changes significantly depending on the splits used for training the network.
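One way to realize the prototypical/atypical selection idea, shown purely as a hypothetical illustration (the paper's actual selection methodology is not reproduced): embed each image as a feature vector and pick the samples closest to and farthest from the dataset centroid.

```python
# Hypothetical centroid-based selection of prototypical/atypical samples.
import numpy as np

def select_samples(features, k=5):
    """features: [N, D] array of per-image descriptors (e.g. texture statistics)."""
    centroid = features.mean(axis=0)
    dists = np.linalg.norm(features - centroid, axis=1)
    prototypical = np.argsort(dists)[:k]    # most representative images
    atypical = np.argsort(dists)[-k:]       # most unusual images
    return prototypical, atypical
```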
arXiv Detail & Related papers (2023-01-11T15:31:15Z)
- Unsupervised Domain Adaptation with Histogram-gated Image Translation
for Delayered IC Image Analysis [2.720699926154399]
Histogram-gated Image Translation (HGIT) is an unsupervised domain adaptation framework which transforms images from a given source dataset to the domain of a target dataset.
Our method outperforms the reported domain adaptation techniques and comes reasonably close to the fully supervised benchmark.
arXiv Detail & Related papers (2022-09-27T15:53:22Z)
- Self-Supervised Generative Style Transfer for One-Shot Medical Image
Segmentation [10.634870214944055]
In medical image segmentation, the success of supervised deep networks comes at the cost of requiring abundant labeled data.
We propose a novel volumetric self-supervised learning method for data augmentation that synthesizes volumetric image-segmentation pairs.
The central tenet of our work is a combined view of one-shot generative learning and the proposed self-supervised training strategy.
arXiv Detail & Related papers (2021-10-05T15:28:42Z)
- Personalized Image Semantic Segmentation [58.980245748434]
We generate more accurate segmentation results on unlabeled personalized images by investigating the data's personalized traits.
We propose a baseline method that incorporates the inter-image context when segmenting certain images.
The code and the PIS dataset will be made publicly available.
arXiv Detail & Related papers (2021-07-24T04:03:11Z)
- Learning from Partially Overlapping Labels: Image Segmentation under
Annotation Shift [68.6874404805223]
We propose several strategies for learning from partially overlapping labels in the context of abdominal organ segmentation.
We find that combining a semi-supervised approach with an adaptive cross entropy loss can successfully exploit heterogeneously annotated data.
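The sketch below shows one common way to adapt cross-entropy to partially overlapping label sets: a marginal cross-entropy that folds classes not annotated in a given dataset into the background. It is a stand-in for the adaptive loss referred to in this summary, not the authors' exact formulation.

```python
# Marginal cross-entropy for partially labeled data (illustrative stand-in).
import torch
import torch.nn.functional as F

def marginal_cross_entropy(logits, target, annotated_classes):
    """logits: [B, C, H, W]; target: [B, H, W] with 0 = background.

    Classes missing from `annotated_classes` (the class indices labeled in this
    dataset) may hide inside the target's background, so their predicted
    probability is merged into the background probability before the NLL.
    """
    probs = logits.softmax(dim=1)
    num_classes = logits.shape[1]
    unannotated = [c for c in range(1, num_classes) if c not in annotated_classes]

    merged = probs.clone()
    if unannotated:
        merged[:, 0] = merged[:, 0] + merged[:, unannotated].sum(dim=1)
        merged[:, unannotated] = 0.0
    log_p = torch.log(merged.clamp_min(1e-8))
    return F.nll_loss(log_p, target)
```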
arXiv Detail & Related papers (2021-07-13T09:22:24Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.