Retinal OCT Synthesis with Denoising Diffusion Probabilistic Models for
Layer Segmentation
- URL: http://arxiv.org/abs/2311.05479v2
- Date: Wed, 6 Mar 2024 19:33:33 GMT
- Title: Retinal OCT Synthesis with Denoising Diffusion Probabilistic Models for
Layer Segmentation
- Authors: Yuli Wu, Weidong He, Dennis Eschweiler, Ningxin Dou, Zixin Fan,
Shengli Mi, Peter Walter, Johannes Stegmaier
- Abstract summary: We propose an image synthesis method that utilizes denoising diffusion probabilistic models (DDPMs) to automatically generate retinal optical coherence tomography (OCT) images.
We observe a consistent improvement in layer segmentation accuracy, which is validated using various neural networks.
These findings demonstrate the promising potential of DDPMs in reducing the need for manual annotations of retinal OCT images.
- Score: 2.4113205575263708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern biomedical image analysis using deep learning often encounters the
challenge of limited annotated data. To overcome this issue, deep generative
models can be employed to synthesize realistic biomedical images. In this
regard, we propose an image synthesis method that utilizes denoising diffusion
probabilistic models (DDPMs) to automatically generate retinal optical
coherence tomography (OCT) images. By providing rough layer sketches, the
trained DDPMs can generate realistic circumpapillary OCT images. We further
find that more accurate pseudo labels can be obtained through knowledge
adaptation, which greatly benefits the segmentation task. Through this, we
observe a consistent improvement in layer segmentation accuracy, which is
validated using various neural networks. Furthermore, we have discovered that a
layer segmentation model trained solely with synthesized images can achieve
comparable results to a model trained exclusively with real images. These
findings demonstrate the promising potential of DDPMs in reducing the need for
manual annotations of retinal OCT images.
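As a rough illustration of the sketch-conditioned generation described above, the snippet below shows a standard DDPM reverse-sampling loop (Ho et al., 2020) in which a rough layer sketch is injected by channel concatenation. This is a minimal sketch only: the denoiser `eps_model`, the concatenation-based conditioning, and the noise schedule are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of sketch-conditioned DDPM sampling (illustrative only).
# The denoiser `eps_model`, the channel-concatenation conditioning, and all
# hyperparameters below are hypothetical, not the paper's implementation.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample_oct(eps_model, layer_sketch):
    """Generate one OCT B-scan conditioned on a rough layer sketch.

    eps_model    : network predicting the noise eps from (x_t ++ sketch, t)
    layer_sketch : tensor of shape (1, 1, H, W) with rough layer labels
    """
    x = torch.randn_like(layer_sketch)          # start from pure noise
    for t in reversed(range(T)):
        t_batch = torch.full((x.size(0),), t, dtype=torch.long)
        # Condition by concatenating the sketch as an extra input channel.
        eps = eps_model(torch.cat([x, layer_sketch], dim=1), t_batch)
        # Standard DDPM posterior mean.
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x                                    # synthesized OCT image
```

In such a setup, the denoiser would be trained to predict the added noise from pairs of noisy OCT images and their layer sketches, so that at sampling time the sketch steers where the layer boundaries appear.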
Related papers
- Neurovascular Segmentation in sOCT with Deep Learning and Synthetic Training Data [4.5276169699857505]
This study demonstrates a synthesis engine for neurovascular segmentation in serial-section optical coherence tomography images.
Our approach comprises two phases: label synthesis and label-to-image transformation.
We demonstrate the efficacy of the former by comparing it to several more realistic sets of training labels, and the latter by an ablation study of synthetic noise and artifact models.
arXiv Detail & Related papers (2024-07-01T16:09:07Z)
- Multi-Branch Generative Models for Multichannel Imaging with an Application to PET/CT Synergistic Reconstruction [42.95604565673447]
This paper presents a novel approach for learned synergistic reconstruction of medical images using multi-branch generative models.
We demonstrate the efficacy of our approach on both Modified National Institute of Standards and Technology (MNIST) and positron emission tomography (PET)/ computed tomography (CT) datasets.
arXiv Detail & Related papers (2024-04-12T18:21:08Z)
- Paired Diffusion: Generation of related, synthetic PET-CT-Segmentation scans using Linked Denoising Diffusion Probabilistic Models [0.0]
This research introduces a novel architecture that is able to generate multiple, related PET-CT-tumour mask pairs using paired networks and conditional encoders.
Our approach includes innovative, time-step-controlled mechanisms and a 'noise-seeding' strategy to improve DDPM sampling consistency.
arXiv Detail & Related papers (2024-03-26T14:21:49Z)
- Synthetic optical coherence tomography angiographs for detailed retinal vessel segmentation without human annotations [12.571349114534597]
We present a lightweight simulation of the retinal vascular network based on space colonization for faster and more realistic OCTA synthesis.
We demonstrate the superior segmentation performance of our approach in extensive quantitative and qualitative experiments on three public datasets.
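For orientation on the growth model named in the first sentence, below is a minimal, generic space-colonization sketch in the spirit of Runions et al.: attraction points pull nearby branch tips, tips grow toward the mean direction of their attractors, and reached attractors are removed. The 2D setting, parameter values, and function signature are illustrative assumptions, not the authors' OCTA simulation.

```python
# Minimal, generic space-colonization sketch (Runions et al. style);
# parameters and the 2D setting are illustrative assumptions, not the
# authors' retinal vasculature simulation.
import numpy as np

def space_colonization(attractors, root, step=0.02,
                       influence_r=0.15, kill_r=0.03, max_iter=200):
    """Grow a branching network toward a set of attraction points.

    attractors : (N, 2) array of points the vessels should reach
    root       : (2,) starting position of the network
    Returns a list of (parent_index, position) tree nodes.
    """
    nodes = [(-1, np.asarray(root, dtype=float))]   # (parent, position)
    attractors = np.asarray(attractors, dtype=float)

    for _ in range(max_iter):
        if len(attractors) == 0:
            break
        positions = np.stack([p for _, p in nodes])
        # Each attractor influences only its nearest node, if within range.
        d = np.linalg.norm(attractors[:, None, :] - positions[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        grow_dirs = {}
        for a_idx, n_idx in enumerate(nearest):
            if d[a_idx, n_idx] < influence_r:
                v = attractors[a_idx] - positions[n_idx]
                grow_dirs.setdefault(n_idx, []).append(v / (np.linalg.norm(v) + 1e-8))
        if not grow_dirs:
            break
        # Extend each influenced node one step toward its mean attraction direction.
        for n_idx, dirs in grow_dirs.items():
            mean_dir = np.mean(dirs, axis=0)
            mean_dir /= (np.linalg.norm(mean_dir) + 1e-8)
            nodes.append((n_idx, positions[n_idx] + step * mean_dir))
        # Remove attractors that have been reached (within kill_r of any node).
        positions = np.stack([p for _, p in nodes])
        dists = np.linalg.norm(attractors[:, None, :] - positions[None, :, :], axis=2)
        attractors = attractors[dists.min(axis=1) >= kill_r]
    return nodes
```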
arXiv Detail & Related papers (2023-06-19T14:01:47Z)
- CS$^2$: A Controllable and Simultaneous Synthesizer of Images and Annotations with Minimal Human Intervention [3.465671939864428]
We propose a novel controllable and simultaneous synthesizer (dubbed CS$^2$) to generate both realistic images and corresponding annotations at the same time.
Our contributions include 1) a conditional image synthesis network that receives both style information from reference CT images and structural information from unsupervised segmentation masks, and 2) a corresponding segmentation mask network to automatically segment these synthesized images simultaneously.
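A minimal structural sketch of the two components described above (style + structure in, synthesized image and per-pixel mask out) might look as follows; all module names, layer sizes, and the concatenation-based fusion are hypothetical and are not taken from the CS$^2$ paper.

```python
# Toy illustration of a style+structure synthesizer and a paired segmenter.
# Everything below is a hypothetical sketch, not the CS^2 architecture.
import torch
import torch.nn as nn

class ToySynthesizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.style_enc = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        self.struct_enc = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(8, 1, 3, padding=1))

    def forward(self, reference_ct, unsup_mask):
        # Fuse style (from a reference CT) and structure (from an unsupervised mask).
        feats = torch.cat([self.style_enc(reference_ct),
                           self.struct_enc(unsup_mask)], dim=1)
        return self.decoder(feats)              # synthesized image

class ToySegmenter(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, n_classes, 3, padding=1))

    def forward(self, image):
        return self.net(image)                  # per-pixel class logits

# Usage sketch: the synthesized image and its mask form a training pair.
# img = ToySynthesizer()(ct, mask); logits = ToySegmenter()(img)
```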
arXiv Detail & Related papers (2022-06-20T15:09:10Z)
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
However, no standardized datasets generated with different types of experimental set-ups and associated processing methods are available to facilitate advances in broader clinical applications of OA.
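For context, the signal-generation principle described above is commonly written as the photoacoustic wave equation under thermal and stress confinement (a textbook forward model, not stated in this summary):

\[
\left(\frac{\partial^2}{\partial t^2} - c^2 \nabla^2\right) p(\mathbf{r},t) \;=\; \Gamma\,\frac{\partial H(\mathbf{r},t)}{\partial t},
\]

where $p$ is the acoustic pressure, $c$ the speed of sound, $\Gamma$ the Grüneisen parameter, and $H$ the absorbed optical energy per unit volume and time (the heating function).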
arXiv Detail & Related papers (2022-06-17T08:11:26Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- Data-driven generation of plausible tissue geometries for realistic photoacoustic image synthesis [53.65837038435433]
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties.
We propose a novel approach to PAT data simulation, which we refer to as "learning to simulate".
We leverage the concept of Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data to generate plausible tissue geometries.
arXiv Detail & Related papers (2021-03-29T11:30:18Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.