UWAT-GAN: Fundus Fluorescein Angiography Synthesis via Ultra-wide-angle
Transformation Multi-scale GAN
- URL: http://arxiv.org/abs/2307.11530v1
- Date: Fri, 21 Jul 2023 12:23:39 GMT
- Title: UWAT-GAN: Fundus Fluorescein Angiography Synthesis via Ultra-wide-angle
Transformation Multi-scale GAN
- Authors: Zhaojie Fang, Zhanghao Chen, Pengxue Wei, Wangting Li, Shaochong
Zhang, Ahmed Elazab, Gangyong Jia, Ruiquan Ge, Changmiao Wang
- Abstract summary: Fundus photography is an essential examination for clinical and differential diagnosis of fundus diseases.
Current methods in fundus imaging cannot produce high-resolution images and are unable to capture tiny vascular lesion areas.
This paper proposes a novel conditional generative adversarial network (UWAT-GAN) to synthesize UWF-FA from UWF-SLO.
- Score: 1.165405976310311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fundus photography is an essential examination for clinical and differential
diagnosis of fundus diseases. Recently, Ultra-Wide-angle Fundus (UWF)
techniques, UWF Fluorescein Angiography (UWF-FA) and UWF Scanning Laser
Ophthalmoscopy (UWF-SLO) have been gradually put into use. However, Fluorescein
Angiography (FA) and UWF-FA require injecting sodium fluorescein, which may
have detrimental effects. To avoid these negative impacts, cross-modality
medical image generation algorithms have been proposed. Nevertheless, current
methods in fundus imaging cannot produce high-resolution images and are unable
to capture tiny vascular lesion areas. This paper proposes a novel conditional
generative adversarial network (UWAT-GAN) to synthesize UWF-FA from UWF-SLO.
Using multi-scale generators and a fusion module to better extract global and
local information, our model can generate high-resolution images. Moreover,
an attention transmit module is proposed to help the decoder learn effectively.
Besides, a supervised approach is used to train the network using multiple new
weighted losses on different scales of data. Experiments on an in-house UWF
image dataset demonstrate the superiority of the UWAT-GAN over the
state-of-the-art methods. The source code is available at:
https://github.com/Tinysqua/UWAT-GAN.
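The supervised objective described above (multiple weighted losses computed on different scales of data) can be illustrated with a minimal sketch. The average-pooling scheme, scale factors, and weights below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a 2D image by an integer factor (a simple multi-scale proxy)."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def multiscale_l1_loss(fake, real, scales=(1, 2, 4), weights=(1.0, 0.5, 0.25)):
    """Weighted sum of L1 reconstruction losses computed at several resolutions.

    Coarse scales constrain global structure; the full-resolution term
    penalizes fine detail such as tiny vessels.
    """
    total = 0.0
    for s, w in zip(scales, weights):
        total += w * np.abs(downsample(fake, s) - downsample(real, s)).mean()
    return total
```

In the paper this idea is combined with adversarial and feature-matching terms; the sketch shows only how per-scale reconstruction errors are weighted and summed.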
Related papers
- UWAFA-GAN: Ultra-Wide-Angle Fluorescein Angiography Transformation via Multi-scale Generation and Registration Enhancement [17.28459176559761]
UWF fluorescein angiography (UWF-FA) requires the administration of a fluorescent dye via injection into the patient's hand or elbow.
To mitigate potential adverse effects associated with injections, researchers have proposed the development of cross-modality medical image generation algorithms.
We introduce a novel conditional generative adversarial network (UWAFA-GAN) to synthesize UWF-FA from UWF-SLO.
arXiv Detail & Related papers (2024-05-01T14:27:43Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - Disruptive Autoencoders: Leveraging Low-level features for 3D Medical
Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z) - Deep learning network to correct axial and coronal eye motion in 3D OCT
retinal imaging [65.47834983591957]
We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental result shows that the proposed method can effectively correct motion artifacts and achieve smaller error than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z) - Denoising Diffusion Models for Plug-and-Play Image Restoration [135.6359475784627]
This paper proposes DiffPIR, which integrates the traditional plug-and-play method into the diffusion sampling framework.
Compared to plug-and-play IR methods that rely on discriminative Gaussian denoisers, DiffPIR is expected to inherit the generative ability of diffusion models.
arXiv Detail & Related papers (2023-05-15T20:24:38Z) - VTGAN: Semi-supervised Retinal Image Synthesis and Disease Prediction
using Vision Transformers [0.0]
In Fluorescein Angiography (FA), a dye is injected into the bloodstream to image the vascular structure of the retina.
Fundus imaging is a non-invasive technique used for photographing the retina but does not have sufficient fidelity for capturing its vascular structure.
We propose a novel conditional generative adversarial network (GAN) capable of simultaneously synthesizing FA images from fundus photographs while predicting retinal degeneration.
arXiv Detail & Related papers (2021-04-14T10:32:36Z) - Leveraging Regular Fundus Images for Training UWF Fundus Diagnosis
Models via Adversarial Learning and Pseudo-Labeling [29.009663623719064]
Ultra-widefield (UWF) 200-degree fundus imaging by Optos cameras has gradually been introduced.
Regular fundus images contain a large amount of high-quality and well-annotated data.
Due to the domain gap, models trained by regular fundus images to recognize UWF fundus images perform poorly.
We propose the use of a modified cycle generative adversarial network (CycleGAN) model to bridge the gap between regular and UWF fundus.
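The domain-gap bridging in a CycleGAN-style model rests on a cycle-consistency constraint: mapping a regular fundus image to the UWF domain and back should reconstruct the original. A minimal sketch of that loss, with the two generator mappings passed in as plain functions (purely illustrative, not the modified architecture the paper proposes):

```python
import numpy as np

def cycle_consistency_loss(G, F, x_regular, x_uwf, lam=10.0):
    """L1 cycle loss for a pair of mappings G: regular -> UWF and F: UWF -> regular.

    F(G(x)) should reconstruct x, and G(F(y)) should reconstruct y;
    lam is the usual weight relative to the adversarial terms.
    """
    forward = np.abs(F(G(x_regular)) - x_regular).mean()   # regular -> UWF -> regular
    backward = np.abs(G(F(x_uwf)) - x_uwf).mean()          # UWF -> regular -> UWF
    return lam * (forward + backward)
```

With perfectly inverse mappings the loss is zero; any mismatch between the two domains' round trips is penalized, which is what lets unpaired regular and UWF images supervise each other.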
arXiv Detail & Related papers (2020-11-27T16:25:30Z) - Attention2AngioGAN: Synthesizing Fluorescein Angiography from Retinal
Fundus Images using Generative Adversarial Networks [0.0]
Fluorescein Angiography (FA) is a technique that employs a designated fundus camera incorporating excitation and barrier filters.
FA also requires fluorescein dye injected intravenously, which might cause adverse effects ranging from nausea and vomiting to even fatal anaphylaxis.
We introduce an Attention-based Generative network that can synthesize Fluorescein Angiography from Fundus images.
arXiv Detail & Related papers (2020-07-17T18:58:44Z) - Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z) - Fundus2Angio: A Conditional GAN Architecture for Generating Fluorescein
Angiography Images from Retinal Fundus Photography [0.0]
There are no non-invasive systems capable of generating Fluorescein Angiography images.
Fundus photography is a non-invasive imaging technique that can be completed in a few seconds.
We propose a conditional generative adversarial network (GAN) to translate fundus images to FA images.
arXiv Detail & Related papers (2020-05-11T17:09:29Z) - Multifold Acceleration of Diffusion MRI via Slice-Interleaved Diffusion
Encoding (SIDE) [50.65891535040752]
We propose a diffusion encoding scheme, called Slice-Interleaved Diffusion Encoding (SIDE), that interleaves each diffusion-weighted (DW) image volume with slices encoded with different diffusion gradients.
We also present a method based on deep learning for effective reconstruction of DW images from the highly slice-undersampled data.
arXiv Detail & Related papers (2020-02-25T14:48:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.