Retinal Image Restoration using Transformer and Cycle-Consistent
Generative Adversarial Network
- URL: http://arxiv.org/abs/2303.01939v1
- Date: Fri, 3 Mar 2023 14:10:47 GMT
- Title: Retinal Image Restoration using Transformer and Cycle-Consistent
Generative Adversarial Network
- Authors: Alnur Alimanov and Md Baharul Islam
- Abstract summary: Medical imaging plays a significant role in detecting and treating various diseases.
We propose a retinal image enhancement method using a vision transformer and convolutional neural network.
- Score: 0.7868449549351486
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Medical imaging plays a significant role in detecting and treating various
diseases. However, these images are often of poor quality, which leads to decreased
efficiency, extra expenses, and even incorrect diagnoses.
Therefore, we propose a retinal image enhancement method using a vision
transformer and convolutional neural network. It builds a cycle-consistent
generative adversarial network that relies on unpaired datasets. It consists of
two generators that translate images from one domain to another (e.g., low- to
high-quality and vice versa), playing an adversarial game with two
discriminators. Each generator tries to produce images that the corresponding
discriminator cannot distinguish from real ones, while the discriminators try to tell
generated images apart from originals. Generators are a
combination of vision transformer (ViT) encoder and convolutional neural
network (CNN) decoder. Discriminators are built on traditional CNN encoders. The
restored images are evaluated quantitatively with peak signal-to-noise ratio (PSNR)
and the structural similarity index measure (SSIM), and qualitatively via vessel
segmentation. The proposed
method successfully reduces the adverse effects of blurring, noise,
illumination disturbances, and color distortions while significantly preserving
structural and color information. Experimental results show the superiority of
the proposed method. Our testing PSNR is 31.138 dB for the first and 27.798 dB
for the second dataset. Testing SSIM is 0.919 and 0.904, respectively.
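
Since the abstract describes the generator only at a high level, the following is a minimal PyTorch sketch of a ViT-encoder/CNN-decoder generator of the kind described. All architectural details here (patch size, embedding dimension, transformer depth, channel widths) are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch (assumed hyperparameters, not the authors' released code) of a
# generator built from a vision-transformer encoder and a convolutional decoder.
# Two such generators plus two CNN discriminators would form the cycle-consistent GAN.
import math
import torch
import torch.nn as nn


class ViTEncoder(nn.Module):
    """Split the fundus image into patches and encode them with transformer blocks."""

    def __init__(self, img_size=256, patch_size=16, embed_dim=256, depth=4, heads=8):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, heads,
                                           dim_feedforward=4 * embed_dim, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        feat = self.patch_embed(x)                          # (B, C, H/P, W/P)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)            # (B, N, C)
        tokens = self.transformer(tokens + self.pos_embed)
        return tokens.transpose(1, 2).reshape(b, c, h, w)   # back to a feature map


class CNNDecoder(nn.Module):
    """Upsample the transformer features back to a full-resolution RGB image."""

    def __init__(self, embed_dim=256, patch_size=16):
        super().__init__()
        layers, ch = [], embed_dim
        for _ in range(int(math.log2(patch_size))):         # e.g. 16x upsampling = 4 stages
            layers += [nn.ConvTranspose2d(ch, ch // 2, kernel_size=4, stride=2, padding=1),
                       nn.InstanceNorm2d(ch // 2), nn.ReLU(inplace=True)]
            ch //= 2
        layers += [nn.Conv2d(ch, 3, kernel_size=3, padding=1), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, feat):
        return self.net(feat)


class Generator(nn.Module):
    """ViT encoder + CNN decoder; one maps low->high quality, the other high->low."""

    def __init__(self):
        super().__init__()
        self.encoder, self.decoder = ViTEncoder(), CNNDecoder()

    def forward(self, x):
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    g_low2high = Generator()
    degraded = torch.rand(1, 3, 256, 256)   # dummy low-quality fundus image
    restored = g_low2high(degraded)
    print(restored.shape)                   # torch.Size([1, 3, 256, 256])
```

As a usage illustration, the adversarial game and cycle-consistency mentioned in the abstract could look roughly like the step below, reusing the `Generator` class and imports from the sketch above. The toy PatchGAN-style discriminator, BCE adversarial loss, L1 cycle loss, and cycle weight of 10 are all assumptions, not values taken from the paper.

```python
# Illustrative generator-side training step for a CycleGAN (assumed losses/weights).
g_lq2hq, g_hq2lq = Generator(), Generator()          # low->high and high->low quality
d_hq = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
                     nn.Conv2d(64, 1, 4, 2, 1))      # toy CNN discriminator (patch logits)
l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()

low_q = torch.rand(1, 3, 256, 256)                   # unpaired low-quality image
fake_hq = g_lq2hq(low_q)                             # translate to the high-quality domain
pred_fake = d_hq(fake_hq)
adv_loss = bce(pred_fake, torch.ones_like(pred_fake))    # try to fool the discriminator
cycle_loss = l1(g_hq2lq(fake_hq), low_q)                 # cycle consistency via back-translation
total_g_loss = adv_loss + 10.0 * cycle_loss              # assumed cycle weight
total_g_loss.backward()
```

At evaluation time, restored images could be compared against references with standard implementations such as `skimage.metrics.peak_signal_noise_ratio` and `skimage.metrics.structural_similarity`, corresponding to the PSNR and SSIM figures reported above.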
Related papers
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, transforming AI-generated images into high-quality adversarial forgeries.
arXiv Detail & Related papers (2024-08-11T01:22:29Z) - GAN-driven Electromagnetic Imaging of 2-D Dielectric Scatterers [4.510838705378781]
Inverse scattering problems are inherently challenging because they are ill-posed and nonlinear.
This paper presents a powerful deep learning-based approach that relies on generative adversarial networks.
A cohesive inverse neural network (INN) framework is set up comprising a sequence of appropriately designed dense layers.
The trained INN demonstrates enhanced robustness, evidenced by a mean binary cross-entropy (BCE) loss of $0.13$ and a structure similarity index (SSI) of $0.90$.
arXiv Detail & Related papers (2024-02-16T17:03:08Z) - DopUS-Net: Quality-Aware Robotic Ultrasound Imaging based on Doppler
Signal [48.97719097435527]
DopUS-Net combines the Doppler images with B-mode images to increase the segmentation accuracy and robustness of small blood vessels.
An artery re-identification module qualitatively evaluates the real-time segmentation results and automatically optimizes the probe pose for enhanced Doppler images.
arXiv Detail & Related papers (2023-05-15T18:19:29Z) - What can we learn about a generated image corrupting its latent
representation? [57.1841740328509]
We investigate the hypothesis that we can predict image quality based on its latent representation in the GAN's bottleneck.
We achieve this by corrupting the latent representation with noise and generating multiple outputs.
arXiv Detail & Related papers (2022-10-12T14:40:32Z) - Retinal Image Restoration and Vessel Segmentation using Modified
Cycle-CBAM and CBAM-UNet [0.7868449549351486]
A cycle-consistent generative adversarial network (CycleGAN) with a convolution block attention module (CBAM) is used for retinal image restoration.
A modified UNet is used for retinal vessel segmentation for the restored retinal images.
The proposed method can significantly reduce the degradation effects caused by out-of-focus blurring, color distortion, low, high, and uneven illumination.
arXiv Detail & Related papers (2022-09-09T10:47:20Z) - Unsupervised Denoising of Optical Coherence Tomography Images with
Dual_Merged CycleWGAN [3.3909577600092122]
We propose a new cycle-consistent generative adversarial network, called Dual-Merged Cycle-WGAN, for retinal OCT image denoising.
Our model consists of two Cycle-GAN networks with an improved generator, discriminator, and Wasserstein loss to achieve good training stability and better performance.
arXiv Detail & Related papers (2022-05-02T07:38:19Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for
Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z) - Blind microscopy image denoising with a deep residual and multiscale
encoder/decoder network [0.0]
A deep multiscale convolutional encoder-decoder neural network is proposed.
The proposed model reaches on average 38.38 of PSNR and 0.98 of SSIM on a test set of 57458 images.
arXiv Detail & Related papers (2021-05-01T14:54:57Z) - Lesion Conditional Image Generation for Improved Segmentation of
Intracranial Hemorrhage from CT Images [0.0]
We present a lesion conditional Generative Adversarial Network (LcGAN) to generate synthetic Computed Tomography (CT) images for data augmentation.
A lesion conditional image (segmented mask) is an input to both the generator and the discriminator of the LcGAN during training.
We quantify the quality of the images by using a fully convolutional network (FCN) score and blurriness.
arXiv Detail & Related papers (2020-03-30T23:32:54Z) - Blur, Noise, and Compression Robust Generative Adversarial Networks [85.68632778835253]
We propose blur, noise, and compression robust GAN (BNCR-GAN) to learn a clean image generator directly from degraded images.
Inspired by NR-GAN, BNCR-GAN uses a multiple-generator model composed of image, blur-kernel, noise, and quality-factor generators.
We demonstrate the effectiveness of BNCR-GAN through large-scale comparative studies on CIFAR-10 and a generality analysis on FFHQ.
arXiv Detail & Related papers (2020-03-17T17:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.