RFormer: Transformer-based Generative Adversarial Network for Real
Fundus Image Restoration on A New Clinical Benchmark
- URL: http://arxiv.org/abs/2201.00466v1
- Date: Mon, 3 Jan 2022 03:56:58 GMT
- Title: RFormer: Transformer-based Generative Adversarial Network for Real
Fundus Image Restoration on A New Clinical Benchmark
- Authors: Zhuo Deng, Yuanhao Cai, Lu Chen, Zheng Gong, Qiqi Bao, Xue Yao, Dong
Fang, Shaochong Zhang, Lan Ma
- Abstract summary: Ophthalmologists have used fundus images to screen and diagnose eye diseases.
Low-quality (LQ) degraded fundus images easily lead to uncertainty in clinical screening and generally increase the risk of misdiagnosis.
We propose a novel Transformer-based Generative Adversarial Network (RFormer) to restore the real degradation of clinical fundus images.
- Score: 8.109057397954537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ophthalmologists have used fundus images to screen and diagnose eye diseases.
However, differences in imaging equipment and in the ophthalmologists operating it
introduce large variations in fundus image quality. Low-quality (LQ), degraded fundus images easily lead
to uncertainty in clinical screening and generally increase the risk of
misdiagnosis. Thus, real fundus image restoration is worth studying.
Unfortunately, no real clinical benchmark has been established for this task so
far. In this paper, we investigate the real clinical fundus image restoration
problem. First, we establish a clinical dataset, Real Fundus (RF), comprising
120 low- and high-quality (HQ) image pairs. Then we propose a novel
Transformer-based Generative Adversarial Network (RFormer) to restore the real
degradation of clinical fundus images. The key component in our network is the
Window-based Self-Attention Block (WSAB), which captures non-local
self-similarity and long-range dependencies. To produce more visually pleasing
results, a Transformer-based discriminator is introduced. Extensive experiments
on our clinical benchmark show that the proposed RFormer significantly
outperforms the state-of-the-art (SOTA) methods. In addition, experiments on
downstream tasks such as vessel segmentation and optic disc/cup detection
demonstrate that our proposed RFormer benefits clinical fundus image analysis
and applications. The dataset, code, and models will be released.
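For illustration, the key component named above, window-based self-attention, can be sketched in a few lines of PyTorch. This is a generic Swin-style window attention layer written purely as an illustration; the window size, layer dimensions, and the feed-forward part are assumptions, not the authors' released RFormer code.
```python
# Minimal sketch of window-based multi-head self-attention, the idea behind
# RFormer's Window-based Self-Attention Block (WSAB). Illustrative only.
import torch
import torch.nn as nn


class WindowSelfAttention(nn.Module):
    def __init__(self, dim: int, window_size: int = 8, num_heads: int = 4):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map; H and W must be divisible by window_size.
        b, c, h, w = x.shape
        ws = self.window_size
        # Partition the feature map into non-overlapping ws x ws windows and
        # flatten each window into a token sequence of length ws*ws.
        x = x.view(b, c, h // ws, ws, w // ws, ws)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, c)
        # Self-attention is computed inside each window, which keeps the cost
        # manageable while still modelling non-local self-similarity and
        # long-range dependencies within each window.
        y = self.norm1(x)
        y, _ = self.attn(y, y, y)
        x = x + y
        x = x + self.mlp(self.norm2(x))
        # Reverse the window partition back to (B, C, H, W).
        x = x.view(b, h // ws, w // ws, ws, ws, c)
        x = x.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)
        return x


# Example: apply the block to features of a 64x64 crop; the shape is preserved.
block = WindowSelfAttention(dim=64, window_size=8, num_heads=4)
features = torch.randn(1, 64, 64, 64)
out = block(features)   # (1, 64, 64, 64)
```
Restricting attention to local windows is what makes a Transformer affordable on full-resolution fundus images; the same layer can be used in both the generator and a Transformer-based discriminator.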
Related papers
- Step-Calibrated Diffusion for Biomedical Optical Image Restoration [47.191704042917394]
Restorative Step-Calibrated Diffusion (RSCD) is an unpaired image restoration method.
RSCD views the image restoration problem as completing the finishing steps of a diffusion-based image generation task.
RSCD outperforms other widely used unpaired image restoration methods on both image quality and perceptual evaluation metrics.
arXiv Detail & Related papers (2024-03-20T15:38:53Z) - Improving Classification of Retinal Fundus Image Using Flow Dynamics
Optimized Deep Learning Methods [0.0]
Diabetic Retinopathy (DR) is a complication of diabetes mellitus that damages the blood vessel network of the retina.
Diagnosing DR from color fundus photographs can take some time because experienced clinicians are required to identify the lesions in the imagery that indicate the illness.
arXiv Detail & Related papers (2023-04-29T16:11:34Z) - Cross-modulated Few-shot Image Generation for Colorectal Tissue
Classification [58.147396879490124]
Our few-shot generation method, named XM-GAN, takes one base and a pair of reference tissue images as input and generates high-quality yet diverse images.
To the best of our knowledge, we are the first to investigate few-shot generation in colorectal tissue images.
arXiv Detail & Related papers (2023-04-04T17:50:30Z) - Retinal Image Restoration and Vessel Segmentation using Modified
Cycle-CBAM and CBAM-UNet [0.7868449549351486]
A cycle-consistent generative adversarial network (CycleGAN) with a convolution block attention module (CBAM) is used for retinal image restoration.
A modified UNet is used for retinal vessel segmentation for the restored retinal images.
The proposed method can significantly reduce the degradation effects caused by out-of-focus blurring, color distortion, low, high, and uneven illumination.
arXiv Detail & Related papers (2022-09-09T10:47:20Z) - Automated SSIM Regression for Detection and Quantification of Motion
Artefacts in Brain MR Images [54.739076152240024]
Motion artefacts in magnetic resonance brain images are a crucial issue.
The assessment of MR image quality is fundamental before proceeding with the clinical diagnosis.
An automated image quality assessment based on structural similarity index (SSIM) regression is proposed here.
arXiv Detail & Related papers (2022-06-14T10:16:54Z) - Structure-consistent Restoration Network for Cataract Fundus Image
Enhancement [33.000927682799016]
Fundus photography is a routine examination in clinics to diagnose and monitor ocular diseases.
For cataract patients, the fundus image always suffers quality degradation caused by the clouding lens.
To improve the certainty in clinical diagnosis, restoration algorithms have been proposed to enhance the quality of fundus images.
arXiv Detail & Related papers (2022-06-09T02:32:33Z) - Malignancy Prediction and Lesion Identification from Clinical
Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
The method first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates their likelihood of malignancy, and, through aggregation, also generates an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z) - Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z) - Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and
Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with low birth weight.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture that combines object segmentation and convolutional neural networks (CNNs).
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel in
arXiv Detail & Related papers (2020-04-03T14:07:41Z)