An Annotation-free Restoration Network for Cataractous Fundus Images
- URL: http://arxiv.org/abs/2203.07737v1
- Date: Tue, 15 Mar 2022 09:11:48 GMT
- Title: An Annotation-free Restoration Network for Cataractous Fundus Images
- Authors: Heng Li, Haofeng Liu, Yan Hu, Huazhu Fu, Yitian Zhao, Hanpei Miao,
Jiang Liu
- Abstract summary: Restoration algorithms are developed to improve the readability of cataract fundus images.
The requirement of annotation limits the application of these algorithms in clinics.
This paper proposes ArcNet, a network that restores cataractous fundus images without annotations.
- Score: 33.05266438479094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cataracts are the leading cause of vision loss worldwide. Restoration
algorithms are developed to improve the readability of cataract fundus images
in order to increase the certainty in diagnosis and treatment for cataract
patients. Unfortunately, the requirement of annotation limits the application
of these algorithms in clinics. This paper proposes ArcNet, a network that
restores cataractous fundus images without annotations, so as to boost the
clinical practicability of restoration. Instead of relying on annotated
segmentations, ArcNet extracts the high-frequency component of fundus images
to preserve retinal structures. The restoration model
is learned from the synthesized images and adapted to real cataract images.
Extensive experiments verify the performance and effectiveness of ArcNet.
ArcNet achieves favorable performance against state-of-the-art algorithms and
improves the diagnosis of ocular fundus diseases in cataract patients. Its
capability of properly restoring cataractous images in the absence of
annotated data gives the proposed algorithm outstanding clinical
practicability.
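The abstract's idea of replacing segmentation with a high-frequency component can be illustrated with a minimal sketch: subtract a low-pass-filtered copy of the image so that fine structures such as vessels survive in the residual. The paper does not specify its filter, so the box-blur low-pass and kernel size below are assumptions, not ArcNet's actual implementation.

```python
import numpy as np

def high_frequency_component(image: np.ndarray, k: int = 9) -> np.ndarray:
    """Return image minus a box-blurred (low-frequency) copy of itself.

    Fine structures (e.g. retinal vessels) remain in the residual, while
    slowly varying illumination and haze are largely removed.
    """
    pad = k // 2
    padded = np.pad(image.astype(np.float64), pad, mode="reflect")
    low = np.zeros_like(image, dtype=np.float64)
    # Accumulate the k*k neighborhood sums, then average: a simple box blur.
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    low /= k * k
    return image - low

# Toy usage: a synthetic image with a thin, vessel-like bright stripe.
img = np.zeros((64, 64), dtype=np.float64)
img[:, 30:32] = 1.0  # high-frequency structure
hf = high_frequency_component(img)  # stripe is preserved, background stays flat
```

A Gaussian low-pass filter would serve the same role; the box blur is used here only to keep the sketch dependency-free.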
Related papers
- Structure-consistent Restoration Network for Cataract Fundus Image
Enhancement [33.000927682799016]
Fundus photography is a routine examination in clinics to diagnose and monitor ocular diseases.
For cataract patients, the fundus image always suffers quality degradation caused by the clouding lens.
To improve the certainty in clinical diagnosis, restoration algorithms have been proposed to enhance the quality of fundus images.
arXiv Detail & Related papers (2022-06-09T02:32:33Z)
- RFormer: Transformer-based Generative Adversarial Network for Real
Fundus Image Restoration on A New Clinical Benchmark [8.109057397954537]
Ophthalmologists have used fundus images to screen and diagnose eye diseases.
Low-quality (LQ) degraded fundus images easily lead to uncertainty in clinical screening and generally increase the risk of misdiagnosis.
We propose a novel Transformer-based Generative Adversarial Network (RFormer) to restore the real degradation of clinical fundus images.
arXiv Detail & Related papers (2022-01-03T03:56:58Z)
- Artifact Reduction in Fundus Imaging using Cycle Consistent Adversarial
Neural Networks [0.0]
Deep learning is a powerful tool to extract patterns from data without much human intervention.
We attempt to automatically rectify artifacts present in fundus images.
We use a CycleGAN based model which consists of residual blocks to reduce the artifacts in the images.
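CycleGAN-style models are trained with, among other terms, a cycle-consistency loss: an image translated to the other domain and back should reproduce the input. A minimal sketch of that loss follows; the one-line "generators" are toy stand-ins for the residual-block networks the paper actually uses.

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 distance between x and its round-trip reconstruction g_ba(g_ab(x))."""
    return float(np.mean(np.abs(g_ba(g_ab(x)) - x)))

# Toy usage: two mappings that are (nearly) inverses of each other.
x = np.linspace(0.0, 1.0, 16).reshape(4, 4)
g_ab = lambda img: img + 0.1  # stand-in for the A->B generator
g_ba = lambda img: img - 0.1  # stand-in for the B->A generator
loss = cycle_consistency_loss(x, g_ab, g_ba)  # near zero for a good round trip
```

In the full model this term is minimized jointly with adversarial losses in both translation directions.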
arXiv Detail & Related papers (2021-12-25T18:05:48Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image
Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- MTCD: Cataract Detection via Near Infrared Eye Images [69.62768493464053]
Cataract is a common eye disease and one of the leading causes of blindness and vision impairment.
We present a novel algorithm for cataract detection using near-infrared eye images.
Deep learning-based eye segmentation and multitask classification networks are presented.
arXiv Detail & Related papers (2021-10-06T08:10:28Z)
- NuI-Go: Recursive Non-Local Encoder-Decoder Network for Retinal Image
Non-Uniform Illumination Removal [96.12120000492962]
The quality of retinal images is often clinically unsatisfactory due to eye lesions and imperfect imaging process.
One of the most challenging quality degradation issues in retinal images is non-uniform illumination.
We propose a non-uniform illumination removal network for retinal images, called NuI-Go.
arXiv Detail & Related papers (2020-08-07T04:31:33Z)
- Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
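A common way to encourage sparse hidden representations is an L1 penalty on activations. Note this is only an illustrative analogue: the paper enforces sparsity structurally rather than through a loss term, and the names and weight below are assumptions.

```python
import numpy as np

def sparse_penalty(activations: np.ndarray, lam: float = 1e-3) -> float:
    """L1 regularizer on hidden activations, averaged over all units.

    Adding this to the task loss pushes many activations toward zero,
    yielding a sparse code over the hidden neurons.
    """
    return lam * float(np.mean(np.abs(activations)))

# Toy usage: ReLU-like activations from a random hidden layer.
rng = np.random.default_rng(0)
hidden = np.maximum(rng.normal(size=(8, 128)), 0.0)
reg = sparse_penalty(hidden)  # would be added to the restoration loss
```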
arXiv Detail & Related papers (2020-06-08T05:15:17Z)
- Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
- Dense Residual Network for Retinal Vessel Segmentation [8.778525346264466]
We propose an efficient method to segment blood vessels in Scanning Laser Ophthalmoscopy retinal images.
Inspired by U-Net, "feature map reuse" and residual learning, we propose a deep dense residual network structure called DRNet.
Our method achieves the state-of-the-art performance even without data augmentation.
arXiv Detail & Related papers (2020-04-07T20:42:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.