Optimal Transport Guided Unsupervised Learning for Enhancing low-quality
Retinal Images
- URL: http://arxiv.org/abs/2302.02991v1
- Date: Mon, 6 Feb 2023 18:29:30 GMT
- Title: Optimal Transport Guided Unsupervised Learning for Enhancing low-quality
Retinal Images
- Authors: Wenhui Zhu, Peijie Qiu, Mohammad Farazi, Keshav Nandakumar, Oana M.
Dumitrascu, Yalin Wang
- Abstract summary: Real-world non-mydriatic retinal fundus photography is prone to artifacts, imperfections and low quality.
We propose a simple but effective end-to-end framework for enhancing poor-quality retinal fundus images.
- Score: 5.4240246179935845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-world non-mydriatic retinal fundus photography is prone to artifacts,
imperfections, and low quality when certain ocular or systemic co-morbidities
exist. Artifacts may result in inaccuracy or ambiguity in clinical diagnoses.
In this paper, we proposed a simple but effective end-to-end framework for
enhancing poor-quality retinal fundus images. Leveraging the optimal transport
theory, we proposed an unpaired image-to-image translation scheme for
transporting low-quality images to their high-quality counterparts. We
theoretically proved that a Generative Adversarial Network (GAN) model with a
generator and discriminator is sufficient for this task. Furthermore, to
mitigate the inconsistency of information between the low-quality images and
their enhancements, an information consistency mechanism was proposed to
maximally maintain structural consistency (optic discs, blood vessels,
lesions) between the source and enhanced domains. Extensive experiments were
conducted on the EyeQ dataset to demonstrate the superiority of our proposed
method perceptually and quantitatively.
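Read as a training recipe, the abstract describes a single generator that transports low-quality images toward the high-quality domain, a single discriminator that supplies the adversarial signal, and an information-consistency term that keeps retinal structures unchanged. The sketch below illustrates that kind of objective; it is a minimal illustration under assumptions, not the authors' released code: the tiny network definitions, the binary cross-entropy adversarial loss, the L1 stand-in for the information-consistency mechanism, and the weight `lam_ic` are all placeholders.
```python
# Minimal sketch of a one-generator / one-discriminator enhancement objective
# with an added consistency term, as the abstract describes. Architectures,
# losses, and the weight `lam_ic` are illustrative assumptions, not the paper's
# actual implementation. Images are assumed normalized to [-1, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Placeholder enhancement generator (a real model would be U-Net-like)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Placeholder critic scoring how 'high-quality' an image looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x)

G, D = TinyGenerator(), TinyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
lam_ic = 10.0  # assumed weight on the consistency term

def train_step(low_q, high_q):
    """One unpaired step: low_q and high_q are batches from different image sets."""
    # Discriminator: separate real high-quality images from enhanced outputs.
    fake = G(low_q).detach()
    real_logits, fake_logits = D(high_q), D(fake)
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator while staying consistent with the input
    # (L1 here stands in for the paper's information-consistency mechanism).
    enhanced = G(low_q)
    logits = D(enhanced)
    g_loss = (F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
              + lam_ic * F.l1_loss(enhanced, low_q))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```
In practice such a step would be iterated over unpaired batches drawn separately from the low-quality and high-quality splits of a dataset such as EyeQ.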
Related papers
- Context-Aware Optimal Transport Learning for Retinal Fundus Image Enhancement [1.8339026473337505]
This paper proposes a context-informed optimal transport (OT) learning framework for tackling unpaired fundus image enhancement.
We derive the proposed context-aware OT using the earth mover's distance and show that the proposed context-aware OT has a solid theoretical guarantee (a generic numerical sketch of the earth mover's distance appears after this list).
Experimental results on a large-scale dataset demonstrate the superiority of the proposed method over several state-of-the-art supervised and unsupervised methods.
arXiv Detail & Related papers (2024-09-12T09:14:37Z) - StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, transforming AI-generated images into high-quality adversarial forgeries.
arXiv Detail & Related papers (2024-08-11T01:22:29Z) - Bridging Synthetic and Real Images: a Transferable and Multiple
Consistency aided Fundus Image Enhancement Framework [61.74188977009786]
We propose an end-to-end optimized teacher-student framework to simultaneously conduct image enhancement and domain adaptation.
We also propose a novel multi-stage multi-attention guided enhancement network (MAGE-Net) as the backbone of our teacher and student networks.
arXiv Detail & Related papers (2023-02-23T06:16:15Z) - OTRE: Where Optimal Transport Guided Unpaired Image-to-Image Translation
Meets Regularization by Enhancing [4.951748109810726]
Optimal retinal image quality is mandated for accurate medical diagnoses and automated analyses.
We propose an unpaired image-to-image translation scheme for mapping low-quality retinal CFPs to high-quality counterparts.
We validated the integrated framework, OTRE, on three publicly available retinal image datasets.
arXiv Detail & Related papers (2023-02-06T18:39:40Z) - Self-supervised Domain Adaptation for Breaking the Limits of Low-quality
Fundus Image Quality Enhancement [14.677912534121273]
Low-quality fundus images and style inconsistency potentially increase uncertainty in the diagnosis of fundus disease.
We formulate two self-supervised domain adaptation tasks to disentangle the features of image content, low-quality factor and style information.
Our DASQE method achieves new state-of-the-art performance when only low-quality images are available.
arXiv Detail & Related papers (2023-01-17T15:07:20Z) - Retinal Image Restoration and Vessel Segmentation using Modified
Cycle-CBAM and CBAM-UNet [0.7868449549351486]
A cycle-consistent generative adversarial network (CycleGAN) with a convolution block attention module (CBAM) is used for retinal image restoration.
A modified UNet is used for retinal vessel segmentation for the restored retinal images.
The proposed method can significantly reduce the degradation effects caused by out-of-focus blurring, color distortion, low, high, and uneven illumination.
arXiv Detail & Related papers (2022-09-09T10:47:20Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image
Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z) - Malignancy Prediction and Lesion Identification from Clinical
Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
The method first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates the likelihood of malignancy of each lesion, and, through aggregation, also generates an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z) - Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
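Both the abstract above and the context-aware OT entry in the related list build on the earth mover's (optimal transport) distance. The self-contained sketch below shows that quantity on toy data via entropically regularized Sinkhorn iterations; it is a generic numerical illustration, not either paper's formulation, and the sample sizes, the Euclidean ground cost, and the regularization strength `eps` are arbitrary demo choices.
```python
# Generic Sinkhorn illustration of the earth mover's (optimal transport) distance
# between two toy point clouds. Not tied to any paper above; all parameters are
# arbitrary demo choices.
import numpy as np

def sinkhorn(a, b, cost, eps=0.1, n_iters=500):
    """Entropically regularized OT plan between histograms a and b under `cost`."""
    K = np.exp(-cost / eps)             # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):            # alternate scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]  # rows sum to a, columns sum to b
    return plan, float(np.sum(plan * cost))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(16, 2))   # toy "low-quality" feature samples
y = rng.normal(2.0, 1.0, size=(16, 2))   # toy "high-quality" feature samples
cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)  # pairwise ground cost
a = np.full(16, 1.0 / 16)                # uniform source weights
b = np.full(16, 1.0 / 16)                # uniform target weights
plan, approx_emd = sinkhorn(a, b, cost)
print(f"approximate earth mover's distance: {approx_emd:.3f}")
```
In the unpaired enhancement setting this cost is not solved explicitly over images; roughly speaking, the adversarial generator-discriminator training acts as a tractable surrogate for moving the low-quality image distribution toward the high-quality one.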
This list is automatically generated from the titles and abstracts of the papers in this site.