Self-supervised Domain Adaptation for Breaking the Limits of Low-quality
Fundus Image Quality Enhancement
- URL: http://arxiv.org/abs/2301.06943v1
- Date: Tue, 17 Jan 2023 15:07:20 GMT
- Title: Self-supervised Domain Adaptation for Breaking the Limits of Low-quality
Fundus Image Quality Enhancement
- Authors: Qingshan Hou, Peng Cao, Jiaqi Wang, Xiaoli Liu, Jinzhu Yang, Osmar R.
Zaiane
- Abstract summary: Low-quality fundus images and style inconsistency potentially increase uncertainty in the diagnosis of fundus disease.
We formulate two self-supervised domain adaptation tasks to disentangle the features of image content, low-quality factor and style information.
Our DASQE method achieves new state-of-the-art performance when only low-quality images are available.
- Score: 14.677912534121273
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Retinal fundus images have been applied for the diagnosis and screening of
eye diseases, such as Diabetic Retinopathy (DR) or Diabetic Macular Edema
(DME). However, both low-quality fundus images and style inconsistency
potentially increase uncertainty in the diagnosis of fundus disease and even
lead to misdiagnosis by ophthalmologists. Most existing image enhancement
methods focus on improving image quality by leveraging the guidance of
high-quality images, which are difficult to collect in medical applications. In
this paper, we tackle image quality enhancement in a fully unsupervised
setting, i.e., with neither paired images nor high-quality reference images. To
this end, we explore the potential of self-supervised tasks for improving the
quality of fundus images without the requirement of high-quality
reference images. Specifically, we construct multiple patch-wise domains via an
auxiliary pre-trained quality assessment network and a style clustering. To
achieve robust low-quality image enhancement and address style inconsistency,
we formulate two self-supervised domain adaptation tasks to disentangle the
features of image content, low-quality factor and style information by
exploring intrinsic supervision signals within the low-quality images.
Extensive experiments are conducted on EyeQ and Messidor datasets, and results
show that our DASQE method achieves new state-of-the-art performance when only
low-quality images are available.
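The patch-wise domain construction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `quality_score` stub (local contrast) stands in for the pre-trained quality assessment network, and a tiny k-means on per-patch color statistics stands in for the style clustering; both are assumptions.

```python
import numpy as np

def extract_patches(image, patch=32):
    """Split an HxWx3 image into non-overlapping patch x patch tiles."""
    h, w, _ = image.shape
    return [image[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]

def quality_score(p):
    """Stand-in for the pre-trained quality network: here, higher
    local contrast is treated as higher quality (an assumption)."""
    return float(p.std())

def style_features(p):
    """Per-patch style statistics: channel-wise mean and std."""
    return np.concatenate([p.mean(axis=(0, 1)), p.std(axis=(0, 1))])

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means, standing in for the style clustering step."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

def build_patch_domains(image, k_styles=2, q_thresh=20.0):
    """Assign each patch a (quality, style) domain label."""
    patches = extract_patches(image)
    X = np.stack([style_features(p) for p in patches])
    styles = kmeans(X, k_styles)
    return [("high" if quality_score(p) >= q_thresh else "low", int(s))
            for p, s in zip(patches, styles)]

rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(64, 64, 3)).astype(np.float32)
domains = build_patch_domains(img)
print(len(domains))  # 4 patches for a 64x64 image with 32x32 tiles
```

Each patch thus receives a joint quality/style domain label, which is the prerequisite for the cross-domain adaptation tasks the paper then formulates.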
Related papers
- Dual-Branch Network for Portrait Image Quality Assessment [76.27716058987251]
We introduce a dual-branch network for portrait image quality assessment (PIQA).
We utilize two backbone networks (i.e., Swin Transformer-B) to extract quality-aware features from the entire portrait image and the facial image cropped from it.
We leverage LIQE, an image scene classification and quality assessment model, to capture the quality-aware and scene-specific features as the auxiliary features.
arXiv Detail & Related papers (2024-05-14T12:43:43Z)
- Helping Visually Impaired People Take Better Quality Pictures [52.03016269364854]
We develop tools to help visually impaired users minimize occurrences of common technical distortions.
We also create a prototype feedback system that helps to guide users to mitigate quality issues.
arXiv Detail & Related papers (2023-05-14T04:37:53Z)
- Image Quality-aware Diagnosis via Meta-knowledge Co-embedding [11.14366093273983]
We propose a novel meta-knowledge co-embedding network consisting of two components: Task Net and Meta Learner.
Task Net constructs an explicit quality information utilization mechanism to enhance diagnosis via knowledge co-embedding features.
Meta Learner ensures the effectiveness and constrains the semantics of these features via meta-learning and joint-encoding masking.
arXiv Detail & Related papers (2023-03-27T09:35:44Z)
- OTRE: Where Optimal Transport Guided Unpaired Image-to-Image Translation Meets Regularization by Enhancing [4.951748109810726]
Optimal retinal image quality is mandated for accurate medical diagnoses and automated analyses.
We propose an unpaired image-to-image translation scheme for mapping low-quality retinal CFPs to high-quality counterparts.
We validated the integrated framework, OTRE, on three publicly available retinal image datasets.
arXiv Detail & Related papers (2023-02-06T18:39:40Z)
- Optimal Transport Guided Unsupervised Learning for Enhancing Low-quality Retinal Images [5.4240246179935845]
Real-world non-mydriatic retinal fundus photography is prone to artifacts, imperfections, and low quality.
We propose a simple but effective end-to-end framework for enhancing poor-quality retinal fundus images.
arXiv Detail & Related papers (2023-02-06T18:29:30Z)
- UNO-QA: An Unsupervised Anomaly-Aware Framework with Test-Time Clustering for OCTA Image Quality Assessment [4.901218498977952]
We propose an unsupervised anomaly-aware framework with test-time clustering for optical coherence tomography angiography (OCTA) image quality assessment.
A feature-embedding-based low-quality representation module is proposed to quantify the quality of OCTA images.
We perform dimension reduction and clustering of multi-scale image features extracted by the trained OCTA quality representation network.
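The test-time pipeline summarized above (multi-scale features, then dimension reduction, then clustering) can be illustrated minimally. The random feature matrix, the SVD-based reduction, and the crude two-way split below are assumptions for illustration, not the UNO-QA model.

```python
import numpy as np

def pca_reduce(X, dim=2):
    """Project features onto their top `dim` principal components via SVD
    (a stand-in for the paper's dimension-reduction step)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:dim].T

# Toy multi-scale features: 10 images x 64-dim concatenated features.
rng = np.random.default_rng(1)
X = rng.normal(size=(10, 64))
Z = pca_reduce(X)

# Crude two-way split on the first component, standing in for the
# test-time clustering into quality groups.
labels = (Z[:, 0] > 0).astype(int)
print(labels.shape)  # (10,)
```

In the actual framework a proper clustering algorithm would replace the sign-based split; the point is only the features-to-reduction-to-grouping flow.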
arXiv Detail & Related papers (2022-12-20T18:48:04Z)
- Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment [157.1292674649519]
We propose a practical solution named degraded-reference IQA (DR-IQA).
DR-IQA exploits the inputs of IR models, degraded images, as references.
Our results can even be close to the performance of full-reference settings.
arXiv Detail & Related papers (2021-08-18T02:35:08Z)
- A Mixed-Supervision Multilevel GAN Framework for Image Quality Enhancement [0.0]
We propose a novel generative adversarial network (GAN) that can leverage training data at multiple levels of quality.
We apply our mixed-supervision GAN to (i) super-resolve histopathology images and (ii) enhance laparoscopy images by combining super-resolution and surgical smoke removal.
Results on large clinical and pre-clinical datasets show the benefits of our mixed-supervision GAN over the state of the art.
arXiv Detail & Related papers (2021-06-29T17:10:41Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Early Exit or Not: Resource-Efficient Blind Quality Enhancement for Compressed Images [54.40852143927333]
Lossy image compression is pervasively conducted to save communication bandwidth, resulting in undesirable compression artifacts.
We propose a resource-efficient blind quality enhancement (RBQE) approach for compressed images.
Our approach can automatically decide to terminate or continue enhancement according to the assessed quality of enhanced images.
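The terminate-or-continue decision described above can be illustrated with a short loop. The `enhance_step` and `assess_quality` callables below are hypothetical stand-ins, not the RBQE networks; the sketch only shows the early-exit control flow.

```python
def progressive_enhance(image, enhance_step, assess_quality,
                        target=0.9, max_stages=5):
    """Apply enhancement stages one at a time, exiting early once the
    blindly assessed quality of the current output reaches `target`."""
    for stage in range(max_stages):
        if assess_quality(image) >= target:
            return image, stage  # early exit: already good enough
        image = enhance_step(image)
    return image, max_stages

# Toy usage: each "stage" adds 0.25 quality, starting from 0.5.
out, stages = progressive_enhance(
    0.5,
    enhance_step=lambda x: x + 0.25,
    assess_quality=lambda x: x,
)
print(stages)  # 2 stages were enough to reach the 0.9 target
```

The resource saving comes from skipping the remaining stages for inputs whose assessed quality already meets the target.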
arXiv Detail & Related papers (2020-06-30T07:38:47Z)
- Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
This list was automatically generated from the titles and abstracts of the papers on this site.