Fundus-Enhanced Disease-Aware Distillation Model for Retinal Disease
Classification from OCT Images
- URL: http://arxiv.org/abs/2308.00291v1
- Date: Tue, 1 Aug 2023 05:13:02 GMT
- Title: Fundus-Enhanced Disease-Aware Distillation Model for Retinal Disease
Classification from OCT Images
- Authors: Lehan Wang, Weihang Dai, Mei Jin, Chubin Ou, and Xiaomeng Li
- Abstract summary: We propose a fundus-enhanced disease-aware distillation model for retinal disease classification from OCT images.
Our framework enhances the OCT model during training by utilizing unpaired fundus images.
Our proposed approach outperforms single-modal, multi-modal, and state-of-the-art distillation methods for retinal disease classification.
- Score: 6.72159216082989
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical Coherence Tomography (OCT) is a novel and effective screening tool
for ophthalmic examination. Since collecting OCT images is relatively more
expensive than fundus photographs, existing methods use multi-modal learning to
complement limited OCT data with additional context from fundus images.
However, the multi-modal framework requires eye-paired datasets of both
modalities, which is impractical for clinical use. To address this problem, we
propose a novel fundus-enhanced disease-aware distillation model (FDDM) for
retinal disease classification from OCT images. Our framework enhances the OCT
model during training by utilizing unpaired fundus images and does not require
the use of fundus images during testing, which greatly improves the
practicality and efficiency of our method for clinical use. Specifically, we
propose a novel class prototype matching to distill disease-related information
from the fundus model to the OCT model and a novel class similarity alignment
to enforce consistency between disease distribution of both modalities.
Experimental results show that our proposed approach outperforms single-modal,
multi-modal, and state-of-the-art distillation methods for retinal disease
classification. Code is available at https://github.com/xmed-lab/FDDM.
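The two distillation objectives described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); it is a hedged NumPy illustration in which the function names, the cosine-distance prototype matching, and the temperature-softened KL alignment are all assumptions about one plausible realization of "class prototype matching" and "class similarity alignment".

```python
import numpy as np

def softmax(x, t=1.0):
    """Temperature-softened softmax over the last axis."""
    e = np.exp((x - x.max(axis=-1, keepdims=True)) / t)
    return e / e.sum(axis=-1, keepdims=True)

def class_prototypes(features, labels, num_classes):
    """Mean feature vector per disease class (one prototype per class)."""
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def prototype_matching_loss(oct_protos, fundus_protos):
    """Pull each OCT class prototype toward the fundus prototype of the
    same class: mean cosine distance over classes."""
    a = oct_protos / (np.linalg.norm(oct_protos, axis=1, keepdims=True) + 1e-8)
    b = fundus_protos / (np.linalg.norm(fundus_protos, axis=1, keepdims=True) + 1e-8)
    return float(np.mean(1.0 - np.sum(a * b, axis=1)))

def class_similarity_alignment_loss(oct_logits, fundus_logits, t=2.0):
    """KL divergence between the softened class distributions of the two
    modalities (fundus as teacher, OCT as student), averaged over the batch."""
    p = softmax(fundus_logits, t)
    q = softmax(oct_logits, t)
    return float(np.mean(np.sum(p * (np.log(p + 1e-8) - np.log(q + 1e-8)), axis=1)))
```

Because the prototypes are per-class statistics rather than per-sample pairings, the fundus and OCT batches need not come from the same eyes, which is the property that lets the method use unpaired training data.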
Related papers
- Enhancing Retinal Disease Classification from OCTA Images via Active Learning Techniques [0.8035416719640156]
Eye diseases are common in older Americans and can lead to decreased vision and blindness.
Recent advancements in imaging technologies allow clinicians to capture high-quality images of the retinal blood vessels via Optical Coherence Tomography Angiography (OCTA).
OCTA provides detailed vascular imaging as compared to the solely structural information obtained by common OCT imaging.
arXiv Detail & Related papers (2024-07-21T23:24:49Z)
- CC-DCNet: Dynamic Convolutional Neural Network with Contrastive Constraints for Identifying Lung Cancer Subtypes on Multi-modality Images [13.655407979403945]
We propose a novel deep learning network designed to accurately classify lung cancer subtype with multi-dimensional and multi-modality images.
The strength of the proposed model lies in its ability to dynamically process both paired CT-pathological image sets and independent CT image sets.
We also develop a contrastive constraint module, which quantitatively maps the cross-modality associations through network training.
arXiv Detail & Related papers (2024-07-18T01:42:00Z)
- Generating Realistic Counterfactuals for Retinal Fundus and OCT Images using Diffusion Models [36.81751569090276]
Counterfactual reasoning is often used in clinical settings to explain decisions or weigh alternatives.
Here, we demonstrate that using a diffusion model in combination with an adversarially robust classifier trained on retinal disease classification tasks enables the generation of highly realistic counterfactuals.
In a user study, domain experts found the counterfactuals generated using our method significantly more realistic than counterfactuals generated from a previous method, and even indistinguishable from real images.
arXiv Detail & Related papers (2023-11-20T09:28:04Z)
- Bridging Synthetic and Real Images: a Transferable and Multiple Consistency aided Fundus Image Enhancement Framework [61.74188977009786]
We propose an end-to-end optimized teacher-student framework to simultaneously conduct image enhancement and domain adaptation.
We also propose a novel multi-stage multi-attention guided enhancement network (MAGE-Net) as the backbones of our teacher and student network.
arXiv Detail & Related papers (2023-02-23T06:16:15Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- COROLLA: An Efficient Multi-Modality Fusion Framework with Supervised Contrastive Learning for Glaucoma Grading [1.2250035750661867]
We propose an efficient multi-modality supervised contrastive learning framework, named COROLLA, for glaucoma grading.
We employ supervised contrastive learning to increase our models' discriminative power with better convergence.
On the GAMMA dataset, our COROLLA framework achieves superior glaucoma grading performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-01-11T06:00:51Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper introduces a novel medical slice synthesis task to increase the between-slice resolution of CT volumes.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- Categorical Relation-Preserving Contrastive Knowledge Distillation for Medical Image Classification [75.27973258196934]
We propose a novel Categorical Relation-preserving Contrastive Knowledge Distillation (CRCKD) algorithm, which takes the commonly used mean-teacher model as the supervisor.
With this regularization, the feature distribution of the student model shows higher intra-class similarity and inter-class variance.
With the contribution of the CCD and CRP, our CRCKD algorithm can distill the relational knowledge more comprehensively.
arXiv Detail & Related papers (2021-07-07T13:56:38Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Learning Two-Stream CNN for Multi-Modal Age-related Macular Degeneration Categorization [6.023239837661721]
Age-related Macular Degeneration (AMD) is a common macular disease among people over 50.
Previous research efforts mainly focus on AMD categorization with a single-modal input, be it a color fundus image or an OCT image.
By contrast, we consider AMD categorization given a multi-modal input, a direction that is clinically meaningful yet mostly unexplored.
arXiv Detail & Related papers (2020-12-03T12:50:36Z)
- Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.