Feature Representation Learning for Robust Retinal Disease Detection
from Optical Coherence Tomography Images
- URL: http://arxiv.org/abs/2206.12136v1
- Date: Fri, 24 Jun 2022 07:59:36 GMT
- Title: Feature Representation Learning for Robust Retinal Disease Detection
from Optical Coherence Tomography Images
- Authors: Sharif Amit Kamran, Khondker Fariha Hossain, Alireza Tavakkoli,
Stewart Lee Zuckerbrod, Salah A. Baker
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Ophthalmic images may contain identical-looking pathologies that can cause
failure in automated techniques to distinguish different retinal degenerative
diseases. Additionally, reliance on large annotated datasets and lack of
knowledge distillation can restrict ML-based clinical support systems'
deployment in real-world environments. To improve the robustness and
transferability of knowledge, an enhanced feature-learning module is required
to extract meaningful spatial representations from the retinal subspace. Such a
module, if used effectively, can detect unique disease traits and differentiate
the severity of such retinal degenerative pathologies. In this work, we propose
a robust disease detection architecture with three learning heads: i) a
supervised encoder for retinal disease classification, ii) an unsupervised
decoder for the reconstruction of disease-specific spatial information, and
iii) a novel representation learning module that learns the similarity between
encoder and decoder features and enhances the accuracy of the model. Our
experimental results on two publicly available OCT datasets illustrate that the
proposed model outperforms existing state-of-the-art models in terms of
accuracy, interpretability, and robustness for out-of-distribution retinal
disease detection.
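The abstract describes, but does not include, the three-head design. A minimal PyTorch sketch of how such an architecture could be wired together is shown below; the class name, layer sizes, input resolution, and the cosine-similarity formulation of the third head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ThreeHeadOCTNet(nn.Module):
    """Illustrative three-head architecture: shared encoder, supervised
    classifier head, unsupervised reconstruction head, and a similarity
    head comparing encoder features of the input and its reconstruction."""

    def __init__(self, num_classes=4, latent_dim=128):
        super().__init__()
        # Shared convolutional encoder (assumes 1-channel 64x64 OCT slices)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.AdaptiveAvgPool2d(4),                               # 16 -> 4
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, latent_dim),
        )
        # Head i) supervised classifier for retinal disease classes
        self.classifier = nn.Linear(latent_dim, num_classes)
        # Head ii) unsupervised decoder reconstructing spatial information
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 4 * 4),
            nn.Unflatten(1, (64, 4, 4)),
            nn.ConvTranspose2d(64, 32, 4, stride=4), nn.ReLU(),    # 4 -> 16
            nn.ConvTranspose2d(32, 1, 4, stride=4),                # 16 -> 64
        )
        # Head iii) projection used to compare encoder/decoder features
        self.proj = nn.Linear(latent_dim, latent_dim)

    def forward(self, x):
        z = self.encoder(x)
        logits = self.classifier(z)
        recon = self.decoder(z)
        # Re-encode the reconstruction and score feature similarity
        z_recon = self.encoder(recon)
        sim = F.cosine_similarity(self.proj(z), self.proj(z_recon), dim=1)
        return logits, recon, sim
```

In training, the three heads would be optimized jointly, e.g. with a combined loss such as `cross_entropy(logits, y) + mse_loss(recon, x) + (1 - sim).mean()`, so that classification, reconstruction, and feature similarity all shape the shared representation.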
Related papers
- Abnormality-Driven Representation Learning for Radiology Imaging [0.8321462983924758]
We introduce lesion-enhanced contrastive learning (LeCL), a novel approach to obtain visual representations driven by abnormalities in 2D axial slices across different locations of the CT scans.
We evaluate our approach across three clinical tasks: tumor lesion location, lung disease detection, and patient staging, benchmarking against four state-of-the-art foundation models.
arXiv Detail & Related papers (2024-11-25T13:53:26Z) - Spatial-aware Transformer-GRU Framework for Enhanced Glaucoma Diagnosis
from 3D OCT Imaging [1.8416014644193066]
We present a novel deep learning framework that leverages the diagnostic value of 3D Optical Coherence Tomography (OCT) imaging for automated glaucoma detection.
We integrate a pre-trained Vision Transformer on retinal data for rich slice-wise feature extraction and a bidirectional Gated Recurrent Unit for capturing inter-slice spatial dependencies.
Experimental results on a large dataset demonstrate the superior performance of the proposed method over state-of-the-art ones.
arXiv Detail & Related papers (2024-03-08T22:25:15Z) - MLIP: Enhancing Medical Visual Representation with Divergence Encoder
and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
arXiv Detail & Related papers (2024-02-03T05:48:50Z) - ROCT-Net: A new ensemble deep convolutional model with improved spatial
resolution learning for detecting common diseases from retinal OCT images [0.0]
This paper presents a new enhanced deep ensemble convolutional neural network for detecting retinal diseases from OCT images.
Our model generates rich and multi-resolution features by employing the learning architectures of two robust convolutional models.
Our experiments on two datasets, comparing our model against several well-known deep convolutional neural networks, show that our architecture improves classification accuracy by up to 5%.
arXiv Detail & Related papers (2022-03-03T17:51:01Z) - Multi-Disease Detection in Retinal Imaging based on Ensembling
Heterogeneous Deep Learning Models [0.0]
We propose an innovative multi-disease detection pipeline for retinal imaging.
Our pipeline includes state-of-the-art strategies like transfer learning, class weighting, real-time image augmentation and Focal loss utilization.
arXiv Detail & Related papers (2021-03-26T18:02:17Z) - An Interpretable Multiple-Instance Approach for the Detection of
referable Diabetic Retinopathy from Fundus Images [72.94446225783697]
We propose a machine learning system for the detection of referable Diabetic Retinopathy in fundus images.
By extracting local information from image patches and combining it efficiently through an attention mechanism, our system is able to achieve high classification accuracy.
We evaluate our approach on publicly available retinal image datasets, in which it exhibits near state-of-the-art performance.
arXiv Detail & Related papers (2021-03-02T13:14:15Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for
Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z) - NuI-Go: Recursive Non-Local Encoder-Decoder Network for Retinal Image
Non-Uniform Illumination Removal [96.12120000492962]
The quality of retinal images is often clinically unsatisfactory due to eye lesions and imperfect imaging process.
One of the most challenging quality degradation issues in retinal images is non-uniform illumination.
We propose a non-uniform illumination removal network for retinal image, called NuI-Go.
arXiv Detail & Related papers (2020-08-07T04:31:33Z) - Improved Slice-wise Tumour Detection in Brain MRIs by Computing
Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z) - Improving Robustness using Joint Attention Network For Detecting Retinal
Degeneration From Optical Coherence Tomography Images [0.0]
We propose the use of disease-specific feature representations in a novel architecture comprising two joint networks.
Our experimental results on publicly available datasets show the proposed joint-network significantly improves the accuracy and robustness of state-of-the-art retinal disease classification networks on unseen datasets.
arXiv Detail & Related papers (2020-05-16T20:32:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.