Generative artificial intelligence in ophthalmology: multimodal retinal images for the diagnosis of Alzheimer's disease with convolutional neural networks
- URL: http://arxiv.org/abs/2406.18247v1
- Date: Wed, 26 Jun 2024 10:49:26 GMT
- Title: Generative artificial intelligence in ophthalmology: multimodal retinal images for the diagnosis of Alzheimer's disease with convolutional neural networks
- Authors: I. R. Slootweg, M. Thach, K. R. Curro-Tafili, F. D. Verbraak, F. H. Bouwman, Y. A. L. Pijnenburg, J. F. Boer, J. H. P. de Kwisthout, L. Bagheriye, P. J. González
- Abstract summary: This study aims to predict Amyloid Positron Emission Tomography (AmyloidPET) status with multimodal retinal imaging and convolutional neural networks (CNNs).
Denoising Diffusion Probabilistic Models (DDPMs) were trained to generate synthetic images.
Unimodal CNNs were pretrained on synthetic data and finetuned on real data or trained solely on real data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background/Aim. This study aims to predict Amyloid Positron Emission Tomography (AmyloidPET) status with multimodal retinal imaging and convolutional neural networks (CNNs) and to improve the performance through pretraining with synthetic data.
Methods. Fundus autofluorescence, optical coherence tomography (OCT), and OCT angiography images from 328 eyes of 59 AmyloidPET-positive subjects and 108 AmyloidPET-negative subjects were used for classification. Denoising Diffusion Probabilistic Models (DDPMs) were trained to generate synthetic images, and unimodal CNNs were either pretrained on synthetic data and finetuned on real data or trained solely on real data. Multimodal classifiers were developed to combine the predictions of the four unimodal CNNs with patient metadata. Class activation maps of the unimodal classifiers provided insight into the networks' attention to the inputs.
Results. DDPMs generated diverse, realistic images without memorization. Pretraining unimodal CNNs with synthetic data improved AUPR by at most 0.229 (from 0.350 to 0.579). Integration of metadata in multimodal CNNs improved AUPR from 0.486 to 0.634, yielding the overall best classifier. Class activation maps highlighted relevant retinal regions that correlated with AD.
Conclusion. Our method for generating and leveraging synthetic data has the potential to improve AmyloidPET prediction from multimodal retinal imaging. A DDPM can generate realistic and unique multimodal synthetic retinal images. Our best performing unimodal and multimodal classifiers were not pretrained on synthetic data; however, pretraining with synthetic data slightly improved classification performance for two of the four modalities.
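The training recipe in the abstract (pretrain each unimodal CNN on DDPM-generated synthetic images, finetune it on real scans, then combine the four unimodal predictions with patient metadata) can be sketched roughly as follows. This is a minimal, hypothetical PyTorch sketch, not the authors' code: the backbone, hyperparameters, random stand-in data, and the late-fusion head are assumptions for illustration only.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

def make_loader(n_images, image_size=224):
    # Random stand-in tensors in place of real or synthetic retinal images.
    x = torch.randn(n_images, 3, image_size, image_size)
    y = torch.randint(0, 2, (n_images,))
    return DataLoader(TensorDataset(x, y), batch_size=16, shuffle=True)

def train(model, loader, epochs, lr):
    # Plain binary cross-entropy training loop.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images).squeeze(1), labels.float())
            loss.backward()
            optimizer.step()
    return model

# One unimodal CNN per imaging modality, with a binary AmyloidPET+/- head.
cnn = resnet18(weights=None)
cnn.fc = nn.Linear(cnn.fc.in_features, 1)

# 1) Pretrain on DDPM-generated synthetic images, 2) finetune on real images.
cnn = train(cnn, make_loader(64), epochs=2, lr=1e-4)   # synthetic pretraining
cnn = train(cnn, make_loader(64), epochs=2, lr=1e-5)   # finetuning on real data

# 3) Late fusion: concatenate the four unimodal probabilities with patient
#    metadata (e.g. age, sex) and train a small classifier head on top.
class LateFusionHead(nn.Module):
    def __init__(self, n_modalities=4, n_metadata=2):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_modalities + n_metadata, 16),
                                 nn.ReLU(), nn.Linear(16, 1))

    def forward(self, unimodal_probs, metadata):
        return self.mlp(torch.cat([unimodal_probs, metadata], dim=1))

This is a late-fusion design in which only the unimodal output probabilities and the metadata reach the fusion head; the paper's exact fusion architecture and metadata fields may differ.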
Related papers
- Enhancing Transfer Learning for Medical Image Classification with SMOTE: A Comparative Study [0.0]
This paper explores and enhances the application of Transfer Learning (TL) for multilabel image classification in medical imaging.
Our results show that TL models excel in brain tumor classification, achieving near-optimal metrics.
We integrate the Synthetic Minority Oversampling Technique (SMOTE) with TL and traditional machine learning (ML) methods, which improves accuracy by 1.97%, recall (sensitivity) by 5.43%, and specificity by 0.72% (see the sketch after this entry).
arXiv Detail & Related papers (2024-12-28T18:15:07Z)
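As a rough illustration of the oversampling step referenced in the entry above, here is a minimal sketch using the imbalanced-learn implementation of SMOTE; the feature matrix and downstream classifier are placeholders, not the paper's pipeline.

import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))        # stand-in features from a pretrained CNN
y = np.array([0] * 180 + [1] * 20)     # imbalanced binary labels

# SMOTE synthesizes new minority-class samples by interpolating between
# existing minority samples and their nearest neighbours.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)

Note that SMOTE operates on feature vectors, so in an imaging pipeline it is typically applied to extracted features rather than raw pixels.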
- Diffusion-Based Approaches in Medical Image Generation and Analysis [0.7834170106487724]
Data scarcity in medical imaging poses significant challenges due to privacy concerns.
Questions remain about the performance of convolutional neural network (CNN) models on original and synthetic datasets.
In this study, we investigated the effectiveness of diffusion models for generating synthetic medical images to train CNNs in three domains; a minimal DDPM training sketch follows this entry.
arXiv Detail & Related papers (2024-12-22T05:02:05Z)
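The entry above, like the main paper, uses denoising diffusion models to synthesize training images. Below is a minimal, hypothetical noise-prediction training loop with the Hugging Face diffusers library; the tiny UNet configuration, random stand-in images, and step counts are illustrative assumptions, not either paper's setup.

import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler, UNet2DModel

# Small UNet noise-prediction network for 64x64 single-channel images.
model = UNet2DModel(
    sample_size=64, in_channels=1, out_channels=1, layers_per_block=1,
    block_out_channels=(32, 64, 128),
    down_block_types=("DownBlock2D", "DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D", "UpBlock2D"),
)
scheduler = DDPMScheduler(num_train_timesteps=1000)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

clean_images = torch.randn(8, 1, 64, 64)   # stand-in batch of medical images

for _ in range(10):  # toy training loop: learn to predict the added noise
    noise = torch.randn_like(clean_images)
    timesteps = torch.randint(0, scheduler.config.num_train_timesteps,
                              (clean_images.shape[0],))
    noisy_images = scheduler.add_noise(clean_images, noise, timesteps)
    noise_pred = model(noisy_images, timesteps).sample
    loss = F.mse_loss(noise_pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

After training, synthetic images can be sampled by iteratively denoising pure noise, for example with diffusers' DDPMPipeline(unet=model, scheduler=scheduler).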
- MaskTerial: A Foundation Model for Automated 2D Material Flake Detection [48.73213960205105]
We present a deep learning model, called MaskTerial, that uses an instance segmentation network to reliably identify 2D material flakes.
The model is extensively pre-trained using a synthetic data generator, that generates realistic microscopy images from unlabeled data.
We demonstrate significant improvements over existing techniques in the detection of low-contrast materials such as hexagonal boron nitride.
arXiv Detail & Related papers (2024-12-12T15:01:39Z)
- Embryo 2.0: Merging Synthetic and Real Data for Advanced AI Predictions [69.07284335967019]
We train two generative models using two datasets, one created and made publicly available, and one existing public dataset.
We generate synthetic embryo images at various cell stages, including 2-cell, 4-cell, 8-cell, morula, and blastocyst.
These were combined with real images to train classification models for embryo cell stage prediction.
arXiv Detail & Related papers (2024-12-02T08:24:49Z)
- Dataset Distillation for Histopathology Image Classification [46.04496989951066]
We introduce a novel dataset distillation algorithm tailored for histopathology image datasets (Histo-DD)
We conduct a comprehensive evaluation of the effectiveness of the proposed algorithm and the generated histopathology samples in both patch-level and slide-level classification tasks.
arXiv Detail & Related papers (2024-08-19T05:53:38Z)
- Self-Supervised Pretext Tasks for Alzheimer's Disease Classification using 3D Convolutional Neural Networks on Large-Scale Synthetic Neuroimaging Dataset [11.173478552040441]
Alzheimer's Disease (AD) induces both localised and widespread neural degenerative changes throughout the brain.
In this work, we evaluated several unsupervised methods to train a feature extractor for downstream AD vs. CN classification.
arXiv Detail & Related papers (2024-06-20T11:26:32Z)
- The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures (a toy augmentation sketch follows this entry).
arXiv Detail & Related papers (2023-09-13T10:40:41Z)
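As a toy illustration of the kind of 3D augmentation the entry above evaluates, here is a minimal pure-PyTorch sketch; the specific transforms (random flip, intensity jitter, additive noise) and the volume shape are assumptions, not the augmentation strategies actually compared in that paper.

import torch

def augment_volume(vol: torch.Tensor) -> torch.Tensor:
    # vol: (C, D, H, W) 3D scan; returns a randomly augmented copy.
    if torch.rand(1).item() < 0.5:
        vol = torch.flip(vol, dims=[3])             # random left-right flip
    vol = vol * (0.9 + 0.2 * torch.rand(1).item())  # random intensity scaling
    vol = vol + 0.01 * torch.randn_like(vol)        # additive Gaussian noise
    return vol

volume = torch.randn(1, 96, 112, 96)                # stand-in MRI volume
augmented = augment_volume(volume)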
- Brain Imaging-to-Graph Generation using Adversarial Hierarchical Diffusion Models for MCI Causality Analysis [44.45598796591008]
A brain imaging-to-graph generation (BIGG) framework is proposed to map functional magnetic resonance imaging (fMRI) into effective connectivity for mild cognitive impairment analysis.
The hierarchical transformers in the generator are designed to estimate the noise at multiple scales.
Evaluations on the ADNI dataset demonstrate the feasibility and efficacy of the proposed model.
arXiv Detail & Related papers (2023-05-18T06:54:56Z)
- Early Diagnosis of Retinal Blood Vessel Damage via Deep Learning-Powered Collective Intelligence Models [0.3670422696827525]
The power of swarm algorithms is used to search for various combinations of convolutional, pooling, and normalization layers to provide the best model for the task.
The best TDCN model achieves an accuracy of 90.3%, AUC ROC of 0.956, and a Cohen score of 0.967.
arXiv Detail & Related papers (2022-10-17T21:38:38Z)
- Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build the domain irrelevant latent space image representation and demonstrate this method to outperform existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z)
- Image Translation for Medical Image Generation -- Ischemic Stroke Lesions [0.0]
Synthetic databases with annotated pathologies could provide the required amounts of training data.
We train different image-to-image translation models to synthesize magnetic resonance images of brain volumes with and without stroke lesions.
We show that for a small database of only 10 or 50 clinical cases, synthetic data augmentation yields significant improvement.
arXiv Detail & Related papers (2020-10-05T09:12:28Z)