Machine Learning Prediction of Cardiovascular Risk in Type 1 Diabetes Mellitus Using Radiomics Features from Multimodal Retinal Images
- URL: http://arxiv.org/abs/2504.02868v1
- Date: Tue, 01 Apr 2025 10:25:38 GMT
- Title: Machine Learning Prediction of Cardiovascular Risk in Type 1 Diabetes Mellitus Using Radiomics Features from Multimodal Retinal Images
- Authors: Ariadna Tohà-Dalmau, Josep Rosinés-Fonoll, Enrique Romero, Ferran Mazzanti, Ruben Martin-Pinardel, Sonia Marias-Perez, Carolina Bernal-Morales, Rafael Castro-Dominguez, Andrea Mendez, Emilio Ortega, Irene Vinagre, Marga Gimenez, Alfredo Vellido, Javier Zarranz-Ventura
- Abstract summary: Radiomic features were extracted from fundus retinography, optical coherence tomography, and OCT angiography images. Radiomics combined with OCT+OCTA metrics and ocular data achieved an AUC of (0.89 $\pm$ 0.02) without systemic data input. These results demonstrate that radiomic features obtained from multimodal retinal images are useful for discriminating and classifying CV risk labels.
- Score: 0.050721462368721396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study aimed to develop a machine learning (ML) algorithm capable of determining cardiovascular risk in multimodal retinal images from patients with type 1 diabetes mellitus, distinguishing between moderate, high, and very high-risk levels. Radiomic features were extracted from fundus retinography, optical coherence tomography (OCT), and OCT angiography (OCTA) images. ML models were trained using these features either individually or combined with clinical data. A dataset of 597 eyes (359 individuals) was analyzed, and models trained only with radiomic features achieved AUC values of (0.79 $\pm$ 0.03) for identifying moderate risk cases from high and very high-risk cases, and (0.73 $\pm$ 0.07) for distinguishing between high and very high-risk cases. The addition of clinical variables improved all AUC values, reaching (0.99 $\pm$ 0.01) for identifying moderate risk cases and (0.95 $\pm$ 0.02) for differentiating between high and very high-risk cases. For very high CV risk, radiomics combined with OCT+OCTA metrics and ocular data achieved an AUC of (0.89 $\pm$ 0.02) without systemic data input. These results demonstrate that radiomic features obtained from multimodal retinal images are useful for discriminating and classifying CV risk labels, highlighting the potential of this oculomics approach for CV risk assessment.
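The pipeline the abstract describes (first-order radiomic features extracted from retinal images, fed to an ML classifier, evaluated by AUC) can be sketched in miniature. This is an illustrative toy, not the authors' pipeline: the synthetic "eyes", the coarse feature set, and the threshold-free scoring are assumptions; only the feature definitions and the rank-based AUC formula are standard.

```python
import math
import random

def first_order_features(pixels):
    """First-order radiomic features from a flat list of intensities in [0, 1)."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    # Shannon entropy over a coarse 8-bin intensity histogram
    bins = [0] * 8
    for p in pixels:
        bins[min(int(p * 8), 7)] += 1
    entropy = -sum((b / n) * math.log2(b / n) for b in bins if b)
    return mean, var, entropy

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic (ties counted as 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
# Synthetic "eyes": the higher-risk class is drawn slightly brighter on average
data, labels = [], []
for y in (0, 1):
    for _ in range(50):
        img = [min(max(random.gauss(0.4 + 0.2 * y, 0.15), 0.0), 0.999)
               for _ in range(256)]
        data.append(first_order_features(img))
        labels.append(y)

# Trivial "model": score each eye by its mean-intensity feature alone
scores = [f[0] for f in data]
print(f"toy AUC: {auc(scores, labels):.2f}")
```

In a real radiomics study, the feature vector would come from a library such as PyRadiomics and the scorer would be a trained classifier with cross-validated AUC, but the evaluation arithmetic is exactly this.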
Related papers
- Is an Ultra Large Natural Image-Based Foundation Model Superior to a Retina-Specific Model for Detecting Ocular and Systemic Diseases? [15.146396276161937]
RETFound and DINOv2 models were evaluated for ocular disease detection and systemic disease prediction tasks. RETFound achieved superior performance over all DINOv2 models in predicting heart failure, infarction, and ischaemic stroke.
arXiv Detail & Related papers (2025-02-10T09:31:39Z) - Integrating Deep Learning with Fundus and Optical Coherence Tomography for Cardiovascular Disease Prediction [47.7045293755736]
Early identification of patients at risk of cardiovascular diseases (CVD) is crucial for effective preventive care, reducing healthcare burden, and improving patients' quality of life.
This study demonstrates the potential of retinal optical coherence tomography (OCT) imaging combined with fundus photographs for identifying future adverse cardiac events.
We propose a novel binary classification network based on a Multi-channel Variational Autoencoder (MCVAE), which learns a latent embedding of patients' fundus and OCT images to classify individuals into two groups: those likely to develop CVD in the future and those who are not.
arXiv Detail & Related papers (2024-10-18T12:37:51Z) - Early Risk Assessment Model for ICA Timing Strategy in Unstable Angina Patients Using Multi-Modal Machine Learning [5.070772577047069]
Invasive coronary arteriography (ICA) is recognized as the gold standard for diagnosing cardiovascular diseases, including unstable angina (UA).
Unlike myocardial infarction, UA does not have specific indicators like ST-segment deviation or cardiac enzymes, making risk assessment complex.
Our study aims to enhance the early risk assessment for UA patients by utilizing machine learning algorithms.
arXiv Detail & Related papers (2024-08-08T07:24:28Z) - Predicting risk of cardiovascular disease using retinal OCT imaging [40.71667870702634]
Cardiovascular diseases (CVD) are the leading cause of death globally. Optical coherence tomography (OCT) has gained recognition as a potential tool for early CVD risk prediction. We investigated the potential of OCT as an additional imaging technique to predict future CVD events.
arXiv Detail & Related papers (2024-03-26T14:42:46Z) - Learning to diagnose cirrhosis from radiological and histological labels with joint self and weakly-supervised pretraining strategies [62.840338941861134]
We propose to leverage transfer learning from large datasets annotated by radiologists, to predict the histological score available on a small annex dataset.
We compare different pretraining methods, namely weakly-supervised and self-supervised ones, to improve the prediction of cirrhosis.
This method outperforms the baseline classification of the METAVIR score, reaching an AUC of 0.84 and a balanced accuracy of 0.75.
arXiv Detail & Related papers (2023-02-16T17:06:23Z) - Non-invasive Liver Fibrosis Screening on CT Images using Radiomics [0.0]
The aim of this study was to develop and evaluate a radiomics machine learning model for detecting liver fibrosis on CT of the liver.
The combination and selected features with the highest AUC were used to develop a final liver fibrosis screening model.
arXiv Detail & Related papers (2022-11-25T22:33:22Z) - Self-supervised contrastive learning of echocardiogram videos enables label-efficient cardiac disease diagnosis [48.64462717254158]
We developed a self-supervised contrastive learning approach, EchoCLR, tailored to echocardiogram videos.
When fine-tuned on small portions of labeled data, EchoCLR pretraining significantly improved classification performance for left ventricular hypertrophy (LVH) and aortic stenosis (AS).
EchoCLR is unique in its ability to learn representations of medical videos and demonstrates that SSL can enable label-efficient disease classification from small, labeled datasets.
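Contrastive pretraining of the kind described above typically optimizes an InfoNCE-style objective: embeddings of two augmented views of the same video are pulled together while views of other videos are pushed apart. A minimal sketch of that loss for a single anchor (the vectors and temperature here are illustrative assumptions, not EchoCLR's actual configuration):

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: -log softmax of the positive's similarity."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    # First logit is the positive pair; the rest are negatives
    logits = [cos(anchor, positive) / temperature] + [
        cos(anchor, n) / temperature for n in negatives
    ]
    # Numerically stable log-sum-exp for the softmax denominator
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - logits[0]  # -log(exp(pos) / sum(exp(all)))

# Anchor and positive nearly aligned; negatives point elsewhere
anchor = [1.0, 0.1]
positive = [0.9, 0.2]
negatives = [[-1.0, 0.3], [0.1, -1.0]]
print(f"InfoNCE loss: {info_nce(anchor, positive, negatives):.3f}")
```

Minimizing this loss over many anchors is what lets the encoder learn useful representations before any disease labels are seen, which is why fine-tuning then needs only a small labeled set.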
arXiv Detail & Related papers (2022-07-23T19:17:26Z) - Controlling False Positive/Negative Rates for Deep-Learning-Based Prostate Cancer Detection on Multiparametric MR images [58.85481248101611]
We propose a novel PCa detection network that incorporates a lesion-level cost-sensitive loss and an additional slice-level loss based on a lesion-to-slice mapping function.
Our experiments on 290 clinical patients conclude that 1) the lesion-level FNR was effectively reduced from 0.19 to 0.10 and the lesion-level FPR was reduced from 1.03 to 0.66 by changing the lesion-level cost.
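The lesion-level cost-sensitivity described above amounts to weighting false negatives differently from false positives in the training loss, trading FPR against FNR. A hedged illustration using a weighted binary cross-entropy (the weights and predictions are made up for demonstration; this is not the paper's actual loss function):

```python
import math

def weighted_bce(p, y, w_fn=1.0, w_fp=1.0):
    """Binary cross-entropy with separate weights for missing a lesion (w_fn)
    versus falsely flagging one (w_fp). p is the predicted probability, y the label."""
    eps = 1e-12  # guard against log(0)
    return -(w_fn * y * math.log(p + eps)
             + w_fp * (1 - y) * math.log(1 - p + eps))

# The same miss (true lesion predicted at 0.2) is penalised harder as w_fn grows,
# pushing a detector trained on this loss toward fewer false negatives.
miss_default = weighted_bce(0.2, 1)
miss_costly = weighted_bce(0.2, 1, w_fn=5.0)
print(f"default miss cost: {miss_default:.3f}, cost-sensitive: {miss_costly:.3f}")
```

Raising `w_fn` lowers FNR at the price of more false positives, which mirrors the FNR/FPR trade-off the experiments report.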
arXiv Detail & Related papers (2021-06-04T09:51:27Z) - Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z) - Co-Heterogeneous and Adaptive Segmentation from Multi-Source and Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance based semi-supervision, mask based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sørensen coefficients by ranges of $4.2\% \sim 9.4\%$.
arXiv Detail & Related papers (2020-05-27T06:58:39Z) - Machine-Learning-Based Multiple Abnormality Prediction with Large-Scale Chest Computed Tomography Volumes [64.21642241351857]
We curated and analyzed a chest computed tomography (CT) data set of 36,316 volumes from 19,993 unique patients.
We developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports.
We also developed a model for multi-organ, multi-disease classification of chest CT volumes.
arXiv Detail & Related papers (2020-02-12T00:59:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.