Predicting survival of glioblastoma from automatic whole-brain and tumor
segmentation of MR images
- URL: http://arxiv.org/abs/2109.12334v1
- Date: Sat, 25 Sep 2021 10:49:51 GMT
- Authors: Sveinn Pálsson, Stefano Cerri, Hans Skovgaard Poulsen, Thomas Urup,
Ian Law, Koen Van Leemput
- Abstract summary: We introduce novel imaging features that can be automatically computed from MR images and fed into machine learning models to predict patient survival.
The features measure the deformation caused by the tumor on the surrounding brain structures, comparing the shape of various structures in the patient's brain to their expected shape in healthy individuals.
We show that the proposed features carry prognostic value in terms of overall- and progression-free survival, over and above that of conventional non-imaging features.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Survival prediction models can potentially be used to guide treatment of
glioblastoma patients. However, currently available MR imaging biomarkers
holding prognostic information are often challenging to interpret, have
difficulties generalizing across data acquisitions, or are only applicable to
pre-operative MR data. In this paper we aim to address these issues by
introducing novel imaging features that can be automatically computed from MR
images and fed into machine learning models to predict patient survival. The
features we propose have a direct biological interpretation: They measure the
deformation caused by the tumor on the surrounding brain structures, comparing
the shape of various structures in the patient's brain to their expected shape
in healthy individuals. To obtain the required segmentations, we use an
automatic method that is contrast-adaptive and robust to missing modalities,
making the features generalizable across scanners and imaging protocols. Since
the features we propose do not depend on characteristics of the tumor region
itself, they are also applicable to post-operative images, which have been much
less studied in the context of survival prediction. Using experiments involving
both pre- and post-operative data, we show that the proposed features carry
prognostic value in terms of overall- and progression-free survival, over and
above that of conventional non-imaging features.
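The abstract describes features that quantify how much the tumor has deformed surrounding brain structures, by comparing each structure's shape in the patient to its expected shape in healthy individuals. A minimal sketch of that idea, under the simplifying assumption that "shape" is summarized by volume and "expected shape" by a healthy-population mean and standard deviation (all structure names and numbers below are illustrative, not from the paper):

```python
# Hypothetical sketch: score each brain structure's deviation from a healthy
# reference as a z-score of its segmented volume. Because the features describe
# structures *around* the tumor rather than the tumor region itself, the same
# computation applies to pre- and post-operative scans.

HEALTHY_REFERENCE = {
    # structure: (mean volume in ml, std in ml) for an assumed healthy cohort
    "thalamus": (7.5, 0.6),
    "lateral_ventricle": (12.0, 4.0),
    "hippocampus": (3.5, 0.4),
}


def deformation_features(patient_volumes: dict[str, float]) -> dict[str, float]:
    """Z-score each structure's volume against the healthy reference.

    Large positive values suggest a structure expanded (e.g. a ventricle
    pushed open); large negative values suggest compression by the tumor.
    """
    features = {}
    for name, (mu, sigma) in HEALTHY_REFERENCE.items():
        features[name] = (patient_volumes[name] - mu) / sigma
    return features


# Example patient: enlarged lateral ventricle, compressed thalamus
patient = {"thalamus": 6.3, "lateral_ventricle": 20.0, "hippocampus": 3.4}
feats = deformation_features(patient)
```

The resulting feature vector could then be passed to any survival model (e.g. a Cox proportional-hazards regression) alongside non-imaging covariates; the paper's actual features are shape-based rather than volume-only, so this is only a conceptual stand-in.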
Related papers
- Interpretability and Individuality in Knee MRI: Patient-Specific Radiomic Fingerprint with Reconstructed Healthy Personas [40.168029561784216]
A radiomic fingerprint is a patient-specific feature set derived from MRI. A healthy persona synthesises a pathology-free baseline for each patient. Comparing features extracted from pathological images against their personas highlights deviations from normal anatomy.
arXiv Detail & Related papers (2026-01-13T14:48:01Z) - Deep Learning-Based Computer Vision Models for Early Cancer Detection Using Multimodal Medical Imaging and Radiogenomic Integration Frameworks [0.0]
Early cancer detection remains one of the most critical challenges in modern healthcare. Recent advancements in artificial intelligence, particularly deep learning, have enabled transformative progress in medical imaging analysis. Deep learning-based computer vision models can automatically extract complex spatial, morphological, and temporal patterns from multimodal imaging data.
arXiv Detail & Related papers (2025-11-30T03:28:48Z) - Live(r) Die: Predicting Survival in Colorectal Liver Metastasis [0.01268579273097071]
Colorectal cancer frequently metastasizes to the liver, significantly reducing long-term survival. Current prognostic models, often based on limited clinical or molecular features, lack sufficient predictive power. We present a fully automated framework for surgical outcome prediction from pre- and post-contrast MRI.
arXiv Detail & Related papers (2025-09-10T19:02:59Z) - impuTMAE: Multi-modal Transformer with Masked Pre-training for Missing Modalities Imputation in Cancer Survival Prediction [75.43342771863837]
We introduce impuTMAE, a novel transformer-based end-to-end approach with an efficient multimodal pre-training strategy. It learns inter- and intra-modal interactions while simultaneously imputing missing modalities by reconstructing masked patches. Our model is pre-trained on heterogeneous, incomplete data and fine-tuned for glioma survival prediction using TCGA-GBM/LGG and BraTS datasets.
arXiv Detail & Related papers (2025-08-08T10:01:16Z) - Glioblastoma Overall Survival Prediction With Vision Transformers [6.318465743962574]
Glioblastoma is one of the most aggressive and common brain tumors, with a median survival of 10-15 months. In this study, we propose a novel Artificial Intelligence (AI) approach for Overall Survival (OS) prediction using Magnetic Resonance Imaging (MRI) images. We exploit Vision Transformers (ViTs) to extract hidden features directly from MRI images, eliminating the need for tumor segmentation. The proposed model was evaluated on the BRATS dataset, reaching an accuracy of 62.5% on the test set, comparable to the top-performing methods.
arXiv Detail & Related papers (2025-08-04T13:59:57Z) - SurgeryLSTM: A Time-Aware Neural Model for Accurate and Explainable Length of Stay Prediction After Spine Surgery [44.119171920037196]
We develop and evaluate machine learning (ML) models for predicting length of stay (LOS) in elective spine surgery. We compare traditional ML models with our developed model, SurgeryLSTM, a masked bidirectional long short-term memory (BiLSTM) with an attention mechanism. Performance was evaluated using the coefficient of determination (R2), and key predictors were identified using explainable AI.
arXiv Detail & Related papers (2025-07-15T01:18:28Z) - Uncovering Neuroimaging Biomarkers of Brain Tumor Surgery with AI-Driven Methods [8.477573894448051]
We develop a novel framework that integrates explainable artificial intelligence (XAI) with neuroimaging-based feature engineering for survival assessment. From a clinical perspective, our findings provide important evidence that survival after oncological surgery is influenced by alterations in regions related to cognitive and sensory functions.
arXiv Detail & Related papers (2025-07-07T11:11:55Z) - MIL vs. Aggregation: Evaluating Patient-Level Survival Prediction Strategies Using Graph-Based Learning [52.231128973251124]
We compare various strategies for predicting survival at the WSI and patient level.
The former treats each WSI as an independent sample, mimicking the strategy adopted in other works.
The latter comprises methods to either aggregate the predictions of the several WSIs or automatically identify the most relevant slide.
arXiv Detail & Related papers (2025-03-29T11:14:02Z) - ContextMRI: Enhancing Compressed Sensing MRI through Metadata Conditioning [51.26601171361753]
We propose ContextMRI, a text-conditioned diffusion model for MRI that integrates granular metadata into the reconstruction process.
We show that increasing the fidelity of metadata, ranging from slice location and contrast to patient age, sex, and pathology, systematically boosts reconstruction performance.
arXiv Detail & Related papers (2025-01-08T05:15:43Z) - Individualized multi-horizon MRI trajectory prediction for Alzheimer's Disease [0.0]
We train a novel architecture to build a latent space distribution which can be sampled from to generate future predictions of changing anatomy.
By comparing to several alternatives, we show that our model produces more individualized images with higher resolution.
arXiv Detail & Related papers (2024-08-04T13:09:06Z) - Probabilistic 3D Correspondence Prediction from Sparse Unsegmented Images [1.2179682412409507]
We propose SPI-CorrNet, a unified model that predicts 3D correspondences from sparse imaging data.
Experiments on the LGE MRI left atrium dataset and Abdomen CT-1K liver datasets demonstrate that our technique enhances the accuracy and robustness of sparse image-driven SSM.
arXiv Detail & Related papers (2024-07-02T03:56:20Z) - Psychometry: An Omnifit Model for Image Reconstruction from Human Brain Activity [60.983327742457995]
Reconstructing the viewed images from human brain activity bridges human and computer vision through the Brain-Computer Interface.
We devise Psychometry, an omnifit model for reconstructing images from functional Magnetic Resonance Imaging (fMRI) obtained from different subjects.
arXiv Detail & Related papers (2024-03-29T07:16:34Z) - Radiology Report Generation Using Transformers Conditioned with
Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z) - Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report
Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-rays reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z) - Style transfer between Microscopy and Magnetic Resonance Imaging via
Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z) - Deep Learning for Cancer Prognosis Prediction Using Portrait Photos by StyleGAN Embedding [5.225384984555151]
Survival prediction for cancer patients is critical for optimal treatment selection and patient management.
Current patient survival prediction methods typically extract survival information from patients' clinical record data or biological and imaging data.
In this work, the efficacy of objectively capturing and using prognostic information contained in conventional portrait photographs using deep learning for survival predication purposes is investigated.
arXiv Detail & Related papers (2023-06-26T11:13:22Z) - Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z) - Counterfactual Image Synthesis for Discovery of Personalized Predictive
Image Markers [0.293168019422713]
We show how a deep conditional generative model can be used to perturb local imaging features in baseline images that are pertinent to subject-specific future disease evolution.
Our model produces counterfactuals with changes in imaging features that reflect established clinical markers predictive of future MRI lesional activity at the population level.
arXiv Detail & Related papers (2022-08-03T18:58:45Z) - IAIA-BL: A Case-based Interpretable Deep Learning Model for
Classification of Mass Lesions in Digital Mammography [20.665935997959025]
Interpretability in machine learning models is important in high-stakes decisions.
We present a framework for interpretable machine learning-based mammography.
arXiv Detail & Related papers (2021-03-23T05:00:21Z) - Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo
Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.