Predicting survival of glioblastoma from automatic whole-brain and tumor
segmentation of MR images
- URL: http://arxiv.org/abs/2109.12334v1
- Date: Sat, 25 Sep 2021 10:49:51 GMT
- Authors: Sveinn Pálsson, Stefano Cerri, Hans Skovgaard Poulsen, Thomas Urup,
Ian Law, Koen Van Leemput
- Abstract summary: We introduce novel imaging features that can be automatically computed from MR images and fed into machine learning models to predict patient survival.
The features measure the deformation caused by the tumor on the surrounding brain structures, comparing the shape of various structures in the patient's brain to their expected shape in healthy individuals.
We show that the proposed features carry prognostic value in terms of overall- and progression-free survival, over and above that of conventional non-imaging features.
- Score: 1.0179233457605892
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Survival prediction models can potentially be used to guide treatment of
glioblastoma patients. However, currently available MR imaging biomarkers
holding prognostic information are often challenging to interpret, have
difficulties generalizing across data acquisitions, or are only applicable to
pre-operative MR data. In this paper we aim to address these issues by
introducing novel imaging features that can be automatically computed from MR
images and fed into machine learning models to predict patient survival. The
features we propose have a direct biological interpretation: They measure the
deformation caused by the tumor on the surrounding brain structures, comparing
the shape of various structures in the patient's brain to their expected shape
in healthy individuals. To obtain the required segmentations, we use an
automatic method that is contrast-adaptive and robust to missing modalities,
making the features generalizable across scanners and imaging protocols. Since
the features we propose do not depend on characteristics of the tumor region
itself, they are also applicable to post-operative images, which have been much
less studied in the context of survival prediction. Using experiments involving
both pre- and post-operative data, we show that the proposed features carry
prognostic value in terms of overall- and progression-free survival, over and
above that of conventional non-imaging features.
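The pipeline the abstract describes (segment brain structures, compare each structure in the patient to its expected healthy shape to quantify tumor-induced deformation, feed the resulting features into a survival model) can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the volume-ratio feature is a hypothetical stand-in for the paper's full shape-deformation features, the function and label names are invented for this sketch, and Harrell's concordance index stands in for the paper's machine learning survival models.

```python
import numpy as np

def deformation_features(patient_seg, reference_seg, labels):
    """Hypothetical proxy for the paper's deformation features: for each
    brain-structure label, compare the structure's volume in the patient
    segmentation to its expected volume in a healthy reference segmentation.
    (The paper compares full shapes, not just volumes.)"""
    feats = {}
    for lab in labels:
        v_patient = int(np.sum(patient_seg == lab))
        v_reference = int(np.sum(reference_seg == lab))
        feats[f"vol_ratio_{lab}"] = v_patient / max(v_reference, 1)
    return feats

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: the fraction of comparable patient pairs whose
    predicted risk ordering agrees with their observed survival ordering.
    A standard way to evaluate the prognostic value of survival features."""
    numerator = denominator = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable if patient i had an observed event
            # before patient j's (possibly censored) time.
            if events[i] and times[i] < times[j]:
                denominator += 1
                if risk_scores[i] > risk_scores[j]:
                    numerator += 1
                elif risk_scores[i] == risk_scores[j]:
                    numerator += 0.5
    return numerator / denominator

# Toy example: 2D "segmentations" with two structure labels.
patient = np.array([[1, 1, 2],
                    [1, 2, 2]])
reference = np.array([[1, 2, 2],
                      [1, 2, 2]])
feats = deformation_features(patient, reference, labels=[1, 2])
# Structure 1 is enlarged relative to the reference (ratio > 1),
# structure 2 is compressed (ratio < 1).

# Toy survival data: higher risk score should pair with shorter survival.
c = concordance_index(times=[2, 4, 6], events=[1, 1, 1],
                      risk_scores=[3.0, 2.0, 1.0])
```

In the paper's setting the features would be computed from automatic whole-brain segmentations and combined with non-imaging covariates before fitting a proper survival model (e.g. a Cox model); this sketch only shows the feature-then-evaluate shape of that workflow.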
Related papers
- Individualized multi-horizon MRI trajectory prediction for Alzheimer's Disease [0.0]
We train a novel architecture to build a latent space distribution which can be sampled from to generate future predictions of changing anatomy.
By comparing to several alternatives, we show that our model produces more individualized images with higher resolution.
arXiv Detail & Related papers (2024-08-04T13:09:06Z)
- Probabilistic 3D Correspondence Prediction from Sparse Unsegmented Images [1.2179682412409507]
We propose SPI-CorrNet, a unified model that predicts 3D correspondences from sparse imaging data.
Experiments on the LGE MRI left atrium dataset and Abdomen CT-1K liver datasets demonstrate that our technique enhances the accuracy and robustness of sparse image-driven SSM.
arXiv Detail & Related papers (2024-07-02T03:56:20Z)
- Psychometry: An Omnifit Model for Image Reconstruction from Human Brain Activity [60.983327742457995]
Reconstructing the viewed images from human brain activity bridges human and computer vision through the Brain-Computer Interface.
We devise Psychometry, an omnifit model for reconstructing images from functional Magnetic Resonance Imaging (fMRI) obtained from different subjects.
arXiv Detail & Related papers (2024-03-29T07:16:34Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- Deep Learning for Cancer Prognosis Prediction Using Portrait Photos by StyleGAN Embedding [5.225384984555151]
Survival prediction for cancer patients is critical for optimal treatment selection and patient management.
Current patient survival prediction methods typically extract survival information from patients' clinical record data or biological and imaging data.
In this work, we investigate whether deep learning can objectively capture and use prognostic information contained in conventional portrait photographs for survival prediction.
arXiv Detail & Related papers (2023-06-26T11:13:22Z)
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z)
- Counterfactual Image Synthesis for Discovery of Personalized Predictive Image Markers [0.293168019422713]
We show how a deep conditional generative model can be used to perturb local imaging features in baseline images that are pertinent to subject-specific future disease evolution.
Our model produces counterfactuals with changes in imaging features that reflect established clinical markers predictive of future MRI lesional activity at the population level.
arXiv Detail & Related papers (2022-08-03T18:58:45Z)
- IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography [20.665935997959025]
Interpretability in machine learning models is important in high-stakes decisions.
We present a framework for interpretable machine learning-based mammography.
arXiv Detail & Related papers (2021-03-23T05:00:21Z)
- Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.