Conditional Generative Models for Contrast-Enhanced Synthesis of T1w and T1 Maps in Brain MRI
- URL: http://arxiv.org/abs/2410.08894v1
- Date: Fri, 11 Oct 2024 15:11:24 GMT
- Title: Conditional Generative Models for Contrast-Enhanced Synthesis of T1w and T1 Maps in Brain MRI
- Authors: Moritz Piening, Fabian Altekrüger, Gabriele Steidl, Elke Hattingen, Eike Steidl
- Abstract summary: We study the potential of generative models, more precisely conditional diffusion and flow matching, for virtual enhancement.
We examine the performance of T1 scans from quantitative MRI versus T1-weighted scans.
Across models, we observe better segmentations with T1 scans than with T1-weighted scans.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contrast enhancement by Gadolinium-based contrast agents (GBCAs) is a vital tool for tumor diagnosis in neuroradiology. Based on brain MRI scans of glioblastoma before and after Gadolinium administration, we address enhancement prediction by neural networks with two new contributions. Firstly, we study the potential of generative models, more precisely conditional diffusion and flow matching, for uncertainty quantification in virtual enhancement. Secondly, we examine the performance of T1 scans from quantitative MRI versus T1-weighted scans. In contrast to T1-weighted scans, these scans have the advantage of a physically meaningful and thereby comparable voxel range. To compare network prediction performance of these two modalities with incompatible gray-value scales, we propose to evaluate segmentations of contrast-enhanced regions of interest using Dice and Jaccard scores. Across models, we observe better segmentations with T1 scans than with T1-weighted scans.
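The evaluation protocol proposed in the abstract compares segmentations of contrast-enhanced regions of interest via Dice and Jaccard scores. A minimal sketch of these two overlap metrics on binary masks (pure Python, with a hypothetical helper name `dice_jaccard`; the paper itself works on 3D voxel volumes, not flat lists):

```python
def dice_jaccard(pred, ref):
    """Overlap scores between two binary masks given as flat 0/1 sequences.

    Dice    = 2|A ∩ B| / (|A| + |B|)
    Jaccard = |A ∩ B| / |A ∪ B|
    Both are defined as 1.0 when both masks are empty.
    """
    inter = sum(p and r for p, r in zip(pred, ref))  # |A ∩ B|
    total = sum(pred) + sum(ref)                     # |A| + |B|
    union = total - inter                            # |A ∪ B|
    dice = 2 * inter / total if total else 1.0
    jaccard = inter / union if union else 1.0
    return dice, jaccard

# Toy example: 6 voxels, 2 of which are segmented by both masks.
pred = [1, 1, 0, 1, 0, 0]
ref  = [1, 0, 0, 1, 1, 0]
d, j = dice_jaccard(pred, ref)  # d = 4/6 ≈ 0.667, j = 2/4 = 0.5
```

The two scores are monotonically related (Dice = 2J / (1 + J)), so they rank segmentations identically; reporting both, as the paper does, mainly aids comparison with prior work.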
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z) - T1-contrast Enhanced MRI Generation from Multi-parametric MRI for Glioma Patients with Latent Tumor Conditioning [1.581761125201628]
Gadolinium-based contrast agents (GBCAs) are commonly used in MRI scans of patients with gliomas.
There is growing concern about GBCA toxicity.
This study develops a deep-learning framework to generate T1-postcontrast (T1C) images from pre-contrast multi-parametric MRI.
arXiv Detail & Related papers (2024-09-03T05:45:37Z) - Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task for rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z) - Spatiotemporal Feature Learning Based on Two-Step LSTM and Transformer for CT Scans [2.3682456328966115]
We propose a novel and effective two-step approach to thoroughly tackle this issue for COVID-19 symptom classification.
First, the semantic feature embedding of each slice for a CT scan is extracted by conventional backbone networks.
Then, we propose a long short-term memory (LSTM) and Transformer-based sub-network to handle temporal feature learning.
arXiv Detail & Related papers (2022-07-04T16:59:05Z) - Parotid Gland MRI Segmentation Based on Swin-Unet and Multimodal Images [7.934520786027202]
Parotid gland tumors account for approximately 2% to 10% of head and neck tumors.
Deep learning methods have developed rapidly, with Transformers surpassing traditional convolutional neural networks in computer vision.
The DSC of the model on the test set was 88.63%, MPA was 99.31%, MIoU was 83.99%, and HD was 3.04.
arXiv Detail & Related papers (2022-06-07T14:20:53Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - Multi-modal Brain Tumor Segmentation via Missing Modality Synthesis and Modality-level Attention Fusion [3.9562534927482704]
We propose an end-to-end framework named Modality-Level Attention Fusion Network (MAF-Net).
Our proposed MAF-Net is found to yield superior T1ce synthesis performance and accurate brain tumor segmentation.
arXiv Detail & Related papers (2022-03-09T09:08:48Z) - Contrast-enhanced MRI Synthesis Using 3D High-Resolution ConvNets [7.892005877717236]
Gadolinium-based contrast agents (GBCAs) have been widely used to better visualize disease in brain magnetic resonance imaging (MRI).
For brain tumor patients, standard-of-care includes repeated MRI with gadolinium-based contrast for disease monitoring, increasing the risk of gadolinium deposition.
We present a deep learning based approach for contrast-enhanced T1 synthesis on brain tumor patients.
arXiv Detail & Related papers (2021-04-04T11:54:15Z) - Confidence-guided Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images in Patients with Post-treatment Malignant Gliomas [65.64363834322333]
Confidence Guided SAMR (CG-SAMR) synthesizes data from lesion information to multi-modal anatomic sequences. A module guides the synthesis based on a confidence measure of the intermediate results.
Experiments on real clinical data demonstrate that the proposed model performs better than state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-08-06T20:20:22Z) - Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z) - Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
The performance of the proposed ensemble model in the basal and middle slices was similar to that of the intra-observer study, and slightly lower at apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.