Trustworthy Contrast-enhanced Brain MRI Synthesis
- URL: http://arxiv.org/abs/2407.07372v1
- Date: Wed, 10 Jul 2024 05:17:01 GMT
- Title: Trustworthy Contrast-enhanced Brain MRI Synthesis
- Authors: Jiyao Liu, Yuxin Li, Shangqi Gao, Yuncheng Zhou, Xin Gao, Ningsheng Xu, Xiao-Yong Zhang, Xiahai Zhuang
- Abstract summary: Multi-modality medical image translation aims to synthesize CE-MRI images from other modalities.
We introduce TrustI2I, a novel trustworthy method that reformulates the multi-to-one medical image translation problem as a multimodal regression problem.
- Score: 27.43375565176473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrast-enhanced brain MRI (CE-MRI) is a valuable diagnostic technique but may pose health risks and incur high costs. To create safer alternatives, multi-modality medical image translation aims to synthesize CE-MRI images from other available modalities. Although existing methods can generate promising predictions, they still face two challenges: over-confidence and a lack of interpretability in their predictions. To address these challenges, this paper introduces TrustI2I, a novel trustworthy method that reformulates the multi-to-one medical image translation problem as a multimodal regression problem, aiming to build an uncertainty-aware and reliable system. Specifically, our method leverages deep evidential regression to estimate prediction uncertainties and employs an explicit intermediate- and late-fusion strategy based on the Mixture of Normal-Inverse Gamma (MoNIG) distribution, enhancing both synthesis quality and interpretability. Additionally, we incorporate uncertainty calibration to improve the reliability of the uncertainty estimates. Validation on the BraTS2018 dataset demonstrates that our approach surpasses current methods, producing higher-quality images with rational uncertainty estimation.
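As a rough illustration of the two ingredients the abstract names, the numpy sketch below assumes each modality branch already outputs per-voxel Normal-Inverse Gamma (NIG) parameters (gamma, nu, alpha, beta) via a deep evidential regression head, and fuses two branches with the NIG summation used by the MoNIG paper listed under related papers below. The function names and shapes are illustrative, not the authors' implementation, and the calibration step is not shown.
```python
import numpy as np

def nig_summation(g1, v1, a1, b1, g2, v2, a2, b2):
    """Fuse two Normal-Inverse Gamma (NIG) evidence sources via the NIG
    summation used for MoNIG-style fusion; arguments are per-voxel maps."""
    g = (v1 * g1 + v2 * g2) / (v1 + v2)       # evidence-weighted fused mean
    v = v1 + v2                               # virtual observations accumulate
    a = a1 + a2 + 0.5
    # disagreement between the two sources inflates beta (more uncertainty)
    b = b1 + b2 + 0.5 * v1 * (g1 - g) ** 2 + 0.5 * v2 * (g2 - g) ** 2
    return g, v, a, b

def nig_uncertainty(v, a, b):
    """Standard deep-evidential-regression readouts for a NIG head."""
    aleatoric = b / (a - 1.0)                 # E[sigma^2]: data noise
    epistemic = b / (v * (a - 1.0))           # Var[mu]: model uncertainty
    return aleatoric, epistemic

# Toy example: two modality branches voting on a 2-voxel map.
g, v, a, b = nig_summation(np.array([0.8, 0.5]), np.array([2.0, 2.0]),
                           np.array([3.0, 3.0]), np.array([1.0, 1.0]),
                           np.array([1.2, 0.5]), np.array([1.0, 1.0]),
                           np.array([2.5, 2.5]), np.array([1.5, 1.5]))
print(g, nig_uncertainty(v, a, b))
```
Note how a branch with larger nu (more evidence) pulls the fused mean toward its own prediction, which is what makes the fusion adaptive per voxel.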
Related papers
- ETSCL: An Evidence Theory-Based Supervised Contrastive Learning Framework for Multi-modal Glaucoma Grading [7.188153974946432]
Glaucoma is one of the leading causes of vision impairment.
It remains challenging to extract reliable features due to the high similarity of medical images and the unbalanced multi-modal data distribution.
We propose a novel framework, namely ETSCL, which consists of a contrastive feature extraction stage and a decision-level fusion stage.
arXiv Detail & Related papers (2024-07-19T11:57:56Z)
- Confidence-aware multi-modality learning for eye disease screening [58.861421804458395]
We propose a novel multi-modality evidential fusion pipeline for eye disease screening.
It provides a measure of confidence for each modality and elegantly integrates the multi-modality information.
Experimental results on both public and internal datasets demonstrate that our model excels in robustness.
arXiv Detail & Related papers (2024-05-28T13:27:30Z)
- Uncertainty Estimation in Contrast-Enhanced MR Image Translation with Multi-Axis Fusion [6.727287631338148]
We propose a novel model uncertainty quantification method, Multi-Axis Fusion (MAF).
The proposed approach is applied to the task of synthesizing contrast-enhanced T1-weighted images from native T1, T2, and T2-FLAIR scans.
arXiv Detail & Related papers (2023-11-20T20:09:48Z)
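One plausible reading of the multi-axis idea in the entry above, sketched under loose assumptions: run a 2D synthesis network slice-wise along each of the three anatomical axes of the volume and treat the voxel-wise disagreement between the three reconstructions as a model-uncertainty map. The `model` callable and the (C, X, Y, Z) layout are hypothetical, not taken from the paper.
```python
import numpy as np

def multi_axis_fusion(model, volume):
    """Sketch of multi-axis-fusion-style uncertainty: apply a 2D synthesis
    model slice-wise along each axis and use the variance across the three
    resulting volumes as a proxy for model uncertainty.
    model:  callable mapping a 2D multi-channel slice -> synthesized slice
    volume: array of shape (C, X, Y, Z), e.g. native T1/T2/FLAIR channels"""
    predictions = []
    for axis in (1, 2, 3):                        # slice along X, Y, Z in turn
        slices = np.moveaxis(volume, axis, 1)     # bring slicing axis forward
        pred = np.stack([model(slices[:, s]) for s in range(slices.shape[1])])
        predictions.append(np.moveaxis(pred, 0, axis - 1))  # restore (X, Y, Z)
    stack = np.stack(predictions)                 # (3, X, Y, Z)
    return stack.mean(axis=0), stack.var(axis=0)  # fused image, uncertainty map

# Toy shape check with a dummy "network" that averages the input channels.
dummy = lambda sl: sl.mean(axis=0)
vol = np.random.rand(3, 8, 8, 8)                  # e.g. T1 / T2 / FLAIR
mean_img, unc_map = multi_axis_fusion(dummy, vol)
print(mean_img.shape, unc_map.shape)              # (8, 8, 8) (8, 8, 8)
```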
- Calibrating Multimodal Learning [94.65232214643436]
We propose a novel regularization technique, Calibrating Multimodal Learning (CML), to calibrate the predictive confidence of previous methods.
The technique can be flexibly integrated into existing models and improves performance in terms of confidence calibration, classification accuracy, and model robustness.
arXiv Detail & Related papers (2023-06-02T04:29:57Z)
- Reliable Multimodality Eye Disease Screening via Mixture of Student's t Distributions [49.4545260500952]
We introduce a novel multimodality evidential fusion pipeline for eye disease screening, EyeMoSt.
Our model estimates both local uncertainty for unimodality and global uncertainty for the fusion modality to produce reliable classification results.
Our experimental findings on both public and in-house datasets show that our model is more reliable than current methods.
arXiv Detail & Related papers (2023-03-17T06:18:16Z)
- Disentangled Uncertainty and Out of Distribution Detection in Medical Generative Models [7.6146285961466]
We study disentangled uncertainties in image to image translation tasks in the medical domain.
We use CycleGAN to convert T1-weighted brain MRI scans to T2-weighted brain MRI scans.
arXiv Detail & Related papers (2022-11-11T14:45:16Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
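As a generic illustration of the deep-unfolding idea named in the entry above (not the MGDUN architecture itself), each unfolded stage typically alternates a data-fidelity gradient step driven by the observation operator with a learned per-stage prior module. In the sketch below, `A`, `At`, and `denoisers` are assumed inputs.
```python
import numpy as np

def unfolded_sr(y, A, At, denoisers, eta=0.5):
    """Generic model-guided deep unfolding sketch: each stage takes a
    gradient step on the data-fidelity term ||A x - y||^2, then applies
    a learned stage-specific denoiser/prior module."""
    x = At(y)                                # crude initialization from the adjoint
    for prior in denoisers:
        grad = At(A(x) - y)                  # gradient of the data-fidelity term
        x = prior(x - eta * grad)            # learned proximal / denoising step
    return x

# Toy check: A = 2x averaging downsample of a 1-D signal, priors = identity.
A  = lambda x: x.reshape(-1, 2).mean(axis=1)
At = lambda y: np.repeat(y, 2) / 2.0         # adjoint of the averaging operator
x_hat = unfolded_sr(np.array([1.0, 3.0]), A, At, [lambda z: z] * 4)
print(x_hat)
```
In a trained unfolding network, each identity prior above would be replaced by a small learned module, which is what makes the scheme interpretable stage by stage.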
- Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions [91.63716984911278]
We introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty for the adaptive integration of different modalities and produces trustworthy regression results.
Experimental results on both synthetic and real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks.
arXiv Detail & Related papers (2021-11-11T14:28:12Z)
- Bayesian Uncertainty Estimation of Learned Variational MRI Reconstruction [63.202627467245584]
We introduce a Bayesian variational framework to quantify the model-immanent (epistemic) uncertainty.
We demonstrate that our approach yields competitive results for undersampled MRI reconstruction.
arXiv Detail & Related papers (2021-02-12T18:08:14Z)
- Bayesian Conditional GAN for MRI Brain Image Synthesis [0.0]
We propose to use a Bayesian conditional generative adversarial network (GAN) with concrete dropout to improve image synthesis accuracy.
The method is validated on T1w-to-T2w MR image translation with a brain tumor dataset of 102 subjects.
Compared with a conventional Bayesian neural network using Monte Carlo dropout, the proposed method reaches a significantly lower RMSE (p = 0.0186).
arXiv Detail & Related papers (2020-05-25T00:58:23Z)
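For context on the Monte Carlo dropout baseline that the entry above compares against, here is a minimal PyTorch sketch of test-time MC dropout for image synthesis; the synthesis `model` is assumed to contain dropout layers, and the tiny network at the bottom is a hypothetical stand-in for a conditional GAN generator.
```python
import torch

def mc_dropout_synthesis(model, x, n_samples=20):
    """Test-time Monte Carlo dropout: sample several stochastic forward
    passes; the per-voxel mean is the synthesized image and the per-voxel
    variance approximates model (epistemic) uncertainty."""
    model.eval()                               # freeze BatchNorm statistics
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()                          # but keep dropout stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)

# Toy check with a small dropout-bearing network.
net = torch.nn.Sequential(torch.nn.Conv2d(1, 8, 3, padding=1),
                          torch.nn.ReLU(),
                          torch.nn.Dropout(0.5),
                          torch.nn.Conv2d(8, 1, 3, padding=1))
img, unc = mc_dropout_synthesis(net, torch.randn(1, 1, 16, 16))
print(img.shape, unc.shape)
```
Re-enabling only the dropout modules (rather than calling `model.train()` globally) avoids switching BatchNorm back to batch statistics during sampling.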