Uncertainty Estimation in Contrast-Enhanced MR Image Translation with
Multi-Axis Fusion
- URL: http://arxiv.org/abs/2311.12153v1
- Date: Mon, 20 Nov 2023 20:09:48 GMT
- Authors: Ivo M. Baltruschat, Parvaneh Janbakhshi, Melanie Dohmen, Matthias
Lenga
- Abstract summary: We propose a novel model uncertainty quantification method, Multi-Axis Fusion (MAF).
The proposed approach is applied to the task of synthesizing contrast enhanced T1-weighted images based on native T1, T2 and T2-FLAIR scans.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, deep learning has been applied to a wide range of medical
imaging and image processing tasks. In this work, we focus on the estimation of
epistemic uncertainty for 3D medical image-to-image translation. We propose a
novel model uncertainty quantification method, Multi-Axis Fusion (MAF), which
relies on the integration of complementary information derived from multiple
views on volumetric image data. The proposed approach is applied to the task of
synthesizing contrast enhanced T1-weighted images based on native T1, T2 and
T2-FLAIR scans. The quantitative findings indicate a strong correlation
($\rho_{\text{healthy}} = 0.89$) between the mean absolute image synthetization
error and the mean uncertainty score for our MAF method. Hence, we consider MAF
as a promising approach to solve the highly relevant task of detecting
synthetization failures at inference time.
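The core idea of MAF, as described in the abstract, can be sketched as follows: run the translation model on re-orientations of the input volume along each anatomical axis, fuse the predictions by averaging, and use the per-voxel spread across views as an epistemic-uncertainty proxy. This is a minimal illustration, not the paper's implementation; `predict` is a hypothetical stand-in for a trained 2.5D translation network whose output depends on the slicing axis.

```python
import numpy as np

def predict(volume: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained translation network.
    It smooths along the leading (slice) axis, so its output depends on
    which anatomical axis the slices are taken along, as a 2.5D model would."""
    out = volume.copy()
    out[1:] = 0.5 * (volume[1:] + volume[:-1])
    return out

def maf_predict(volume: np.ndarray):
    """Predict from axial, coronal and sagittal orientations, map each
    prediction back to the original frame, then fuse by averaging.
    The per-voxel standard deviation across views is the uncertainty map."""
    preds = []
    for axis in range(3):
        # View the volume with `axis` as the leading slice dimension.
        reoriented = np.moveaxis(volume, axis, 0)
        pred = predict(reoriented)
        # Restore the original orientation before fusing.
        preds.append(np.moveaxis(pred, 0, axis))
    stack = np.stack(preds)          # shape: (3, D, H, W)
    fused = stack.mean(axis=0)       # MAF output volume
    uncertainty = stack.std(axis=0)  # per-voxel epistemic-uncertainty proxy
    return fused, uncertainty

rng = np.random.default_rng(0)
t1 = rng.random((8, 8, 8)).astype(np.float32)
fused, unc = maf_predict(t1)
print(fused.shape, float(unc.mean()))
```

Averaging the mean uncertainty per case against the per-case mean absolute error would then yield the kind of correlation the abstract reports.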
Related papers
- Trustworthy Contrast-enhanced Brain MRI Synthesis [27.43375565176473]
Multi-modality medical image translation aims to synthesize CE-MRI images from other modalities.
We introduce TrustI2I, a novel trustworthy method that reformulates multi-to-one medical image translation problem as a multimodal regression problem.
arXiv Detail & Related papers (2024-07-10T05:17:01Z)
- Simultaneous Tri-Modal Medical Image Fusion and Super-Resolution using Conditional Diffusion Model [2.507050016527729]
Tri-modal medical image fusion can provide a more comprehensive view of the disease's shape, location, and biological activity.
Due to the limitations of imaging equipment and considerations for patient safety, the quality of medical images is usually limited.
There is an urgent need for a technology that can both enhance image resolution and integrate multi-modal information.
arXiv Detail & Related papers (2024-04-26T12:13:41Z)
- Cascaded Multi-path Shortcut Diffusion Model for Medical Image Translation [26.67518950976257]
We propose a Cascade Multi-path Shortcut Diffusion Model (CMDM) for high-quality medical image translation and uncertainty estimation.
Our experimental results found that CMDM can produce high-quality translations comparable to state-of-the-art methods.
arXiv Detail & Related papers (2024-04-06T03:02:47Z)
- DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion [144.9653045465908]
We propose a novel fusion algorithm based on the denoising diffusion probabilistic model (DDPM).
Our approach yields promising fusion results in infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2023-03-13T04:06:42Z)
- Paired Image-to-Image Translation Quality Assessment Using Multi-Method Fusion [0.0]
This paper proposes a novel approach that combines signals of image quality between paired source and transformation to predict the latter's similarity with a hypothetical ground truth.
We trained a Multi-Method Fusion (MMF) model via an ensemble of gradient-boosted regressors to predict Deep Image Structure and Texture Similarity (DISTS).
Analysis revealed the task to be feature-constrained, introducing a trade-off at inference between metric time and prediction accuracy.
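The MMF idea of fusing several image-quality signals through a gradient-boosted regressor to predict a perceptual score can be sketched as follows. The feature columns and target here are synthetic stand-ins, not the paper's actual metric set or data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Synthetic "quality metric" features for 200 source/translation pairs
# (columns might correspond to PSNR-, SSIM- and LPIPS-like scores).
X = rng.random((200, 3))
# Synthetic target standing in for each pair's DISTS score.
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=200)

# Gradient-boosted regressor fusing the metric signals into one prediction.
model = GradientBoostingRegressor(n_estimators=100, max_depth=2, random_state=0)
model.fit(X[:150], y[:150])
pred = model.predict(X[150:])
print(pred.shape)
```

Dropping expensive feature columns would speed up inference at some cost in accuracy, which mirrors the metric-time versus prediction-accuracy trade-off the summary describes.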
arXiv Detail & Related papers (2022-05-09T11:05:15Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper proposes a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Coupled Feature Learning for Multimodal Medical Image Fusion [42.23662451234756]
Multimodal image fusion aims to combine relevant information from images acquired with different sensors.
In this paper, we propose a novel multimodal image fusion method based on coupled dictionary learning.
arXiv Detail & Related papers (2021-02-17T09:13:28Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
- Hyperspectral-Multispectral Image Fusion with Weighted LASSO [68.04032419397677]
We propose an approach for fusing hyperspectral and multispectral images to provide high-quality hyperspectral output.
We demonstrate that the proposed sparse fusion and reconstruction provides quantitatively superior results when compared to existing methods on publicly available images.
arXiv Detail & Related papers (2020-03-15T23:07:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.