Uncertainty-Aware Multi-Parametric Magnetic Resonance Image Information
Fusion for 3D Object Segmentation
- URL: http://arxiv.org/abs/2211.08783v1
- Date: Wed, 16 Nov 2022 09:16:52 GMT
- Title: Uncertainty-Aware Multi-Parametric Magnetic Resonance Image Information
Fusion for 3D Object Segmentation
- Authors: Cheng Li, Yousuf Babiker M. Osman, Weijian Huang, Zhenzhen Xue, Hua
Han, Hairong Zheng, Shanshan Wang
- Abstract summary: We propose an uncertainty-aware multi-parametric MR image feature fusion method to fully exploit the information for enhanced 3D image segmentation.
Our proposed method achieves better segmentation performance than existing models.
- Score: 12.361668672097753
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Multi-parametric magnetic resonance (MR) imaging is an indispensable tool in
the clinic. Consequently, automatic volume-of-interest segmentation based on
multi-parametric MR imaging is crucial for computer-aided disease diagnosis,
treatment planning, and prognosis monitoring. Despite extensive work on deep
learning-based medical image analysis, further investigation is needed to
effectively exploit the information provided by the different imaging
parameters; how to fuse this information remains a key open question in the
field. Here, we propose an uncertainty-aware multi-parametric MR image feature
fusion method to fully exploit the information for enhanced 3D image
segmentation. Uncertainties in the independent predictions of individual
modalities are utilized to guide the fusion of multi-modal image features.
Extensive experiments on two datasets, one for brain tissue segmentation and
the other for abdominal multi-organ segmentation, show that our proposed method
achieves better segmentation performance than existing models.
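To make the core idea concrete, here is a minimal, hypothetical sketch of uncertainty-guided fusion (not the authors' released code): each modality branch predicts independently, voxel-wise predictive entropy scores its confidence, and entropy-derived weights gate the per-modality features before fusion. All names, shapes, and the softmax weighting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def entropy_map(logits: torch.Tensor) -> torch.Tensor:
    """Voxel-wise predictive entropy from per-modality logits (B, C, D, H, W)."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p.clamp_min(1e-8))).sum(dim=1, keepdim=True)  # (B, 1, D, H, W)

def uncertainty_weighted_fusion(features, logits):
    """Fuse per-modality feature maps, down-weighting voxels where that
    modality's independent prediction is uncertain (high entropy).

    features: list of (B, F, D, H, W) tensors, one per MR parameter map
    logits:   list of (B, C, D, H, W) segmentation logits, one per modality
    """
    # Confidence = negative entropy; softmax across modalities yields
    # voxel-wise weights that sum to 1 over the modalities.
    confidences = torch.stack([-entropy_map(l) for l in logits], dim=0)  # (M, B, 1, D, H, W)
    weights = torch.softmax(confidences, dim=0)
    return sum(w * f for w, f in zip(weights, features))

# Toy usage: two modalities (e.g., T1- and T2-weighted), 4 tissue classes.
feats = [torch.randn(1, 16, 8, 32, 32) for _ in range(2)]
preds = [torch.randn(1, 4, 8, 32, 32) for _ in range(2)]
print(uncertainty_weighted_fusion(feats, preds).shape)  # torch.Size([1, 16, 8, 32, 32])
```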
Related papers
- Multi-sensor Learning Enables Information Transfer across Different Sensory Data and Augments Multi-modality Imaging [21.769547352111957]
We investigate a data-driven multi-modality imaging (DMI) strategy for synergetic imaging of CT and MRI.
We reveal two distinct types of features in multi-modality imaging, namely intra- and inter-modality features, and present a multi-sensor learning (MSL) framework.
We showcase the effectiveness of our DMI strategy through synergetic CT-MRI brain imaging.
arXiv Detail & Related papers (2024-09-28T17:40:54Z)
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
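As an illustration of how inter-rater variability is often folded into evaluation, the sketch below (not challenge code; names are assumptions) builds a soft consensus label from multiple rater masks and scores a prediction with a continuous Dice.

```python
import torch

def soft_label(rater_masks: torch.Tensor) -> torch.Tensor:
    """Average binary masks from R raters into a per-voxel agreement map in [0, 1].
    rater_masks: (R, D, H, W) binary tensor."""
    return rater_masks.float().mean(dim=0)

def soft_dice(pred_prob: torch.Tensor, target_prob: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Continuous Dice between predicted probabilities and the soft consensus label."""
    inter = (pred_prob * target_prob).sum()
    return (2 * inter + eps) / (pred_prob.sum() + target_prob.sum() + eps)

# Three raters disagree at object boundaries; the soft label keeps that ambiguity.
raters = torch.randint(0, 2, (3, 4, 16, 16))
consensus = soft_label(raters)
pred = torch.rand(4, 16, 16)
print(float(soft_dice(pred, consensus)))
```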
arXiv Detail & Related papers (2024-03-19T17:57:24Z)
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- Three-Dimensional Medical Image Fusion with Deformable Cross-Attention [10.26573411162757]
Multimodal medical image fusion plays an instrumental role in several areas of medical image processing.
Traditional fusion methods tend to process each modality independently before combining the features and reconstructing the fused image.
In this study, we introduce an innovative unsupervised feature mutual learning fusion network designed to rectify these limitations.
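A minimal sketch of cross-attention-based fusion is given below. It uses standard (non-deformable) attention via PyTorch's nn.MultiheadAttention, omitting the paper's deformable sampling offsets; the class name and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Simplified cross-attention: tokens of modality A attend to modality B.
    The deformable offset sampling from the paper is omitted for brevity."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens_a: torch.Tensor, tokens_b: torch.Tensor) -> torch.Tensor:
        # Query from modality A; keys/values from modality B.
        fused, _ = self.attn(query=tokens_a, key=tokens_b, value=tokens_b)
        return self.norm(tokens_a + fused)  # residual connection

# Two 3D volumes flattened to token sequences of length D*H*W with dim 32.
a = torch.randn(1, 8 * 8 * 8, 32)
b = torch.randn(1, 8 * 8 * 8, 32)
print(CrossModalAttention(32)(a, b).shape)  # torch.Size([1, 512, 32])
```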
arXiv Detail & Related papers (2023-10-10T04:10:56Z)
- Modality-Agnostic Learning for Medical Image Segmentation Using Multi-modality Self-distillation [1.815047691981538]
We propose a novel framework, Modality-Agnostic learning through Multi-modality Self-distillation (MAG-MS).
MAG-MS distills knowledge from the fusion of multiple modalities and applies it to enhance representation learning for individual modalities.
Our experiments on benchmark datasets demonstrate the high efficiency of MAG-MS and its superior segmentation performance.
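A hedged sketch of the self-distillation idea: the fused multi-modal prediction acts as a teacher whose softened outputs each uni-modal branch imitates. The temperature and loss weighting below are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(fused_logits, unimodal_logits_list, T: float = 2.0):
    """KL-distill the fused (multi-modal) prediction into each uni-modal branch,
    so a single modality at test time can mimic the multi-modal teacher."""
    teacher = F.softmax(fused_logits.detach() / T, dim=1)  # stop-gradient teacher
    loss = 0.0
    for logits in unimodal_logits_list:
        student_log = F.log_softmax(logits / T, dim=1)
        loss = loss + F.kl_div(student_log, teacher, reduction="batchmean") * (T * T)
    return loss / len(unimodal_logits_list)

fused = torch.randn(2, 4, 8, 16, 16)
branches = [torch.randn(2, 4, 8, 16, 16, requires_grad=True) for _ in range(3)]
print(float(self_distillation_loss(fused, branches)))
```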
arXiv Detail & Related papers (2023-06-06T14:48:50Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing informative patches, selected according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
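The gradient-guided reconstruction idea can be sketched as follows; the 1D gradient proxy and weighting scheme here are simplifications assumed for illustration, not the paper's exact metric.

```python
import torch

def gradient_weight(patches: torch.Tensor) -> torch.Tensor:
    """Score each patch by mean absolute intensity gradient: edge-rich patches
    are treated as more informative to reconstruct.
    patches: (N, P) flattened patches."""
    grad = (patches[:, 1:] - patches[:, :-1]).abs().mean(dim=1)  # crude 1D proxy
    return grad / grad.sum().clamp_min(1e-8)

def weighted_recon_loss(recon: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """MSE per patch, re-weighted so informative patches dominate the loss."""
    w = gradient_weight(target)
    per_patch = ((recon - target) ** 2).mean(dim=1)
    return (w * per_patch).sum()

target = torch.rand(64, 256)   # 64 patches of 256 voxels each
recon = torch.rand(64, 256)
print(float(weighted_recon_loss(recon, target)))
```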
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- Generalizable multi-task, multi-domain deep segmentation of sparse pediatric imaging datasets via multi-scale contrastive regularization and multi-joint anatomical priors [0.41998444721319217]
We propose to design a novel multi-task, multi-domain learning framework in which a single segmentation network is optimized over multiple datasets.
We evaluate our contributions for performing bone segmentation using three scarce and pediatric imaging datasets of the ankle, knee, and shoulder joints.
arXiv Detail & Related papers (2022-07-27T12:59:16Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to recover the frequency signal in the $k$-space domain while restoring image detail in the image domain.
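One way to read "hybrid domain learning" is a generic k-space data-consistency step: the current image estimate is transformed to the frequency domain and the acquired samples are re-imposed. The sketch below illustrates that step under assumed names and a random sampling mask; it is not MANet's implementation.

```python
import torch

def kspace_data_consistency(pred_img, sampled_kspace, mask):
    """Replace the predicted k-space values at sampled locations with the
    acquired measurements, enforcing consistency in the frequency domain.

    pred_img:        (H, W) image estimate
    sampled_kspace:  (H, W) acquired k-space (zeros where not sampled)
    mask:            (H, W) binary sampling mask
    """
    k_pred = torch.fft.fft2(pred_img.to(torch.complex64))
    k_mixed = mask * sampled_kspace + (1 - mask) * k_pred
    return torch.fft.ifft2(k_mixed).real

img = torch.rand(64, 64)
full_k = torch.fft.fft2(img.to(torch.complex64))
mask = (torch.rand(64, 64) > 0.7).float()        # keep ~30% of frequencies
recon = kspace_data_consistency(torch.rand(64, 64), mask * full_k, mask)
print(recon.shape)  # torch.Size([64, 64])
```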
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation, where one, two, or three of the four sub-modalities may be missing.
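The GP-prior intuition, greatly simplified: the latent codes of a patient's sub-modalities are drawn from a Gaussian process, so they are correlated and a missing code can be inferred from the others. The toy kernel and coordinates below are assumptions for illustration, not the paper's kernel over subjects and sub-modalities.

```python
import torch

def gp_prior_covariance(modality_coords: torch.Tensor, lengthscale: float = 1.0) -> torch.Tensor:
    """RBF kernel over sub-modality 'positions': nearby sub-modalities get
    strongly correlated latent codes (the GP-prior idea, greatly simplified)."""
    d2 = (modality_coords[:, None] - modality_coords[None, :]) ** 2
    return torch.exp(-d2 / (2 * lengthscale ** 2))

# Four sub-modalities (e.g., T1, T1ce, T2, FLAIR) at toy coordinates 0..3.
coords = torch.arange(4, dtype=torch.float32)
K = gp_prior_covariance(coords)

# Sample latent codes correlated across sub-modalities: because the codes are
# tied by K, a missing sub-modality's code can be inferred from the rest.
L = torch.linalg.cholesky(K + 1e-5 * torch.eye(4))
z = L @ torch.randn(4, 8)   # (4 sub-modalities, 8 latent dims)
print(z.shape)
```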
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Cross-Modal Self-Attention Distillation for Prostate Cancer Segmentation [1.630747108038841]
How to use the multi-modal image features more efficiently is still a challenging problem in the field of medical image segmentation.
We develop a cross-modal self-attention distillation network by fully exploiting the encoded information of the intermediate layers from different modalities.
We evaluate our model with five-fold cross-validation on 358 MRI scans with biopsy-confirmed prostate cancer.
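A minimal sketch of cross-modal attention distillation under assumed names: spatial attention maps are derived from intermediate features of each modality branch and matched with an L2 loss. The energy-based map below is one common choice, not necessarily the paper's.

```python
import torch
import torch.nn.functional as F

def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Spatial attention map from an intermediate feature tensor (B, C, H, W):
    channel-wise energy, normalized to unit norm per sample."""
    a = feat.pow(2).mean(dim=1, keepdim=True)        # (B, 1, H, W)
    return F.normalize(a.flatten(1), dim=1)

def cross_modal_distill_loss(feat_m1: torch.Tensor, feat_m2: torch.Tensor) -> torch.Tensor:
    """Encourage two modality branches (e.g., T2WI and ADC) to agree on where
    they attend at the same intermediate layer."""
    return F.mse_loss(attention_map(feat_m1), attention_map(feat_m2))

f1 = torch.randn(2, 64, 32, 32)
f2 = torch.randn(2, 64, 32, 32)
print(float(cross_modal_distill_loss(f1, f2)))
```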
arXiv Detail & Related papers (2020-11-08T06:19:13Z)
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed in specific information systems that make the same information available under different modalities.
This offers a unique opportunity to obtain and use, at training time, multiple views of the same information that might not always be available at test time.
We propose an innovative framework that makes the most of available data by learning representations of a multi-modal input that are resilient to modality dropping at test time.
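One standard route to such resilience, shown below as a hedged sketch (not necessarily CMIM's objective), is modality dropout: whole modalities are randomly zeroed during training so the learned representation cannot depend on any single one.

```python
import torch

def modality_dropout(modalities, p_drop: float = 0.3):
    """Randomly zero out whole modalities during training so the model learns
    representations that survive missing inputs at test time.
    Always keeps at least one modality."""
    keep = torch.rand(len(modalities)) > p_drop
    if not keep.any():
        keep[torch.randint(len(modalities), (1,))] = True
    return [m if k else torch.zeros_like(m) for m, k in zip(modalities, keep)]

mods = [torch.randn(1, 1, 16, 16) for _ in range(3)]  # e.g., three MR contrasts
dropped = modality_dropout(mods)
print([bool(m.abs().sum() > 0) for m in dropped])
```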
arXiv Detail & Related papers (2020-10-20T20:05:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.