A Novel Unified Conditional Score-based Generative Framework for
Multi-modal Medical Image Completion
- URL: http://arxiv.org/abs/2207.03430v1
- Date: Thu, 7 Jul 2022 16:57:21 GMT
- Title: A Novel Unified Conditional Score-based Generative Framework for
Multi-modal Medical Image Completion
- Authors: Xiangxi Meng, Yuning Gu, Yongsheng Pan, Nizhuan Wang, Peng Xue,
Mengkang Lu, Xuming He, Yiqiang Zhan and Dinggang Shen
- Abstract summary: We propose the Unified Multi-Modal Conditional Score-based Generative Model (UMM-CSGM) to take advantage of the Score-based Generative Model (SGM) for cross-modal conditional synthesis.
UMM-CSGM employs a novel multi-in multi-out Conditional Score Network (mm-CSN) to learn a comprehensive set of cross-modal conditional distributions.
Experiments on the BraTS19 dataset show that UMM-CSGM can more reliably synthesize the heterogeneous enhancement and irregular areas in tumor-induced lesions.
- Score: 54.512440195060584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-modal medical image completion has been extensively applied to
alleviate the missing modality issue in a wealth of multi-modal diagnostic
tasks. However, for most existing synthesis methods, their inferences of
missing modalities can collapse into a deterministic mapping from the available
ones, ignoring the uncertainties inherent in the cross-modal relationships.
Here, we propose the Unified Multi-Modal Conditional Score-based Generative
Model (UMM-CSGM) to take advantage of the Score-based Generative Model (SGM) in
modeling and stochastically sampling a target probability distribution, and
further extend SGM to cross-modal conditional synthesis for various
missing-modality configurations in a unified framework. Specifically, UMM-CSGM
employs a novel multi-in multi-out Conditional Score Network (mm-CSN) to learn
a comprehensive set of cross-modal conditional distributions via conditional
diffusion and reverse generation in the complete modality space. In this way,
the generation process can be accurately conditioned by all available
information, and can fit all possible configurations of missing modalities in a
single network. Experiments on the BraTS19 dataset show that UMM-CSGM can more
reliably synthesize the heterogeneous enhancement and irregular areas in
tumor-induced lesions for any missing modalities.
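The core mechanism described above, running a reverse diffusion on the missing modalities while conditioning on whichever modalities are available, can be illustrated with a short sketch. Everything below (the toy ConditionalScoreNet, the VE-SDE sigma schedule, the Euler-Maruyama sampler, and all shapes) is an assumption made for illustration; it is not the paper's mm-CSN architecture.

```python
# Minimal sketch of mask-conditioned score-based completion (VE-SDE,
# Euler-Maruyama sampler). Illustrative only: the network, schedule,
# and shapes are assumptions, not the paper's design.
import math
import torch
import torch.nn as nn

class ConditionalScoreNet(nn.Module):
    """Toy multi-in multi-out score network: it sees the noisy channels,
    the clean available channels, the availability mask, and the noise level."""
    def __init__(self, n_mod=4, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * n_mod + 1, width, 3, padding=1), nn.SiLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.SiLU(),
            nn.Conv2d(width, n_mod, 3, padding=1),
        )

    def forward(self, x_t, cond, mask, sigma):
        # Broadcast the current noise level as one extra input channel.
        s = sigma.view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, cond, mask, s], dim=1))

@torch.no_grad()
def complete_missing(model, images, mask, n_steps=100, sigma_max=10.0, sigma_min=0.01):
    """Sample missing modalities (mask == 0) conditioned on available ones (mask == 1)."""
    cond = images * mask                       # clean, available modalities
    x = torch.randn_like(images) * sigma_max   # missing channels start from pure noise
    x = mask * images + (1 - mask) * x         # observed channels stay at their values
    sigmas = torch.exp(torch.linspace(math.log(sigma_max), math.log(sigma_min), n_steps))
    for i in range(n_steps - 1):
        s, s_next = sigmas[i], sigmas[i + 1]
        score = model(x, cond, mask, s.repeat(x.shape[0]))
        step = s**2 - s_next**2                # > 0 on a decreasing schedule
        # Euler-Maruyama step of the reverse VE-SDE.
        x = x + step * score + torch.sqrt(step) * torch.randn_like(x)
        # Clamp the observed modalities back to their true values.
        x = mask * images + (1 - mask) * x
    return x

model = ConditionalScoreNet(n_mod=4)   # untrained here; for illustration only
imgs = torch.randn(2, 4, 64, 64)       # e.g. four MR modalities, as in BraTS
mask = torch.zeros(2, 4, 1, 1); mask[:, [0, 1, 3]] = 1.0  # modality 2 missing
completed = complete_missing(model, imgs, mask.expand_as(imgs))
```

Because the availability mask is itself an input, one such network can in principle serve every missing-modality configuration, which mirrors the unification the abstract claims for a single network.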
Related papers
- Modality Prompts for Arbitrary Modality Salient Object Detection [57.610000247519196]
This paper delves into the task of arbitrary modality salient object detection (AM SOD), which aims to detect salient objects from arbitrary modalities, e.g., RGB images, RGB-D images, and RGB-D-T images.
A novel modality-adaptive Transformer (MAT) is proposed to address two fundamental challenges of AM SOD.
arXiv Detail & Related papers (2024-05-06T11:02:02Z)
- Federated Pseudo Modality Generation for Incomplete Multi-Modal MRI Reconstruction [26.994070472726357]
Fed-PMG is a novel communication-efficient federated learning framework.
We propose a pseudo modality generation mechanism to recover the missing modality for each single-modal client.
Our approach can effectively complete the missing modality at an acceptable communication cost.
arXiv Detail & Related papers (2023-08-20T03:38:59Z)
- Unified Multi-Modal Image Synthesis for Missing Modality Imputation [23.681228202899984]
We propose a novel unified multi-modal image synthesis method for missing modality imputation.
The proposed method is effective in handling various synthesis tasks and shows superior performance compared to previous methods.
arXiv Detail & Related papers (2023-04-11T16:59:15Z)
- Exploiting modality-invariant feature for robust multimodal emotion recognition with missing modalities [76.08541852988536]
We propose to use modality-invariant features in a missing modality imagination network (IF-MMIN).
We show that the proposed model outperforms all baselines and consistently improves overall emotion recognition performance under uncertain missing-modality conditions.
arXiv Detail & Related papers (2022-10-27T12:16:25Z)
- A Learnable Variational Model for Joint Multimodal MRI Reconstruction and Synthesis [4.056490719080639]
We propose a novel deep-learning model for joint reconstruction and synthesis of multi-modal MRI.
The output of our model includes reconstructed images of the source modalities and a high-quality image synthesized in the target modality.
arXiv Detail & Related papers (2022-04-08T01:35:19Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE to brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
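As a companion to the MGP-VAE entry above, the sketch below illustrates only the generic Gaussian Process conditioning step such a model could apply in latent space: imputing a missing sub-modality's latent code from observed ones. The RBF kernel over modality indices, the shapes, and the function names are assumptions made for the sketch, not the paper's formulation.

```python
# Illustrative GP conditioning in a VAE latent space: impute the latent of a
# missing sub-modality from observed sub-modality latents (sketch only).
import numpy as np

def rbf_kernel(a, b, length=1.0):
    # Squared-exponential kernel between two sets of 1-D indices.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_impute_latent(z_obs, obs_idx, miss_idx, length=1.0, noise=1e-3):
    """GP posterior mean of the missing modality's latent, per latent dimension.

    z_obs: (n_obs, D) latent codes of the observed sub-modalities.
    obs_idx / miss_idx: modality indices (e.g. 0..3 for four MR sub-modalities).
    """
    obs_idx = np.asarray(obs_idx, float)
    miss_idx = np.asarray(miss_idx, float)
    K = rbf_kernel(obs_idx, obs_idx, length) + noise * np.eye(len(obs_idx))
    k_star = rbf_kernel(miss_idx, obs_idx, length)   # (n_miss, n_obs)
    return k_star @ np.linalg.solve(K, z_obs)        # (n_miss, D)

# Example: modalities 0, 1, 3 observed; impute the latent of modality 2.
z_obs = np.random.randn(3, 16)
z_hat = gp_impute_latent(z_obs, obs_idx=[0, 1, 3], miss_idx=[2])
print(z_hat.shape)  # (1, 16)
```

The imputed latent would then be decoded by the VAE into the missing image; the GP prior is what lets correlations between sub-modalities drive that imputation.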
- A Multi-Semantic Metapath Model for Large Scale Heterogeneous Network Representation Learning [52.83948119677194]
We propose a multi-semantic metapath (MSM) model for large-scale heterogeneous network representation learning.
Specifically, we generate multi-semantic metapath-based random walks to construct heterogeneous neighborhoods and handle unbalanced distributions.
We conduct systematic evaluations of the proposed framework on two challenging datasets: Amazon and Alibaba.
arXiv Detail & Related papers (2020-07-19T22:50:20Z)
- Hi-Net: Hybrid-fusion Network for Multi-modal MR Image Synthesis [143.55901940771568]
We propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis.
In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality.
A multi-modal synthesis network is designed to densely combine the latent representation with hierarchical features from each modality.
arXiv Detail & Related papers (2020-02-11T08:26:42Z)
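Finally, the Hi-Net entry above centers on modality-specific encoders whose features are fused to synthesize a target modality; a minimal sketch of that general pattern follows. Layer sizes, the plain concatenation fusion, and all names are illustrative assumptions rather than Hi-Net's actual hybrid-fusion design.

```python
# Sketch of the general Hi-Net-style pattern: per-modality encoders feed a
# fusion network that synthesizes the target modality (illustration only).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

class HybridFusionSynth(nn.Module):
    def __init__(self, n_sources=2, width=32):
        super().__init__()
        # One modality-specific encoder per available source modality.
        self.encoders = nn.ModuleList(
            nn.Sequential(conv_block(1, width), conv_block(width, width))
            for _ in range(n_sources))
        # Fusion network combines the per-modality features into one output.
        self.fusion = nn.Sequential(
            conv_block(n_sources * width, width),
            conv_block(width, width),
            nn.Conv2d(width, 1, 3, padding=1))

    def forward(self, sources):                 # list of (B, 1, H, W) tensors
        feats = [enc(x) for enc, x in zip(self.encoders, sources)]
        return self.fusion(torch.cat(feats, dim=1))

t1, t2 = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
target_hat = HybridFusionSynth(n_sources=2)(sources=[t1, t2])  # synthesized modality
```

Keeping the encoders modality-specific lets each branch learn features suited to its own contrast before fusion, which is the design motivation the Hi-Net summary points to.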
This list is automatically generated from the titles and abstracts of the papers on this site.