An Explainable Deep Framework: Towards Task-Specific Fusion for
Multi-to-One MRI Synthesis
- URL: http://arxiv.org/abs/2307.00885v1
- Date: Mon, 3 Jul 2023 09:31:50 GMT
- Title: An Explainable Deep Framework: Towards Task-Specific Fusion for
Multi-to-One MRI Synthesis
- Authors: Luyi Han, Tianyu Zhang, Yunzhi Huang, Haoran Dou, Xin Wang, Yuan Gao,
Chunyao Lu, Tao Tan, Ritse Mann
- Abstract summary: Multi-sequence MRI is valuable in clinical settings for reliable diagnosis and treatment prognosis.
Recent deep learning-based methods have achieved good performance in combining multiple available sequences for missing sequence synthesis.
We propose an explainable task-specific synthesis network, which adapts weights automatically for specific sequence generation tasks.
- Score: 13.849499699377535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-sequence MRI is valuable in clinical settings for reliable diagnosis
and treatment prognosis, but some sequences may be unusable or missing for
various reasons. To address this issue, MRI synthesis is a potential solution.
Recent deep learning-based methods have achieved good performance in combining
multiple available sequences for missing sequence synthesis. Despite their
success, these methods cannot quantify the contributions of different input
sequences or estimate the quality of the generated images, which limits their
practical use. Hence, we propose an explainable task-specific synthesis
network that adapts its weights automatically to each sequence generation
task and provides interpretability and reliability in two ways: (1) it
visualizes the contribution of each input sequence in the fusion stage
through a trainable task-specific weighted average module; (2) it highlights
the regions the network tries to refine during synthesis through a
task-specific attention
module. We conduct experiments on the BraTS2021 dataset of 1251 subjects, and
results on arbitrary sequence synthesis indicate that the proposed method
achieves better performance than the state-of-the-art methods. Our code is
available at https://github.com/fiy2W/mri_seq2seq.
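The two components described above (a trainable task-specific weighted average for fusing the available input sequences, and a task-specific attention module that highlights regions to refine) can be pictured with a small, hypothetical PyTorch sketch. This is not the authors' implementation, which lives in the linked repository; the module names, tensor shapes, and the `target_idx` conditioning below are assumptions made only to illustrate the idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskSpecificFusion(nn.Module):
    """Sketch: fuse per-sequence features with learnable, task-conditioned weights.

    `num_sequences` is the number of candidate input sequences (e.g. T1, T1ce, T2,
    FLAIR). One weight vector is learned per synthesis target, so the softmax-
    normalized weights can be read out as per-sequence contributions.
    """

    def __init__(self, num_sequences: int, channels: int):
        super().__init__()
        # one logit per (target task, input sequence); rows index the target task
        self.fusion_logits = nn.Parameter(torch.zeros(num_sequences, num_sequences))
        # a lightweight attention head that highlights regions to refine
        self.attention = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor, target_idx: int, available: torch.Tensor):
        # feats: (B, S, C, H, W) features from each input sequence
        # available: (S,) boolean mask of which sequences are present
        logits = self.fusion_logits[target_idx].masked_fill(~available, float("-inf"))
        weights = F.softmax(logits, dim=0)                          # (S,) contributions
        fused = (weights.view(1, -1, 1, 1, 1) * feats).sum(dim=1)   # (B, C, H, W)
        attn = torch.sigmoid(self.attention(fused))                 # (B, 1, H, W)
        return fused * (1 + attn), weights, attn
```

In such a design, the softmax-normalized `weights` are what one would visualize as per-sequence contributions and `attn` as the refinement map; the actual network presumably conditions more of the pipeline on the target task than this single fusion step.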
Related papers
- Non-Adversarial Learning: Vector-Quantized Common Latent Space for Multi-Sequence MRI [15.4894593374853]
We propose a generative model that compresses discrete representations of each sequence to estimate the Gaussian distribution of common latent space between sequences.
Experiments using the BraTS2021 dataset show that our non-adversarial model outperforms other GAN-based methods.
arXiv Detail & Related papers (2024-07-03T08:37:01Z)
- Comprehensive Generative Replay for Task-Incremental Segmentation with Concurrent Appearance and Semantic Forgetting [49.87694319431288]
Generalist segmentation models are increasingly favored for diverse tasks involving various objects from different image sources.
We propose a Comprehensive Generative Replay (CGR) framework that restores appearance and semantic knowledge by synthesizing image-mask pairs.
Experiments on incremental tasks (cardiac, fundus and prostate segmentation) show its clear advantage for alleviating concurrent appearance and semantic forgetting.
arXiv Detail & Related papers (2024-06-28T10:05:58Z)
- A Unified Framework for Synthesizing Multisequence Brain MRI via Hybrid Fusion [4.47838172826189]
We propose a novel unified framework for synthesizing multisequence MR images, called Hybrid Fusion GAN (HF-GAN).
We introduce a hybrid fusion encoder designed to ensure the disentangled extraction of complementary and modality-specific information.
Common feature representations are transformed into a target latent space via the modality infuser to synthesize missing MR sequences.
arXiv Detail & Related papers (2024-06-21T08:06:00Z)
- Bi-Modality Medical Image Synthesis Using Semi-Supervised Sequential Generative Adversarial Networks [35.358653509217994]
We propose a bi-modality medical image synthesis approach based on a sequential generative adversarial network (GAN) and semi-supervised learning.
Our approach consists of two generative modules that synthesize images of the two modalities in sequential order.
Visual and quantitative results demonstrate the superiority of our method to the state-of-the-art methods.
arXiv Detail & Related papers (2023-08-27T10:39:33Z)
- Object Segmentation by Mining Cross-Modal Semantics [68.88086621181628]
We propose a novel approach that mines cross-modal semantics to guide the fusion and decoding of multimodal features.
Specifically, we propose a novel network, termed XMSNet, consisting of (1) all-round attentive fusion (AF), (2) coarse-to-fine decoder (CFD), and (3) cross-layer self-supervision.
arXiv Detail & Related papers (2023-05-17T14:30:11Z)
- Synthesis-based Imaging-Differentiation Representation Learning for Multi-Sequence 3D/4D MRI [16.725225424047256]
We propose a sequence-to-sequence generation framework (Seq2Seq) for imaging-differentiation representation learning.
In this study, we not only propose arbitrary 3D/4D sequence generation within one model to generate any specified target sequence, but are also able to rank the importance of each sequence.
We conduct extensive experiments using three datasets including a toy dataset of 20,000 simulated subjects, a brain MRI dataset of 1,251 subjects, and a breast MRI dataset of 2,101 subjects.
arXiv Detail & Related papers (2023-02-01T15:37:35Z)
- Multi-scale Transformer Network with Edge-aware Pre-training for Cross-Modality MR Image Synthesis [52.41439725865149]
Cross-modality magnetic resonance (MR) image synthesis can be used to generate missing modalities from given ones.
Existing (supervised learning) methods often require a large number of paired multi-modal data to train an effective synthesis model.
We propose a Multi-scale Transformer Network (MT-Net) with edge-aware pre-training for cross-modality MR image synthesis.
arXiv Detail & Related papers (2022-12-02T11:40:40Z)
- Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality [84.94877848357896]
Recent datasets expose the lack of systematic generalization ability in standard sequence-to-sequence models.
We analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias and the tendency to memorize whole examples.
We show substantial empirical improvements using standard sequence-to-sequence models on two widely-used compositionality datasets.
arXiv Detail & Related papers (2022-11-28T17:36:41Z)
- A Novel Unified Conditional Score-based Generative Framework for Multi-modal Medical Image Completion [54.512440195060584]
We propose the Unified Multi-Modal Conditional Score-based Generative Model (UMM-CSGM) to take advantage of the Score-based Generative Model (SGM).
UMM-CSGM employs a novel multi-in multi-out Conditional Score Network (mm-CSN) to learn a comprehensive set of cross-modal conditional distributions.
Experiments on the BraTS19 dataset show that UMM-CSGM can more reliably synthesize the heterogeneous enhancement and irregular area in tumor-induced lesions.
arXiv Detail & Related papers (2022-07-07T16:57:21Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Deep Learning based Multi-modal Computing with Feature Disentanglement for MRI Image Synthesis [8.363448006582065]
We propose a deep learning based multi-modal computing model for MRI synthesis with a feature disentanglement strategy.
The proposed approach decomposes each input modality into a modality-invariant space with shared information and a modality-specific space with specific information.
To address the lack of specific information for the target modality in the test phase, a local adaptive fusion (LAF) module is adopted to generate a modality-like pseudo-target (a rough sketch of this idea appears after this entry).
arXiv Detail & Related papers (2021-05-06T17:22:22Z)
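For the feature-disentanglement idea in the entry above (modality-invariant versus modality-specific codes, with a pseudo-target standing in for the missing modality at test time), a minimal, hypothetical PyTorch sketch follows. The per-modality single-convolution encoders and the simple averaging used as a stand-in for the LAF module are illustrative assumptions, not that paper's architecture.

```python
import torch
import torch.nn as nn

class DisentangledSynthesis(nn.Module):
    """Sketch: split each modality into shared (modality-invariant) and specific
    codes, then decode the missing target modality from the pooled shared code
    plus an approximated specific code."""

    def __init__(self, num_modalities: int, channels: int = 32):
        super().__init__()
        self.shared_enc = nn.ModuleList(
            nn.Conv2d(1, channels, 3, padding=1) for _ in range(num_modalities))
        self.specific_enc = nn.ModuleList(
            nn.Conv2d(1, channels, 3, padding=1) for _ in range(num_modalities))
        self.decoder = nn.Conv2d(2 * channels, 1, 3, padding=1)

    def forward(self, inputs: dict[int, torch.Tensor], target: int) -> torch.Tensor:
        # inputs maps modality index -> image (B, 1, H, W); the target modality is absent
        shared = torch.stack(
            [self.shared_enc[m](x) for m, x in inputs.items()]).mean(0)
        # the target's own specific code is unavailable at test time; averaging the
        # available specific codes is a crude stand-in for a learned LAF-style module
        specific = torch.stack(
            [self.specific_enc[m](x) for m, x in inputs.items()]).mean(0)
        return self.decoder(torch.cat([shared, specific], dim=1))
```

A real model would use deeper encoders and a learned fusion of the specific codes; the point here is only the split into shared and specific representations.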