IMPORTANT-Net: Integrated MRI Multi-Parameter Reinforcement Fusion
Generator with Attention Network for Synthesizing Absent Data
- URL: http://arxiv.org/abs/2302.01788v1
- Date: Fri, 3 Feb 2023 14:56:10 GMT
- Title: IMPORTANT-Net: Integrated MRI Multi-Parameter Reinforcement Fusion
Generator with Attention Network for Synthesizing Absent Data
- Authors: Tianyu Zhang, Tao Tan, Luyi Han, Xin Wang, Yuan Gao, Jonas Teuwen,
Regina Beets-Tan, Ritse Mann
- Abstract summary: We develop a novel $\textbf{I}$ntegrated MRI $\textbf{M}$ulti-$\textbf{P}$arameter reinf$\textbf{O}$rcement fusion generato$\textbf{R}$ wi$\textbf{T}$h $\textbf{A}$tte$\textbf{NT}$ion Network (IMPORTANT-Net) to generate missing MRI parameters.
We show that our IMPORTANT-Net is capable of generating missing MRI parameters and outperforms comparable state-of-the-art networks.
- Score: 16.725225424047256
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Magnetic resonance imaging (MRI) is highly sensitive for lesion detection in
the breasts. Sequences obtained with different settings can capture the
specific characteristics of lesions. Such multi-parameter MRI information has
been shown to improve radiologist performance in lesion classification, as well
as the performance of artificial intelligence models in various tasks. However,
obtaining multi-parameter MRI makes the examination costly in terms of both
money and time, and there may be safety concerns for special populations,
making acquisition of the full spectrum of MRI sequences less feasible. In this
study, different from naive input fusion or feature concatenation of existing
MRI parameters, a novel
$\textbf{I}$ntegrated MRI $\textbf{M}$ulti-$\textbf{P}$arameter
reinf$\textbf{O}$rcement fusion generato$\textbf{R}$ wi$\textbf{T}$h
$\textbf{A}$tte$\textbf{NT}$ion Network (IMPORTANT-Net) is developed to
generate missing parameters. First, the parameter reconstruction module is used
to encode and restore the existing MRI parameters to obtain the corresponding
latent representation information at any scale level. Then the multi-parameter
fusion-with-attention module enables interaction of the encoded information
from different parameters through a set of algorithmic strategies and, after
fusion, applies attention weights to obtain refined representation
information. Finally, a
reinforcement fusion scheme embedded in a $V^{-}$-shape generation module is
used to combine the hierarchical representations to generate the missing MRI
parameter. Results showed that our IMPORTANT-Net is capable of generating
missing MRI parameters and outperforms comparable state-of-the-art networks.
Our code is available at
https://github.com/Netherlands-Cancer-Institute/MRI_IMPORTANT_NET.
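The abstract describes the fusion-with-attention idea only at a high level. The following is a minimal, self-contained PyTorch sketch of what such a scheme could look like: two existing parameters are encoded at two scales, their features are combined through several elementwise strategies and re-weighted by channel attention, and the fused hierarchical features are decoded into the missing parameter. This is illustrative only and is not the released IMPORTANT-Net code; all module names, layer sizes, and the specific fusion operations are assumptions, and the actual implementation is in the repository linked above.

```python
# Minimal sketch (not the authors' code): attention-weighted fusion of latent
# features from two existing MRI parameters, decoded into a synthesized
# missing parameter. Every design choice below is an illustrative assumption.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Two 3x3 convolutions with ReLU, used by the toy encoder/decoder."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class AttentionFusion(nn.Module):
    """Fuse two feature maps via elementwise strategies + channel attention."""
    def __init__(self, ch):
        super().__init__()
        # Three toy fusion strategies (sum, product, abs-difference), concatenated.
        self.mix = nn.Conv2d(3 * ch, ch, 1)
        self.attn = nn.Sequential(          # squeeze-and-excitation style weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid(),
        )

    def forward(self, fa, fb):
        fused = self.mix(torch.cat([fa + fb, fa * fb, (fa - fb).abs()], dim=1))
        return fused * self.attn(fused)     # re-weight channels after fusion


class ToySynthesisNet(nn.Module):
    """Encode two input parameters, fuse at two scales, decode the missing one."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = ConvBlock(1, ch)
        self.enc2 = ConvBlock(ch, 2 * ch)
        self.pool = nn.MaxPool2d(2)
        self.fuse1 = AttentionFusion(ch)
        self.fuse2 = AttentionFusion(2 * ch)
        self.up = nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)
        self.dec = ConvBlock(2 * ch, ch)
        self.out = nn.Conv2d(ch, 1, 1)

    def encode(self, x):
        f1 = self.enc1(x)                   # full-resolution features
        f2 = self.enc2(self.pool(f1))       # half-resolution features
        return f1, f2

    def forward(self, param_a, param_b):
        a1, a2 = self.encode(param_a)
        b1, b2 = self.encode(param_b)
        g1 = self.fuse1(a1, b1)             # fused features, scale 1
        g2 = self.fuse2(a2, b2)             # fused features, scale 2
        d = self.dec(torch.cat([self.up(g2), g1], dim=1))
        return self.out(d)                  # synthesized missing parameter


if __name__ == "__main__":
    net = ToySynthesisNet()
    t1, t2 = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
    print(net(t1, t2).shape)                # torch.Size([1, 1, 64, 64])
```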
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- A Unified Framework for Synthesizing Multisequence Brain MRI via Hybrid Fusion [4.47838172826189]
We propose a novel unified framework for synthesizing multisequence MR images, called Hybrid Fusion GAN (HF-GAN).
We introduce a hybrid fusion encoder designed to ensure the disentangled extraction of complementary and modality-specific information.
Common feature representations are transformed into a target latent space via the modality infuser to synthesize missing MR sequences.
arXiv Detail & Related papers (2024-06-21T08:06:00Z)
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce a novel semantic alignment method for multi-subject fMRI signals, called MindFormer.
This model is specifically designed to generate fMRI-conditioned feature vectors that can be used to condition a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z)
- Source-Free Collaborative Domain Adaptation via Multi-Perspective Feature Enrichment for Functional MRI Analysis [55.03872260158717]
Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to aid neurological disorder analysis.
Many methods have been proposed to reduce fMRI heterogeneity between source and target domains.
But acquiring source data is challenging due to privacy concerns and/or data storage burdens in multi-site studies.
We design a source-free collaborative domain adaptation framework for fMRI analysis, where only a pretrained source model and unlabeled target data are accessible.
arXiv Detail & Related papers (2023-08-24T01:30:18Z)
- Two-stage MR Image Segmentation Method for Brain Tumors based on Attention Mechanism [27.08977505280394]
A coordination-spatial attention generative adversarial network (CASP-GAN) based on the cycle-consistent generative adversarial network (CycleGAN) is proposed.
The performance of the generator is optimized by introducing the Coordinate Attention (CA) module and the Spatial Attention (SA) module.
The ability to extract structural and detailed information from the original medical image helps generate the desired image with higher quality.
arXiv Detail & Related papers (2023-04-17T08:34:41Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Multi-head Cascaded Swin Transformers with Attention to k-space Sampling Pattern for Accelerated MRI Reconstruction [16.44971774468092]
We propose a physics-based stand-alone (convolution free) transformer model titled, the Multi-head Cascaded Swin Transformers (McSTRA) for accelerated MRI reconstruction.
Our model significantly outperforms state-of-the-art MRI reconstruction methods both visually and quantitatively.
arXiv Detail & Related papers (2022-07-18T07:21:56Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction promises to yield SR images of higher quality.
Existing methods lack effective mechanisms to match and fuse features from different contrasts for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- Edge-Enhanced Dual Discriminator Generative Adversarial Network for Fast MRI with Parallel Imaging Using Multi-view Information [10.616409735438756]
We introduce a novel parallel imaging coupled dual discriminator generative adversarial network (PIDD-GAN) for fast multi-channel MRI reconstruction.
One discriminator is used for holistic image reconstruction, whereas the other one is responsible for enhancing edge information.
Results show that our PIDD-GAN provides high-quality reconstructed MR images, with well-preserved edge information.
arXiv Detail & Related papers (2021-12-10T10:49:26Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and across sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers [0.0]
We introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER).
A zero-shot reconstruction is performed on undersampled test data, where inference is performed by optimizing network parameters; a toy sketch of this test-time optimization idea follows this entry.
Experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against several state-of-the-art unsupervised methods.
arXiv Detail & Related papers (2021-05-15T02:01:21Z)
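As an aside on the SLATER entry above, the following toy sketch illustrates the general idea of zero-shot, scan-specific reconstruction: a small network's parameters are optimized on a single undersampled test scan by enforcing consistency with the acquired k-space samples. It is not the SLATER method itself (which relies on a pre-trained adversarial transformer prior); the network, mask, and loss here are illustrative assumptions only.

```python
# Minimal sketch of scan-specific, test-time optimization for undersampled MRI.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "acquired" data: a 2D image, its k-space, and a random undersampling mask.
image = torch.randn(1, 1, 32, 32)
kspace = torch.fft.fft2(image)
mask = (torch.rand(1, 1, 32, 32) < 0.3).float()   # keep ~30% of k-space samples
measured = kspace * mask

# A small CNN whose parameters are fitted to this one scan only (no training set).
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
zero_filled = torch.fft.ifft2(measured).real      # crude initial estimate
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):                           # test-time optimization loop
    recon = net(zero_filled)
    pred_k = torch.fft.fft2(recon)
    # Data-consistency loss: predicted k-space must match the acquired samples.
    loss = (torch.abs(pred_k - measured) * mask).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final data-consistency loss: {loss.item():.4f}")
```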
This list is automatically generated from the titles and abstracts of the papers on this site.