Progressively Volumetrized Deep Generative Models for Data-Efficient
Contextual Learning of MR Image Recovery
- URL: http://arxiv.org/abs/2011.13913v4
- Date: Sat, 12 Mar 2022 11:36:28 GMT
- Authors: Mahmut Yurt, Muzaffer Özbey, Salman Ul Hassan Dar, Berk Tınaz, Kader Karlı Oğuz, Tolga Çukur
- Abstract summary: We introduce a novel progressive volumetrization strategy for generative models (ProvoGAN).
ProvoGAN serially decomposes complex volumetric image recovery tasks into successive cross-sectional mappings task-optimally ordered across individual rectilinear dimensions.
Comprehensive demonstrations on mainstream MRI reconstruction and synthesis tasks show that ProvoGAN yields superior performance to state-of-the-art volumetric and cross-sectional models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Magnetic resonance imaging (MRI) offers the flexibility to image a given
anatomic volume under a multitude of tissue contrasts. Yet, scan time
considerations put stringent limits on the quality and diversity of MRI data.
The gold-standard approach to alleviate this limitation is to recover
high-quality images from data undersampled across various dimensions, most
commonly the Fourier domain or contrast sets. A primary distinction among
recovery methods is whether the anatomy is processed per volume or per
cross-section. Volumetric models offer enhanced capture of global contextual
information, but they can suffer from suboptimal learning due to elevated model
complexity. Cross-sectional models with lower complexity offer improved
learning behavior, yet they ignore contextual information across the
longitudinal dimension of the volume. Here, we introduce a novel progressive
volumetrization strategy for generative models (ProvoGAN) that serially
decomposes complex volumetric image recovery tasks into successive
cross-sectional mappings task-optimally ordered across individual rectilinear
dimensions. ProvoGAN effectively captures global context and recovers
fine-structural details across all dimensions, while maintaining low model
complexity and improved learning behavior. Comprehensive demonstrations on
mainstream MRI reconstruction and synthesis tasks show that ProvoGAN yields
superior performance to state-of-the-art volumetric and cross-sectional models.
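The serial decomposition described in the abstract is simple enough to sketch. Below is a minimal, hypothetical PyTorch illustration of progressive volumetrization; the names (Slice2DNet, run_stage, progressive_volumetrize) and the axis ordering are illustrative assumptions, not the authors' released implementation. Each stage applies a lightweight 2D network to every cross-section along one rectilinear axis, and its output volume conditions the next stage:

```python
# Minimal sketch of progressive volumetrization (names are illustrative,
# not from the paper's code): a 3D recovery task is decomposed into
# successive 2D slice-wise mappings, one rectilinear axis per stage.
import torch
import torch.nn as nn

class Slice2DNet(nn.Module):
    """Toy 2D generator standing in for one ProvoGAN stage."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # (N, C, H, W)
        return self.net(x)

def run_stage(volume: torch.Tensor, net: nn.Module, axis: int) -> torch.Tensor:
    """Apply a 2D network to every cross-section of `volume` along `axis`."""
    vol = volume.movedim(axis, 0)            # slices become a batch: (N, H, W)
    out = net(vol.unsqueeze(1)).squeeze(1)   # run the 2D mapping per slice
    return out.movedim(0, axis)              # restore the original layout

def progressive_volumetrize(undersampled: torch.Tensor,
                            axis_order=(2, 0, 1)) -> torch.Tensor:
    """Serially refine a volume across an ordered set of rectilinear axes.

    `axis_order` is a placeholder; the paper selects the ordering
    task-optimally, which is not reproduced here.
    """
    recon = undersampled
    for axis in axis_order:
        stage = Slice2DNet()                   # each stage is trained separately
        recon = run_stage(recon, stage, axis)  # output conditions the next stage
    return recon

volume = torch.randn(32, 32, 32)              # toy undersampled volume
print(progressive_volumetrize(volume).shape)  # torch.Size([32, 32, 32])
```

Because each stage only ever sees 2D inputs, per-stage model complexity stays at the cross-sectional level, while the serial pass over all three axes propagates context along every dimension of the volume.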
Related papers
- Zero-shot Dynamic MRI Reconstruction with Global-to-local Diffusion Model [17.375064910924717]
We propose a dynamic MRI reconstruction method based on a time-interleaved acquisition scheme, termed the Global-to-local Diffusion Model.
The proposed method performs well in terms of noise reduction and detail preservation, achieving reconstruction quality comparable to that of supervised approaches.
arXiv Detail & Related papers (2024-11-06T07:40:27Z)
- A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
Deep neural networks have shown great potential for reconstructing high-fidelity images from undersampled measurements.
Our model is based on neural operators, a discretization-agnostic architecture.
Our inference speed is also 1,400x faster than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- TC-KANRecon: High-Quality and Accelerated MRI Reconstruction via Adaptive KAN Mechanisms and Intelligent Feature Scaling [7.281993256973667]
This study presents an innovative conditional guided diffusion model, named TC-KANRecon.
It incorporates the Multi-Free U-KAN (MF-UKAN) module and a dynamic clipping strategy.
Experimental results demonstrate that the proposed method outperforms other MRI reconstruction methods in both qualitative and quantitative evaluations.
arXiv Detail & Related papers (2024-08-11T06:31:56Z)
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- Unsupervised Adaptive Implicit Neural Representation Learning for Scan-Specific MRI Reconstruction [8.721677700107639]
We propose an unsupervised, adaptive coarse-to-fine framework that enhances reconstruction quality without being constrained by the sparsity levels or patterns in under-sampling.
We integrate a novel learning strategy that progressively refines the use of acquired k-space signals for self-supervision.
Our method outperforms current state-of-the-art scan-specific MRI reconstruction techniques, for up to 8-fold under-sampling.
arXiv Detail & Related papers (2023-12-01T16:00:16Z)
- One for Multiple: Physics-informed Synthetic Data Boosts Generalizable Deep Learning for Fast MRI Reconstruction [20.84830225817378]
Deep Learning (DL) has proven effective for fast MRI image reconstruction, but its broader applicability has been constrained.
We present a novel Physics-Informed Synthetic data learning framework for Fast MRI, called PISF.
PISF marks a breakthrough by enabling generalized DL for multi-scenario MRI reconstruction through a single trained model.
arXiv Detail & Related papers (2023-07-25T03:11:24Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Multimodal-Boost: Multimodal Medical Image Super-Resolution using Multi-Attention Network with Wavelet Transform [5.416279158834623]
Loss of image resolution degrades the overall performance of medical image diagnosis.
Deep learning based single image super-resolution (SISR) algorithms have revolutionized the overall diagnosis framework.
This work proposes a generative adversarial network (GAN) with deep multi-attention modules to learn high-frequency information from low-frequency data.
arXiv Detail & Related papers (2021-10-22T10:13:46Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain and restore image detail in the image domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique (see the sketch after this list).
In MRI, our method minimizes artifacts while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
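As referenced in the Adaptive Gradient Balancing entry above, the sketch below illustrates one plausible reading of the balancing idea: scale the adversarial term so that its gradient on the generator output cannot dominate the reconstruction gradient. The weighting rule and the names (balanced_generator_loss, max_ratio) are assumptions for illustration, not the paper's exact update.

```python
# Hypothetical sketch of gradient balancing between a reconstruction loss
# and a WGAN adversarial loss; the weighting rule is an assumption, not
# the paper's exact algorithm.
import torch
import torch.nn.functional as F

def balanced_generator_loss(gen_output: torch.Tensor,
                            target: torch.Tensor,
                            critic: torch.nn.Module,
                            max_ratio: float = 1.0) -> torch.Tensor:
    recon = F.l1_loss(gen_output, target)
    adv = -critic(gen_output).mean()       # WGAN generator objective
    # Gradient norms of each term with respect to the generator output.
    g_rec, = torch.autograd.grad(recon, gen_output, retain_graph=True)
    g_adv, = torch.autograd.grad(adv, gen_output, retain_graph=True)
    # Cap the adversarial weight so GAN gradients cannot dominate.
    weight = max_ratio * g_rec.norm() / (g_adv.norm() + 1e-12)
    weight = weight.clamp(max=1.0).detach()
    return recon + weight * adv

# Toy usage with a stand-in generator output and critic.
gen_output = torch.randn(4, 1, 8, 8, requires_grad=True)
target = torch.randn(4, 1, 8, 8)
critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 1))
balanced_generator_loss(gen_output, target, critic).backward()
```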
This list is automatically generated from the titles and abstracts of the papers in this site.