Mamba-Based Modality Disentanglement Network for Multi-Contrast MRI Reconstruction
- URL: http://arxiv.org/abs/2512.19095v1
- Date: Mon, 22 Dec 2025 07:06:34 GMT
- Title: Mamba-Based Modality Disentanglement Network for Multi-Contrast MRI Reconstruction
- Authors: Weiyi Lyu, Xinming Fang, Jun Wang, Jun Shi, Guixu Zhang, Juncheng Li
- Abstract summary: MambaMDN is a dual-domain framework for multi-contrast MRI reconstruction. The approach first employs fully-sampled reference K-space data to complete the undersampled target data. A Mamba-based modality disentanglement network then extracts and removes reference-specific features from the mixed representation.
- Score: 23.393652726101433
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Magnetic resonance imaging (MRI) is a cornerstone of modern clinical diagnosis, offering unparalleled soft-tissue contrast without ionizing radiation. However, prolonged scan times remain a major barrier to patient throughput and comfort. Existing accelerated MRI techniques often struggle with two key challenges: (1) failure to effectively utilize inherent K-space prior information, leading to persistent aliasing artifacts from zero-filled inputs; and (2) contamination of target reconstruction quality by irrelevant information when employing multi-contrast fusion strategies. To overcome these challenges, we present MambaMDN, a dual-domain framework for multi-contrast MRI reconstruction. Our approach first employs fully-sampled reference K-space data to complete the undersampled target data, generating structurally aligned but modality-mixed inputs. Subsequently, we develop a Mamba-based modality disentanglement network to extract and remove reference-specific features from the mixed representation. Furthermore, we introduce an iterative refinement mechanism to progressively enhance reconstruction accuracy through repeated feature purification. Extensive experiments demonstrate that MambaMDN can significantly outperform existing multi-contrast reconstruction methods.
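The first stage described in the abstract, completing undersampled target K-space with fully-sampled reference K-space, can be illustrated with a minimal sketch. This is a hedged toy illustration, not the paper's implementation: the function name `kspace_complete` and the 1-D, 2x-undersampled example are assumptions made here for demonstration.

```python
import numpy as np

def kspace_complete(target_ksp, reference_ksp, mask):
    """Fill unsampled target K-space locations with reference K-space values.

    target_ksp, reference_ksp: complex arrays of identical shape
    mask: boolean array, True where the target was actually sampled
    Returns a structurally aligned but modality-mixed K-space, as in the
    first stage described in the abstract.
    """
    return np.where(mask, target_ksp, reference_ksp)

# Toy 1-D example: an 8-sample "scan" with every other line missing.
rng = np.random.default_rng(0)
target = rng.standard_normal(8) + 1j * rng.standard_normal(8)
reference = rng.standard_normal(8) + 1j * rng.standard_normal(8)
mask = np.arange(8) % 2 == 0  # 2x undersampling pattern (assumed for the demo)

mixed = kspace_complete(target, reference, mask)
```

The mixed K-space keeps all sampled target lines and borrows the reference only where the target was never measured, which is why the downstream disentanglement network is then needed to remove the reference-specific content.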
Related papers
- Frequency Error-Guided Under-sampling Optimization for Multi-Contrast MRI Reconstruction [24.246450246745905]
Multi-contrast MRI reconstruction has emerged as a promising direction by leveraging complementary information from fully-sampled reference scans. Existing approaches suffer from three major limitations: (1) superficial reference fusion strategies, (2) insufficient utilization of the complementary information provided by the reference contrast, and (3) fixed under-sampling patterns. We propose an efficient and interpretable frequency error-guided reconstruction framework to tackle these issues.
arXiv Detail & Related papers (2026-01-14T09:40:34Z) - ContextMRI: Enhancing Compressed Sensing MRI through Metadata Conditioning [51.26601171361753]
We propose ContextMRI, a text-conditioned diffusion model for MRI that integrates granular metadata into the reconstruction process. We show that increasing the fidelity of metadata, ranging from slice location and contrast to patient age, sex, and pathology, systematically boosts reconstruction performance.
arXiv Detail & Related papers (2025-01-08T05:15:43Z) - LDPM: Towards undersampled MRI reconstruction with MR-VAE and Latent Diffusion Prior [4.499605583818247]
Some works attempted to solve MRI reconstruction with diffusion models, but these methods operate directly in pixel space. Latent diffusion models, pre-trained on natural images with rich visual priors, are expected to solve the high computational cost problem in MRI reconstruction. A novel Latent Diffusion Prior-based undersampled MRI reconstruction (LDPM) method is proposed.
arXiv Detail & Related papers (2024-11-05T09:51:59Z) - Fill the K-Space and Refine the Image: Prompting for Dynamic and Multi-Contrast MRI Reconstruction [31.404228406642194]
The key to dynamic or multi-contrast magnetic resonance imaging (MRI) reconstruction lies in exploring inter-frame or inter-contrast information.
We propose a two-stage MRI reconstruction pipeline to address these limitations.
Our proposed method significantly outperforms previous state-of-the-art accelerated MRI reconstruction methods.
arXiv Detail & Related papers (2023-09-25T02:51:00Z) - Improved Multi-Shot Diffusion-Weighted MRI with Zero-Shot Self-Supervised Learning Reconstruction [7.347468593124183]
We introduce a novel msEPI reconstruction approach called zero-MIRID (zero-shot self-supervised learning of Multi-shot Image Reconstruction for Improved Diffusion MRI).
This method jointly reconstructs msEPI data by incorporating deep learning-based image regularization techniques.
It achieves superior results compared to the state-of-the-art parallel imaging method, as demonstrated in an in-vivo experiment.
arXiv Detail & Related papers (2023-08-09T17:54:56Z) - Dual-Domain Self-Supervised Learning for Accelerated Non-Cartesian MRI Reconstruction [14.754843942604472]
We present a fully self-supervised approach for accelerated non-Cartesian MRI reconstruction.
In training, the undersampled data are split into disjoint k-space domain partitions.
For the image-level self-supervision, we enforce appearance consistency obtained from the original undersampled data.
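The k-space-level self-supervision described above hinges on splitting the sampled data into disjoint partitions, so one subset can supervise a network that only sees the other. The sketch below is a simplified illustration of that idea; the function name `split_kspace` and the 60/40 split fraction are assumptions made here, not details taken from the paper.

```python
import numpy as np

def split_kspace(sampled_idx, rng, theta_frac=0.6):
    """Split sampled k-space indices into two disjoint sets:
    one subset feeds the reconstruction network as input,
    the held-out subset serves as the self-supervision target.
    theta_frac is an assumed split ratio for illustration."""
    idx = np.asarray(sampled_idx)
    perm = rng.permutation(len(idx))
    cut = int(theta_frac * len(idx))
    return idx[perm[:cut]], idx[perm[cut:]]

rng = np.random.default_rng(1)
sampled = np.flatnonzero(np.arange(64) % 2 == 0)  # toy undersampling pattern
theta, lam = split_kspace(sampled, rng)
```

Because the two partitions are disjoint, the loss on the held-out lines measures genuine generalization to unmeasured k-space rather than memorization of the input.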
arXiv Detail & Related papers (2023-02-18T06:11:49Z) - Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z) - Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction is promising to yield SR images with higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z) - Reference-based Magnetic Resonance Image Reconstruction Using Texture Transformer [86.6394254676369]
We propose a novel Texture Transformer Module (TTM) for accelerated MRI reconstruction.
We formulate the under-sampled data and reference data as queries and keys in a transformer.
The proposed TTM can be stacked on prior MRI reconstruction approaches to further improve their performance.
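The query/key formulation above is standard cross-attention: under-sampled features attend over reference features. The sketch below shows that mechanism in its generic single-head form as a rough illustration of the idea; the function name `texture_attention`, the token counts, and the feature dimension are assumptions for the demo, not the TTM's actual architecture.

```python
import numpy as np

def texture_attention(query, key, value):
    """Single-head scaled dot-product cross-attention: under-sampled
    features act as queries; reference features act as keys/values."""
    d = query.shape[-1]
    scores = query @ key.T / np.sqrt(d)            # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over reference tokens
    return weights @ value                         # reference texture routed to queries

rng = np.random.default_rng(2)
q = rng.standard_normal((4, 16))   # 4 under-sampled feature tokens (assumed sizes)
k = rng.standard_normal((6, 16))   # 6 reference feature tokens
v = rng.standard_normal((6, 16))
out = texture_attention(q, k, v)
```

Each output row is a convex combination of reference-value rows, which is what lets reference texture be transferred to the locations the under-sampled scan queries.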
arXiv Detail & Related papers (2021-11-18T03:06:25Z) - Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to recover the frequency signal in the $k$-space domain and the image content in the spatial domain simultaneously.
arXiv Detail & Related papers (2021-10-15T13:16:59Z) - Multi-Modal MRI Reconstruction with Spatial Alignment Network [51.74078260367654]
In clinical practice, magnetic resonance imaging (MRI) with multiple contrasts is usually acquired in a single study.
Recent research demonstrates that, given the redundancy between different contrasts or modalities, a target MRI modality under-sampled in k-space can be better reconstructed with the help of a fully-sampled sequence.
In this paper, we integrate the spatial alignment network with reconstruction, to improve the quality of the reconstructed target modality.
arXiv Detail & Related papers (2021-08-12T08:46:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information provided and is not responsible for any consequences of its use.