Efficient Vision Mamba for MRI Super-Resolution via Hybrid Selective Scanning
- URL: http://arxiv.org/abs/2512.19676v2
- Date: Thu, 25 Dec 2025 22:41:51 GMT
- Title: Efficient Vision Mamba for MRI Super-Resolution via Hybrid Selective Scanning
- Authors: Mojtaba Safari, Shansong Wang, Vanessa L Wildman, Mingzhe Hu, Zach Eidex, Chih-Wei Chang, Erik H Middlebrooks, Richard L. J Qiu, Pretesh Patel, Ashesh B. Jani, Hui Mao, Zhen Tian, Xiaofeng Yang
- Abstract summary: Super-resolution MRI can enhance resolution post-scan, yet deep learning methods face fidelity-efficiency trade-offs. We propose a novel framework combining multi-head selective state-space models with a lightweight channel MLP. The model achieved superior performance with exceptional efficiency.
- Score: 5.1712742264130815
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: High-resolution MRI is critical for diagnosis, but long acquisition times limit clinical use. Super-resolution (SR) can enhance resolution post-scan, yet existing deep learning methods face fidelity-efficiency trade-offs. Purpose: To develop a computationally efficient and accurate deep learning framework for MRI SR that preserves anatomical detail for clinical integration. Materials and Methods: We propose a novel SR framework combining multi-head selective state-space models (MHSSM) with a lightweight channel MLP. The model uses 2D patch extraction with hybrid scanning to capture long-range dependencies. Each MambaFormer block integrates MHSSM, depthwise convolutions, and gated channel mixing. Evaluation used 7T brain T1 MP2RAGE maps (n=142) and 1.5T prostate T2w MRI (n=334). Comparisons included Bicubic interpolation, GANs (CycleGAN, Pix2pix, SPSR), transformers (SwinIR), Mamba (MambaIR), and diffusion models (I2SB, Res-SRDiff). Results: Our model achieved superior performance with exceptional efficiency. For 7T brain data: SSIM=0.951±0.021, PSNR=26.90±1.41 dB, LPIPS=0.076±0.022, GMSD=0.083±0.017, significantly outperforming all baselines (p<0.001). For prostate data: SSIM=0.770±0.049, PSNR=27.15±2.19 dB, LPIPS=0.190±0.095, GMSD=0.087±0.013. The framework used only 0.9M parameters and 57 GFLOPs, reducing parameters by 99.8% and computation by 97.5% versus Res-SRDiff, while outperforming SwinIR and MambaIR in accuracy and efficiency. Conclusion: The proposed framework provides an efficient, accurate MRI SR solution, delivering enhanced anatomical detail across datasets. Its low computational demand and state-of-the-art performance show strong potential for clinical translation.
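The abstract's MambaFormer block (hybrid selective scanning over 2D patch tokens, depthwise convolutions, gated channel mixing) can be illustrated with a minimal NumPy sketch. This is a hedged toy under stated assumptions, not the authors' implementation: the selective scan is reduced to a single input-gated linear recurrence, the "hybrid" scan to a forward/backward pair, and all projection matrices (drawn from `rng` here) are hypothetical stand-ins for learned weights.

```python
import numpy as np

def extract_patches(img, p):
    """Non-overlapping p x p patch extraction -> (num_patches, p*p) tokens."""
    H, W = img.shape
    img = img[:H - H % p, :W - W % p]            # crop to a multiple of p
    return (img.reshape(H // p, p, W // p, p)
               .swapaxes(1, 2)
               .reshape(-1, p * p))

def selective_scan(x, gate_logits):
    """Simplified selective scan: h[t] = a[t] * h[t-1] + x[t], where the
    input-dependent decay a[t] stands in for Mamba's learned selection."""
    h = np.zeros(x.shape[1])
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        a = 1.0 / (1.0 + np.exp(-gate_logits[t]))   # sigmoid gate in (0, 1)
        h = a * h + x[t]
        out[t] = h
    return out

def depthwise_conv(x, k=3):
    """Per-channel (depthwise) 1-D convolution along the token axis."""
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    kern = np.full(k, 1.0 / k)                      # fixed averaging kernel
    return np.stack([np.convolve(xp[:, c], kern, mode="valid")
                     for c in range(x.shape[1])], axis=1)

def mambaformer_block(tokens, rng):
    """Toy MambaFormer-style block: forward/backward selective scans fused
    (hybrid scanning), a depthwise conv, and a gated residual channel mix."""
    T, C = tokens.shape
    logits = tokens @ rng.standard_normal((C, 1))   # input-dependent gates
    fwd = selective_scan(tokens, logits)
    bwd = selective_scan(tokens[::-1], logits[::-1])[::-1]
    mixed = depthwise_conv(0.5 * (fwd + bwd))       # hybrid-scan fusion
    gate = 1.0 / (1.0 + np.exp(-(tokens @ rng.standard_normal((C, C)))))
    return tokens + gate * mixed                    # gated channel mixing

def psnr(ref, rec, data_range=1.0):
    """PSNR in dB, one of the metrics reported in the abstract."""
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

On a 32×32 image with 4×4 patches this yields 64 sixteen-dimensional tokens, and the block is shape-preserving, so it can be stacked like the paper's MambaFormer blocks. Incidentally, the reported reductions imply baselines of roughly 0.9M / 0.002 ≈ 450M parameters and 57 / 0.025 ≈ 2,280 GFLOPs for Res-SRDiff.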
Related papers
- Quantitative mapping from conventional MRI using self-supervised physics-guided deep learning: applications to a large-scale, clinically heterogeneous dataset [32.995373978092665]
This study presents a self-supervised physics-guided deep learning framework to infer quantitative T1, T2, and proton-density maps. The framework was trained and evaluated on a large-scale, clinically heterogeneous dataset.
arXiv Detail & Related papers (2026-01-08T16:08:58Z) - UltraLBM-UNet: Ultralight Bidirectional Mamba-based Model for Skin Lesion Segmentation [34.50069854212544]
We propose UltraLBM-UNet, a lightweight U-Net variant that integrates a bidirectional Mamba-based global modeling mechanism with multi-branch local feature perception. Our model consistently achieves state-of-the-art segmentation accuracy, outperforming existing lightweight and Mamba counterparts with only 0.034M parameters and 0.060 GFLOPs. These results highlight the suitability of UltraLBM-UNet for point-of-care deployment, where accurate and robust lesion analyses are essential.
arXiv Detail & Related papers (2025-12-25T09:05:02Z) - Squeezed-Eff-Net: Edge-Computed Boost of Tomography Based Brain Tumor Classification leveraging Hybrid Neural Network Architecture [0.7829352305480285]
This work proposes a hybrid deep learning model that combines the lightweight SqueezeNet v1 with the high-performing EfficientNet-B0. The framework was trained and tested only on the publicly available Nickparvar Brain Tumor MRI dataset.
arXiv Detail & Related papers (2025-12-08T07:37:30Z) - MMRINet: Efficient Mamba-Based Segmentation with Dual-Path Refinement for Low-Resource MRI Analysis [2.6992900249585765]
MMRINet is a lightweight architecture that replaces quadratic-complexity attention with linear-complexity Mamba state-space models. On the BraTS-Lighthouse SSA 2025 benchmark, the model achieves strong volumetric performance with an average Dice score of 0.752 and an average HD95 of 12.23 with only 2.5M parameters.
arXiv Detail & Related papers (2025-11-15T12:57:25Z) - Pattern-Aware Diffusion Synthesis of fMRI/dMRI with Tissue and Microstructural Refinement [34.55493442995441]
We propose PDS, a pattern-aware dual-modal 3D diffusion framework for cross-modality learning. We also introduce a tissue refinement network integrated with an efficient microstructure refinement module to maintain structural fidelity and fine details. PDS achieves state-of-the-art results, with PSNR/SSIM scores of 29.83 dB/90.84% for fMRI synthesis and 30.00 dB/77.55% for dMRI synthesis.
arXiv Detail & Related papers (2025-11-07T03:51:00Z) - DRBD-Mamba for Robust and Efficient Brain Tumor Segmentation with Analytical Insights [54.87947751720332]
Accurate brain tumor segmentation is critical for clinical diagnosis and treatment. Mamba-based state-space models have demonstrated promising performance. We propose a dual-resolution bi-directional Mamba that captures multi-scale long-range dependencies with minimal computational overhead.
arXiv Detail & Related papers (2025-10-16T07:31:21Z) - Resource-Efficient Glioma Segmentation on Sub-Saharan MRI [4.522693679811991]
This study introduces a robust and computationally efficient deep learning framework tailored for resource-limited settings. We leveraged a 3D Attention UNet architecture augmented with residual blocks and enhanced through transfer learning from weights pre-trained on the BraTS-Africa dataset. Our model was evaluated on 95 MRI cases from the BraTS-Africa dataset, a benchmark for glioma segmentation in SSA MRI data.
arXiv Detail & Related papers (2025-09-11T13:52:47Z) - HepatoGEN: Generating Hepatobiliary Phase MRI with Perceptual and Adversarial Models [33.7054351451505]
We propose a deep learning based approach for synthesizing hepatobiliary phase (HBP) images from earlier contrast phases. Quantitative evaluation using pixel-wise and perceptual metrics, combined with blinded radiologist reviews, showed that pGAN achieved the best quantitative performance. In contrast, the U-Net produced consistent liver enhancement with fewer artifacts, while DDPM underperformed due to limited preservation of fine structural details.
arXiv Detail & Related papers (2025-04-25T15:01:09Z) - GBT-SAM: Adapting a Foundational Deep Learning Model for Generalizable Brain Tumor Segmentation via Efficient Integration of Multi-Parametric MRI Data [5.7802171590699984]
We present GBT-SAM, a parameter-efficient deep learning framework that adapts the Segment Anything Model to mp-MRI data. Our model is trained by a two-step fine-tuning strategy that incorporates a depth-aware module to capture inter-slice correlations. It achieves a Dice Score of 93.54 on the BraTS Adult Glioma dataset and demonstrates robust performance on Meningioma, Pediatric Glioma, and Sub-Saharan Glioma datasets.
arXiv Detail & Related papers (2025-03-06T11:18:22Z) - A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
We propose a unified MRI reconstruction model robust to various measurement undersampling patterns and image resolutions. Our model improves SSIM by 11% and PSNR by 4 dB over a state-of-the-art CNN (End-to-End VarNet), with 600× faster inference than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z) - Unpaired MRI Super Resolution with Contrastive Learning [33.65350200042909]
Deep learning-based image super-resolution methods exhibit promise in improving MRI resolution without additional cost.
Due to lacking of aligned high-resolution (HR) and low-resolution (LR) MRI image pairs, unsupervised approaches are widely adopted for SR reconstruction with unpaired MRI images.
We propose an unpaired MRI SR approach that employs contrastive learning to enhance SR performance with limited HR data.
arXiv Detail & Related papers (2023-10-24T12:13:51Z) - Hybrid Window Attention Based Transformer Architecture for Brain Tumor Segmentation [28.650980942429726]
We propose a volumetric vision transformer that follows two windowing strategies in attention for extracting fine features.
We trained and evaluated network architecture on the FeTS Challenge 2022 dataset.
Our performance on the online validation dataset is as follows: Dice Similarity Scores of 81.71%, 91.38%, and 85.40%.
arXiv Detail & Related papers (2022-09-16T03:55:48Z) - ShuffleUNet: Super resolution of diffusion-weighted MRIs using deep learning [47.68307909984442]
Single Image Super-Resolution (SISR) is a technique that aims to recover high-resolution (HR) detail from a single low-resolution input image.
Deep learning extracts prior knowledge from large datasets and produces superior MRI images from their low-resolution counterparts.
arXiv Detail & Related papers (2021-02-25T14:52:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.