Faster, Self-Supervised Super-Resolution for Anisotropic Multi-View MRI Using a Sparse Coordinate Loss
- URL: http://arxiv.org/abs/2509.07798v1
- Date: Tue, 09 Sep 2025 14:38:30 GMT
- Title: Faster, Self-Supervised Super-Resolution for Anisotropic Multi-View MRI Using a Sparse Coordinate Loss
- Authors: Maja Schlereth, Moritz Schillinger, Katharina Breininger
- Abstract summary: We propose a novel approach for fusing two anisotropic LR MR images to reconstruct anatomical details in a unified representation. Our multi-view neural network is trained in a self-supervised manner, without requiring corresponding high-resolution (HR) data. Our results demonstrate comparable or even improved super-resolution (SR) performance compared to state-of-the-art (SOTA) self-supervised SR methods.
- Score: 0.39508022083907396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Acquiring images in high resolution is often a challenging task. Especially in the medical sector, image quality has to be balanced with acquisition time and patient comfort. To strike a compromise between scan time and quality for Magnetic Resonance (MR) imaging, two anisotropic scans with different low-resolution (LR) orientations can be acquired. Typically, LR scans are analyzed individually by radiologists, which is time consuming and can lead to inaccurate interpretation. To tackle this, we propose a novel approach for fusing two orthogonal anisotropic LR MR images to reconstruct anatomical details in a unified representation. Our multi-view neural network is trained in a self-supervised manner, without requiring corresponding high-resolution (HR) data. To optimize the model, we introduce a sparse coordinate-based loss, enabling the integration of LR images with arbitrary scaling. We evaluate our method on MR images from two independent cohorts. Our results demonstrate comparable or even improved super-resolution (SR) performance compared to state-of-the-art (SOTA) self-supervised SR methods for different upsampling scales. By combining a patient-agnostic offline and a patient-specific online phase, we achieve a substantial speed-up of up to ten times for patient-specific reconstruction while achieving similar or better SR quality. Code is available at https://github.com/MajaSchle/tripleSR.
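The sparse coordinate-based loss described in the abstract can be illustrated with a minimal sketch (this is not the authors' implementation; `pred_fn`, the random sampling scheme, and the slab-averaging forward model are assumptions): sample a sparse set of LR voxel centres, query a continuous SR predictor at those coordinates, average across the thick-slice direction to mimic the anisotropic acquisition, and penalise the mismatch with the observed LR intensities.

```python
import numpy as np

def sparse_coordinate_loss(pred_fn, lr_vol, axis, scale, n_samples=256, rng=None):
    """Sketch of a sparse coordinate-based loss for anisotropic SR.

    pred_fn : maps (N, 3) coordinates in [0, 1]^3 to predicted intensities
    lr_vol  : observed anisotropic LR volume
    axis    : the low-resolution (thick-slice) axis of lr_vol
    scale   : anisotropy factor, i.e. sub-samples averaged per thick slice
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    shape = np.array(lr_vol.shape)
    # Sample a sparse subset of LR voxel indices (not the full grid).
    idx = np.stack([rng.integers(0, s, n_samples) for s in shape], axis=1)
    target = lr_vol[idx[:, 0], idx[:, 1], idx[:, 2]]
    # Voxel-centre coordinates in normalised [0, 1] space.
    coords = (idx + 0.5) / shape
    # Average `scale` sub-samples across the thick slab along `axis`
    # to approximate the anisotropic acquisition of that voxel.
    offsets = (np.arange(scale) + 0.5) / scale - 0.5
    pred = np.zeros(n_samples)
    for o in offsets:
        c = coords.copy()
        c[:, axis] += o / shape[axis]  # shift inside the slab
        pred += pred_fn(c)
    pred /= scale
    return float(np.mean((pred - target) ** 2))  # MSE at sparse coordinates
```

Because only sparse coordinates are evaluated per step, the same loss applies to LR inputs of arbitrary scaling factors, which is the property the abstract highlights.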
Related papers
- Self-Supervised Weighted Image Guided Quantitative MRI Super-Resolution [0.4757311250629737]
High-resolution (HR) quantitative MRI (qMRI) relaxometry provides objective tissue characterization but remains clinically underutilized due to lengthy acquisition times. We propose a physics-informed, self-supervised framework for qMRI super-resolution that uses routinely acquired HR weighted MRI (wMRI) scans as guidance.
arXiv Detail & Related papers (2025-12-19T14:15:31Z)
- Recovering Diagnostic Value: Super-Resolution-Aided Echocardiographic Classification in Resource-Constrained Imaging [0.3937354192623676]
Super-resolution (SR) techniques have shown promise in enhancing magnetic resonance imaging (MRI) and computed tomography (CT) scans. In this work, we investigate the potential of deep learning-based SR to improve classification accuracy on low-quality 2D echocardiograms. We apply two widely used SR models, the Super-Resolution Generative Adversarial Network (SRGAN) and the Super-Resolution Residual Network (SRResNet), to enhance poor-quality images and observe significant gains in performance metrics.
arXiv Detail & Related papers (2025-07-30T18:45:31Z)
- A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
We propose a unified MRI reconstruction model robust to various measurement undersampling patterns and image resolutions. Our model improves SSIM by 11% and PSNR by 4 dB over a state-of-the-art CNN (End-to-End VarNet), with 600$\times$ faster inference than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- TV-based Deep 3D Self Super-Resolution for fMRI [39.47953213077105]
We introduce a novel self-supervised DL SR model that combines a DL network with an analytical approach and Total Variation (TV) regularization. Our method eliminates the need for external GT images, achieving competitive performance compared to supervised DL techniques and preserving the functional maps.
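As a rough illustration of the TV term mentioned above, here is a generic anisotropic 3D total-variation penalty (a common regulariser for self-supervised SR, not necessarily the paper's exact formulation):

```python
import numpy as np

def tv_loss_3d(vol, weight=1e-4):
    # Sum of absolute finite differences along each spatial axis,
    # scaled by a regularisation weight; penalises non-smooth volumes.
    dz = np.abs(np.diff(vol, axis=0)).sum()
    dy = np.abs(np.diff(vol, axis=1)).sum()
    dx = np.abs(np.diff(vol, axis=2)).sum()
    return weight * (dz + dy + dx)
```

Added to a reconstruction loss, this term is zero for constant volumes and grows with local intensity variation, discouraging noise amplification in the super-resolved output.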
arXiv Detail & Related papers (2024-10-05T09:35:15Z)
- Dual Arbitrary Scale Super-Resolution for Multi-Contrast MRI [23.50915512118989]
Multi-contrast Super-Resolution (SR) reconstruction is promising to yield SR images with higher quality.
However, radiologists are accustomed to zooming MR images at arbitrary scales rather than using a fixed scale.
We propose an implicit neural representation-based dual-arbitrary multi-contrast MRI super-resolution method, called Dual-ArbNet.
arXiv Detail & Related papers (2023-07-05T14:43:26Z)
- Self-supervised arbitrary scale super-resolution framework for anisotropic MRI [14.05196542298934]
We propose an efficient self-supervised arbitrary-scale super-resolution (SR) framework to reconstruct isotropic magnetic resonance (MR) images from anisotropic MRI inputs.
The proposed framework builds a training dataset using in-the-wild anisotropic MR volumes with arbitrary image resolution.
We perform experiments on a simulated public adult brain dataset and a real collected 7T brain dataset.
arXiv Detail & Related papers (2023-05-02T12:27:25Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction is promising to yield SR images with higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to recover the signal in both the $k$-space and image domains simultaneously.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- Perception Consistency Ultrasound Image Super-resolution via Self-supervised CycleGAN [63.49373689654419]
We propose a new perception consistency ultrasound image super-resolution (SR) method based on self-supervision and a cycle generative adversarial network (CycleGAN).
We first generate the HR fathers and the LR sons of the test ultrasound LR image through image enhancement.
We then make full use of the cycle loss of LR-SR-LR and HR-LR-SR and the adversarial characteristics of the discriminator to promote the generator to produce better perceptually consistent SR results.
arXiv Detail & Related papers (2020-12-28T08:24:04Z)
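The LR-SR-LR cycle consistency mentioned in the CycleGAN entry above can be sketched in toy form (nearest-neighbour upsampling and block-mean downsampling stand in for the actual generator and degradation model, which are assumptions here):

```python
import numpy as np

def upsample2x(img):
    # Toy SR stand-in: nearest-neighbour 2x upsampling.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def downsample2x(img):
    # Map the SR output back to LR space via 2x2 block averaging.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def lr_sr_lr_cycle_loss(lr):
    # L1 penalty between the input LR image and its LR -> SR -> LR round trip;
    # this is the self-supervised signal: no HR ground truth is needed.
    return float(np.abs(downsample2x(upsample2x(lr)) - lr).mean())
```

In the full method, the generator replaces `upsample2x` and a learned or physics-based degradation replaces `downsample2x`, with the discriminator's adversarial term added on top of this cycle loss.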
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.