An Adaptive, Disentangled Representation for Multidimensional MRI Reconstruction
- URL: http://arxiv.org/abs/2512.24674v1
- Date: Wed, 31 Dec 2025 07:02:21 GMT
- Title: An Adaptive, Disentangled Representation for Multidimensional MRI Reconstruction
- Authors: Ruiyang Zhao, Fan Lam
- Abstract summary: We present a new approach for representing and reconstructing multidimensional magnetic resonance imaging (MRI) data. Our method builds on a novel, learned feature-based image representation that disentangles different types of features, such as geometry and contrast, into distinct low-dimensional latent spaces. New reconstruction formulations and algorithms were developed to integrate the learned representation with a zero-shot self-supervised learning adaptation and subspace modeling.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new approach for representing and reconstructing multidimensional magnetic resonance imaging (MRI) data. Our method builds on a novel, learned feature-based image representation that disentangles different types of features, such as geometry and contrast, into distinct low-dimensional latent spaces. This disentanglement enables better exploitation of feature correlations in multidimensional images and the incorporation of pre-learned priors specific to different feature types during reconstruction. More specifically, the disentanglement was achieved via an encoder-decoder network and image-transfer training using large public data, enhanced by a style-based decoder design. A latent diffusion model was introduced to impose stronger constraints on the distinct feature spaces. New reconstruction formulations and algorithms were developed to integrate the learned representation with a zero-shot self-supervised learning adaptation and subspace modeling. The proposed method has been evaluated on accelerated T1 and T2 parameter mapping, achieving improved performance over state-of-the-art reconstruction methods without task-specific supervised training or fine-tuning. This work offers a new strategy for learning-based multidimensional image reconstruction when only limited data are available for problem-specific or task-specific training.
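The subspace modeling component mentioned in the abstract can be illustrated with a small sketch: a multi-contrast image series (e.g., a T2-weighted echo train) is highly correlated along the contrast dimension, so it can be well approximated by a few principal temporal basis functions estimated via SVD. The NumPy example below is a hedged toy illustration, not the authors' implementation; the mono-exponential T2 signal model, the echo times, and the subspace rank `K` are all assumptions made for demonstration.

```python
import numpy as np

# Toy multi-contrast series: each "voxel" follows mono-exponential T2 decay.
rng = np.random.default_rng(0)
n_voxels, n_echoes = 500, 32
TE = np.linspace(5e-3, 160e-3, n_echoes)          # echo times (s), assumed
T2 = rng.uniform(0.02, 0.2, size=n_voxels)        # per-voxel T2 values (s)
rho = rng.uniform(0.5, 1.0, size=n_voxels)        # proton density
X = rho[:, None] * np.exp(-TE[None, :] / T2[:, None])   # (voxels x echoes)

# Estimate a low-dimensional temporal subspace from the signal ensemble.
_, s, Vt = np.linalg.svd(X, full_matrices=False)
K = 3                                             # subspace rank, assumed
Phi = Vt[:K]                                      # temporal basis (K x echoes)

# Project noisy data onto the subspace: X ~ U @ Phi with U = X_noisy @ Phi.T
X_noisy = X + 0.01 * rng.standard_normal(X.shape)
U = X_noisy @ Phi.T                               # spatial coefficient maps
X_rec = U @ Phi                                   # low-rank reconstruction

rel_err = np.linalg.norm(X_rec - X) / np.linalg.norm(X)
print(f"rank-{K} subspace relative error: {rel_err:.4f}")
```

In an actual reconstruction, the temporal basis would typically be pre-estimated from training signals or a signal dictionary, and the spatial coefficients recovered from undersampled k-space data by solving a regularized least-squares problem.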
Related papers
- GloTok: Global Perspective Tokenizer for Image Reconstruction and Generation
We introduce a Global Perspective Tokenizer (GloTok) to model a more uniform semantic distribution of tokenized features.
A residual learning module is proposed to recover fine-grained details and minimize the reconstruction error caused by quantization.
Experiments on the standard ImageNet-1k benchmark show that the proposed method achieves state-of-the-art reconstruction performance and generation quality.
arXiv Detail & Related papers (2025-11-18T06:40:26Z)
- Space-Variant Total Variation boosted by learning techniques in few-view tomographic imaging
This paper focuses on the development of a space-variant regularization model for solving an under-determined linear inverse problem.
The primary objective of the proposed model is to achieve a good balance between denoising and the preservation of fine details and edges.
A convolutional neural network is designed, to approximate both the ground truth image and its gradient using an elastic loss function in its training.
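The space-variant idea can be sketched numerically: rather than one global regularization weight, each pixel receives its own weight, reduced near detected edges (to preserve them) and larger in flat regions (to suppress noise). The NumPy snippet below is an illustrative sketch, not the paper's model; the gradient-threshold edge rule and the constants `lam_flat`, `lam_edge`, and `edge_thresh` are assumptions.

```python
import numpy as np

def space_variant_tv(img, lam_flat=1.0, lam_edge=0.1, edge_thresh=0.2):
    """Weighted (space-variant) total-variation penalty.

    Pixels whose gradient magnitude exceeds `edge_thresh` are treated as
    edges and regularized with the smaller weight `lam_edge`.
    """
    gx = np.diff(img, axis=1, append=img[:, -1:])   # forward differences
    gy = np.diff(img, axis=0, append=img[-1:, :])
    grad_mag = np.sqrt(gx**2 + gy**2)
    weights = np.where(grad_mag > edge_thresh, lam_edge, lam_flat)
    return float(np.sum(weights * grad_mag))

# A piecewise-constant image with one sharp edge, plus mild noise.
rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[:, 32:] = 1.0
noisy = img + 0.02 * rng.standard_normal(img.shape)

# The edge is down-weighted, so it contributes less penalty than under a
# uniform-weight TV, which would smooth it away.
sv = space_variant_tv(noisy)
uniform = space_variant_tv(noisy, lam_edge=1.0)
print(f"space-variant TV: {sv:.2f}, uniform TV: {uniform:.2f}")
```

In a full reconstruction the per-pixel weights would be estimated from the data (in the paper, via a CNN approximating the image and its gradient), rather than from a fixed threshold as in this sketch.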
arXiv Detail & Related papers (2024-04-25T08:58:41Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to Stable Diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- High-Dimensional MR Reconstruction Integrating Subspace and Adaptive Generative Models
We present a novel method that integrates subspace modeling with an adaptive generative image prior for high-dimensional MR image reconstruction.
We evaluated the utility of the proposed method in two high-dimensional imaging applications: accelerated MR parameter mapping and high-resolution MRSI.
arXiv Detail & Related papers (2023-06-14T16:43:14Z)
- Image Compressed Sensing with Multi-scale Dilated Convolutional Neural Network
This paper proposes a novel framework named Multi-scale Dilated Convolution Neural Network (MsDCNN) for CS measurement and reconstruction.
During the measurement period, we directly obtain all measurements from a trained measurement network, which employs fully convolutional structures.
During the reconstruction period, we propose the Multi-scale Feature Extraction (MFE) architecture to imitate the human visual system.
arXiv Detail & Related papers (2022-09-28T01:11:56Z)
- Deep Unfolding of the DBFB Algorithm with Application to ROI CT Imaging with Limited Angular Density
This paper presents a new method for reconstructing regions of interest (ROI) from a limited number of computed tomography (CT) measurements.
Deep methods are fast, and they can reach high reconstruction quality by leveraging information from datasets.
We introduce an unfolding neural network called UDBFB designed for ROI reconstruction from limited data.
arXiv Detail & Related papers (2022-09-27T09:10:57Z)
- Rank-Enhanced Low-Dimensional Convolution Set for Hyperspectral Image Denoising
This paper tackles the challenging problem of hyperspectral (HS) image denoising.
We propose a rank-enhanced low-dimensional convolution set (Re-ConvSet).
We then incorporate Re-ConvSet into the widely-used U-Net architecture to construct an HS image denoising method.
arXiv Detail & Related papers (2022-07-09T13:35:12Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution
Multi-contrast super-resolution (SR) reconstruction promises to yield SR images of higher quality.
Existing methods, however, lack effective mechanisms to match and fuse features across contrasts for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- LEARN++: Recurrent Dual-Domain Reconstruction Network for Compressed Sensing CT
The LEARN++ model integrates two parallel and interactive networks to perform image restoration and sinogram inpainting in the image and projection domains simultaneously.
Results show that the proposed LEARN++ model achieves competitive qualitative and quantitative results compared to several state-of-the-art methods in terms of both artifact reduction and detail preservation.
arXiv Detail & Related papers (2020-12-13T07:00:50Z)
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
- Learning Deformable Image Registration from Optimization: Perspective, Modules, Bilevel Training and Beyond
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.