A Trust-Guided Approach to MR Image Reconstruction with Side Information
- URL: http://arxiv.org/abs/2501.03021v2
- Date: Thu, 15 May 2025 04:15:14 GMT
- Title: A Trust-Guided Approach to MR Image Reconstruction with Side Information
- Authors: Arda Atalık, Sumit Chopra, Daniel K. Sodickson
- Abstract summary: The Trust-Guided Variational Network (TGVN) is an end-to-end deep learning framework that effectively integrates side information into MRI optimization problems. TGVN achieves superior image quality while preserving subtle pathological features even at challenging acceleration levels.
- Score: 0.6144680854063939
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Reducing MRI scan times can improve patient care and lower healthcare costs. Many acceleration methods are designed to reconstruct diagnostic-quality images from sparse k-space data, via an ill-posed or ill-conditioned linear inverse problem (LIP). To address the resulting ambiguities, it is crucial to incorporate prior knowledge into the optimization problem, e.g., in the form of regularization. Another form of prior knowledge less commonly used in medical imaging is the readily available auxiliary data (a.k.a. side information) obtained from sources other than the current acquisition. In this paper, we present the Trust-Guided Variational Network (TGVN), an end-to-end deep learning framework that effectively and reliably integrates side information into LIPs. We demonstrate its effectiveness in multi-coil, multi-contrast MRI reconstruction, where incomplete or low-SNR measurements from one contrast are used as side information to reconstruct high-quality images of another contrast from heavily under-sampled data. TGVN is robust across different contrasts, anatomies, and field strengths. Compared to baselines utilizing side information, TGVN achieves superior image quality while preserving subtle pathological features even at challenging acceleration levels, drastically speeding up acquisition while minimizing hallucinations. Source code and dataset splits are available on github.com/sodicksonlab/TGVN.
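The abstract describes TGVN only at a high level, so the following is a minimal, hedged sketch of the general pattern it builds on: an unrolled variational reconstruction whose update combines a data-consistency gradient, a learned regularizer, and a trust-weighted pull toward the side information. The single-coil forward model, module sizes, and the scalar per-iteration trust weight are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

def forward_op(image, mask):
    """Single-coil MRI forward model: FFT followed by k-space undersampling."""
    return mask * torch.fft.fft2(image)

def adjoint_op(kspace, mask):
    """Adjoint of the forward model: zero-fill unmeasured locations, inverse FFT."""
    return torch.fft.ifft2(mask * kspace)

class UnrolledRecon(nn.Module):
    """Illustrative unrolled solver with a side-information term (not the TGVN design)."""

    def __init__(self, num_iters=6):
        super().__init__()
        # One small CNN regularizer per unrolled iteration (real/imag as channels).
        self.regularizers = nn.ModuleList(
            nn.Sequential(nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 2, 3, padding=1))
            for _ in range(num_iters))
        # Learnable step size and a scalar "trust" weight on the side information.
        self.step = nn.Parameter(torch.full((num_iters,), 0.1))
        self.trust = nn.Parameter(torch.full((num_iters,), 0.1))

    def forward(self, kspace, mask, side_image):
        x = adjoint_op(kspace, mask)                      # zero-filled initial estimate
        for t in range(len(self.regularizers)):
            # Data-consistency gradient for the measured k-space samples.
            dc = adjoint_op(forward_op(x, mask) - kspace, mask)
            # CNN update conditioned on the current estimate and the side image.
            feats = torch.cat([x.real, x.imag, side_image.real, side_image.imag], dim=1)
            reg = self.regularizers[t](feats)
            reg = torch.complex(reg[:, :1], reg[:, 1:])
            # Pull weakly toward the side image, scaled by the learned trust weight.
            x = x - self.step[t] * dc - reg - self.trust[t] * (x - side_image)
        return x

# Toy call with random single-coil data (complex image, undersampling mask, side image).
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
kspace = forward_op(torch.randn(1, 1, 64, 64, dtype=torch.complex64), mask)
side = torch.randn(1, 1, 64, 64, dtype=torch.complex64)
recon = UnrolledRecon()(kspace, mask, side)
```

In the multi-coil setting described in the abstract, the forward operator would additionally include coil sensitivity maps; that detail is omitted here for brevity.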
Related papers
- On the Foundation Model for Cardiac MRI Reconstruction [6.284878525302227]
We propose a foundation model that uses adaptive unrolling, channel-shifting, and a Pattern and Contrast-Prompt-UNet (PCP-UNet) to tackle the problem.
The PCP-UNet is equipped with an image contrast and sampling pattern prompt.
arXiv Detail & Related papers (2024-11-15T18:15:56Z) - A Plug-and-Play Method for Guided Multi-contrast MRI Reconstruction based on Content/Style Modeling [1.1622133377827824]
Since multiple MRI contrasts contain redundant information, one contrast can be used as a prior for guiding the reconstruction of an undersampled subsequent contrast.
We propose a modular two-generative approach for guided reconstruction addressing this issue.
arXiv Detail & Related papers (2024-09-20T13:08:51Z) - CMRxRecon: An open cardiac MRI dataset for the competition of accelerated image reconstruction [62.61209705638161]
There has been growing interest in deep learning-based CMR imaging algorithms.
Deep learning methods require large training datasets.
This dataset includes multi-contrast, multi-view, multi-slice and multi-coil CMR imaging data from 300 subjects.
arXiv Detail & Related papers (2023-09-19T15:14:42Z) - Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
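As a rough illustration of the masking-plus-perturbation recipe summarized above, the sketch below disrupts a 3D volume with random patch masking and additive noise and trains a toy autoencoder to recover the original; the disruption parameters and the tiny network are placeholders, not the paper's design.

```python
import torch
import torch.nn as nn

def disrupt(volume, patch=8, mask_ratio=0.4, noise_std=0.1):
    """Randomly zero out local patches and add low-level noise to a 3D volume."""
    b, c, d, h, w = volume.shape
    grid = torch.rand(b, 1, d // patch, h // patch, w // patch, device=volume.device)
    mask = (grid > mask_ratio).float()
    mask = nn.functional.interpolate(mask, size=(d, h, w), mode="nearest")
    return volume * mask + noise_std * torch.randn_like(volume)

class TinyAutoencoder3D(nn.Module):
    """Minimal stand-in encoder-decoder for illustration only."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Pre-training step: reconstruct the clean volume from its disrupted version.
model = TinyAutoencoder3D()
volume = torch.randn(2, 1, 32, 64, 64)        # toy 3D radiology volume
loss = nn.functional.mse_loss(model(disrupt(volume)), volume)
loss.backward()
```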
arXiv Detail & Related papers (2023-07-31T17:59:42Z) - Attention Hybrid Variational Net for Accelerated MRI Reconstruction [7.046523233290946]
The application of compressed sensing (CS)-enabled data reconstruction for accelerating magnetic resonance imaging (MRI) remains a challenging problem.
This is because the information lost in k-space under the acceleration mask makes it difficult to reconstruct an image comparable in quality to a fully sampled one.
We propose a deep learning-based attention hybrid variational network that performs learning in both the k-space and image domain.
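The hybrid k-space/image-domain idea can be sketched as a cascade block that refines k-space, enforces consistency with the measured samples, and then refines the image; the plain convolutions below stand in for the paper's attention modules and are purely illustrative.

```python
import torch
import torch.nn as nn

class HybridDomainBlock(nn.Module):
    """One illustrative cascade block operating in both k-space and image domain."""

    def __init__(self):
        super().__init__()
        self.kspace_net = nn.Conv2d(2, 2, 3, padding=1)   # operates on k-space (real/imag channels)
        self.image_net = nn.Conv2d(2, 2, 3, padding=1)    # operates on the image

    @staticmethod
    def to_channels(z):
        return torch.cat([z.real, z.imag], dim=1)

    @staticmethod
    def to_complex(c):
        return torch.complex(c[:, :1], c[:, 1:])

    def forward(self, kspace_est, measured_k, mask):
        # Refine in k-space, then enforce data consistency on measured samples.
        k = self.to_complex(self.kspace_net(self.to_channels(kspace_est)))
        k = torch.where(mask.bool(), measured_k, k)
        # Refine in the image domain and return to k-space for the next block.
        img = torch.fft.ifft2(k)
        img = img + self.to_complex(self.image_net(self.to_channels(img)))
        return torch.fft.fft2(img)

# Toy usage on random undersampled k-space data.
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
measured = mask * torch.fft.fft2(torch.randn(1, 1, 64, 64, dtype=torch.complex64))
refined_k = HybridDomainBlock()(measured.clone(), measured, mask)
```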
arXiv Detail & Related papers (2023-06-21T16:19:07Z) - Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z) - Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
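One common way to read "unfolding with the MRI observation matrix" is a gradient step on the observation model followed by a learned refinement; the sketch below uses an average-pooling operator as a stand-in observation matrix and small CNNs in place of the MGDUN modules.

```python
import torch
import torch.nn as nn

class UnfoldedSR(nn.Module):
    """Illustrative unrolled super-resolution: gradient step on ||Hx - y||^2 plus a learned update."""

    def __init__(self, scale=2, num_iters=5):
        super().__init__()
        self.scale = scale
        self.prox = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 1, 3, padding=1))
            for _ in range(num_iters))
        self.step = nn.Parameter(torch.full((num_iters,), 0.5))

    def observe(self, x):
        """Observation operator H: blur/downsample the high-resolution estimate."""
        return nn.functional.avg_pool2d(x, self.scale)

    def observe_adjoint(self, r):
        """Adjoint of H: distribute the low-resolution residual back to high resolution."""
        up = nn.functional.interpolate(r, scale_factor=self.scale, mode="nearest")
        return up / self.scale ** 2

    def forward(self, lr_image):
        x = nn.functional.interpolate(lr_image, scale_factor=self.scale,
                                      mode="bilinear", align_corners=False)
        for t in range(len(self.prox)):
            grad = self.observe_adjoint(self.observe(x) - lr_image)  # data-fidelity gradient
            x = x - self.step[t] * grad                              # gradient step
            x = x + self.prox[t](x)                                  # learned refinement
        return x

sr = UnfoldedSR()(torch.randn(1, 1, 32, 32))
```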
arXiv Detail & Related papers (2022-09-15T03:58:30Z) - ClamNet: Using contrastive learning with variable depth Unets for medical image segmentation [0.0]
Unets have become the standard method for semantic segmentation of medical images, along with fully convolutional networks (FCNs).
Unet++ was introduced as a variant of Unet, in order to solve some of the problems facing Unet and FCNs.
We use contrastive learning to train Unet++ for semantic segmentation of medical images using medical images from various sources.
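A standard way to realize the contrastive pre-training mentioned above is an NT-Xent objective over two augmented views of each image; the exact loss used to train Unet++ in the paper may differ.

```python
import torch
import torch.nn.functional as F

def ntxent_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, D)
    sim = z @ z.t() / temperature                              # scaled cosine similarities
    n = z1.shape[0]
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))            # exclude self-similarity
    # The positive for sample i is its other view, at index (i + n) mod 2n.
    targets = torch.arange(2 * n, device=z.device).roll(n)
    return F.cross_entropy(sim, targets)

# Toy usage: embeddings from an encoder applied to two augmentations of a batch.
loss = ntxent_loss(torch.randn(8, 128), torch.randn(8, 128))
```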
arXiv Detail & Related papers (2022-06-10T16:55:45Z) - FedMed-ATL: Misaligned Unpaired Brain Image Synthesis via Affine Transform Loss [58.58979566599889]
We propose a novel self-supervised learning framework (FedMed) for brain image synthesis.
An affine transform loss (ATL) was formulated to make use of severely distorted images without violating privacy legislation.
The proposed method demonstrates strong performance in the quality of synthesized results under a severely misaligned and unpaired data setting.
arXiv Detail & Related papers (2022-01-29T13:45:39Z) - Multimodal-Boost: Multimodal Medical Image Super-Resolution using Multi-Attention Network with Wavelet Transform [5.416279158834623]
Loss of image resolution degrades the overall performance of medical image diagnosis.
Deep learning-based single image super-resolution (SISR) algorithms have revolutionized the overall diagnosis framework.
This work proposes generative adversarial network (GAN) with deep multi-attention modules to learn high-frequency information from low-frequency data.
arXiv Detail & Related papers (2021-10-22T10:13:46Z) - Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to recover the signal in both the $k$-space and image domains.
arXiv Detail & Related papers (2021-10-15T13:16:59Z) - Multi-Modal MRI Reconstruction with Spatial Alignment Network [51.74078260367654]
In clinical practice, magnetic resonance imaging (MRI) with multiple contrasts is usually acquired in a single study.
Recent research demonstrates that, considering the redundancy between different contrasts or modalities, a target MRI modality under-sampled in k-space can be better reconstructed with the help of a fully-sampled sequence.
In this paper, we integrate the spatial alignment network with reconstruction, to improve the quality of the reconstructed target modality.
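The align-then-reconstruct idea can be sketched as a small network that predicts a displacement field, warps the fully-sampled reference contrast onto the target, and feeds both to a reconstruction network; the networks below are illustrative stand-ins, not the paper's spatial alignment module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignAndReconstruct(nn.Module):
    """Illustrative alignment + reconstruction pipeline for guided multi-modal MRI."""

    def __init__(self):
        super().__init__()
        self.flow_net = nn.Conv2d(2, 2, 3, padding=1)   # predicts a 2-channel displacement field
        self.recon_net = nn.Conv2d(2, 1, 3, padding=1)  # reconstructs from target + warped reference

    def forward(self, target_zf, reference):
        b, _, h, w = target_zf.shape
        flow = self.flow_net(torch.cat([target_zf, reference], dim=1))
        # Build a sampling grid: identity grid plus the predicted displacement.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack([xs, ys], dim=-1).to(target_zf).unsqueeze(0).expand(b, -1, -1, -1)
        warped = F.grid_sample(reference, grid + flow.permute(0, 2, 3, 1), align_corners=True)
        return self.recon_net(torch.cat([target_zf, warped], dim=1))

# Toy usage: zero-filled target reconstruction guided by a (misaligned) reference contrast.
recon = AlignAndReconstruct()(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
```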
arXiv Detail & Related papers (2021-08-12T08:46:35Z) - Positional Contrastive Learning for Volumetric Medical Image Segmentation [13.086140606803408]
We propose a novel positional contrastive learning (PCL) framework to generate contrastive data pairs.
The proposed PCL method can substantially improve the segmentation performance compared to existing methods in both semi-supervised setting and transfer learning setting.
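One plausible reading of position-based pair generation is to treat slices with similar normalized positions in their volumes as positives; the threshold and InfoNCE-style weighting below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def positional_contrastive_loss(embeddings, positions, threshold=0.1, temperature=0.1):
    """embeddings: (N, D) slice features; positions: (N,) slice index / volume depth in [0, 1]."""
    z = F.normalize(embeddings, dim=1)
    sim = torch.exp(z @ z.t() / temperature)
    # Positive mask: pairs of different slices with similar relative position.
    close = (positions.unsqueeze(0) - positions.unsqueeze(1)).abs() < threshold
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = close & ~self_mask
    denom = sim.masked_fill(self_mask, 0).sum(dim=1)
    # Average an InfoNCE-style term over each sample's positive set.
    ratio = (sim / denom.unsqueeze(1)).clamp_min(1e-12).log()
    loss = -(ratio * pos).sum(dim=1) / pos.sum(dim=1).clamp_min(1)
    return loss.mean()

loss = positional_contrastive_loss(torch.randn(16, 64), torch.rand(16))
```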
arXiv Detail & Related papers (2021-06-16T22:15:28Z) - Generative Adversarial Networks (GAN) Powered Fast Magnetic Resonance Imaging -- Mini Review, Comparison and Perspectives [5.3148259096171175]
One drawback of MRI is its slow scanning and reconstruction compared to other imaging modalities.
Deep Neural Networks (DNNs) have been used in sparse MRI reconstruction models to recreate relatively high-quality images.
Generative Adversarial Networks (GAN) based methods are proposed to solve fast MRI with enhanced image perceptual quality.
arXiv Detail & Related papers (2021-05-04T23:59:00Z) - Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
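As a hedged illustration of balancing adversarial and reconstruction objectives, the sketch below rescales the adversarial term so its gradient on the generator output never dominates the reconstruction gradient; this is a generic balancing heuristic, not the paper's Adaptive Gradient Balancing rule.

```python
import torch
import torch.nn as nn

def balanced_generator_loss(output, target, critic):
    rec_loss = nn.functional.l1_loss(output, target)
    adv_loss = -critic(output).mean()                 # WGAN-style generator objective
    # Gradient norms of each term with respect to the generator output.
    g_rec, = torch.autograd.grad(rec_loss, output, retain_graph=True)
    g_adv, = torch.autograd.grad(adv_loss, output, retain_graph=True)
    # Cap the adversarial contribution so it never exceeds the reconstruction gradient.
    scale = (g_rec.norm() / (g_adv.norm() + 1e-8)).clamp(max=1.0).detach()
    return rec_loss + scale * adv_loss

# Toy usage with a dummy critic; `output` normally comes from the generator and
# therefore requires grad, which the balancing step relies on.
critic = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1))
output = torch.randn(2, 1, 64, 64, requires_grad=True)
loss = balanced_generator_loss(output, torch.randn(2, 1, 64, 64), critic)
loss.backward()
```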
arXiv Detail & Related papers (2021-04-05T13:05:22Z) - Multi-institutional Collaborations for Improving Deep Learning-based Magnetic Resonance Image Reconstruction Using Federated Learning [62.17532253489087]
Deep learning methods have been shown to produce superior performance on MR image reconstruction.
These methods require large amounts of data which is difficult to collect and share due to the high cost of acquisition and medical data privacy regulations.
We propose a federated learning (FL) based solution in which we take advantage of the MR data available at different institutions while preserving patients' privacy.
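The collaboration pattern described above can be sketched with plain federated averaging: each institution trains on its local data and only model weights, never images, are shared and averaged. Weighting by dataset size, secure aggregation, and the reconstruction model itself are omitted here for brevity.

```python
import copy
import torch

def local_update(model, data_loader, epochs=1, lr=1e-3):
    """Train a local copy of the global model on one institution's data."""
    model = copy.deepcopy(model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for inputs, targets in data_loader:
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(model(inputs), targets)
            loss.backward()
            opt.step()
    return model.state_dict()

def federated_average(global_model, client_states):
    """Average the clients' floating-point weights into the global model."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        if avg[key].dtype.is_floating_point:
            avg[key] = torch.stack([s[key] for s in client_states]).mean(dim=0)
    global_model.load_state_dict(avg)
    return global_model
```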
arXiv Detail & Related papers (2021-03-03T03:04:40Z) - Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z) - Progressively Volumetrized Deep Generative Models for Data-Efficient Contextual Learning of MR Image Recovery [0.0]
We introduce a novel progressive volumetrization strategy for generative models (ProvoGAN).
ProvoGAN serially decomposes complex volumetric image recovery tasks into successive cross-sectional mappings task-optimally ordered across individual rectilinear dimensions.
Comprehensive demonstrations on mainstream MRI reconstruction and synthesis tasks show that ProvoGAN yields superior performance to state-of-the-art volumetric and cross-sectional models.
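The serial cross-sectional decomposition can be sketched as three lightweight 2D networks applied slice-by-slice along the three rectilinear axes, each refining the previous stage's output; the tiny networks and fixed axis order below are illustrative, not ProvoGAN's progressive GAN stages.

```python
import torch
import torch.nn as nn

class ProgressiveVolumetric(nn.Module):
    """Illustrative serial decomposition of a volumetric mapping into 2D stages."""

    def __init__(self):
        super().__init__()
        # One lightweight 2D network per orientation (axial, coronal, sagittal).
        self.stages = nn.ModuleList(nn.Conv2d(1, 1, 3, padding=1) for _ in range(3))
        self.axes = [2, 3, 4]  # slice along depth, height, then width of (B, C, D, H, W)

    def forward(self, volume):
        x = volume
        for net, axis in zip(self.stages, self.axes):
            # Fold the slicing axis into the batch dimension so every
            # cross-section is processed as an independent 2D image.
            moved = x.movedim(axis, 1)
            b, s, c, h, w = moved.shape
            slices = net(moved.reshape(b * s, c, h, w))
            x = slices.reshape(b, s, c, h, w).movedim(1, axis)
        return x

refined = ProgressiveVolumetric()(torch.randn(1, 1, 16, 32, 32))
```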
arXiv Detail & Related papers (2020-11-27T18:55:56Z) - Deep Residual Dense U-Net for Resolution Enhancement in Accelerated MRI Acquisition [19.422926534305837]
We propose a deep-learning approach, aiming at reconstructing high-quality images from accelerated MRI acquisition.
Specifically, we use a Convolutional Neural Network (CNN) to learn the differences between the aliased images and the original images.
Considering the peculiarity of the down-sampled k-space data, we introduce a new loss term that effectively employs the given k-space data.
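A loss in the spirit of this summary combines an image-domain term with a term that checks the reconstruction against the acquired k-space samples; the weighting and the simple FFT forward model below are illustrative assumptions, not the paper's exact loss.

```python
import torch

def recon_loss(predicted, target, measured_kspace, mask, kspace_weight=0.1):
    """Image-domain L1 loss plus consistency with the acquired k-space samples."""
    image_term = torch.nn.functional.l1_loss(predicted, target)
    # Compare the prediction's k-space only where data were actually acquired.
    pred_k = torch.fft.fft2(torch.complex(predicted, torch.zeros_like(predicted)))
    kspace_term = (mask * (pred_k - measured_kspace)).abs().mean()
    return image_term + kspace_weight * kspace_term

# Toy usage with random images, a random undersampling mask, and simulated measurements.
loss = recon_loss(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64),
                  torch.fft.fft2(torch.rand(1, 1, 64, 64).to(torch.complex64)),
                  (torch.rand(1, 1, 64, 64) > 0.5).float())
```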
arXiv Detail & Related papers (2020-01-13T19:01:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.