K-band: Self-supervised MRI Reconstruction via Stochastic Gradient Descent over K-space Subsets
- URL: http://arxiv.org/abs/2308.02958v3
- Date: Thu, 23 May 2024 12:27:41 GMT
- Title: K-band: Self-supervised MRI Reconstruction via Stochastic Gradient Descent over K-space Subsets
- Authors: Frederic Wang, Han Qi, Alfredo De Goyeneche, Reinhard Heckel, Michael Lustig, Efrat Shimron
- Abstract summary: We introduce a novel mathematical framework, dubbed k-band, that enables training DL models using only partial, limited-resolution k-space data.
In each training iteration, rather than using the fully sampled k-space for computing gradients, we use only a small k-space portion.
Numerical experiments with raw MRI data indicate that k-band outperforms two other methods trained on limited-resolution data.
- Score: 16.785465381844435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although deep learning (DL) methods are powerful for solving inverse problems, their reliance on high-quality training data is a major hurdle. This is significant in high-dimensional (dynamic/volumetric) magnetic resonance imaging (MRI), where acquisition of high-resolution, fully sampled k-space data is impractical. We introduce a novel mathematical framework, dubbed k-band, that enables training DL models using only partial, limited-resolution k-space data. Specifically, we introduce training with stochastic gradient descent (SGD) over k-space subsets. In each training iteration, rather than using the fully sampled k-space for computing gradients, we use only a small k-space portion. This concept is compatible with different sampling strategies; here we demonstrate the method for k-space "bands", which have limited resolution in one dimension and can hence be acquired rapidly. We prove analytically that our method stochastically approximates the gradients computed in a fully supervised setup when two simple conditions are met: (i) the limited-resolution axis is chosen uniformly at random for every new scan, so that k-space is fully covered across the entire training set, and (ii) the loss function is weighted with a mask, derived here analytically, which facilitates accurate reconstruction of high-resolution details. Numerical experiments with raw MRI data indicate that k-band outperforms two other methods trained on limited-resolution data and performs comparably to state-of-the-art (SoTA) methods trained on high-resolution data. k-band thus attains SoTA performance with the advantage of training on limited-resolution data only. This work introduces a practical, easy-to-implement, self-supervised training framework that combines fast acquisition with self-supervised reconstruction and offers theoretical guarantees.
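The training loop described in the abstract is simple enough to sketch. Below is a minimal, hypothetical PyTorch sketch of one k-band iteration for single-coil Cartesian data; the function and argument names (`kband_step`, `undersample_mask`, `loss_weight`) are illustrative assumptions rather than the authors' released code, and the analytic weighting mask is taken as given.

```python
import torch

def kband_step(model, kspace_band, undersample_mask, loss_weight, optimizer):
    """One k-band training iteration (illustrative sketch).

    kspace_band: an acquired k-space band with full resolution along one
        axis and limited resolution along the other. Per condition (i),
        its orientation is drawn uniformly at random for each new scan,
        so k-space is fully covered across the training set.
    loss_weight: the analytically derived weighting mask of condition (ii),
        taken here as a given tensor.
    """
    # Retrospectively undersample the band; the band itself is the target.
    recon = model(kspace_band * undersample_mask)   # image-domain output
    kspace_pred = torch.fft.fft2(recon)             # back to k-space
    # Weighted self-supervised loss, computed on the acquired band only.
    loss = torch.mean(torch.abs((kspace_pred - kspace_band) * loss_weight) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```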
Related papers
- Active Sampling of Interpolation Points to Identify Dominant Subspaces for Model Reduction [7.818201674097184]
We investigate model reduction for linear structured systems using dominant reachable and observable subspaces.
When the training set (containing all possible points) is large, these subspaces can be determined by solving many large-scale linear systems.
We propose an active sampling strategy that samples only a few points from the given training set, allowing us to estimate those subspaces accurately.
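As a rough illustration of the idea (not the paper's algorithm), a greedy active-sampling loop might look as follows; `solve_system` and `error_estimate` are hypothetical callables.

```python
import numpy as np

def active_sample(candidates, n_select, solve_system, error_estimate, seed=0):
    """Greedily pick a few interpolation points from a large candidate set.

    solve_system(p): solves the large-scale linear system at point p.
    error_estimate(p, solutions): cheap surrogate for how poorly the
        subspace spanned by the current solutions covers point p.
    """
    rng = np.random.default_rng(seed)
    selected = [candidates[rng.integers(len(candidates))]]  # random seed point
    solutions = [solve_system(selected[0])]
    while len(selected) < n_select:
        # Add the candidate that the current subspace covers worst.
        errors = [error_estimate(p, solutions) for p in candidates]
        nxt = candidates[int(np.argmax(errors))]
        selected.append(nxt)
        solutions.append(solve_system(nxt))
    return selected, solutions
```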
arXiv Detail & Related papers (2024-09-05T19:59:14Z)
- JSSL: Joint Supervised and Self-supervised Learning for MRI Reconstruction [7.018974360061121]
Joint Supervised and Self-supervised Learning (JSSL) is a novel training approach for deep learning-based MRI reconstruction algorithms.
JSSL operates by simultaneously training a model in a self-supervised setting on subsampled data from the target dataset and in a supervised setting on fully sampled data from proxy datasets.
We demonstrate JSSL's efficacy using subsampled prostate or cardiac MRI data as the target datasets.
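A hedged sketch of how such a joint objective could be written in PyTorch; the balancing weight `alpha`, the held-out k-space split, and all names are illustrative assumptions rather than JSSL's actual implementation.

```python
import torch
import torch.nn.functional as F

def jssl_loss(model, proxy_batch, target_batch, heldout_mask, alpha=0.5):
    """Joint supervised + self-supervised loss (illustrative sketch).

    proxy_batch: (subsampled input, fully sampled reference) pair from a
        proxy dataset -> supervised term.
    target_batch: (input k-space, held-out k-space) split of a subsampled
        target-dataset scan -> self-supervised term.
    """
    x_in, x_ref = proxy_batch
    supervised = F.l1_loss(model(x_in), x_ref)

    k_in, k_heldout = target_batch
    pred_k = torch.fft.fft2(model(k_in))
    # Compare predictions only where held-out measurements exist.
    self_supervised = torch.mean(torch.abs((pred_k - k_heldout) * heldout_mask))

    return alpha * supervised + (1 - alpha) * self_supervised
```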
arXiv Detail & Related papers (2023-11-27T14:23:36Z)
- Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse Problems [64.29491112653905]
We propose a novel and efficient diffusion sampling strategy that synergistically combines the diffusion sampling and Krylov subspace methods.
Specifically, we prove that if the tangent space at a sample denoised via Tweedie's formula forms a Krylov subspace, then conjugate gradient (CG) iterations initialized with the denoised data keep the data-consistency update within that tangent space.
Our proposed method achieves more than 80 times faster inference than the previous state-of-the-art method.
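A schematic of one such decomposed step under simplifying assumptions (a dense matrix `A`, a generic `denoise` callable standing in for the Tweedie-formula posterior mean); renoising to the next diffusion time and acceleration details are omitted.

```python
import numpy as np

def cg(M, b, x0, n_iter=5):
    """A few conjugate-gradient iterations for M x = b (M symmetric PSD)."""
    x = x0.copy()
    r = b - M @ x
    p = r.copy()
    for _ in range(n_iter):
        Mp = M @ p
        alpha = (r @ r) / (p @ Mp)
        x = x + alpha * p
        r_new = r - alpha * Mp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

def dds_step(x_t, denoise, A, y, n_cg=5):
    """One decomposed sampling step (schematic).

    denoise(x_t): stands in for the Tweedie-formula posterior-mean estimate.
    Data consistency solves min_x ||A x - y||^2 via a few CG iterations on
    the normal equations, started from the denoised sample so the iterates
    stay near its tangent space.
    """
    x0_hat = denoise(x_t)
    return cg(A.T @ A, A.T @ y, x0_hat, n_iter=n_cg)
```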
arXiv Detail & Related papers (2023-03-10T07:42:49Z)
- Angular upsampling in diffusion MRI using contextual HemiHex sub-sampling in q-space [0.0]
Incorporating relevant context into the data is important to ensure that maximal prior information is available to the AI model when inferring the posterior.
In this paper, we introduce HemiHex subsampling to suggestively address training data sampling on q-space geometry.
Our proposed approach is a geometrically optimized regression technique that infers the unknown q-space samples, thus addressing the limitations of earlier studies.
arXiv Detail & Related papers (2022-11-01T03:13:07Z)
- Federated Representation Learning via Maximal Coding Rate Reduction [109.26332878050374]
We propose a methodology to learn low-dimensional representations from a dataset that is distributed among several clients.
Our proposed method, which we refer to as FLOW, uses the maximal coding rate reduction (MCR2) objective, resulting in representations that are both between-class discriminative and within-class compressible.
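For reference, a compact sketch of the MCR2 objective in PyTorch; the `eps` parameter and the (d, n) feature layout follow the common formulation of MCR2, and FLOW's federated aggregation is omitted.

```python
import torch

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 logdet(I + d/(n eps^2) Z Z^T) for features Z of shape (d, n)."""
    d, n = Z.shape
    I = torch.eye(d, device=Z.device, dtype=Z.dtype)
    return 0.5 * torch.logdet(I + (d / (n * eps ** 2)) * (Z @ Z.T))

def mcr2_objective(Z, labels, num_classes, eps=0.5):
    """Maximal coding rate reduction: expand the whole set, compress classes."""
    _, n = Z.shape
    expand = coding_rate(Z, eps)
    compress = 0.0
    for j in range(num_classes):
        Zj = Z[:, labels == j]
        nj = Zj.shape[1]
        if nj > 0:
            compress = compress + (nj / n) * coding_rate(Zj, eps)
    return expand - compress  # maximize this (or minimize its negative)
```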
arXiv Detail & Related papers (2022-10-01T15:43:51Z)
- A Unifying Multi-sampling-ratio CS-MRI Framework With Two-grid-cycle Correction and Geometric Prior Distillation [7.643154460109723]
We propose a unifying deep unfolding multi-sampling-ratio CS-MRI framework that merges the advantages of model-based and deep learning-based methods.
Inspired by the multigrid algorithm, we first embed the CS-MRI-based optimization algorithm into a correction-distillation scheme.
We employ a condition module to adaptively learn the step length and noise level from the compressive sampling ratio at every stage.
arXiv Detail & Related papers (2022-05-14T13:36:27Z)
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration, yet it guarantees fast convergence rates on par with more sophisticated variance-reduction techniques.
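A minimal sketch of the idea, combining a SAG-style gradient table with a classic Frank-Wolfe (conditional gradient) update over an l1 ball; the choice of LMO, the step-size schedule, and the omission of the non-smooth term are illustrative simplifications, not the paper's exact method.

```python
import numpy as np

def lmo_l1(g, radius=1.0):
    """Linear minimization oracle on the l1 ball: argmin_{||s||_1 <= r} <g, s>."""
    s = np.zeros_like(g)
    i = int(np.argmax(np.abs(g)))
    s[i] = -radius * np.sign(g[i])
    return s

def one_sample_sag_cgm(grad_i, n, x0, n_iter=1000, radius=1.0, seed=0):
    """Conditional gradient with a SAG estimator: one sample per iteration.

    grad_i(i, x): gradient of the i-th smooth summand at x.
    x0 must lie in the l1 ball so that all iterates remain feasible.
    A table of the latest per-sample gradients is kept; each iteration
    refreshes one entry, so the averaged gradient stays cheap to maintain.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    table = np.zeros((n,) + x0.shape)
    avg = np.zeros_like(x0)
    for t in range(1, n_iter + 1):
        i = rng.integers(n)
        g_new = grad_i(i, x)
        avg += (g_new - table[i]) / n        # update the running average
        table[i] = g_new
        s = lmo_l1(avg, radius)
        gamma = 2.0 / (t + 2)                # classic Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * s
    return x
```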
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
- A Fast Parallel Tensor Decomposition with Optimal Stochastic Gradient Descent: an Application in Structural Damage Identification [1.536989504296526]
We propose a novel algorithm, FP-CPD, to parallelize the CANDECOMP/PARAFAC (CP) decomposition of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \dots \times I_N}$.
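As a point of reference, plain entry-wise SGD for a 3-way CP decomposition looks as follows; this is a generic sketch, not FP-CPD's parallel scheme or its optimal step-size rule.

```python
import numpy as np

def cp_sgd(X, rank, n_iter=10000, lr=0.01, seed=0):
    """SGD for a 3-way CP decomposition X ~ sum_r a_r o b_r o c_r (sketch).

    One random entry per step; only the three involved factor rows are
    touched, which is what makes such updates easy to parallelize.
    """
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank)) * 0.1
    B = rng.standard_normal((J, rank)) * 0.1
    C = rng.standard_normal((K, rank)) * 0.1
    for _ in range(n_iter):
        i, j, k = rng.integers(I), rng.integers(J), rng.integers(K)
        pred = np.sum(A[i] * B[j] * C[k])     # model value at entry (i, j, k)
        e = pred - X[i, j, k]                 # residual
        gA, gB, gC = e * B[j] * C[k], e * A[i] * C[k], e * A[i] * B[j]
        A[i] -= lr * gA
        B[j] -= lr * gB
        C[k] -= lr * gC
    return A, B, C
```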
arXiv Detail & Related papers (2021-11-04T05:17:07Z)
- StreaMRAK a Streaming Multi-Resolution Adaptive Kernel Algorithm [60.61943386819384]
Existing implementations of kernel ridge regression (KRR) require all the data to be stored in main memory.
We propose StreaMRAK - a streaming version of KRR.
We present a showcase study on two synthetic problems and the prediction of the trajectory of a double pendulum.
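One standard way to make kernel regression streamable is to keep only fixed-size sufficient statistics, e.g., via random Fourier features. The sketch below shows that generic approach; it is not StreaMRAK's multi-resolution scheme.

```python
import numpy as np

class StreamingRFFRegressor:
    """Streaming kernel ridge regression via random Fourier features (sketch).

    Approximates a Gaussian kernel; only O(D^2) sufficient statistics are
    kept, so samples never need to be stored (unlike exact KRR).
    """
    def __init__(self, dim, n_features=256, bandwidth=1.0, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_features, dim)) / bandwidth
        self.b = rng.uniform(0, 2 * np.pi, n_features)
        self.AtA = reg * np.eye(n_features)  # accumulated phi phi^T + reg*I
        self.Aty = np.zeros(n_features)

    def _phi(self, x):
        return np.sqrt(2.0 / len(self.b)) * np.cos(self.W @ x + self.b)

    def partial_fit(self, x, y):
        p = self._phi(x)
        self.AtA += np.outer(p, p)
        self.Aty += y * p

    def predict(self, x):
        w = np.linalg.solve(self.AtA, self.Aty)
        return self._phi(x) @ w
```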
arXiv Detail & Related papers (2021-08-23T21:03:09Z)
- Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
ABSGD is flexible enough to combine with other robust losses without any additional cost.
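The weighting step can be sketched in a few lines; the softmax-of-losses form with temperature `lam` follows the paper's description at a high level, but the exact scheme here is an assumption.

```python
import torch

def absgd_weighted_loss(per_sample_losses, lam=1.0):
    """Attentional-biased mini-batch weighting (illustrative sketch).

    Weights are proportional to exp(loss / lam): a small positive lam
    emphasizes hard (high-loss) samples for imbalanced data, while a
    negative lam downweights high-loss (possibly mislabeled) samples.
    The weighted loss is then passed to momentum SGD as usual.
    """
    with torch.no_grad():                 # weights are treated as constants
        w = torch.softmax(per_sample_losses / lam, dim=0)
    return torch.sum(w * per_sample_losses)
```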
arXiv Detail & Related papers (2020-12-13T03:41:52Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little extra computational overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.