Matrix Completion-Informed Deep Unfolded Equilibrium Models for
Self-Supervised k-Space Interpolation in MRI
- URL: http://arxiv.org/abs/2309.13571v1
- Date: Sun, 24 Sep 2023 07:25:06 GMT
- Title: Matrix Completion-Informed Deep Unfolded Equilibrium Models for
Self-Supervised k-Space Interpolation in MRI
- Authors: Chen Luo, Huayu Wang, Taofeng Xie, Qiyu Jin, Guoqing Chen, Zhuo-Xu
Cui, Dong Liang
- Abstract summary: Regularization model-driven deep learning (DL) has gained significant attention due to its ability to leverage the potent representational capabilities of DL.
We propose a self-supervised DL approach for accelerated MRI that is theoretically guaranteed and does not rely on fully sampled labels.
- Score: 8.33626757808923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, regularization model-driven deep learning (DL) has gained
significant attention due to its ability to leverage the potent
representational capabilities of DL while retaining the theoretical guarantees
of regularization models. However, most of these methods are tailored for
supervised learning scenarios that necessitate fully sampled labels, which can
pose challenges in practical MRI applications. To tackle this challenge, we
propose a self-supervised DL approach for accelerated MRI that is theoretically
guaranteed and does not rely on fully sampled labels. Specifically, we achieve
neural network structure regularization by exploiting the inherent structural
low-rankness of the $k$-space data. Simultaneously, we constrain the network
structure to resemble a nonexpansive mapping, ensuring the network's
convergence to a fixed point. Thanks to this well-defined network structure,
this fixed point can completely reconstruct the missing $k$-space data based on
matrix completion theory, even in situations where fully sampled labels are
unavailable. Experiments validate the effectiveness of our proposed method and
demonstrate its superiority over existing self-supervised approaches and
traditional regularization methods, achieving performance comparable to that of
supervised learning methods in certain scenarios.
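The mechanism the abstract describes (iterating a nonexpansive mapping to a fixed point that fills in missing k-space under a structural low-rankness prior) is easiest to see in the classical structured low-rank completion it builds on. The following is a minimal NumPy sketch of that principle, not the authors' method: the learned nonexpansive network is stood in for by a truncated-SVD projection of a Hankel lifting, and the 1-D signal, window length `w`, and rank `r` are illustrative assumptions.

```python
import numpy as np

def hankel(x, w):
    """Lift a 1-D k-space signal into a Hankel matrix with window length w."""
    n = len(x)
    return np.array([x[i:i + w] for i in range(n - w + 1)])

def dehankel(H, n):
    """Invert the lifting by averaging entries along each anti-diagonal."""
    rows, w = H.shape
    x = np.zeros(n, dtype=H.dtype)
    counts = np.zeros(n)
    for i in range(rows):
        for j in range(w):
            x[i + j] += H[i, j]
            counts[i + j] += 1
    return x / counts

def lowrank_project(H, r):
    """Project onto rank-r matrices via truncated SVD."""
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vh[:r]

def kspace_completion(y, mask, w=16, r=4, iters=300):
    """Iterate low-rank projection plus data consistency toward a fixed point."""
    x = y.copy()
    for _ in range(iters):
        z = dehankel(lowrank_project(hankel(x, w), r), len(x))
        x = np.where(mask, y, z)   # acquired samples are kept exactly
    return x

# Toy test: a signal whose Hankel lifting is low rank (sum of two exponentials).
rng = np.random.default_rng(0)
n = 128
t = np.arange(n)
truth = np.exp(2j * np.pi * 0.05 * t) + 0.7 * np.exp(2j * np.pi * 0.12 * t)
mask = rng.random(n) < 0.5         # roughly half of k-space acquired
y = np.where(mask, truth, 0)
rec = kspace_completion(y, mask)
print(np.linalg.norm(rec - truth) / np.linalg.norm(truth))
```

In the paper's formulation the projection step is instead a learned network module constrained to be nonexpansive, which is what lets the iteration converge to a fixed point that agrees with the acquired samples and, by matrix completion theory, recovers the missing entries.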
Related papers
- LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks [52.46420522934253]
We introduce LoRA-Ensemble, a parameter-efficient deep ensemble method for self-attention networks.
By employing a single pre-trained self-attention network with weights shared across all members, we train member-specific low-rank matrices for the attention projections.
Our method exhibits superior calibration compared to explicit ensembles and achieves similar or better accuracy across various prediction tasks and datasets.
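A minimal sketch of the mechanism summarized above, under illustrative assumptions (plain NumPy, a single projection matrix, hypothetical dimensions): all members share the frozen weight W and differ only in their trainable low-rank update B_i A_i.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, m = 64, 4, 8                            # model dim, adapter rank, ensemble size

W = rng.normal(size=(d, d)) / np.sqrt(d)      # shared, frozen attention projection
A = rng.normal(size=(m, r, d)) * 0.01         # member-specific low-rank factors (trainable)
B = np.zeros((m, d, r))                       # zero init: every member starts at W

def project(x, i):
    """Attention projection of ensemble member i: (W + B_i A_i) x."""
    return (W + B[i] @ A[i]) @ x

x = rng.normal(size=d)
outs = np.stack([project(x, i) for i in range(m)])
mean, spread = outs.mean(axis=0), outs.var(axis=0)  # ensemble prediction and uncertainty
```

Only A and B would be trained, so the per-member overhead is O(d*r) rather than the O(d^2) of an explicit ensemble.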
arXiv Detail & Related papers (2024-05-23T11:10:32Z) - Self-STORM: Deep Unrolled Self-Supervised Learning for Super-Resolution Microscopy [55.2480439325792]
We introduce deep unrolled self-supervised learning, which alleviates the need for ground-truth training data by training a sequence-specific, model-based autoencoder.
Our proposed method exceeds the performance of its supervised counterparts.
arXiv Detail & Related papers (2024-03-25T17:40:32Z) - A Bayesian Unification of Self-Supervised Clustering and Energy-Based
Models [11.007541337967027]
We perform a Bayesian analysis of state-of-the-art self-supervised learning objectives.
We show that our objective function allows us to outperform existing self-supervised learning strategies.
We also demonstrate that GEDI can be integrated into a neuro-symbolic framework.
arXiv Detail & Related papers (2023-12-30T04:46:16Z) - JSSL: Joint Supervised and Self-supervised Learning for MRI Reconstruction [7.018974360061121]
Joint Supervised and Self-supervised Learning (JSSL) is a novel training approach for deep learning-based MRI reconstruction algorithms.
JSSL operates by simultaneously training a model in a self-supervised setting on subsampled data from the target dataset and in a supervised setting on proxy datasets with fully sampled labels.
We demonstrate JSSL's efficacy using subsampled prostate or cardiac MRI data as the target datasets.
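As a toy illustration of the joint objective, the sketch below combines a supervised loss on proxy data with fully sampled labels and a self-supervised loss on subsampled target data. The SSDU-style disjoint sample split and the weight `alpha` are assumptions made for illustration, not details taken from the JSSL paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
theta = np.ones(d)                      # toy per-coefficient reconstruction model
model = lambda y: theta * y

# Supervised branch: a proxy dataset where fully sampled labels exist.
proxy = rng.normal(size=d)
y_proxy = proxy + 0.1 * rng.normal(size=d)
loss_sup = np.mean((model(y_proxy) - proxy) ** 2)

# Self-supervised branch on subsampled target data: split the acquired
# samples into disjoint input/loss sets (SSDU-style assumption).
target = rng.normal(size=d)
acquired = rng.random(d) < 0.6
split = rng.random(d) < 0.5
input_mask, loss_mask = acquired & split, acquired & ~split
y_target = np.where(acquired, target, 0)
pred = model(np.where(input_mask, y_target, 0))
loss_self = np.mean((pred[loss_mask] - y_target[loss_mask]) ** 2)

alpha = 0.5                             # illustrative weighting of the two branches
loss = loss_sup + alpha * loss_self     # the two branches are optimized jointly
```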
arXiv Detail & Related papers (2023-11-27T14:23:36Z) - Convex Latent-Optimized Adversarial Regularizers for Imaging Inverse
Problems [8.33626757808923]
We introduce Convex Latent-Optimized Adversarial Regularizers (CLEAR), a novel and interpretable data-driven paradigm.
CLEAR represents a fusion of deep learning (DL) and variational regularization.
Our method consistently outperforms conventional data-driven techniques and traditional regularization approaches.
arXiv Detail & Related papers (2023-09-17T12:06:04Z) - Validation Diagnostics for SBI algorithms based on Normalizing Flows [55.41644538483948]
This work proposes easy to interpret validation diagnostics for multi-dimensional conditional (posterior) density estimators based on NF.
It also offers theoretical guarantees based on results of local consistency.
This work should help the design of better specified models or drive the development of novel SBI-algorithms.
arXiv Detail & Related papers (2022-11-17T15:48:06Z) - Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z) - Equilibrated Zeroth-Order Unrolled Deep Networks for Accelerated MRI [14.586911990418624]
Recently, model-driven deep learning has unrolled the iterative algorithm of a regularization model into a cascade network, replacing first-order information of the regularizer with a network module.
In theory, however, there need not exist a functional regularizer whose first-order information matches the replaced network module.
This paper proposes a safeguarded methodology for network unrolling.
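A toy sketch of the cascade structure this entry refers to (assumptions: a linear forward model and an l1 subgradient standing in for the learned per-iteration modules): each gradient iteration becomes one network stage, and the entry's point is that the substituted module need not match the first-order information of any actual regularizer.

```python
import numpy as np

def unrolled_recon(y, A, modules, step=0.1):
    """Unrolled gradient iteration for 0.5*||Ax - y||^2 + R(x); the gradient
    of R at each iteration is replaced by a per-iteration module."""
    x = A.T @ y
    for module in modules:                     # one cascade stage per iteration
        x = x - step * (A.T @ (A @ x - y) + module(x))
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(12, 32)) / np.sqrt(32)    # toy undersampled forward model
x_true = np.zeros(32)
x_true[[3, 17]] = 1.0                          # sparse ground truth
y = A @ x_true

# Stand-in modules: the l1 subgradient, one copy per stage.  In an unrolled
# network each stage would be a small learned module with its own weights.
lam = 0.05
modules = [lambda x: lam * np.sign(x)] * 10
x_hat = unrolled_recon(y, A, modules)
```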
arXiv Detail & Related papers (2021-12-18T09:47:19Z) - Theoretical Analysis of Self-Training with Deep Networks on Unlabeled
Data [48.4779912667317]
Self-training algorithms have been very successful for learning with unlabeled data using neural networks.
This work provides a unified theoretical analysis of self-training with deep networks for semi-supervised learning, unsupervised domain adaptation, and unsupervised learning.
arXiv Detail & Related papers (2020-10-07T19:43:55Z) - Target-Embedding Autoencoders for Supervised Representation Learning [111.07204912245841]
This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional.
We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features as well as predictive of targets.
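The joint optimization described above can be made concrete with a small linear sketch (all maps linear, all names and dimensions hypothetical): the latent embedding Z of the targets is trained to be both decodable back into targets and predictable from the features.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx, dy, dz = 256, 10, 50, 4
X = rng.normal(size=(n, dx))                                        # features
Y = X @ rng.normal(size=(dx, dy)) + 0.1 * rng.normal(size=(n, dy))  # high-dim targets

E = rng.normal(size=(dy, dz)) * 0.1   # target encoder
D = rng.normal(size=(dz, dy)) * 0.1   # decoder: embedding -> targets
F = rng.normal(size=(dx, dz)) * 0.1   # predictor: features -> embedding

lr, lam = 1e-2, 1.0
for _ in range(2000):
    Z = Y @ E                         # latent target embedding
    rec_err = Z @ D - Y               # embedding must reconstruct the targets
    pred_err = X @ F - Z              # embedding must be predictable from features
    # gradient steps on ||ZD - Y||^2 + lam * ||XF - Z||^2
    E -= lr / n * (Y.T @ (rec_err @ D.T) - lam * Y.T @ pred_err)
    D -= lr / n * (Z.T @ rec_err)
    F -= lr / n * (lam * X.T @ pred_err)

Y_hat = X @ F @ D                     # inference path: features -> embedding -> targets
print(np.mean((Y_hat - Y) ** 2))
```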
arXiv Detail & Related papers (2020-01-23T02:37:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.