Improved Supervised Training of Physics-Guided Deep Learning Image
Reconstruction with Multi-Masking
- URL: http://arxiv.org/abs/2010.13868v1
- Date: Mon, 26 Oct 2020 19:39:32 GMT
- Title: Improved Supervised Training of Physics-Guided Deep Learning Image
Reconstruction with Multi-Masking
- Authors: Burhaneddin Yaman, Seyed Amir Hossein Hosseini, Steen Moeller and
Mehmet Akçakaya
- Abstract summary: Results on knee MRI show that the proposed multi-mask supervised PG-DL enhances reconstruction performance compared to conventional supervised PG-DL approaches.
- Score: 3.441021278275805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-guided deep learning (PG-DL) via algorithm unrolling has received
significant interest for improved image reconstruction, including MRI
applications. These methods unroll an iterative optimization algorithm into a
series of regularizer and data consistency units. The unrolled networks are
typically trained end-to-end using a supervised approach. Current supervised
PG-DL approaches use all of the available sub-sampled measurements in their
data consistency units. Thus, the network learns to fit the rest of the
measurements. In this study, we propose to improve the performance and
robustness of supervised training by introducing randomness: only a
retrospectively selected subset of the available measurements is used in the
data consistency units. The process is repeated multiple times with different random masks
during training for further enhancement. Results on knee MRI show that the
proposed multi-mask supervised PG-DL enhances reconstruction performance
compared to conventional supervised PG-DL approaches.
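The multi-mask selection described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the number of masks, and the retention fraction are all assumed for the example. Each random mask partitions the acquired k-space locations into a subset routed to the data consistency units and a remainder that the network is pushed to fit during training.

```python
import numpy as np

def multi_mask_split(sampled_idx, n_masks=3, dc_fraction=0.6, seed=None):
    """Retrospectively select random subsets of the acquired k-space
    locations. For each of the n_masks draws, a fraction of the
    measurements is kept for the data consistency (DC) units and the
    rest is withheld, mimicking the multi-mask idea of repeating the
    random selection several times per training example."""
    rng = np.random.default_rng(seed)
    splits = []
    for _ in range(n_masks):
        keep = rng.random(len(sampled_idx)) < dc_fraction
        dc_idx = sampled_idx[keep]      # fed to the DC units
        held_idx = sampled_idx[~keep]   # withheld from DC
        splits.append((dc_idx, held_idx))
    return splits

# Toy example: 10 acquired k-space line indices, 2 random masks.
sampled = np.arange(10)
for dc, held in multi_mask_split(sampled, n_masks=2, dc_fraction=0.5, seed=0):
    # Each mask partitions the acquired samples into two disjoint sets.
    assert len(dc) + len(held) == len(sampled)
    assert len(np.intersect1d(dc, held)) == 0
```

In the unrolled network, `dc_idx` would determine which measurements each data consistency unit enforces, while the supervised loss is still computed against the reference; repeating the draw with different seeds yields the multiple masks used per example.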
Related papers
- Multi-frequency Electrical Impedance Tomography Reconstruction with Multi-Branch Attention Image Prior [12.844329463661857]
Multi-frequency Electrical Impedance Tomography (mfEIT) is a promising biomedical imaging technique.
Current state-of-the-art (SOTA) algorithms, which rely on supervised learning and Multiple Measurement Vectors (MMV), require extensive training data.
We propose a novel unsupervised learning approach based on Multi-Branch Attention Image Prior (MAIP) for mfEIT reconstruction.
arXiv Detail & Related papers (2024-09-17T00:06:03Z) - Denoising Pre-Training and Customized Prompt Learning for Efficient Multi-Behavior Sequential Recommendation [69.60321475454843]
We propose DPCPL, the first pre-training and prompt-tuning paradigm tailored for Multi-Behavior Sequential Recommendation.
In the pre-training stage, we propose a novel Efficient Behavior Miner (EBM) to filter out the noise at multiple time scales.
Subsequently, we propose to tune the pre-trained model in a highly efficient manner with the proposed Customized Prompt Learning (CPL) module.
arXiv Detail & Related papers (2024-08-21T06:48:38Z) - BADM: Batch ADMM for Deep Learning [35.39258144247444]
Gradient descent-based algorithms are widely used for training deep neural networks but often suffer from slow convergence.
We leverage the framework of the alternating direction method of multipliers (ADMM) to develop a novel data-driven algorithm, called batch ADMM (BADM)
We evaluate the performance of BADM across various deep learning tasks, including graph modelling, computer vision, image generation, and natural language processing.
arXiv Detail & Related papers (2024-06-30T20:47:15Z) - Self-STORM: Deep Unrolled Self-Supervised Learning for Super-Resolution Microscopy [55.2480439325792]
We introduce deep unrolled self-supervised learning, which alleviates the need for such data by training a sequence-specific, model-based autoencoder.
Our proposed method exceeds the performance of its supervised counterparts.
arXiv Detail & Related papers (2024-03-25T17:40:32Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled architecture and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z) - An Optimal Control Framework for Joint-channel Parallel MRI
Reconstruction without Coil Sensitivities [5.536263246814308]
We develop a novel calibration-free fast parallel MRI (pMRI) reconstruction method that incorporates a discrete-time optimal control framework.
We propose to recover both magnitude and phase information by taking advantage of structured multilayer convolutional networks in image and Fourier spaces.
arXiv Detail & Related papers (2021-09-20T06:42:42Z) - An Adaptive Framework for Learning Unsupervised Depth Completion [59.17364202590475]
We present a method to infer a dense depth map from a color image and associated sparse depth measurements.
We show that regularization and co-visibility are related via the fitness of the model to data and can be unified into a single framework.
arXiv Detail & Related papers (2021-06-06T02:27:55Z) - Solving Sparse Linear Inverse Problems in Communication Systems: A Deep
Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
arXiv Detail & Related papers (2020-10-29T06:32:53Z) - Gradient Monitored Reinforcement Learning [0.0]
We focus on the enhancement of training and evaluation performance in reinforcement learning algorithms.
We propose an approach to steer the learning in the weight parameters of a neural network based on the dynamic development and feedback from the training process itself.
arXiv Detail & Related papers (2020-05-25T13:45:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.