Model Adaptation for Image Reconstruction using Generalized Stein's Unbiased Risk Estimator
- URL: http://arxiv.org/abs/2102.00047v1
- Date: Fri, 29 Jan 2021 20:16:45 GMT
- Title: Model Adaptation for Image Reconstruction using Generalized Stein's Unbiased Risk Estimator
- Authors: Hemant Kumar Aggarwal, Mathews Jacob
- Abstract summary: We introduce a Generalized Stein's Unbiased Risk Estimate (GSURE) loss metric to adapt the network to the measured k-space data.
Unlike current methods that rely on the mean square error in k-space, the proposed metric accounts for noise in the measurements.
- Score: 34.08815401541628
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning image reconstruction algorithms often suffer from model
mismatches when the acquisition scheme differs significantly from the forward
model used during training. We introduce a Generalized Stein's Unbiased Risk
Estimate (GSURE) loss metric to adapt the network to the measured k-space data
and minimize model misfit impact. Unlike current methods that rely on the mean
square error in k-space, the proposed metric accounts for noise in the
measurements. This makes the approach less vulnerable to overfitting, thus
offering improved reconstruction quality compared to schemes that rely on
mean-square error. This approach may be useful to rapidly adapt pre-trained
models to new acquisition settings (e.g., multi-site) and different contrasts
than training data
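As background for how such a loss is computed without ground-truth images, below is a minimal sketch of a SURE-type objective with the standard Monte Carlo divergence estimate; the paper's full GSURE formulation additionally projects onto the range of the measurement operator, and the names here (net, y, sigma) are illustrative assumptions, not the authors' code.

```python
import torch

def sure_loss(net, y, sigma, eps=1e-3):
    """Monte Carlo SURE for a network f applied to noisy data y = x + n,
    n ~ N(0, sigma^2 I):

        SURE = ||f(y) - y||^2 / N - sigma^2 + (2 sigma^2 / N) * div_y f(y),

    with the divergence estimated from a single Rademacher probe b as
    b^T (f(y + eps*b) - f(y)) / eps (the estimator of Ramani et al.)."""
    n = y.numel()
    f_y = net(y)
    b = torch.randint_like(y, 0, 2) * 2 - 1           # Rademacher +/-1 probe
    div = (b * (net(y + eps * b) - f_y)).sum() / eps  # divergence estimate
    return ((f_y - y) ** 2).sum() / n - sigma ** 2 + 2 * sigma ** 2 * div / n
```

Minimizing such a loss over the measured data for a few gradient steps adapts the pre-trained weights; the divergence term is what penalizes fitting the noise, which a plain k-space MSE cannot do.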
Related papers
- Learning Augmentation Policies from A Model Zoo for Time Series Forecasting [58.66211334969299]
We introduce AutoTSAug, a learnable data augmentation method based on reinforcement learning.
By augmenting the marginal samples with a learnable policy, AutoTSAug substantially improves forecasting performance.
arXiv Detail & Related papers (2024-09-10T07:34:19Z)
- Few-Shot Load Forecasting Under Data Scarcity in Smart Grids: A Meta-Learning Approach [0.18641315013048293]
This paper proposes adapting an established model-agnostic meta-learning algorithm for short-term load forecasting.
The proposed method can rapidly adapt and generalize to any unknown load time series of arbitrary length.
The proposed model is evaluated using a dataset of historical load consumption data from real-world consumers.
arXiv Detail & Related papers (2024-06-09T18:59:08Z)
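Since the entry above adapts model-agnostic meta-learning (MAML), here is a minimal sketch of the MAML inner/outer update it builds on; the model, loss function, and task structure are hypothetical placeholders, not the paper's code.

```python
import torch
from torch.func import functional_call

def maml_outer_step(model, loss_fn, tasks, inner_lr=1e-2):
    """One MAML outer step (Finn et al., 2017): adapt the shared weights to
    each task's support set with a single gradient step, evaluate the adapted
    weights on the query set, and return the mean query loss. Minimizing this
    w.r.t. model.parameters() trains an initialization that adapts rapidly."""
    params = dict(model.named_parameters())
    meta_loss = 0.0
    for (x_s, y_s), (x_q, y_q) in tasks:
        support_loss = loss_fn(functional_call(model, params, (x_s,)), y_s)
        grads = torch.autograd.grad(support_loss, list(params.values()),
                                    create_graph=True)  # keep 2nd-order terms
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(params.items(), grads)}
        meta_loss = meta_loss + loss_fn(
            functional_call(model, adapted, (x_q,)), y_q)
    return meta_loss / len(tasks)
```

Differentiating the query loss through the inner step is what yields an initialization that generalizes to an unseen load series after only a few gradient updates.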
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
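For contrast, the classical RANSAC loop that the consensus-adaptive variant above improves upon samples uniformly and keeps no state across iterations; a minimal sketch follows, where fit and residual are placeholder callables.

```python
import numpy as np

def ransac(points, fit, residual, n_min, n_iters=1000, thresh=1e-2, rng=None):
    """Classic RANSAC: repeatedly fit a model to a minimal random sample and
    keep the hypothesis with the largest consensus (inlier) set. The paper
    above replaces this uniform, memoryless sampling with an attention layer
    over the per-point residuals accumulated across iterations."""
    if rng is None:
        rng = np.random.default_rng()
    best_model, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), n_min, replace=False)]
        model = fit(sample)                         # minimal-sample hypothesis
        inliers = residual(model, points) < thresh  # per-point residual test
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers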
- MOSAIC: Masked Optimisation with Selective Attention for Image Reconstruction [0.5541644538483947]
We propose a novel compressive sensing framework to reconstruct images given any random selection of measurements.
MOSAIC incorporates an embedding technique to efficiently apply attention mechanisms on an encoded sequence of measurements.
A range of experiments validate our proposed architecture as a promising alternative for existing CS reconstruction methods.
arXiv Detail & Related papers (2023-06-01T17:05:02Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
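For context, the generic likelihood-ratio form of the DRO objective that such parametric adversaries instantiate can be written as follows; this is a standard background formulation, not the paper's exact objective.

```latex
\min_{\theta}\; \max_{r \in \mathcal{R}}\;
  \mathbb{E}_{x \sim P}\bigl[ r(x)\,\ell(\theta; x) \bigr],
\qquad
\mathcal{R} = \bigl\{ r \ge 0 : \mathbb{E}_{x \sim P}[r(x)] = 1 \bigr\}
```

Here r plays the role of the likelihood ratio dQ/dP between an adversarial subpopulation Q and the training distribution P; parameterizing r with a model is what makes a broader class of ratios tractable to train against.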
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Deep Image Prior using Stein's Unbiased Risk Estimator: SURE-DIP [31.408877556706376]
Training data is scarce in many imaging applications, including ultra-high-resolution imaging.
The deep image prior (DIP) algorithm was introduced for single-shot image recovery, completely eliminating the need for training data.
We introduce a generalized Stein's unbiased risk estimate (GSURE) loss metric to minimize the overfitting.
arXiv Detail & Related papers (2021-11-21T20:11:56Z)
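To show how a SURE penalty combines with DIP training as in the entry above, here is a hedged sketch of a DIP loop scored with Monte Carlo SURE; feeding the noisy image as the network input (so the divergence with respect to it is well defined) is one common variant and an assumption here, as are all names.

```python
import torch

def dip_sure(net, y, sigma, steps=2000, lr=1e-3, eps=1e-3):
    """Single-shot recovery: fit an untrained network to the noisy image y,
    scoring the fit with Monte Carlo SURE so the divergence term discourages
    reproducing the noise. The constant -sigma^2 term of SURE is dropped
    since it does not affect the gradients."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    n = y.numel()
    for _ in range(steps):
        opt.zero_grad()
        out = net(y)
        b = torch.randint_like(y, 0, 2) * 2 - 1           # Rademacher probe
        div = (b * (net(y + eps * b) - out)).sum() / eps  # MC divergence
        loss = ((out - y) ** 2).sum() / n + 2 * sigma ** 2 * div / n
        loss.backward()
        opt.step()
    return net(y).detach()
```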
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
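The unrolling in the REST entry above starts from the classical iterative shrinkage-thresholding algorithm (ISTA); below is a minimal LISTA-style sketch of such an unrolled network with learned per-layer step sizes and thresholds (placeholder shapes and values, not the REST architecture itself).

```python
import torch

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: shrink magnitudes toward zero by lam."""
    return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

class UnrolledISTA(torch.nn.Module):
    """ISTA iterations x <- soft(x - t * A^T (A x - y), t * lam) unrolled into
    a fixed-depth network with a learned step size and threshold per layer."""
    def __init__(self, A, n_layers=10):
        super().__init__()
        self.register_buffer("A", A)                    # forward model (m x n)
        self.step = torch.nn.Parameter(torch.full((n_layers,), 0.1))
        self.lam = torch.nn.Parameter(torch.full((n_layers,), 0.05))

    def forward(self, y):
        x = torch.zeros(self.A.shape[1], device=y.device)
        for t, lam in zip(self.step, self.lam):
            grad = self.A.t() @ (self.A @ x - y)        # data-fidelity gradient
            x = soft_threshold(x - t * grad, t * lam)   # shrinkage step
        return x
```

Training such a network end-to-end on measurement/signal pairs learns the iteration parameters; per its abstract, REST additionally hardens the unrolled iterations against errors in A.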
- Autocalibration and Tweedie-dominance for Insurance Pricing with Machine Learning [0.0]
It is shown that minimizing deviance involves a trade-off between the integral of weighted differences of lower partial moments and the bias measured on a specific scale.
This new method to correct for bias adds an extra local GLM step to the analysis.
The convex order appears to be the natural tool to compare competing models.
arXiv Detail & Related papers (2021-03-05T12:40:30Z)
- Reconstruction-Based Membership Inference Attacks are Easier on Difficult Problems [36.13835940345486]
We show that models with higher dimensional input and output are more vulnerable to membership inference attacks.
We propose a novel predictability score that can be computed for each sample without requiring a training set.
Our membership error, obtained by subtracting the predictability score from the reconstruction error, achieves high MIA accuracy across a wide range of benchmarks.
arXiv Detail & Related papers (2021-02-15T18:57:22Z)
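The membership score in the entry above is a per-sample subtraction; a minimal sketch follows, in which the decision threshold and all values are assumptions for illustration.

```python
import numpy as np

def membership_error(recon_error, predictability):
    """Membership error as described above: reconstruction error minus a
    per-sample predictability score (the score needs no training set).
    Once a sample's intrinsic predictability is discounted, a suspiciously
    low remaining error suggests the model memorized it during training."""
    return np.asarray(recon_error) - np.asarray(predictability)

# Hypothetical usage with made-up values: low scores flag likely members.
scores = membership_error([0.12, 0.40, 0.08], [0.30, 0.35, 0.31])
is_member = scores < 0.0  # threshold is an illustrative assumption
```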
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.