Blind hierarchical deconvolution
- URL: http://arxiv.org/abs/2007.11391v1
- Date: Wed, 22 Jul 2020 12:54:19 GMT
- Title: Blind hierarchical deconvolution
- Authors: Arttu Arjas, Lassi Roininen, Mikko J. Sillanpää, Andreas Hauptmann
- Abstract summary: Deconvolution is a fundamental inverse problem in signal processing and the prototypical model for recovering a signal from its noisy measurement.
We propose a framework of blind hierarchical deconvolution that enables accurate reconstructions of functions with varying regularity and unknown kernel size.
- Score: 1.9817733658218057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deconvolution is a fundamental inverse problem in signal processing and the
prototypical model for recovering a signal from its noisy measurement.
Nevertheless, the majority of model-based inversion techniques require
knowledge of the convolution kernel to recover an accurate reconstruction, and
additionally prior assumptions on the regularity of the signal are needed. To
overcome these limitations, we parametrise the convolution kernel and prior
length-scales, which are then jointly estimated in the inversion procedure. The
proposed framework of blind hierarchical deconvolution enables accurate
reconstructions of functions with varying regularity and unknown kernel size
and can be solved efficiently with an empirical Bayes two-step procedure, where
hyperparameters are first estimated by optimisation and other unknowns then by
an analytical formula.
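The two-step empirical Bayes procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian convolution kernel, the squared-exponential prior, the hyperparameter grid, and all function and variable names are assumptions for this sketch, and the paper additionally allows the prior length-scale to vary in space.

```python
# Sketch: y = K(sigma) x + noise, Gaussian prior on x with length-scale ell.
# Step 1: estimate (sigma, ell) by maximising the log marginal likelihood.
# Step 2: recover x by the analytical Gaussian posterior mean.
import numpy as np

def gaussian_kernel_matrix(n, sigma):
    """Convolution matrix for a row-normalised Gaussian kernel of width sigma."""
    t = np.arange(n)
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)
    return K / K.sum(axis=1, keepdims=True)

def prior_cov(n, ell):
    """Squared-exponential prior covariance with length-scale ell."""
    t = np.arange(n)
    return np.exp(-0.5 * ((t[:, None] - t[None, :]) / ell) ** 2)

def log_marginal_likelihood(y, sigma, ell, noise_var):
    """Log evidence of y after marginalising out the signal x."""
    n = len(y)
    K = gaussian_kernel_matrix(n, sigma)
    C = K @ prior_cov(n, ell) @ K.T + noise_var * np.eye(n)
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (y @ np.linalg.solve(C, y) + logdet + n * np.log(2 * np.pi))

def posterior_mean(y, sigma, ell, noise_var):
    """Analytical posterior mean of x given y (conjugate Gaussian model)."""
    n = len(y)
    K = gaussian_kernel_matrix(n, sigma)
    S = prior_cov(n, ell)
    C = K @ S @ K.T + noise_var * np.eye(n)
    return S @ K.T @ np.linalg.solve(C, y)

# Demo: simulate blurred noisy data, then jointly estimate (sigma, ell)
# by a simple grid search and reconstruct the signal analytically.
rng = np.random.default_rng(0)
n, true_sigma, true_ell, noise_var = 60, 3.0, 8.0, 1e-4
x_true = rng.multivariate_normal(
    np.zeros(n), prior_cov(n, true_ell) + 1e-6 * np.eye(n))
y = (gaussian_kernel_matrix(n, true_sigma) @ x_true
     + np.sqrt(noise_var) * rng.standard_normal(n))

grid = [(s, l) for s in (1.0, 3.0, 6.0) for l in (4.0, 8.0, 16.0)]
sigma_hat, ell_hat = max(
    grid, key=lambda p: log_marginal_likelihood(y, *p, noise_var))
x_hat = posterior_mean(y, sigma_hat, ell_hat, noise_var)
```

In practice the hyperparameters would be found by continuous optimisation of the marginal likelihood rather than a coarse grid; the grid is used here only to keep the sketch self-contained.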
Related papers
- Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Building on this, we propose to improve recent methods by using a more principled covariance determined by maximum likelihood estimation.
arXiv Detail & Related papers (2024-02-03T13:35:39Z) - Regularization, early-stopping and dreaming: a Hopfield-like setup to
address generalization and overfitting [0.0]
We look for optimal network parameters by applying a gradient descent over a regularized loss function.
Within this framework, the optimal neuron-interaction matrices correspond to Hebbian kernels revised by a reiterated unlearning protocol.
arXiv Detail & Related papers (2023-08-01T15:04:30Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular a statistical-to-computational gap, where known algorithms require a signal-to-noise ratio larger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - Transformer Meets Boundary Value Inverse Problems [4.165221477234755]
A Transformer-based deep direct sampling method is proposed for solving a class of boundary value inverse problems.
A real-time reconstruction is achieved by evaluating the learned inverse operator between carefully designed data and reconstructed images.
arXiv Detail & Related papers (2022-09-29T17:45:25Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - End-to-end reconstruction meets data-driven regularization for inverse
problems [2.800608984818919]
We propose an unsupervised approach for learning end-to-end reconstruction operators for ill-posed inverse problems.
The proposed method combines the classical variational framework with iterative unrolling.
We demonstrate with the example of X-ray computed tomography (CT) that our approach outperforms state-of-the-art unsupervised methods.
arXiv Detail & Related papers (2021-06-07T12:05:06Z) - Consistent and symmetry preserving data-driven interface reconstruction
for the level-set method [0.0]
We focus on interface reconstruction (IR) in the level-set method, i.e. the computation of the volume fraction and apertures.
The proposed approach improves accuracy for coarsely resolved interfaces and recovers the conventional IR for high resolutions.
We provide details of a symmetric floating-point implementation and its computational efficiency.
arXiv Detail & Related papers (2021-04-23T13:21:10Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Sparse recovery by reduced variance stochastic approximation [5.672132510411465]
We discuss the application of iterative quadratic optimization routines to the problem of sparse signal recovery from noisy observations.
We show how one can straightforwardly enhance reliability of the corresponding solution by using Median-of-Means like techniques.
arXiv Detail & Related papers (2020-06-11T12:31:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.