Plug-And-Play Learned Gaussian-mixture Approximate Message Passing
- URL: http://arxiv.org/abs/2011.09388v1
- Date: Wed, 18 Nov 2020 16:40:45 GMT
- Title: Plug-And-Play Learned Gaussian-mixture Approximate Message Passing
- Authors: Osman Musa, Peter Jung and Giuseppe Caire
- Abstract summary: We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
- Score: 71.74028918819046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep unfolding has proved to be a very successful approach for
accelerating and tuning classical signal processing algorithms. In this paper,
we propose learned Gaussian-mixture AMP (L-GM-AMP) - a plug-and-play compressed
sensing (CS) recovery algorithm suitable for any i.i.d. source prior. Our
algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly
improves it by adopting a universal denoising function within the algorithm.
The robust and flexible denoiser is a byproduct of modelling the source prior
with a Gaussian mixture (GM), which can well approximate continuous, discrete,
as well as mixture distributions. Its parameters are learned using the standard
backpropagation algorithm. To demonstrate the robustness of the proposed
algorithm, we conduct Monte-Carlo (MC) simulations for both mixture and
discrete distributions. Numerical evaluation shows that the L-GM-AMP algorithm
achieves state-of-the-art performance without any knowledge of the source prior.
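The core of L-GM-AMP, a Gaussian-mixture posterior-mean denoiser inside an AMP loop, can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation: the function names, the finite-difference estimate of the Onsager term, and the fixed (rather than backpropagation-learned) mixture parameters `w`, `mu`, `s` are assumptions made here for illustration.

```python
import numpy as np

def gm_denoiser(r, tau2, w, mu, s):
    """MMSE denoiser for r = x + N(0, tau2) under a Gaussian-mixture prior
    sum_k w[k] * N(mu[k], s[k]).  In L-GM-AMP the parameters (w, mu, s) are
    learned by backpropagation; here they are fixed for illustration."""
    r = np.asarray(r, float)[:, None]                 # shape (n, 1) for broadcasting
    tot = s + tau2                                    # marginal variance per component
    log_beta = np.log(w) - 0.5 * np.log(2 * np.pi * tot) - 0.5 * (r - mu) ** 2 / tot
    log_beta -= log_beta.max(axis=1, keepdims=True)   # numerical stability
    beta = np.exp(log_beta)
    beta /= beta.sum(axis=1, keepdims=True)           # component responsibilities
    post = (s * r + tau2 * mu) / tot                  # per-component posterior means
    return (beta * post).sum(axis=1)

def gm_amp(A, y, w, mu, s, n_iter=25):
    """Minimal AMP loop with the GM posterior-mean denoiser plugged in.
    The Onsager correction is estimated by a finite-difference divergence."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        tau2 = np.dot(z, z) / m                       # effective-noise variance estimate
        r = x + A.T @ z                               # pseudo-data
        x_new = gm_denoiser(r, tau2, w, mu, s)
        eps = 1e-4                                    # divergence of the denoiser
        div = np.sum(gm_denoiser(r + eps, tau2, w, mu, s) - x_new) / (eps * m)
        z = y - A @ x_new + z * div                   # residual with Onsager correction
        x = x_new
    return x
```

In the paper the mixture parameters are learned across unfolded iterations; here a two-component mixture (a near-zero spike plus a broad Gaussian) is fixed by hand to mimic a sparse prior.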
Related papers
- Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP)
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
arXiv Detail & Related papers (2024-09-19T17:56:16Z)
- Provably Efficient Information-Directed Sampling Algorithms for Multi-Agent Reinforcement Learning [50.92957910121088]
This work designs and analyzes a novel set of algorithms for multi-agent reinforcement learning (MARL) based on the principle of information-directed sampling (IDS)
For episodic two-player zero-sum MGs, we present three sample-efficient algorithms for learning Nash equilibrium.
We extend Reg-MAIDS to multi-player general-sum MGs and prove that it can learn either the Nash equilibrium or coarse correlated equilibrium in a sample efficient manner.
arXiv Detail & Related papers (2024-04-30T06:48:56Z)
- An Efficient 1 Iteration Learning Algorithm for Gaussian Mixture Model And Gaussian Mixture Embedding For Neural Network [2.261786383673667]
The new algorithm is more robust and simpler than the classic Expectation-Maximization (EM) algorithm.
It also improves accuracy and requires only one iteration for learning.
arXiv Detail & Related papers (2023-08-18T10:17:59Z)
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify, in particular, a statistical-to-computational gap where known algorithms require a signal-to-noise ratio larger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
arXiv Detail & Related papers (2022-06-22T17:58:23Z)
- Denoising Generalized Expectation-Consistent Approximation for MRI Image Recovery [19.497777961872448]
In inverse problems, plug-and-play (PnP) methods have been developed that replace the proximal step in a convex optimization algorithm with a call to an application-specific denoiser, often implemented using a deep neural network (DNN).
Although such methods have been successful, they can be improved. For example, denoisers are usually designed/trained to remove white noise, but the error at the denoiser's input is often far from white or Gaussian.
In this paper, we propose an algorithm that offers predictable error statistics each iteration, as well as a new image denoiser that leverages those statistics.
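The plug-and-play pattern this entry builds on can be sketched generically: an iterative solver whose proximal step is swapped for a denoiser call. A minimal illustration (the names and the soft-threshold toy denoiser standing in for a trained DNN are assumptions here, not this paper's method):

```python
import numpy as np

def soft_threshold(v, t):
    """Toy denoiser standing in for an application-specific DNN denoiser."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pnp_ista(A, y, denoise, n_iter=100):
    """Plug-and-play ISTA: the proximal step of ISTA is replaced by a call
    to an arbitrary denoiser.  The error statistics this denoiser actually
    sees at its input are what the paper's algorithm aims to control."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the quadratic data term
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                # gradient of 0.5 * ||Ax - y||^2
        x = denoise(x - step * grad)            # denoiser replaces the prox
    return x
```

Any callable denoiser can be plugged in; the convergence and error behavior then depend on how well the denoiser's training distribution matches its actual input error.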
arXiv Detail & Related papers (2022-06-09T00:58:44Z)
- Robust Quantum Control using Hybrid Pulse Engineering [0.0]
Gradient-based optimization algorithms are limited by their sensitivity to the initial guess.
We describe a general method to construct noise-resilient quantum controls by incorporating noisy fields.
Our numerical analysis confirms its superior convergence rate.
arXiv Detail & Related papers (2021-12-02T14:29:42Z)
- Efficient Algorithms for Estimating the Parameters of Mixed Linear Regression Models [10.164623585528805]
We study maximum likelihood estimation of the parameters of the MLR model when the additive noise has a non-Gaussian distribution.
To address this, we propose a new algorithm that combines the alternating direction method of multipliers (ADMM) with the EM algorithm.
arXiv Detail & Related papers (2021-05-12T20:29:03Z)
- Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
arXiv Detail & Related papers (2021-02-04T02:21:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.