Efficient Algorithms for Estimating the Parameters of Mixed Linear
Regression Models
- URL: http://arxiv.org/abs/2105.05953v1
- Date: Wed, 12 May 2021 20:29:03 GMT
- Title: Efficient Algorithms for Estimating the Parameters of Mixed Linear
Regression Models
- Authors: Babak Barazandeh, Ali Ghafelebashi, Meisam Razaviyayn, Ram Sriharsha
- Abstract summary: We study the maximum likelihood estimation of the parameters of MLR model when the additive noise has non-Gaussian distribution.
To overcome this issue, we propose a new algorithm based on combining the alternating direction method of multipliers (ADMM) with EM algorithm idea.
- Score: 10.164623585528805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The mixed linear regression (MLR) model is among the most
exemplary statistical tools for modeling non-linear distributions using a
mixture of linear models. When the additive noise in the MLR model is
Gaussian, the Expectation-Maximization (EM) algorithm is widely used for
maximum likelihood estimation of the MLR parameters. However, when the noise
is non-Gaussian, the steps of the EM algorithm may not have closed-form
update rules, which makes EM impractical. In this work, we study maximum
likelihood estimation of the parameters of the MLR model when the additive
noise has a non-Gaussian distribution. In particular, we consider the case
in which the noise has a Laplacian distribution, and we first show that,
unlike in the Gaussian case, the resulting sub-problems of the EM algorithm
do not have closed-form update rules, preventing the use of EM. To overcome
this issue, we propose a new algorithm that combines the alternating
direction method of multipliers (ADMM) with the EM algorithm. Our numerical
experiments show that our method outperforms the EM algorithm in both
statistical accuracy and computational time in the non-Gaussian noise case.
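To make the contrast concrete, below is a minimal, hypothetical Python sketch
of EM for a two-component MLR model. In the Gaussian case the M-step is a
weighted least-squares problem with a closed-form solution; under Laplacian
noise it becomes a weighted least-absolute-deviations problem, which the
sketch solves with a generic ADMM loop. The function names, penalty parameter
rho, and iteration counts are illustrative assumptions, not the authors'
exact algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of t * |.|."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def weighted_lad_admm(X, y, w, rho=1.0, iters=100):
    """ADMM for min_beta sum_i w_i |y_i - x_i^T beta|, via the split r = y - X beta."""
    n, d = X.shape
    beta, r, u = np.zeros(d), np.zeros(n), np.zeros(n)  # primal, residual copy, scaled dual
    XtX_inv = np.linalg.inv(X.T @ X + 1e-8 * np.eye(d))
    for _ in range(iters):
        beta = XtX_inv @ (X.T @ (y - r + u))            # least-squares beta-update
        r = soft_threshold(y - X @ beta + u, w / rho)   # prox step for the weighted l1 term
        u += y - X @ beta - r                           # dual update
    return beta

def em_mlr(X, y, K=2, em_iters=50, laplacian=True):
    """EM for mixed linear regression with a fixed unit noise scale (a simplification)."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    betas, pis = rng.normal(size=(K, d)), np.full(K, 1.0 / K)
    for _ in range(em_iters):
        # E-step: responsibilities under the assumed noise density.
        resid = y[:, None] - X @ betas.T                        # shape (n, K)
        logp = (-np.abs(resid) if laplacian else -0.5 * resid**2) + np.log(pis)
        logp -= logp.max(axis=1, keepdims=True)                 # numerical stability
        gamma = np.exp(logp)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: closed-form weighted LS (Gaussian) vs. ADMM-solved weighted LAD (Laplacian).
        for k in range(K):
            if laplacian:
                betas[k] = weighted_lad_admm(X, y, gamma[:, k])
            else:
                W = gamma[:, k]
                betas[k] = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
        pis = gamma.mean(axis=0)
    return betas, pis
```

The mixing weights reuse the standard EM update (the mean responsibility per
component); the noise scale is held fixed purely to keep the sketch short.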
Related papers
- Information limits and Thouless-Anderson-Palmer equations for spiked matrix models with structured noise [19.496063739638924]
We consider a prototypical problem of Bayesian inference for a structured spiked model.
We show how to predict the statistical limits using an efficient algorithm inspired by the theory of adaptive Thouless-Anderson-Palmer equations.
arXiv Detail & Related papers (2024-05-31T16:38:35Z)
- Mixed Noise and Posterior Estimation with Conditional DeepGEM [1.1650821883155187]
We develop a novel algorithm for jointly estimating the posterior and the noise parameters in inverse problems.
We show that our model is able to incorporate information from many measurements, unlike previous approaches.
arXiv Detail & Related papers (2024-02-05T12:42:21Z)
- Algorithme EM régularisé (Regularized EM Algorithm) [0.0]
This paper presents a regularized version of the EM algorithm that efficiently uses prior knowledge to cope with a small sample size.
Experiments on real data highlight the good performance of the proposed algorithm for clustering purposes.
arXiv Detail & Related papers (2023-07-04T23:19:25Z)
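As a hedged illustration of the entry above (not the paper's exact
estimator), one common way to regularize EM for clustering with few samples
is to shrink each M-step covariance toward a scaled identity; kappa below
plays the role of prior pseudo-counts and is an illustrative choice:

```python
import numpy as np

def regularized_m_step_cov(X, gamma_k, mu_k, kappa=1.0):
    """Covariance M-step for one mixture component, shrunk toward the identity."""
    n_k = gamma_k.sum()                                   # effective sample size
    diff = X - mu_k
    emp_cov = (gamma_k[:, None] * diff).T @ diff / n_k    # weighted empirical covariance
    d = X.shape[1]
    return (n_k * emp_cov + kappa * np.eye(d)) / (n_k + kappa)
```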
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
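A hedged sketch of the inverse-free idea in the Probabilistic Unrolling entry
above: replace an explicit matrix inverse with a fixed number of
conjugate-gradient iterations, which are ordinary differentiable array
operations that automatic differentiation can backpropagate through. The
iteration count is an arbitrary choice here:

```python
import numpy as np

def cg_solve(A, b, iters=20):
    """Approximate A^{-1} b for symmetric positive-definite A, never forming the inverse."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

In a latent Gaussian model, E-step quantities of the form Sigma^{-1} b can
then be approximated by cg_solve(Sigma, b), with the unrolled iterations
exposed to the optimizer.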
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing (AMP) algorithm for the inhomogeneous problem.
In particular, we identify the existence of a statistical-to-computational gap, where known algorithms require a signal-to-noise ratio larger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are made through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
arXiv Detail & Related papers (2022-06-22T17:58:23Z)
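A minimal sketch of the Langevin Monte Carlo step underlying LMC-TS-style
posterior sampling, assuming access to the gradient of the negative
log-posterior U; the step size and iteration count are illustrative:

```python
import numpy as np

def langevin_sample(grad_U, theta0, step=1e-3, iters=100, rng=None):
    """Unadjusted Langevin dynamics: theta <- theta - step * grad_U(theta) + sqrt(2*step) * xi."""
    rng = rng if rng is not None else np.random.default_rng()
    theta = theta0.copy()
    for _ in range(iters):
        xi = rng.normal(size=theta.shape)   # fresh Gaussian noise each step
        theta = theta - step * grad_U(theta) + np.sqrt(2.0 * step) * xi
    return theta
```

A Thompson-sampling agent would draw a parameter vector this way each round
and then act greedily with respect to that sample.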
- Beyond EM Algorithm on Over-specified Two-Component Location-Scale Gaussian Mixtures [29.26015093627193]
We develop the Exponential Location Update (ELU) algorithm to efficiently explore the curvature of the negative log-likelihood functions.
We demonstrate that the ELU algorithm converges to the final statistical radius of the models after a logarithmic number of iterations.
arXiv Detail & Related papers (2022-05-23T06:49:55Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
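For context on the entry above, a generic AMP iteration for compressed
sensing looks like the sketch below; a fixed soft-thresholding denoiser
stands in for the learned Gaussian-mixture denoiser of L-GM-AMP, and the code
assumes A has i.i.d. entries scaled so its columns have roughly unit norm:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(A, y, iters=30, theta=0.1):
    """AMP with a soft-threshold denoiser; theta is a fixed, illustrative threshold."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        pseudo = x + A.T @ z                         # pseudo-data fed to the denoiser
        x_new = soft(pseudo, theta)
        onsager = (np.count_nonzero(x_new) / m) * z  # Onsager correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x
```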