Geom-SPIDER-EM: Faster Variance Reduced Stochastic Expectation
Maximization for Nonconvex Finite-Sum Optimization
- URL: http://arxiv.org/abs/2011.12392v1
- Date: Tue, 24 Nov 2020 21:20:53 GMT
- Title: Geom-SPIDER-EM: Faster Variance Reduced Stochastic Expectation
Maximization for Nonconvex Finite-Sum Optimization
- Authors: Gersende Fort (IMT), Eric Moulines (X-DEP-MATHAPP), Hoi-To Wai
- Abstract summary: We propose an extension of the Stochastic Path-Integrated Differential EstimatoR EM (SPIDER-EM) algorithm.
We show that it reaches the same state-of-the-art complexity bounds as SPIDER-EM and provide conditions for a linear rate of convergence; numerical results support our findings.
- Score: 21.81837334970773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Expectation Maximization (EM) algorithm is a key reference for inference
in latent variable models; unfortunately, its computational cost is prohibitive
in the large scale learning setting. In this paper, we propose an extension of
the Stochastic Path-Integrated Differential EstimatoR EM (SPIDER-EM) and derive
complexity bounds for this novel algorithm, designed to solve smooth nonconvex
finite-sum optimization problems. We show that it reaches the same state of the
art complexity bounds as SPIDER-EM; and provide conditions for a linear rate of
convergence. Numerical results support our findings.
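For intuition, here is a minimal, self-contained sketch of the SPIDER-type variance-reduced E-step the abstract refers to, applied to a toy two-component Gaussian mixture with unit variances and unknown means. The toy model, the helper names (`suff_stat`, `m_step`), the batch size, and the epoch length are all illustrative assumptions, not the paper's setup; in particular, a fixed inner-loop length is used here, whereas the "Geom" variant concerns how epoch lengths are scheduled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (an assumption for this sketch): two-component 1-D Gaussian
# mixture with unit variances and equal weights; only the means are unknown.
n = 5000
y = np.concatenate([rng.normal(-2.0, 1.0, n // 2), rng.normal(3.0, 1.0, n // 2)])

def suff_stat(theta, idx):
    """E-step on a mini-batch: conditional expectation of the sufficient
    statistic, averaged over the samples in idx."""
    mu1, mu2 = theta
    yi = y[idx]
    log1 = -0.5 * (yi - mu1) ** 2
    log2 = -0.5 * (yi - mu2) ** 2
    r1 = 1.0 / (1.0 + np.exp(log2 - log1))  # responsibility of component 1
    return np.array([r1.mean(), (r1 * yi).mean(),
                     (1.0 - r1).mean(), ((1.0 - r1) * yi).mean()])

def m_step(S):
    """M-step map: new means from the averaged sufficient statistic."""
    return np.array([S[1] / S[0], S[3] / S[2]])

theta = np.array([-1.0, 1.0])      # initial means
batch, epoch_len = 64, 20          # illustrative choices, not the paper's
for epoch in range(30):
    S = suff_stat(theta, np.arange(n))   # epoch start: full-batch E-step
    theta_prev = theta
    for t in range(epoch_len):
        idx = rng.choice(n, batch, replace=False)
        # SPIDER recursion: correct the running statistic by the mini-batch
        # difference of E-steps at the current and previous parameters.
        S = S + suff_stat(theta, idx) - suff_stat(theta_prev, idx)
        theta_prev = theta
        theta = m_step(S)

print("estimated means:", np.sort(theta))
```

The key line is the recursion on `S`: the running statistic is corrected by the mini-batch difference of E-steps at consecutive parameter values, so its variance shrinks as the iterates stabilise, which is what underlies the complexity bounds discussed in the abstract.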
Related papers
- Convergence of Expectation-Maximization Algorithm with Mixed-Integer Optimization [5.319361976450982]
This paper introduces a set of conditions that ensure the convergence of a specific class of EM algorithms.
Our results offer a new analysis technique for iterative algorithms that solve mixed-integer non-linear optimization problems.
arXiv Detail & Related papers (2024-01-31T11:42:46Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Faster Riemannian Newton-type Optimization by Subsampling and Cubic Regularization [3.867143522757309]
This work is on constrained large-scale non-convex optimization where the constraint set implies a manifold structure.
We propose a new second-order Riemannian optimization algorithm, aiming at improving convergence and reducing computational cost.
arXiv Detail & Related papers (2023-02-22T00:37:44Z)
- Stochastic Variable Metric Proximal Gradient with variance reduction for non-convex composite optimization [0.0]
3P-SP-IDER is a novel algorithm designed to solve finite-sum non-convex composite optimization problems.
We show that 3P-SP-IDER extends some preconditioned and incremental Maximization algorithms to the case where the forward operator cannot be computed in closed form.
We also discuss the role of some design parameters of 3P-SP-IDER in an application to inference in a regression model with random effects.
arXiv Detail & Related papers (2023-01-02T12:49:48Z)
- Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and use a large number of data points.
Our proposed method outperforms an existing method with the previously best-known result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z)
- Adaptive Stochastic Optimisation of Nonconvex Composite Objectives [2.1700203922407493]
We propose and analyse a family of generalised composite mirror descent algorithms.
With adaptive step sizes, the proposed algorithms converge without requiring prior knowledge of the problem.
We exploit the low-dimensional structure of the decision sets for high-dimensional problems; a minimal sketch of adaptive composite mirror descent follows this entry.
arXiv Detail & Related papers (2022-11-21T18:31:43Z)
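As referenced above, this sketch shows composite mirror descent with adaptive (AdaGrad-style) step sizes, specialised to the Euclidean mirror map, where it reduces to proximal gradient with soft-thresholding for an $\ell_1$ regulariser. The data, names, and constants are illustrative assumptions and do not reproduce the paper's generalised algorithms.

```python
import numpy as np

# Illustrative composite problem (an assumption for this sketch):
# min_x f(x) + r(x), with f a least-squares loss and r(x) = lam * ||x||_1.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=200)
lam = 0.1

def grad_f(x, idx):
    """Stochastic gradient of the smooth part on mini-batch idx."""
    Ai = A[idx]
    return Ai.T @ (Ai @ x - b[idx]) / len(idx)

def prox_r(x, eta):
    """Proximal step for r(x) = lam * ||x||_1: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - eta * lam, 0.0)

x = np.zeros(50)
G = 1e-8                           # running sum of squared gradient norms
for t in range(2000):
    idx = rng.choice(200, 32, replace=False)
    g = grad_f(x, idx)
    G += np.sum(g ** 2)
    eta = 1.0 / np.sqrt(G)         # step adapts: no smoothness constant needed
    x = prox_r(x - eta * g, eta)   # composite (mirror-descent) update

print("recovered support:", np.flatnonzero(np.abs(x) > 0.1))
```

The adaptive step size $\eta_t = 1/\sqrt{\sum_{s \le t} \|g_s\|^2}$ is what lets the method run without prior knowledge of the problem's smoothness, matching the claim in the entry above.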
- Optimal Algorithms for Stochastic Complementary Composite Minimization [55.26935605535377]
Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization.
We provide novel excess risk bounds, both in expectation and with high probability.
Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems.
arXiv Detail & Related papers (2022-11-03T12:40:24Z)
- SPIRAL: A superlinearly convergent incremental proximal algorithm for nonconvex finite sum minimization [11.15617238382324]
SPIRAL is a superlinearly convergent incremental proximal algorithm for nonconvex finite-sum problems.
It has remarkable convergence under mild assumptions.
It is competitive with the state of the art.
arXiv Detail & Related papers (2022-07-17T14:58:06Z)
- Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization [98.0595480384208]
We propose a generalization of the extragradient algorithm which converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $p$-normed finite-dimensional vector spaces; a minimal extragradient sketch follows this entry.
arXiv Detail & Related papers (2020-10-31T21:35:42Z)
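As referenced above, this is a minimal sketch of the classical extragradient method on a toy bilinear saddle problem, where plain simultaneous gradient descent-ascent spirals outward. It illustrates the base algorithm being generalised, not the paper's structured nonconvex-nonconcave method; the problem and step size are illustrative assumptions.

```python
import numpy as np

# Toy bilinear saddle problem (an assumption for this sketch):
# min_x max_y x * y, with unique saddle point (0, 0).
def F(z):
    """Saddle operator: F(x, y) = (grad_x of x*y, -grad_y of x*y) = (y, -x)."""
    x, y = z
    return np.array([y, -x])

z = np.array([1.0, 1.0])
eta = 0.1
for t in range(200):
    z_look = z - eta * F(z)        # extrapolation (look-ahead) step
    z = z - eta * F(z_look)        # update with the look-ahead operator

print("distance to saddle point:", np.linalg.norm(z))  # close to 0
```

The look-ahead evaluation of the operator is what stabilises the rotation induced by the bilinear coupling; without it, the same step size diverges on this problem.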
- Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters [152.03852111442114]
Although model-agnostic meta-learning (MAML) is a very successful algorithm in meta-learning practice, it can have high computational complexity.
Our paper shows that such complexity can significantly affect the overall convergence performance of ANIL (Almost No Inner Loop), a variant that adapts only partial parameters.
arXiv Detail & Related papers (2020-06-16T19:57:48Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope (defined in the note after this entry).
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
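For context on the stationarity measure in the entry above, the following is the textbook definition of the Moreau envelope and the identity that justifies using its gradient norm; it is background material, not a restatement of the paper's analysis.

```latex
% Moreau envelope of f with smoothing parameter \lambda > 0:
\varphi_\lambda(x) = \min_{y} \Big\{ f(y) + \tfrac{1}{2\lambda}\,\|y - x\|^2 \Big\},
% whose gradient is determined by the proximal point of f:
\nabla \varphi_\lambda(x) = \tfrac{1}{\lambda}\,\big(x - \operatorname{prox}_{\lambda f}(x)\big).
% Hence a small \|\nabla \varphi_\lambda(x)\| certifies that x is close to a
% near-stationary point of f; the entry above bounds this norm at rate
% \tilde{\mathcal{O}}(t^{-1/4}).
```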