Learning Linear Models Using Distributed Iterative Hessian Sketching
- URL: http://arxiv.org/abs/2112.04101v1
- Date: Wed, 8 Dec 2021 04:07:23 GMT
- Title: Learning Linear Models Using Distributed Iterative Hessian Sketching
- Authors: Han Wang and James Anderson
- Abstract summary: We consider the problem of learning the Markov parameters of a linear system from observed data.
We show that a randomized and distributed Newton algorithm based on Hessian-sketching can produce $\epsilon$-optimal solutions.
- Score: 4.567810220723372
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This work considers the problem of learning the Markov parameters of a linear
system from observed data. Recent non-asymptotic system identification results
have characterized the sample complexity of this problem in the single and
multi-rollout setting. In both instances, the number of samples required in
order to obtain acceptable estimates can produce optimization problems with an
intractably large number of decision variables for a second-order algorithm. We
show that a randomized and distributed Newton algorithm based on
Hessian-sketching can produce $\epsilon$-optimal solutions and converges
geometrically. Moreover, the algorithm is trivially parallelizable. Our results
hold for a variety of sketching matrices and we illustrate the theory with
numerical examples.
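As a rough, hedged illustration of the core primitive, here is a minimal single-machine sketch of an iterative-Hessian-sketch Newton iteration for least squares, using Gaussian sketches; the paper's algorithm additionally distributes and averages the sketched Newton directions across workers, which is not reproduced here, and the function name is illustrative.

```python
# Minimal single-machine iterative Hessian sketch (IHS) for
# min_x ||Ax - b||^2. Each iteration takes an exact-gradient Newton-type
# step with a sketched Hessian (S A)^T (S A); the distributed variant in
# the paper would average such directions across workers.
import numpy as np

def ihs_least_squares(A, b, sketch_size, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    Atb = A.T @ b
    for _ in range(iters):
        S = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)
        SA = S @ A                           # sketched data matrix
        H = SA.T @ SA                        # sketched Hessian
        grad = A.T @ (A @ x) - Atb           # exact gradient of 0.5*||Ax - b||^2
        x = x - np.linalg.solve(H, grad)     # sketched Newton step
    return x

# sanity check against the exact least-squares solution
rng = np.random.default_rng(1)
A = rng.standard_normal((4000, 50))
b = rng.standard_normal(4000)
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(ihs_least_squares(A, b, sketch_size=400) - x_star))
```

With a sketch size a modest multiple of the number of decision variables, the iterates contract geometrically toward the exact solution, which is the behavior the abstract describes.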
Related papers
- Sketch-and-Solve: Optimized Overdetermined Least-Squares Using Randomized Numerical Linear Algebra [0.0]
This paper focuses on applying sketch-and-solve algorithms to efficiently solve the overdetermined least squares problem.
We introduce the Sketch-and-Apply (SAA-SAS) algorithm, which leverages randomized numerical linear algebra techniques to compute approximate solutions efficiently.
Our results highlight the potential of sketch-and-solve techniques in efficiently handling large-scale numerical linear algebra problems.
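For contrast, the textbook sketch-and-solve baseline (a generic version, not the SAA-SAS algorithm the paper introduces) fits in a few lines:

```python
# Generic sketch-and-solve for overdetermined least squares: compress the
# rows with a random embedding, then solve the small sketched problem once.
import numpy as np

def sketch_and_solve(A, b, sketch_size, seed=0):
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((sketch_size, A.shape[0])) / np.sqrt(sketch_size)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)  # small sketched problem
    return x
```

Unlike the iterative scheme above, this produces a fixed-accuracy approximation in one shot, with the error controlled by the sketch size.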
arXiv Detail & Related papers (2024-09-22T04:29:51Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
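The paper's fixed-rank PSD geometries are involved; as a hedged stand-in for the manifold-optimization primitive, here is Riemannian gradient ascent on the unit sphere for the leading eigenvector of a symmetric matrix:

```python
# Toy Riemannian gradient ascent on the sphere, maximizing the Rayleigh
# quotient x^T M x: project the Euclidean gradient onto the tangent space,
# step, then retract by renormalizing. Illustrative only; the paper works
# on manifolds of fixed-rank positive semi-definite matrices.
import numpy as np

def leading_eigvec(M, steps=500, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(M.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(steps):
        g = 2 * M @ x                 # Euclidean gradient of x^T M x
        rg = g - (x @ g) * x          # tangent-space projection
        x += lr * rg
        x /= np.linalg.norm(x)        # retraction back onto the sphere
    return x
```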
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Sharp Analysis of Sketch-and-Project Methods via a Connection to Randomized Singular Value Decomposition [14.453949553412821]
We develop a theoretical framework for obtaining sharp guarantees on the convergence rate of sketch-and-project methods.
We show that the convergence rate improves at least linearly with the sketch size, and even faster when the data matrix exhibits certain spectral decays.
Our experiments support the theory and demonstrate that even extremely sparse sketches exhibit the convergence properties predicted by our framework.
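A minimal sketch-and-project iteration for a consistent system Ax = b, with Gaussian sketches standing in for the sparse ones the paper analyzes, might look like:

```python
# Sketch-and-project: repeatedly project the iterate onto the solution set
# of a small sketched system S A x = S b (minimum-norm correction).
import numpy as np

def sketch_and_project(A, b, sketch_size, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    m, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        S = rng.standard_normal((sketch_size, m))
        SA = S @ A
        r = SA @ x - S @ b                  # sketched residual
        x -= SA.T @ np.linalg.lstsq(SA @ SA.T, r, rcond=None)[0]
    return x
```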
arXiv Detail & Related papers (2022-08-20T03:11:13Z)
- Fast Projected Newton-like Method for Precision Matrix Estimation under Total Positivity [15.023842222803058]
Current algorithms are designed using the block coordinate descent method or the proximal point algorithm.
We propose a novel algorithm based on the two-metric projection method, incorporating a carefully designed search direction and variable partitioning scheme.
Experimental results on synthetic and real-world datasets demonstrate that our proposed algorithm provides a significant improvement in computational efficiency compared to the state-of-the-art methods.
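As a heavily simplified stand-in for that flavor of method, the sketch below takes projected Newton steps for a nonnegativity-constrained convex quadratic; it is not the paper's precision-matrix algorithm and omits the line search a robust implementation needs.

```python
# Simplified projected Newton for min_x 0.5 x^T Q x - c^T x s.t. x >= 0
# (Q symmetric positive definite): take a Newton step in the free variables,
# freeze variables pinned at the bound, then project back onto x >= 0.
import numpy as np

def projected_newton_nonneg(Q, c, iters=50, tol=1e-10):
    x = np.zeros_like(c)
    for _ in range(iters):
        g = Q @ x - c
        active = (x <= 0) & (g > 0)           # pinned at bound, pushed outward
        free = ~active
        if not free.any():
            break
        step = np.zeros_like(x)
        step[free] = np.linalg.solve(Q[np.ix_(free, free)], g[free])
        x = np.maximum(x - step, 0.0)
        if np.linalg.norm(step) < tol:
            break
    return x
```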
arXiv Detail & Related papers (2021-12-03T14:39:10Z)
- Numerical Solution of Stiff Ordinary Differential Equations with Random Projection Neural Networks [0.0]
We propose a numerical scheme based on Random Projection Neural Networks (RPNN) for the solution of Ordinary Differential Equations (ODEs).
We show that our proposed scheme yields good numerical approximation accuracy without being affected by the stiffness, thus outperforming in some cases the ode45 and ode15s functions.
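A toy random-projection collocation scheme for the linear test equation y' = -lam*y, y(0) = 1 conveys the core idea (fixed random hidden-layer weights, one linear least-squares solve for the output weights); the paper's RPNN scheme handles general stiff systems and is more careful about the basis.

```python
# Random-projection collocation for y' = -lam*y, y(0) = 1 on [0, 1].
# The ansatz y(t) = 1 + t * (phi(t) @ w) satisfies the initial condition
# exactly, so the ODE residual is linear in the output weights w.
import numpy as np

def rpnn_ode(lam=5.0, n_features=200, n_points=400, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.uniform(-10, 10, n_features)      # fixed random hidden weights
    c = rng.uniform(-10, 10, n_features)      # fixed random hidden biases
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    phi = np.tanh(a * t + c)                  # hidden features
    dphi = a * (1 - phi**2)                   # their time derivatives
    # residual y' + lam*y = 0  =>  (phi + t*dphi + lam*t*phi) w = -lam
    G = phi + t * dphi + lam * t * phi
    w, *_ = np.linalg.lstsq(G, -lam * np.ones(n_points), rcond=None)
    return t[:, 0], 1 + t[:, 0] * (phi @ w)

t, y = rpnn_ode()
print(np.max(np.abs(y - np.exp(-5.0 * t))))  # error vs. the exact solution
```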
arXiv Detail & Related papers (2021-08-03T15:49:17Z)
- Analysis of Truncated Orthogonal Iteration for Sparse Eigenvector Problems [78.95866278697777]
We propose two variants of the Truncated Orthogonal Iteration to compute multiple leading eigenvectors with sparsity constraints simultaneously.
We then apply our algorithms to solve the sparse principal component analysis problem for a wide range of test datasets.
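Structurally, such methods interleave a matrix product, a truncation, and an orthogonalization; a hedged sketch with a naive keep-top-k truncation rule (not necessarily the rule used in the paper's two variants) is:

```python
# Orthogonal iteration with per-column hard thresholding, keeping only the
# `keep` largest-magnitude entries of each column before re-orthogonalizing.
import numpy as np

def truncated_orth_iter(A, k, keep, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.linalg.qr(rng.standard_normal((A.shape[0], k)))[0]
    for _ in range(iters):
        Y = A @ Q
        for j in range(k):
            small = np.argsort(np.abs(Y[:, j]))[:-keep]
            Y[small, j] = 0.0                 # truncate small entries
        Q = np.linalg.qr(Y)[0]                # re-orthogonalize
    return Q
```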
arXiv Detail & Related papers (2021-03-24T23:11:32Z)
- Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
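A generic stochastic extragradient loop on a bilinear toy saddle point min_x max_y x^T M y illustrates this style of method; the barycenter reformulation itself is not reproduced here.

```python
# Stochastic extragradient for min_x max_y x^T M y with noisy gradients:
# an extrapolation half-step followed by the actual update.
import numpy as np

def stochastic_extragradient(M, steps=2000, lr=0.05, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(M.shape[0])
    y = rng.standard_normal(M.shape[1])
    for _ in range(steps):
        gx = M @ y + noise * rng.standard_normal(x.shape)
        gy = M.T @ x + noise * rng.standard_normal(y.shape)
        xh, yh = x - lr * gx, y + lr * gy       # extrapolation step
        gx = M @ yh + noise * rng.standard_normal(x.shape)
        gy = M.T @ xh + noise * rng.standard_normal(y.shape)
        x, y = x - lr * gx, y + lr * gy         # update at extrapolated point
    return x, y
```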
arXiv Detail & Related papers (2020-06-11T19:40:38Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
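The sketch-to-precondition idea behind this line of work can be sketched as follows; this is the unaccelerated skeleton with a Gaussian embedding, whereas the paper's algorithms add momentum and tuned step sizes.

```python
# Sketch-and-precondition: factor S A = Q R once, then run plain gradient
# descent on 0.5*||A R^{-1} z - b||^2. Since A R^{-1} is well conditioned
# with high probability, a unit step size converges geometrically.
import numpy as np

def sketch_precond_gd(A, b, sketch_size, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    S = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)
    R = np.linalg.qr(S @ A, mode='r')           # d x d triangular factor
    z = np.zeros(d)
    for _ in range(iters):
        r = A @ np.linalg.solve(R, z) - b       # residual in original space
        z -= np.linalg.solve(R.T, A.T @ r)      # preconditioned gradient step
    return np.linalg.solve(R, z)
```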
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
- Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
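The multiplicative-weights primitive itself is short; a generic version (not the full node-wise wrapper of Klivans and Meka) is:

```python
# Hedge / multiplicative weights over p experts: maintain weights, predict
# with the normalized distribution, and exponentially down-weight losses.
import numpy as np

def multiplicative_weights(losses, eta=0.1):
    """losses: (T, p) array of per-round expert losses in [0, 1]."""
    w = np.ones(losses.shape[1])
    total = 0.0
    for loss in losses:
        prob = w / w.sum()            # current distribution over experts
        total += prob @ loss          # expected loss this round
        w *= np.exp(-eta * loss)      # multiplicative update
    return total, w
```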
arXiv Detail & Related papers (2020-02-20T10:50:58Z)
- Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
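A hedged construction of the SRHT itself, using scipy's explicit Hadamard matrix (fast implementations instead apply the transform in O(n log n) time without ever forming it), is:

```python
# Subsampled randomized Hadamard transform: sign-flip the rows, apply an
# orthonormal Hadamard matrix, subsample rows, and rescale.
import numpy as np
from scipy.linalg import hadamard

def srht_sketch(A, sketch_size, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]                        # must be a power of two here
    H = hadamard(n) / np.sqrt(n)          # orthonormal Hadamard matrix
    signs = rng.choice([-1.0, 1.0], size=n)
    rows = rng.choice(n, size=sketch_size, replace=False)
    return np.sqrt(n / sketch_size) * H[rows] @ (signs[:, None] * A)

SA = srht_sketch(np.random.default_rng(1).standard_normal((1024, 20)), 200)
print(SA.shape)  # (200, 20)
```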
arXiv Detail & Related papers (2020-02-03T16:17:50Z)