Matrix Diagonalization as a Board Game: Teaching an Eigensolver the
Fastest Path to Solution
- URL: http://arxiv.org/abs/2306.10075v2
- Date: Wed, 21 Jun 2023 05:05:42 GMT
- Title: Matrix Diagonalization as a Board Game: Teaching an Eigensolver the
Fastest Path to Solution
- Authors: Phil Romero, Manish Bhattarai, Christian F. A. Negre, Anders M. N.
Niklasson, Adetokunbo Adedoyin
- Abstract summary: Matrix diagonalization is a cornerstone of numerous fields of scientific computing.
We show how reinforcement learning, using the AlphaZero framework, can accelerate Jacobi matrix diagonalizations.
Our findings highlight the opportunity to use machine learning as a promising tool to improve the performance of numerical linear algebra.
- Score: 2.239917051803692
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Matrix diagonalization is a cornerstone of numerous fields of scientific
computing. Diagonalizing a matrix to solve an eigenvalue problem requires a
sequential path of iterations that eventually reaches a sufficiently converged
and accurate solution for all the eigenvalues and eigenvectors. This typically
translates into a high computational cost. Here we demonstrate how
reinforcement learning, using the AlphaZero framework, can accelerate Jacobi
matrix diagonalizations by viewing the selection of the fastest path to
solution as a board game. To demonstrate the viability of our approach we apply
the Jacobi diagonalization algorithm to symmetric Hamiltonian matrices that
appear in quantum chemistry calculations. We find that a significant
acceleration can often be achieved. Our findings highlight the opportunity to
use machine learning as a promising tool to improve the performance of
numerical linear algebra.
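To make the board-game framing concrete, the sketch below (an illustration, not the authors' code) implements one Jacobi "move": a pivot pair $(p, q)$ is selected and the off-diagonal element $A_{pq}$ is zeroed by a Givens rotation. The classical baseline policy greedy_pivot picks the largest off-diagonal element; the paper's AlphaZero agent learns to replace exactly this selection step.

```python
import numpy as np

def jacobi_step(A, pick_pivot):
    """One 'move': zero out A[p, q] with a Givens rotation.

    pick_pivot(A) -> (p, q) is the policy being chosen/learned; the
    classical baseline below picks the largest off-diagonal entry.
    """
    p, q = pick_pivot(A)
    if np.isclose(A[p, q], 0.0):
        return A
    # Rotation angle that annihilates A[p, q] for symmetric A:
    # tan(2*theta) = 2*A[p, q] / (A[q, q] - A[p, p]).
    theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
    c, s = np.cos(theta), np.sin(theta)
    G = np.eye(A.shape[0])
    G[p, p] = G[q, q] = c
    G[p, q], G[q, p] = s, -s
    return G.T @ A @ G

def greedy_pivot(A):
    # Classical heuristic: largest |off-diagonal| entry.
    off = np.abs(A - np.diag(np.diag(A)))
    return np.unravel_index(np.argmax(off), A.shape)

# Diagonalize a random symmetric matrix by repeated moves.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = (A + A.T) / 2
while np.abs(A - np.diag(np.diag(A))).max() > 1e-10:
    A = jacobi_step(A, greedy_pivot)
print(np.sort(np.diag(A)))  # converged eigenvalues on the diagonal
```

The number of rotations needed to converge depends entirely on the pivot policy, which is what makes the selection problem amenable to a game-playing agent.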
Related papers
- A quantum algorithm for solving eigenproblem of the Laplacian matrix of
a fully connected weighted graph [4.045204834863644]
We propose an efficient quantum algorithm to solve the eigenproblem of the Laplacian matrix of a fully connected weighted graph.
Specifically, we adopt the optimal Hamiltonian simulation technique based on the block-encoding framework.
We also show that our algorithm can be extended to solve the eigenproblem of symmetric (non-symmetric) normalized Laplacian matrix.
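As a point of reference, the classical version of this eigenproblem is straightforward to state in code; the sketch below builds the (normalized) Laplacian of a fully connected weighted graph and diagonalizes it with a dense solver. It illustrates the problem only, not the paper's block-encoding-based quantum algorithm.

```python
import numpy as np

# Classical reference for the problem the quantum algorithm targets.
rng = np.random.default_rng(1)
n = 6
W = rng.random((n, n)); W = (W + W.T) / 2   # fully connected weighted graph
np.fill_diagonal(W, 0.0)
deg = W.sum(axis=1)
L = np.diag(deg) - W                         # graph Laplacian L = D - W
D_inv_sqrt = np.diag(deg ** -0.5)
L_sym = D_inv_sqrt @ L @ D_inv_sqrt          # symmetric normalized Laplacian
evals, evecs = np.linalg.eigh(L_sym)         # eigenpairs, cost O(n^3) classically
print(evals)
```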
arXiv Detail & Related papers (2022-03-28T02:24:08Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
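The coupling described above can be illustrated with a stochastic diagonal estimator: for Rademacher probes $v$, the diagonal of $A^{-1}$ is approximated by averaging $v \odot (A^{-1}v)$, with each solve done by conjugate gradients. The sketch below shows that primitive under these assumptions; it is not the paper's full sparse Bayesian learning scheme.

```python
import numpy as np
from scipy.sparse.linalg import cg

def diag_of_inverse(A, n_probes=200, rng=None):
    """Estimate diag(A^{-1}) without ever forming A^{-1}.

    Probing estimator: diag(B) ~= E[v * (B v)] for Rademacher probes v,
    where B v = A^{-1} v is computed iteratively by conjugate gradients.
    """
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    acc = np.zeros(n)
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)
        x, info = cg(A, v)          # solve A x = v matrix-free
        acc += v * x
    return acc / n_probes

# Usage: SPD test matrix; compare against the exact diagonal.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)       # well-conditioned SPD matrix
est = diag_of_inverse(A)
print(np.max(np.abs(est - np.diag(np.linalg.inv(A)))))
```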
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Quantum algorithms for matrix operations and linear systems of equations [65.62256987706128]
We propose quantum algorithms for matrix operations using the "Sender-Receiver" model.
These quantum protocols can be used as subroutines in other quantum schemes.
arXiv Detail & Related papers (2022-02-10T08:12:20Z)
- Fast Differentiable Matrix Square Root and Inverse Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root and the inverse square root.
For the forward propagation, one method is to use Matrix Taylor Polynomial (MTP), and the other is to use Matrix Padé Approximants (MPA).
A series of numerical tests show that both methods yield considerable speed-up compared with the SVD or the NS iteration.
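A minimal sketch of the MTP idea for a symmetric positive-definite matrix: normalize $A$ by its Frobenius norm so the Taylor series of $(I - Z)^{1/2}$ converges, then truncate it. The degree and normalization below are illustrative assumptions; the paper's version is differentiable end-to-end inside a deep-learning framework.

```python
import numpy as np
from scipy.special import binom

def sqrtm_mtp(A, degree=12):
    """Matrix Taylor Polynomial (MTP) sketch of sqrt(A) for SPD A.

    With Z = I - A/||A||_F the spectral radius of Z is < 1, so
    (I - Z)^{1/2} = sum_k binom(1/2, k) (-Z)^k converges; truncate it
    and undo the normalization.
    """
    n = A.shape[0]
    norm = np.linalg.norm(A, "fro")
    Z = np.eye(n) - A / norm
    S = np.zeros_like(A)
    Zk = np.eye(n)                       # running power Z**k
    for k in range(degree + 1):
        S += binom(0.5, k) * (-1.0) ** k * Zk
        Zk = Zk @ Z
    return np.sqrt(norm) * S

# Usage: check the residual ||S S - A|| against the input scale.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)
S = sqrtm_mtp(A)
print(np.linalg.norm(S @ S - A) / np.linalg.norm(A))
```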
arXiv Detail & Related papers (2022-01-29T10:00:35Z)
- Block-encoding dense and full-rank kernels using hierarchical matrices: applications in quantum numerical linear algebra [6.338178373376447]
We propose a block-encoding scheme of the hierarchical matrix structure on a quantum computer.
Our method can improve the runtime of solving quantum linear systems of dimension $N$ to $O(\kappa \operatorname{polylog}(\frac{N}{\varepsilon}))$.
arXiv Detail & Related papers (2022-01-27T05:24:02Z)
- Fast Differentiable Matrix Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root.
For the forward propagation, one method is to use Matrix Taylor Polynomial (MTP).
The other method is to use Matrix Padé Approximants (MPA).
arXiv Detail & Related papers (2022-01-21T12:18:06Z)
- Learning in High-Dimensional Feature Spaces Using ANOVA-Based Fast Matrix-Vector Multiplication [0.0]
The kernel matrix is typically dense and large-scale. Depending on the dimension of the feature space, even computing all of its entries in reasonable time becomes a challenging task.
We propose the use of an ANOVA kernel, where we construct several kernels based on lower-dimensional feature spaces for which we provide fast algorithms realizing the matrix-vector products.
Based on a feature grouping approach, we then show how the fast matrix-vector products can be embedded into a learning method, choosing kernel ridge regression with a preconditioned conjugate gradient solver; a sketch of this setup follows below.
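The sketch below illustrates the grouping idea under simplifying assumptions: an additive (ANOVA-style) kernel summed over low-dimensional feature groups, with the kernel ridge regression system solved matrix-free by conjugate gradients. The dense per-group products here stand in for the paper's fast matrix-vector multiplication algorithms.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, d = 300, 9
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)
groups = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]   # low-dimensional feature groups

def gauss_kernel(Xg, sigma=1.0):
    # Dense Gaussian kernel on one feature group (placeholder for a
    # fast matrix-vector product routine).
    sq = ((Xg[:, None, :] - Xg[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

Ks = [gauss_kernel(X[:, g]) for g in groups]
lam = 1e-2

def matvec(v):
    # (sum_g K_g + lam * I) v; the summed kernel matrix is never formed.
    return sum(K @ v for K in Ks) + lam * v

A = LinearOperator((n, n), matvec=matvec)
alpha, info = cg(A, y)                        # KRR coefficients
print(info, np.linalg.norm(matvec(alpha) - y))
```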
arXiv Detail & Related papers (2021-11-19T10:29:39Z)
- Fundamental Machine Learning Routines as Quantum Algorithms on a Superconducting Quantum Computer [0.0]
The Harrow-Hassidim-Lloyd algorithm is intended for solving systems of linear equations on quantum devices.
We present a numerical study of the algorithm's performance when its caveats (conditions on the input) are not perfectly satisfied.
arXiv Detail & Related papers (2021-09-17T15:22:06Z)
- Sparse Factorization of Large Square Matrices [10.94053598642913]
In this paper, we propose to approximate a large square matrix with a product of sparse full-rank matrices.
In the approximation, our method needs only $N(\log N)^2$ non-zero numbers for an $N \times N$ full matrix.
We show that our method gives a better approximation when the approximated matrix is sparse and high-rank.
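One way to picture the setup: fix sparsity masks for the factors and fit them to a target matrix by projected gradient descent. The two-factor form, random masks, and step size below are illustrative assumptions; the paper uses more factors with a structured sparsity pattern to reach the $N(\log N)^2$ non-zero budget.

```python
import numpy as np

# Minimal sketch: approximate M by a product of sparse factors with fixed
# support, fitted by projected gradient descent on ||W1 W2 - M||_F^2.
rng = np.random.default_rng(0)
N, density = 64, 0.2
M = rng.standard_normal((N, N))
mask1 = rng.random((N, N)) < density
mask2 = rng.random((N, N)) < density
W1 = rng.standard_normal((N, N)) * mask1 * 0.1
W2 = rng.standard_normal((N, N)) * mask2 * 0.1

lr = 1e-3
for _ in range(2000):
    R = W1 @ W2 - M                    # residual
    W1 -= lr * (R @ W2.T) * mask1      # gradient step, projected to support
    W2 -= lr * (W1.T @ R) * mask2
print(np.linalg.norm(W1 @ W2 - M) / np.linalg.norm(M))
```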
arXiv Detail & Related papers (2021-09-16T18:42:21Z)
- Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
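For intuition, the sketch below shows a generic primitive that such dimensionality-reduction methods build on: approximate matrix multiplication with a Gaussian sketch. It is a baseline illustration, not the paper's non-PSD-specific construction.

```python
import numpy as np

# Approximate matrix multiplication: (S A)^T (S B) ~= A^T B, since the
# Gaussian sketch S with entries N(0, 1/k) satisfies E[S^T S] = I.
rng = np.random.default_rng(0)
n, d, k = 2000, 30, 200
A = rng.standard_normal((n, d))
B = rng.standard_normal((n, d))
S = rng.standard_normal((k, n)) / np.sqrt(k)   # sketch: R^n -> R^k
approx = (S @ A).T @ (S @ B)                   # computed in the small space
exact = A.T @ B
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```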
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
- Sketching Transformed Matrices with Applications to Natural Language Processing [76.6222695417524]
We propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix.
We show that our approach obtains small error and is efficient in both space and time.
arXiv Detail & Related papers (2020-02-23T03:07:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.