Trimmed Sampling Algorithm for the Noisy Generalized Eigenvalue Problem
- URL: http://arxiv.org/abs/2209.02083v2
- Date: Thu, 30 Mar 2023 15:34:29 GMT
- Title: Trimmed Sampling Algorithm for the Noisy Generalized Eigenvalue Problem
- Authors: Caleb Hicks and Dean Lee
- Abstract summary: Solving the generalized eigenvalue problem is a useful method for finding energy eigenstates of large quantum systems.
The problem is especially bad when matrix elements are evaluated using stochastic methods and have significant error bars.
We introduce the trimmed sampling algorithm in order to solve this problem.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Solving the generalized eigenvalue problem is a useful method for finding
energy eigenstates of large quantum systems. It uses projection onto a set of
basis states which are typically not orthogonal. One needs to invert a matrix
whose entries are inner products of the basis states, and the process is
unfortunately susceptible to even small errors. The problem is especially bad
when matrix elements are evaluated using stochastic methods and have
significant error bars. In this work, we introduce the trimmed sampling
algorithm in order to solve this problem. Using the framework of Bayesian
inference, we sample prior probability distributions determined by uncertainty
estimates of the various matrix elements and likelihood functions composed of
physics-informed constraints. The result is a probability distribution for the
eigenvectors and observables which automatically comes with a reliable estimate
of the error and performs far better than standard regularization methods. The
method should have immediate use for a wide range of applications involving
classical and quantum computing calculations of large quantum systems.
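A minimal classical sketch of the sampling idea described above, assuming synthetic inputs (the matrices, noise level, and the positivity constraint below are all illustrative, not the authors' implementation): perturb the Hamiltonian and norm matrices within their error bars, keep only samples passing a physics-informed constraint, and read off the eigenvalue distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3x3 Hamiltonian matrix H and norm (overlap) matrix N,
# with a hypothetical uniform elementwise uncertainty estimate sigma.
H = np.array([[1.0, 0.3, 0.1],
              [0.3, 2.0, 0.2],
              [0.1, 0.2, 3.0]])
N = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
sigma = 0.05

def sample_symmetric(M):
    """Draw a symmetric matrix from an elementwise Gaussian prior around M."""
    P = rng.normal(M, sigma)
    return (P + P.T) / 2

accepted = []
for _ in range(5000):
    Hs = sample_symmetric(H)
    Ns = sample_symmetric(N)
    # Physics-informed constraint (a hard-cutoff likelihood):
    # the sampled norm matrix must be positive definite.
    if np.linalg.eigvalsh(Ns).min() <= 1e-8:
        continue
    # Reduce H v = E N v to a standard problem via Cholesky: N = L L^T.
    L = np.linalg.cholesky(Ns)
    Linv = np.linalg.inv(L)
    accepted.append(np.linalg.eigvalsh(Linv @ Hs @ Linv.T)[0])

# The accepted samples form a distribution for the ground-state energy;
# its spread serves as the automatic error estimate mentioned above.
e0_median = np.median(accepted)
e0_err = np.std(accepted)
```

The constraint here is a crude stand-in for the paper's physics-informed likelihood functions; the point is only that rejecting unphysical samples regularizes the otherwise ill-conditioned matrix inversion.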
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z) - Randomized semi-quantum matrix processing [0.0]
We present a hybrid quantum-classical framework for simulating generic matrix functions.
The method is based on randomization over the Chebyshev approximation of the target function.
We prove average-depth advantages, including quadratic speed-ups in costly parameters.
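The classical core of randomizing over a Chebyshev approximation can be sketched for a scalar toy function (the paper's setting is matrix functions on quantum hardware; everything below, including the target function and degree, is an illustrative classical analogue): truncate the target into a Chebyshev series, then sample series terms with probability proportional to their coefficients and average the importance-weighted terms.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(1)

# Toy target: approximate exp(x) on [-1, 1] by a truncated Chebyshev series.
deg = 12
series = C.Chebyshev.interpolate(np.exp, deg)
coeffs = series.coef

# Randomization over the series: sample term indices k with probability
# proportional to |c_k|, then average the importance-weighted terms.
x = 0.3
Tvals = np.array([C.chebval(x, np.eye(deg + 1)[k]) for k in range(deg + 1)])
probs = np.abs(coeffs) / np.abs(coeffs).sum()
idx = rng.choice(deg + 1, size=5000, p=probs)
estimate = np.mean(coeffs[idx] * Tvals[idx] / probs[idx])
```

The estimator is unbiased for the truncated series value, so with enough samples it lands near exp(0.3) without ever evaluating the full series per query.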
arXiv Detail & Related papers (2023-07-21T18:00:28Z) - Quantum Worst-Case to Average-Case Reductions for All Linear Problems [66.65497337069792]
We study the problem of designing worst-case to average-case reductions for quantum algorithms.
We provide an explicit and efficient transformation of quantum algorithms that are only correct on a small fraction of their inputs into ones that are correct on all inputs.
arXiv Detail & Related papers (2022-12-06T22:01:49Z) - Infinite-Dimensional Sparse Learning in Linear System Identification [0.2867517731896504]
This paper proposes an infinite-dimensional sparse learning algorithm based on atomic norm regularization.
The difficulty in solving the problem lies in the fact that there are an infinite number of possible atomic models.
arXiv Detail & Related papers (2022-03-28T13:18:48Z) - Test Set Sizing Via Random Matrix Theory [91.3755431537592]
This paper uses techniques from Random Matrix Theory to find the ideal training-testing data split for a simple linear regression.
It defines "ideal" as satisfying an integrity metric: the empirical model error should equal the actual measurement noise.
This paper is the first to solve for the training and test size for any model in a way that is truly optimal.
arXiv Detail & Related papers (2021-12-11T13:18:33Z) - Quantum algorithms for the generalized eigenvalue problem [6.111964049119244]
Generalized eigenvalue (GE) problems are of particular importance in various areas of science, engineering, and machine learning.
We present a variational quantum algorithm for finding the desired generalized eigenvalue of the GE problem, $\mathcal{A}|\psi\rangle=\lambda\mathcal{B}|\psi\rangle$, by choosing suitable loss functions.
We numerically implement our algorithms to conduct a 2-qubit simulation and successfully find the generalized eigenvalues of the matrix pencil $(\mathcal{A},\mathcal{B})$.
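For a small matrix pencil, the generalized eigenvalue problem A|ψ⟩ = λB|ψ⟩ can be solved classically as a reference, e.g. with SciPy's symmetric-pencil solver (the 2x2 matrices below are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.linalg import eigh  # classical reference solver for A v = lambda B v

# Illustrative symmetric matrix pencil (A, B); B must be positive definite.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.2],
              [0.2, 1.0]])

# eigh with a second matrix argument solves the generalized problem directly.
evals, evecs = eigh(A, B)

# Verify the defining relation A v = lambda B v for each eigenpair.
for lam, v in zip(evals, evecs.T):
    assert np.allclose(A @ v, lam * B @ v)
```

Such a direct solve is what a variational quantum algorithm's loss-function minimization would be benchmarked against at small qubit counts.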
arXiv Detail & Related papers (2021-12-05T12:12:49Z) - A theory of quantum subspace diagonalization [3.248953303528541]
We show that a quantum subspace diagonalization algorithm can accurately compute the smallest eigenvalue of a large Hermitian matrix.
Our results can be of independent interest to solving eigenvalue problems outside the context of quantum computation.
arXiv Detail & Related papers (2021-10-14T16:09:07Z) - Sparse Quadratic Optimisation over the Stiefel Manifold with Application
to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
arXiv Detail & Related papers (2021-09-30T19:17:35Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Bootstrapping the error of Oja's Algorithm [16.017328736786922]
We consider the problem of quantifying uncertainty for the estimation error of the leading eigenvector from Oja's algorithm for streaming principal component analysis.
We propose a multiplier bootstrap algorithm that may be updated in an online manner.
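Oja's streaming update itself, whose leading-eigenvector error the bootstrap quantifies, can be sketched on synthetic data (the dimension, covariance, and step-size schedule below are illustrative; the paper's multiplier bootstrap layer is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stream with a dominant principal direction along e1.
d, n = 5, 20000
cov = np.diag([4.0, 1.0, 1.0, 1.0, 1.0])
X = rng.multivariate_normal(np.zeros(d), cov, size=n)

# Oja's algorithm: one pass over the stream, rank-one update per sample.
w = rng.normal(size=d)
w /= np.linalg.norm(w)
for t, x in enumerate(X, start=1):
    eta = 1.0 / (t + 100)      # decaying step size (illustrative schedule)
    w += eta * x * (x @ w)     # Oja update: w <- w + eta * x x^T w
    w /= np.linalg.norm(w)     # renormalize to stay on the unit sphere

# Overlap with the true leading eigenvector e1 (up to sign).
alignment = abs(w[0])
```

The bootstrap in the paper would rerun such updates with random multiplier weights, updated online, to put error bars on `alignment`-style quantities.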
arXiv Detail & Related papers (2021-06-28T17:27:26Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.