Enhanced Convergence of Quantum Typicality using a Randomized Low-Rank
Approximation
- URL: http://arxiv.org/abs/2102.02293v2
- Date: Wed, 17 Feb 2021 18:21:04 GMT
- Title: Enhanced Convergence of Quantum Typicality using a Randomized Low-Rank
Approximation
- Authors: Phillip Weinberg
- Abstract summary: We present a method to reduce the variance of trace estimators used in quantum typicality (QT) methods via a randomized low-rank approximation.
The trace can be evaluated with higher accuracy in the low-rank subspace while using the QT estimator to approximate the trace in the complementary subspace.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a method to reduce the variance of stochastic trace estimators
used in quantum typicality (QT) methods via a randomized low-rank approximation
of the finite-temperature density matrix $e^{-\beta H}$. The trace can be
evaluated with higher accuracy in the low-rank subspace while using the QT
estimator to approximate the trace in the complementary subspace. We present
two variants of the trace estimator and demonstrate their efficacy using
numerical experiments. The experiments show that the low-rank approximation
outperforms the standard QT trace estimator at moderate to low temperatures. We
argue this is because the low-rank approximation accurately represents the
density matrix at low temperatures, allowing for accurate results for the
trace.
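The hybrid strategy described in the abstract, an exact trace over a low-rank subspace plus a stochastic quantum-typicality-style estimate over its orthogonal complement, can be illustrated with a small NumPy sketch. This is a toy illustration under stated assumptions, not the paper's algorithm: the low-rank subspace is taken from an exact eigendecomposition rather than a randomized range finder, and the matrix, sizes, and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "Hamiltonian": a random real symmetric matrix (hypothetical stand-in for H)
n = 200
A = rng.standard_normal((n, n))
H = (A + A.T) / 2.0
beta = 2.0

# Exact reference value: Tr e^{-beta H}
evals, evecs = np.linalg.eigh(H)        # eigenvalues in ascending order
exact = np.exp(-beta * evals).sum()

# Low-rank part: trace evaluated exactly on the subspace spanned by the
# k lowest-energy eigenvectors (an idealized stand-in for the paper's
# randomized low-rank approximation of e^{-beta H})
k = 20
V = evecs[:, :k]
trace_lowrank = np.exp(-beta * evals[:k]).sum()

# Stochastic QT-style (Hutchinson) estimate of the trace on the orthogonal
# complement: each random vector is deflated against V before sampling
num_samples = 50
acc = 0.0
for _ in range(num_samples):
    z = rng.standard_normal(n)
    z -= V @ (V.T @ z)                  # project out the low-rank subspace
    w = evecs @ (np.exp(-beta * evals) * (evecs.T @ z))   # apply e^{-beta H}
    acc += z @ w
trace_complement = acc / num_samples

estimate = trace_lowrank + trace_complement
```

Because the Boltzmann weights are dominated by the low-energy states at large beta, the exactly treated subspace captures most of the trace, and the stochastic part only has to resolve a small remainder; this is the variance-reduction mechanism the abstract describes.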
Related papers
- Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z)
- Statistical Efficiency of Score Matching: The View from Isoperimetry [96.65637602827942]
We show a tight connection between statistical efficiency of score matching and the isoperimetric properties of the distribution being estimated.
We formalize these results both in the finite-sample regime and in the asymptotic regime.
arXiv Detail & Related papers (2022-10-03T06:09:01Z)
- Multiclass histogram-based thresholding using kernel density estimation and scale-space representations [0.0]
We present a new method for multiclass thresholding of a histogram based on nonparametric Kernel Density (KD) estimation.
The method compares the number of extracted minima of the KD estimate with the number of requested clusters minus one.
We verify the method using synthetic histograms with known threshold values and using the histogram of real X-ray computed tomography images.
arXiv Detail & Related papers (2022-02-10T01:03:43Z)
- Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
arXiv Detail & Related papers (2021-12-29T18:46:52Z)
- Accuracy of the typicality approach using Chebyshev polynomials [0.0]
Trace estimators make it possible to approximate thermodynamic equilibrium observables with astonishing accuracy.
Here we report on an alternative approach which employs a Chebyshev-polynomial expansion of the exponential Boltzmann weights.
This method also turns out to be very accurate in general, but shows systematic inaccuracies at low temperatures.
arXiv Detail & Related papers (2021-04-27T14:23:36Z)
- Parameterized Temperature Scaling for Boosting the Expressive Power in Post-Hoc Uncertainty Calibration [57.568461777747515]
We introduce a novel calibration method, Parametrized Temperature Scaling (PTS).
We demonstrate that the performance of accuracy-preserving state-of-the-art post-hoc calibrators is limited by their intrinsic expressive power.
We show with extensive experiments that our novel accuracy-preserving approach consistently outperforms existing algorithms across a large number of model architectures, datasets and metrics.
arXiv Detail & Related papers (2021-02-24T10:18:30Z)
- Low-rank Characteristic Tensor Density Estimation Part I: Foundations [38.05393186002834]
We propose a novel approach that builds upon tensor factorization tools.
In order to circumvent the curse of dimensionality, we introduce a low-rank model of this characteristic tensor.
We demonstrate the very promising performance of the proposed method using several measured datasets.
arXiv Detail & Related papers (2020-08-27T18:06:19Z)
- Mean-squared-error-based adaptive estimation of pure quantum states and unitary transformations [0.0]
We propose a method to estimate pure quantum states of a single qudit with high accuracy.
Our method is based on the minimization of the squared error between the complex probability amplitudes of the unknown state and its estimate.
We show that our estimation procedure can be easily extended to estimate unknown unitary transformations acting on a single qudit.
arXiv Detail & Related papers (2020-08-23T00:32:10Z)
- Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate a more stable and better performing training in deep convolutional models.
arXiv Detail & Related papers (2020-06-04T21:51:21Z)
- Minimax Optimal Estimation of KL Divergence for Continuous Distributions [56.29748742084386]
Estimating Kullback-Leibler divergence from independent and identically distributed samples is an important problem in various domains.
One simple and effective estimator is based on the k nearest neighbor distances between these samples.
arXiv Detail & Related papers (2020-02-26T16:37:37Z)
- Oracle Lower Bounds for Stochastic Gradient Sampling Algorithms [39.746670539407084]
We consider the problem of sampling from a strongly log-concave density in $\mathbb{R}^d$.
We prove an information-theoretic lower bound on the number of queries to the gradient of the log-density that are needed.
arXiv Detail & Related papers (2020-02-01T23:46:35Z)
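The k-nearest-neighbor KL estimator mentioned in the "Minimax Optimal Estimation of KL Divergence" entry above can be sketched in a few lines. This is a hypothetical brute-force illustration of a Wang-Kulkarni-Verdu-style estimator in one dimension, not code from any of the listed papers; the function name and sample sizes are chosen for the example.

```python
import numpy as np

def knn_kl_divergence(x, y, k=1):
    """Estimate KL(P||Q) from 1-D i.i.d. samples x ~ P and y ~ Q using
    k-nearest-neighbor distances (brute-force sketch, dimension d = 1)."""
    n, m = len(x), len(y)
    total = 0.0
    for xi in x:
        # k-th NN distance within x, skipping the zero self-distance
        rho = np.partition(np.abs(x - xi), k)[k]
        # k-th NN distance from xi to the y sample
        nu = np.partition(np.abs(y - xi), k - 1)[k - 1]
        total += np.log(nu / rho)
    return total / n + np.log(m / (n - 1))

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=2000)   # P = N(0, 1)
y = rng.normal(1.0, 1.0, size=2000)   # Q = N(1, 1); true KL(P||Q) = 0.5
```

For larger samples or higher dimensions one would replace the brute-force distance scans with a spatial index such as a KD-tree; the estimator itself is biased at finite sample sizes but consistent.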
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.