Shot-noise reduction for lattice Hamiltonians
- URL: http://arxiv.org/abs/2410.21251v2
- Date: Mon, 04 Nov 2024 14:01:00 GMT
- Title: Shot-noise reduction for lattice Hamiltonians
- Authors: Timo Eckstein, Refik Mansuroglu, Stefan Wolf, Ludwig Nützel, Stephan Tasler, Martin Kliesch, Michael J. Hartmann
- Abstract summary: Efficiently estimating energy expectation values of lattice Hamiltonians on quantum computers is a serious challenge.
We introduce geometric partitioning as a scalable alternative.
We show how the sampling number improvement translates to imperfect eigenstate improvements.
- Score: 0.7852714805965528
- License:
- Abstract: Efficiently estimating energy expectation values of lattice Hamiltonians on quantum computers is a serious challenge, where established techniques can require excessive sample numbers. Here we introduce geometric partitioning as a scalable alternative. It splits the Hamiltonian into subsystems that extend over multiple lattice sites, for which transformations between their local eigenbasis and the computational basis can be efficiently found. This allows us to reduce the number of measurements, as we sample from a more concentrated distribution without diagonalizing the problem. For systems in an energy eigenstate, we prove a lower bound on the sampling number improvement over the "naive" mutually commuting local operator grouping, which grows with the considered subsystem size, consistently showing an advantage for our geometric partitioning strategy. Notably, our lower bounds do not decrease but increase for more correlated states (Theorem 1). For states that are weakly isotropically perturbed around an eigenstate, we show how the sampling number improvement translates to imperfect eigenstate improvements, namely measuring close to the true eigenbasis already for smaller perturbations (Theorem 2). We illustrate our findings on multiple two-dimensional lattice models, including the transverse-field XY and Ising models as well as the Fermi-Hubbard model.
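The grouping comparison described in the abstract can be illustrated with a small numerical sketch (an illustrative reconstruction, not the authors' code; the model, block sizes, and variable names are chosen for the example): a 4-site transverse-field Ising chain is split into two 2-site blocks plus the coupling term across the cut, and the shot-noise variance of measuring each block in its own eigenbasis is computed alongside the naive grouping into the two mutually commuting families (all ZZ terms, all X terms).

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

n = 4  # 4-site transverse-field Ising chain, open boundaries

def op(site_ops):
    """Tensor product placing the given 2x2 operators on their sites."""
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, site_ops.get(i, I2))
    return out

# H = -sum_i Z_i Z_{i+1} - g sum_i X_i at the critical point g = 1
g = 1.0
H = sum(-op({i: Z, i + 1: Z}) for i in range(n - 1))
H = H + sum(-g * op({i: X}) for i in range(n))

# Ground state (an energy eigenstate, as in Theorem 1 of the paper)
w, v = np.linalg.eigh(H)
psi = v[:, 0]

def var(O, state):
    """Shot-noise variance of measuring O in its eigenbasis on `state`."""
    e = state @ O @ state
    return state @ O @ O @ state - e**2

# Naive grouping: two mutually commuting families, measured separately.
G_zz = sum(-op({i: Z, i + 1: Z}) for i in range(n - 1))
G_x = sum(-g * op({i: X}) for i in range(n))
naive_var = var(G_zz, psi) + var(G_x, psi)

# Geometric partitioning (sketch): two 2-site blocks plus the cut term,
# each measured in its own eigenbasis.
H_A = -op({0: Z, 1: Z}) - g * (op({0: X}) + op({1: X}))
H_B = -op({2: Z, 3: Z}) - g * (op({2: X}) + op({3: X}))
H_cut = -op({1: Z, 2: Z})
geo_var = var(H_A, psi) + var(H_B, psi) + var(H_cut, psi)

print("naive grouping variance:", naive_var)
print("geometric partitioning variance:", geo_var)
```

Since the state is an exact eigenstate, the variance of the full Hamiltonian vanishes; all residual shot noise comes from splitting it into separately measured groups, which is what the two totals compare.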
Related papers
- Optimizing random local Hamiltonians by dissipation [44.99833362998488]
We prove that a simplified quantum Gibbs sampling algorithm achieves an $\Omega(\frac{1}{k})$-fraction approximation of the optimum.
Our results suggest that finding low-energy states for sparsified (quasi)local spin and fermionic models is quantumly easy but classically nontrivial.
arXiv Detail & Related papers (2024-11-04T20:21:16Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Matrix product state approximations to quantum states of low energy variance [0.3277163122167433]
We show how to efficiently simulate pure quantum states in one dimensional systems with finite energy density and vanishingly small energy fluctuations.
We prove that there exist states with a very narrow support in the bulk of the spectrum that still have moderate entanglement entropy.
arXiv Detail & Related papers (2023-07-11T12:10:15Z)
- Lower Bounding Ground-State Energies of Local Hamiltonians Through the Renormalization Group [0.0]
We show how to formulate a tractable convex relaxation of the set of feasible local density matrices of a quantum system.
The coarse-graining maps of the underlying renormalization procedure serve to eliminate a vast number of those constraints.
This can be used to obtain rigorous lower bounds on the ground state energy of arbitrary local Hamiltonians.
arXiv Detail & Related papers (2022-12-06T14:39:47Z)
- High-dimensional limit theorems for SGD: Effective dynamics and critical scaling [6.950316788263433]
We prove limit theorems for the trajectories of summary statistics of stochastic gradient descent (SGD).
We show a critical scaling regime for the step-size, below which the effective ballistic dynamics matches gradient flow for the population loss.
Around the fixed points of these effective dynamics, the corresponding diffusive limits can be quite complex and even degenerate.
arXiv Detail & Related papers (2022-06-08T17:42:18Z)
- Surrogate models for quantum spin systems based on reduced order modeling [0.0]
We present a methodology to investigate phase diagrams of quantum models based on the principle of the reduced basis method (RBM).
We benchmark the method in two test cases, a chain of excited Rydberg atoms and a geometrically frustrated antiferromagnetic two-dimensional lattice model.
arXiv Detail & Related papers (2021-10-29T10:17:39Z)
- Lifting the Convex Conjugate in Lagrangian Relaxations: A Tractable Approach for Continuous Markov Random Fields [53.31927549039624]
We show that a piecewise discretization preserves better contrast than existing discretization approaches.
We apply this theory to the problem of matching two images.
arXiv Detail & Related papers (2021-07-13T12:31:06Z)
- Efficient and Flexible Approach to Simulate Low-Dimensional Quantum Lattice Models with Large Local Hilbert Spaces [0.08594140167290096]
We introduce a mapping that allows one to construct artificial $U(1)$ symmetries for any type of lattice model.
By exploiting the generated symmetries, the numerical cost related to the local degrees of freedom can be reduced significantly.
Our findings motivate an intuitive physical picture of the truncations occurring in typical algorithms.
arXiv Detail & Related papers (2020-08-19T14:13:56Z)
- Debiased Sinkhorn barycenters [110.79706180350507]
Entropy regularization in optimal transport (OT) has been the driver of many recent interests for Wasserstein metrics and barycenters in machine learning.
We show how this bias is tightly linked to the reference measure that defines the entropy regularizer.
We propose debiased Wasserstein barycenters that preserve the best of both worlds: fast Sinkhorn-like iterations without entropy smoothing.
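As a generic illustration of the entropic bias that the debiasing targets, the sketch below runs standard Sinkhorn scaling iterations (a minimal hand-written example, not the paper's debiased algorithm; all names and parameter values are chosen for the illustration) and shows that the entropic "self-transport" cost $\mathrm{OT}_\varepsilon(a, a)$ is nonzero, which is precisely the bias term a debiased formulation subtracts off.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=500):
    """Entropic OT via Sinkhorn scaling; returns the transport plan P."""
    K = np.exp(-C / eps)       # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):     # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Two 1D Gaussian histograms on a common grid
x = np.linspace(0.0, 1.0, 50)
a = np.exp(-(x - 0.3) ** 2 / 0.01); a /= a.sum()
b = np.exp(-(x - 0.7) ** 2 / 0.01); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2  # squared-distance cost

P = sinkhorn(a, b, C)
cost_ab = (P * C).sum()

# Entropic bias: transporting a measure onto itself has nonzero cost,
# because regularization smears mass off the diagonal.
P_aa = sinkhorn(a, a, C)
bias = (P_aa * C).sum()
print("OT_eps(a, b):", cost_ab, "  OT_eps(a, a):", bias)
```

The nonzero self-transport cost is why entropy-regularized barycenters come out blurred, and subtracting such self terms is the basic mechanism behind debiased Sinkhorn divergences.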
arXiv Detail & Related papers (2020-06-03T23:06:02Z)
- Computationally efficient sparse clustering [67.95910835079825]
We provide a finite sample analysis of a new clustering algorithm based on PCA.
We show that it achieves the minimax optimal misclustering rate in the regime $|\theta| \to \infty$.
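The PCA-based clustering idea can be sketched as follows (a hypothetical minimal example under a two-cluster Gaussian mixture, not the paper's analyzed algorithm; the data model and names are assumptions): project the centered data onto the top principal direction and threshold the projection scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two isotropic Gaussian clusters with mean separation 2*theta
n, p = 200, 50
theta = np.zeros(p)
theta[0] = 3.0
labels = rng.integers(0, 2, n)
Xdata = rng.standard_normal((n, p)) + np.where(labels[:, None] == 1, theta, -theta)

# PCA step: the top principal direction of the centered data
# aligns with the cluster-mean direction when the signal is strong.
Xc = Xdata - Xdata.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]

# Cluster by the sign of the projection score
pred = (scores > 0).astype(int)

# Misclustering rate, up to the global label permutation
err = min(np.mean(pred != labels), np.mean(pred == labels))
print("misclustering rate:", err)
```

With a mean separation well above the noise level, the top principal component recovers the discriminating direction and the misclustering rate is small; the finite-sample analysis in the paper makes this precise under sparsity assumptions.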
arXiv Detail & Related papers (2020-05-21T17:51:30Z)
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.