Optimal sampling of tensor networks targeting wave function's fast
decaying tails
- URL: http://arxiv.org/abs/2401.10330v1
- Date: Thu, 18 Jan 2024 19:00:05 GMT
- Title: Optimal sampling of tensor networks targeting wave function's fast
decaying tails
- Authors: Marco Ballarin, Pietro Silvi, Simone Montangero and Daniel Jaschke
- Abstract summary: We introduce an optimal strategy to sample quantum outcomes of local measurement strings for isometric tensor network states.
The algorithm avoids sample repetition and is thus efficient at sampling distributions with exponentially decaying tails.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce an optimal strategy to sample quantum outcomes of local
measurement strings for isometric tensor network states. Our method generates
samples based on an exact cumulative bounding function, without prior
knowledge, in the minimal amount of tensor network contractions. The algorithm
avoids sample repetition and is thus efficient at sampling distributions with
exponentially decaying tails. We illustrate the computational advantage
provided by our optimal sampling method through various numerical examples,
involving condensed matter, optimization problems, and quantum circuit
scenarios. Theory predicts up to an exponential speedup reducing the scaling
for sampling the space up to an accumulated unknown probability $\epsilon$ from
$\mathcal{O}(\epsilon^{-1})$ to $\mathcal{O}(\log(\epsilon^{-1}))$ for a
decaying probability distribution. We confirm this in practice with over one
order of magnitude speedup, or multiple orders of magnitude improvement in the
error, depending on the application. Our sampling strategy extends beyond local
observables, e.g., to quantum magic.
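To make the claimed scaling concrete, here is a minimal toy sketch in plain Python/NumPy, not the authors' tensor-network implementation: the geometric distribution is an illustrative stand-in for a wave function with a fast-decaying tail, and the toy pre-sorts known probabilities, whereas the paper's method achieves the equivalent ordering on the fly via its exact cumulative bounding function.

```python
import numpy as np

# Toy stand-in for a quantum outcome distribution with an exponentially
# decaying tail: p_k proportional to r^k (geometric, chosen for illustration).
r = 0.5
K = 64
p = (1 - r) * r ** np.arange(K)
p /= p.sum()

# Repetition-free sampling: visit each outcome at most once, most probable
# first, stopping once the accumulated probability exceeds 1 - epsilon.
epsilon = 1e-6
order = np.argsort(p)[::-1]
covered = 0.0
n_distinct = 0
for k in order:
    covered += p[k]
    n_distinct += 1
    if covered >= 1 - epsilon:
        break

# Naive i.i.d. sampling needs O(1/epsilon) shots to resolve the tail; the
# repetition-free scheme covers it with O(log(1/epsilon)) distinct outcomes.
print(f"distinct samples to cover 1 - {epsilon:g}: {n_distinct}")
print(f"log2(1/epsilon) = {np.log2(1 / epsilon):.1f}")
```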
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem where the task is to recover an unknown signal from magnitude-only measurements in the presence of outliers.
The proposed approach avoids the need for a computationally expensive spectral initialization, using a simple gradient step while remaining robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z)
- Faster Sampling via Stochastic Gradient Proximal Sampler [28.422547264326468]
Stochastic proximal samplers (SPS) for sampling from non-log-concave distributions are studied.
We show that the convergence to the target distribution can be guaranteed as long as the algorithm trajectory is bounded.
We provide two implementable variants based on stochastic gradient Langevin dynamics (SGLD) and the Metropolis-adjusted Langevin algorithm (MALA), giving rise to SPS-SGLD and SPS-MALA.
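For reference, the SGLD building block such samplers rest on is a single noisy gradient step; the sketch below is a generic SGLD update in NumPy, assuming a standard Gaussian target for illustration, not the paper's SPS-SGLD sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_density(x):
    # Score of a standard Gaussian target: grad log pi(x) = -x (assumed example).
    return -x

def sgld_step(x, step=1e-2):
    # One Langevin update: x' = x + (step/2) * grad log pi(x) + sqrt(step) * xi.
    xi = rng.standard_normal(x.shape)
    return x + 0.5 * step * grad_log_density(x) + np.sqrt(step) * xi

x = np.zeros(2)
for _ in range(5000):
    x = sgld_step(x)
# After burn-in, iterates are approximate samples from the target;
# MALA would add a Metropolis accept/reject step to correct the bias.
```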
arXiv Detail & Related papers (2024-05-27T00:53:18Z)
- Diffusion Posterior Sampling is Computationally Intractable [9.483130965295324]
Posterior sampling is useful for tasks such as inpainting, super-resolution, and MRI reconstruction.
We show that posterior sampling is computationally intractable under the most basic assumption in cryptography: that one-way functions exist.
We also show that exponential-time rejection sampling is essentially optimal under the stronger plausible assumption that there are one-way functions that take exponential time to invert.
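For context, plain rejection sampling, the baseline the optimality statement refers to, looks like the following; the bimodal target and uniform proposal here are illustrative assumptions, not the diffusion-model posterior.

```python
import numpy as np

rng = np.random.default_rng(1)

def target_pdf(x):
    # Unnormalized bimodal target density (illustrative stand-in).
    return np.exp(-0.5 * (x - 2.0) ** 2) + np.exp(-0.5 * (x + 2.0) ** 2)

def rejection_sample(n, lo=-8.0, hi=8.0, M=1.1):
    # Uniform proposal on [lo, hi]; M upper-bounds target_pdf on that interval.
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi)
        if rng.uniform(0.0, M) < target_pdf(x):   # accept with prob target/M
            out.append(x)
    return np.array(out)

draws = rejection_sample(1000)
# The acceptance rate degrades with how badly the proposal matches the target,
# which is the source of the exponential running time in the worst case.
```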
arXiv Detail & Related papers (2024-02-20T05:28:13Z)
- Sublinear scaling in non-Markovian open quantum systems simulations [0.0]
We introduce a numerically exact algorithm to calculate process tensors.
Our approach requires only $\mathcal{O}(n \log n)$ singular value decompositions for environments with infinite memory.
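The primitive being counted is a truncated SVD; a generic NumPy version of that bond-truncation step, not the paper's process-tensor contraction scheme, is sketched below.

```python
import numpy as np

def truncated_svd(M, chi):
    # Keep only the chi largest singular values: the truncation step that
    # tensor-network algorithms such as this one repeat many times.
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return U[:, :chi], s[:chi], Vh[:chi, :]

rng = np.random.default_rng(7)
M = rng.standard_normal((64, 64))
U, s, Vh = truncated_svd(M, chi=8)
approx = U @ np.diag(s) @ Vh        # best rank-8 approximation of M
err = np.linalg.norm(M - approx)    # truncation error (Frobenius norm)
```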
arXiv Detail & Related papers (2023-04-11T15:40:33Z)
- Hessian Averaging in Stochastic Newton Methods Achieves Superlinear Convergence [69.65563161962245]
We consider minimizing a smooth and strongly convex objective function using a stochastic Newton method.
We show that there exists a universal weighted averaging scheme that transitions to local convergence at an optimal stage.
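A minimal sketch of Hessian averaging inside a stochastic Newton iteration follows; the quadratic objective, noise model, and uniform running average are assumptions made for illustration, while the paper analyzes general weighting schemes.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
A = np.diag(np.linspace(1.0, 10.0, d))   # Hessian of a strongly convex quadratic

def grad(x):
    # Exact gradient of f(x) = x^T A x / 2.
    return A @ x

def noisy_hessian():
    # Stochastic Hessian estimate: truth plus symmetric noise (assumed model).
    E = 0.1 * rng.standard_normal((d, d))
    return A + 0.5 * (E + E.T)

x = rng.standard_normal(d)
H_avg = np.zeros((d, d))
for t in range(1, 51):
    # Running average of Hessian estimates damps the stochastic noise;
    # this damping is what enables the fast local rate.
    H_avg += (noisy_hessian() - H_avg) / t
    x = x - np.linalg.solve(H_avg, grad(x))

print(np.linalg.norm(x))  # approaches 0, the minimizer
```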
arXiv Detail & Related papers (2022-04-20T07:14:21Z)
- Random quantum circuits transform local noise into global white noise [118.18170052022323]
We study the distribution over measurement outcomes of noisy random quantum circuits in the low-fidelity regime.
For local noise that is sufficiently weak and unital, correlations (measured by the linear cross-entropy benchmark) between the output distribution $p_\text{noisy}$ of a generic noisy circuit instance and the ideal output distribution shrink exponentially.
If the noise is incoherent, the output distribution approaches the uniform distribution $p_\text{unif}$ at precisely the same rate.
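The linear cross-entropy benchmark mentioned above is simple to state: $F_\text{XEB} = 2^n \, \mathbb{E}_{x \sim p_\text{noisy}}[p_\text{ideal}(x)] - 1$. A NumPy sketch follows; the Porter-Thomas-like ideal distribution and the white-noise mixture are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10                     # number of qubits
dim = 2 ** n

# Stand-in "ideal" output distribution of a random circuit: exponentially
# distributed weights mimic the Porter-Thomas distribution (assumption).
p_ideal = rng.exponential(size=dim)
p_ideal /= p_ideal.sum()

# Noisy device modeled as global white noise with fidelity f (assumption):
# the ideal distribution with weight f, uniform noise otherwise.
f = 0.3
p_noisy = f * p_ideal + (1 - f) / dim
samples = rng.choice(dim, size=100_000, p=p_noisy)

# Linear cross-entropy benchmark from samples: 2^n * mean(p_ideal(x)) - 1.
xeb = dim * p_ideal[samples].mean() - 1.0
print(f"estimated XEB: {xeb:.3f}")   # tracks the fidelity f
```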
arXiv Detail & Related papers (2021-11-29T19:26:28Z)
- Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
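The amplitude-based empirical loss these guarantees refer to has the form $\frac{1}{m}\sum_i (|\langle a_i, x\rangle| - y_i)^2$; below is a minimal NumPy sketch with a random Gaussian measurement matrix and signal as illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 200, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)  # measurement matrix (assumed)
x_true = rng.standard_normal(n)
y = np.abs(A @ x_true)                        # phaseless (amplitude) measurements

def amplitude_loss(x):
    # Amplitude-based empirical loss: mean squared error between |Ax| and y.
    return np.mean((np.abs(A @ x) - y) ** 2)

print(amplitude_loss(x_true))   # 0 at the true signal
print(amplitude_loss(-x_true))  # also 0: phase retrieval is sign-ambiguous
```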
arXiv Detail & Related papers (2021-06-29T12:49:54Z)
- Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs).
These sampling algorithms are not applicable to more general graph neural networks (GNNs), where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT).
arXiv Detail & Related papers (2020-06-10T12:48:37Z)
- Learning with Optimized Random Features: Exponential Speedup by Quantum Machine Learning without Sparsity and Low-Rank Assumptions [8.08640000394814]
We develop a quantum algorithm for sampling from an optimized distribution over random features.
Our algorithm achieves an exponential speedup in $D$ compared to any known classical algorithm for this sampling task.
arXiv Detail & Related papers (2020-04-22T18:00:00Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
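A toy version of the sample-then-reconstruct idea is sketched below in NumPy; the feature function, sampling rate, and nearest-neighbor fill are illustrative assumptions, whereas the paper uses learned stochastic sampling and interpolation inside a network.

```python
import numpy as np

rng = np.random.default_rng(5)
H = W = 32

def expensive_feature(ys, xs):
    # Stand-in for a costly per-pixel feature computation (assumed example).
    return np.sin(ys / 4.0) + np.cos(xs / 4.0)

# Compute features only at a sparse random subset of locations.
mask = rng.random((H, W)) < 0.25
ys, xs = np.nonzero(mask)
sparse_vals = expensive_feature(ys.astype(float), xs.astype(float))

# Densely reconstruct by nearest-neighbor interpolation from sampled points.
gy, gx = np.mgrid[0:H, 0:W]
d2 = (gy.ravel()[:, None] - ys) ** 2 + (gx.ravel()[:, None] - xs) ** 2
feature_map = sparse_vals[d2.argmin(axis=1)].reshape(H, W)
```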
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
- Online Sinkhorn: Optimal Transport distances from sample streams [0.0]
This paper introduces a new online estimator of entropy-regularized OT distances between two arbitrary distributions.
Compared to the classic Sinkhorn algorithm, our method leverages new samples at each iteration, which enables a consistent estimation of the true regularized OT distance.
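For reference, one classic Sinkhorn pass on a fixed batch of samples looks like this; it is a batch sketch rather than the paper's online estimator, and the point clouds, cost, and regularization strength are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, eps = 50, 60, 0.5    # sample sizes and entropic regularization (assumed)

# Two empirical point clouds with uniform weights (illustrative stand-ins).
x = rng.standard_normal((n, 2))
y = rng.standard_normal((m, 2)) + 1.0      # shifted cloud so the cost is nonzero
a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)

# Squared-Euclidean ground cost and the corresponding Gibbs kernel.
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-C / eps)

# Classic Sinkhorn: alternate the two scaling updates until marginals match.
u = np.ones(n)
for _ in range(500):
    v = b / (K.T @ u)
    u = a / (K @ v)

P = u[:, None] * K * v[None, :]            # entropy-regularized transport plan
print(f"regularized OT cost: {(P * C).sum():.4f}")
```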
arXiv Detail & Related papers (2020-03-03T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.