The State Preparation of Multivariate Normal Distributions using Tree Tensor Network
- URL: http://arxiv.org/abs/2412.12067v1
- Date: Mon, 16 Dec 2024 18:41:51 GMT
- Title: The State Preparation of Multivariate Normal Distributions using Tree Tensor Network
- Authors: Hidetaka Manabe, Yuichi Sano
- Abstract summary: We propose a scalable method to generate state preparation circuits for $D$-dimensional multivariate normal distributions.
Based on these analyses, we propose a compilation method that uses automatic structural optimization to find the most efficient network structure and compact circuit.
- Score: 0.0
- License:
- Abstract: The quantum state preparation of probability distributions is an important subroutine for many quantum algorithms. When embedding $D$-dimensional multivariate probability distributions by discretizing each dimension into $2^n$ points, we need a state preparation circuit comprising a total of $nD$ qubits, which is often difficult to compile. In this study, we propose a scalable method to generate state preparation circuits for $D$-dimensional multivariate normal distributions, utilizing tree tensor networks (TTN). We establish theoretical guarantees that multivariate normal distributions with 1D correlation structures can be efficiently represented using TTN. Based on these analyses, we propose a compilation method that uses automatic structural optimization to find the most efficient network structure and compact circuit. We apply our method to state preparation circuits for various high-dimensional random multivariate normal distributions. The numerical results suggest that our method can dramatically reduce the circuit depth and CNOT count while maintaining fidelity compared to existing approaches.
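The setup described in the abstract can be illustrated numerically. The sketch below (a minimal illustration, not the authors' TTN compiler, and without their automatic structural optimization) discretizes a correlated $D = 2$ normal distribution on $2^n$ points per dimension, forms the amplitudes of the corresponding $nD$-qubit state, and inspects the singular values across the dimension-wise cut; a fast-decaying spectrum is what permits a compact tensor-network representation and hence a shallow preparation circuit. The grid range, covariance values, and truncation tolerance are illustrative assumptions.

```python
# Minimal numerical sketch (not the authors' implementation): discretize a
# D = 2 multivariate normal on 2^n grid points per dimension, build the
# amplitude vector of the corresponding nD-qubit state, and inspect the
# singular values across the dimension-wise bipartition. A small effective
# rank suggests the state admits a compact tensor-network representation.
import numpy as np

n, D = 6, 2                        # n qubits per dimension, D dimensions
N = 2 ** n                         # grid points per dimension
mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.8],        # correlated 2D normal (illustrative values)
                [0.8, 1.0]])

# Discretize each dimension on [-4, 4] and evaluate the joint density.
grid = np.linspace(-4.0, 4.0, N)
x, y = np.meshgrid(grid, grid, indexing="ij")
pts = np.stack([x, y], axis=-1) - mean
prec = np.linalg.inv(cov)
expo = np.einsum("...i,ij,...j->...", pts, prec, pts)
p = np.exp(-0.5 * expo)
p /= p.sum()                       # discrete probabilities over the 2^(nD) grid

# Amplitudes of the target state |psi> = sum_{x,y} sqrt(p(x, y)) |x>|y>.
amp = np.sqrt(p)                   # shape (2^n, 2^n), normalized by construction

# Effective bond dimension across the x|y cut, up to a truncation tolerance.
s = np.linalg.svd(amp, compute_uv=False)
keep = int(np.sum(s ** 2 > 1e-12))
print(f"singular values kept across the x|y cut: {keep} of {N}")
```

In the paper's setting, the analogous analysis runs over the cuts of a tree tensor network, with the network structure itself chosen by automatic structural optimization to minimize the required bond dimensions.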
Related papers
- Generative Conditional Distributions by Neural (Entropic) Optimal Transport [12.152228552335798]
We introduce a novel neural entropic optimal transport method designed to learn generative models of conditional distributions.
Our method relies on the minimax training of two neural networks.
Our experiments on real-world datasets show the effectiveness of our algorithm compared to state-of-the-art conditional distribution learning techniques.
arXiv Detail & Related papers (2024-06-04T13:45:35Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- A New Initial Distribution for Quantum Generative Adversarial Networks to Load Probability Distributions [4.043200001974071]
We propose a novel method for generating an initial distribution that improves the learning efficiency of qGANs.
Our method uses the classical process of label replacement to generate various probability distributions in shallow quantum circuits.
arXiv Detail & Related papers (2023-06-21T14:33:35Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy (a minimal entropic-OT sketch appears after this list).
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
- Matching Normalizing Flows and Probability Paths on Manifolds [57.95251557443005]
Continuous Normalizing Flows (CNFs) are generative models that transform a prior distribution to a model distribution by solving an ordinary differential equation (ODE).
We propose to train CNFs by minimizing probability path divergence (PPD), a novel family of divergences between the probability density path generated by the CNF and a target probability density path.
We show that CNFs learned by minimizing PPD achieve state-of-the-art results in likelihoods and sample quality on existing low-dimensional manifold benchmarks.
arXiv Detail & Related papers (2022-07-11T08:50:19Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- A Unified Framework for Multi-distribution Density Ratio Estimation [101.67420298343512]
Binary density ratio estimation (DRE) provides the foundation for many state-of-the-art machine learning algorithms.
We develop a general framework from the perspective of Bregman divergence minimization.
We show that our framework leads to methods that strictly generalize their counterparts in binary DRE.
arXiv Detail & Related papers (2021-12-07T01:23:20Z)
- High-Dimensional Distribution Generation Through Deep Neural Networks [2.141079906482723]
We show that every $d$-dimensional probability distribution of bounded support can be generated through deep ReLU networks.
We find that, for histogram target distributions, the number of bits needed to encode the corresponding generative network equals the fundamental limit for encoding probability distributions.
arXiv Detail & Related papers (2021-07-26T20:35:52Z)
- Unsupervised tree boosting for learning probability distributions [2.8444868155827634]
We propose an unsupervised tree boosting algorithm based on fitting additive tree ensembles.
Integral to the algorithm is a new notion of "residualization", i.e., subtracting a probability distribution from an observation to remove the distributional structure from the sampling distribution of the latter.
arXiv Detail & Related papers (2021-01-26T21:03:27Z)
- SURF: A Simple, Universal, Robust, Fast Distribution Learning Algorithm [64.13217062232874]
SURF is an algorithm for approximating distributions by piecewise polynomials.
It outperforms state-of-the-art algorithms in experiments.
arXiv Detail & Related papers (2020-02-22T01:03:33Z)
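For the entropy-regularized optimal-transport entry above, the following is a minimal reference sketch using the classical Sinkhorn iteration; the cited paper develops extragradient-based first-order methods, which this sketch does not implement. The cost matrix, regularization strength, and iteration count are illustrative assumptions.

```python
# Minimal Sinkhorn sketch for entropy-regularized optimal transport between
# two discrete distributions. Shown only as a classical reference baseline;
# the cited paper's extragradient method is not implemented here.
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=500):
    """Return the entropic-OT coupling between histograms a and b with cost C."""
    K = np.exp(-C / eps)               # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)              # alternate scaling updates
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Example: transport between two histograms on a 1D grid (illustrative data).
x = np.linspace(0.0, 1.0, 64)
a = np.exp(-((x - 0.3) ** 2) / 0.01); a /= a.sum()
b = np.exp(-((x - 0.7) ** 2) / 0.02); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2     # squared-distance cost matrix
P = sinkhorn(a, b, C)
print("approximate entropic OT cost:", float((P * C).sum()))
```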
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and accepts no responsibility for any consequences arising from its use.