Fermion Sampling Made More Efficient
- URL: http://arxiv.org/abs/2109.07358v1
- Date: Wed, 15 Sep 2021 15:11:33 GMT
- Title: Fermion Sampling Made More Efficient
- Authors: Haoran Sun, Jie Zou and Xiaopeng Li
- Abstract summary: We propose a fermion sampling algorithm, which has a polynomial time-complexity -- quadratic in the fermion number and linear in the system size.
This algorithm is about 100% more efficient in computation time than the best known algorithms.
We demonstrate its power on several test applications, including sampling fermions in a many-body system and a machine learning task of text summarization.
- Score: 19.50440110966488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fermion sampling is the task of generating the probability distribution of a
many-body Slater-determinant wavefunction, which is termed a "determinantal point
process" in statistical analysis. Because of its inherently embedded Pauli exclusion
principle, its applications reach beyond simulating fermionic quantum many-body physics
to constructing machine learning models for diversified datasets. Here we
propose a fermion sampling algorithm, which has a polynomial time-complexity --
quadratic in the fermion number and linear in the system size. This algorithm
is about 100% more efficient in computation time than the best known
algorithms. In sampling the corresponding marginal distribution, our algorithm
has a more drastic improvement, achieving a scaling advantage. We demonstrate
its power on several test applications, including sampling fermions in a
many-body system and a machine learning task of text summarization, and confirm
its improved computation efficiency over other methods by counting
floating-point operations.
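To make the determinantal-point-process connection concrete, here is a minimal sketch of the standard sequential sampler for a projection DPP defined by a Slater determinant with orthonormal orbitals. This is a generic textbook-style baseline for illustration, not the accelerated algorithm proposed in the paper; all function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_projection_dpp(Phi):
    """Sample one fermion configuration from the projection DPP defined by
    an L x N matrix Phi whose orthonormal columns are the occupied orbitals.
    Sites are drawn one at a time from the conditional marginals."""
    L, N = Phi.shape
    V = Phi.copy()                      # orthonormal basis of the current subspace
    sample = []
    for k in range(N, 0, -1):
        # conditional marginal: diagonal of the current projection kernel
        probs = np.clip(np.sum(V**2, axis=1), 0.0, None)
        probs /= probs.sum()
        x = rng.choice(L, p=probs)
        sample.append(int(x))
        if k > 1:
            # project out the direction that "sees" site x (Pauli exclusion:
            # site x can no longer be occupied) and re-orthonormalize
            row = V[x, :] / np.linalg.norm(V[x, :])
            V = V - np.outer(V @ row, row)
            U, s, _ = np.linalg.svd(V, full_matrices=False)
            V = U[:, :k - 1]
    return sorted(sample)
```

With `Phi` equal to the first two columns of a 4x4 identity, the DPP is deterministic and always returns sites `[0, 1]`, which makes the exclusion constraint easy to check.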
Related papers
- Convergence of the Cumulant Expansion and Polynomial-Time Algorithm for Weakly Interacting Fermions [12.980370087557622]
We propose a randomized algorithm to compute the log-partition function of weakly interacting fermions with runtime polynomial in both the system size and precision. A key equation used to analyze the sum of connected Feynman diagrams reveals an underlying tree structure in the summation.
arXiv Detail & Related papers (2025-12-12T20:00:39Z) - Are Randomized Quantum Linear Systems Solvers Practical? [0.0]
Randomized quantum algorithms have been proposed in the context of quantum simulation and quantum linear algebra. We provide explicit bounds on all relevant parameters that control the total error for a randomized quantum linear systems solver. Our work serves as a bridge between theoretical algorithmic proposals and efficient hardware implementations.
arXiv Detail & Related papers (2025-10-15T17:12:55Z) - Energy-Efficient Sampling Using Stochastic Magnetic Tunnel Junctions [0.6990493129893112]
We introduce an energy-efficient algorithm for uniform Float16 sampling using a room-temperature magnetic tunnel junction device.
We decompose arbitrary distributions into many non-overlapping approximate uniform distributions along with convolution and prior-likelihood operations.
arXiv Detail & Related papers (2024-12-14T23:24:28Z) - Parallel simulation for sampling under isoperimetry and score-based diffusion models [56.39904484784127]
As data size grows, reducing the iteration cost becomes an important goal.
Inspired by the success of the parallel simulation of the initial value problem in scientific computation, we propose parallel Picard methods for sampling tasks.
Our work highlights the potential advantages of simulation methods in scientific computation for dynamics-based sampling and diffusion models.
arXiv Detail & Related papers (2024-12-10T11:50:46Z) - Gaussian Processes Sampling with Sparse Grids under Additive Schwarz Preconditioner [6.408773096179187]
We propose a scalable algorithm for sampling random realizations of the prior and posterior of GP models.
The proposed algorithm leverages inducing points approximation with sparse grids, as well as additive Schwarz preconditioners.
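For context, the dense baseline that inducing-point and preconditioned approaches aim to accelerate is drawing GP prior realizations through a Cholesky factor of the full covariance. A minimal sketch (generic RBF kernel, illustrative names; not the paper's sparse-grid method):

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_gp_prior(x, lengthscale=1.0, n_samples=3, jitter=1e-8):
    """Draw realizations from a zero-mean GP prior with an RBF kernel.
    Cost is O(n^3) for the Cholesky factorization, which is exactly
    the bottleneck that scalable sampling schemes try to avoid."""
    d2 = (x[:, None] - x[None, :]) ** 2
    K = np.exp(-0.5 * d2 / lengthscale**2) + jitter * np.eye(len(x))
    L = np.linalg.cholesky(K)           # K = L L^T
    return L @ rng.standard_normal((len(x), n_samples))
```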
arXiv Detail & Related papers (2024-08-01T00:19:36Z) - Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
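The probabilistic representation underlying such solvers can be illustrated with a plain Feynman-Kac estimator for the pure diffusion (heat) equation: the solution is an expectation of the initial condition over Brownian endpoints. This is a generic sketch of the representation, not the paper's neural solver; names are ours.

```python
import numpy as np

rng = np.random.default_rng(3)

def heat_mc(u0, x, t, nu, n_walkers=100_000):
    """Monte Carlo (Feynman-Kac) solution of u_t = nu * u_xx:
    u(x, t) = E[ u0(x + sqrt(2*nu*t) * Z) ],  Z ~ N(0, 1).
    Each walker is one Brownian endpoint; we average the payoffs."""
    z = rng.standard_normal(n_walkers)
    return np.mean(u0(x + np.sqrt(2.0 * nu * t) * z))
```

For `u0 = sin`, the exact solution is `exp(-nu*t) * sin(x)`, so the estimator is easy to validate against a closed form.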
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Importance sampling for stochastic quantum simulations [68.8204255655161]
We introduce the qDrift protocol, which builds random product formulas by sampling from the Hamiltonian according to the coefficients.
We show that the simulation cost can be reduced while achieving the same accuracy, by considering the individual simulation cost during the sampling stage.
Results are confirmed by numerical simulations performed on a lattice nuclear effective field theory.
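The coefficient-proportional sampling at the heart of qDrift can be sketched as follows: term j of the Hamiltonian is drawn with probability |h_j| / lambda, where lambda is the sum of coefficient magnitudes, and each step applies a fixed-angle exponential of the sampled term. This is the plain qDrift sampler, without the paper's cost-aware importance weights; function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def qdrift_schedule(coeffs, time, n_steps):
    """Sample the sequence of Hamiltonian-term indices for a qDrift
    product formula. Term j is drawn with probability |h_j| / lambda,
    and every step implements exp(i * H_j * lambda * t / N)."""
    h = np.abs(np.asarray(coeffs, dtype=float))
    lam = h.sum()
    idx = rng.choice(len(h), size=n_steps, p=h / lam)
    step_angle = lam * time / n_steps   # rotation angle shared by all steps
    return idx, step_angle
```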
arXiv Detail & Related papers (2022-12-12T15:06:32Z) - A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate gradients of the same population objective.
We show that our method can reduce the number, and frequency, of required communication rounds compared to existing methods without hurting performance.
arXiv Detail & Related papers (2021-10-07T17:51:10Z) - Sigma-Delta and Distributed Noise-Shaping Quantization Methods for Random Fourier Features [73.25551965751603]
We prove that our quantized RFFs allow a high accuracy approximation of the underlying kernels.
We show that the quantized RFFs can be further compressed, yielding an excellent trade-off between memory use and accuracy.
We empirically show by testing the performance of our methods on several machine learning tasks that our method compares favorably to other state of the art quantization methods in this context.
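For reference, the unquantized random Fourier features that the quantized RFFs compress follow the standard construction: random Gaussian projections passed through a cosine, so that inner products of feature vectors approximate an RBF kernel. A minimal sketch (generic construction, not the paper's quantization scheme):

```python
import numpy as np

rng = np.random.default_rng(2)

def rff_features(X, n_features, gamma):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2), i.e. z(x) . z(y) ~ k(x, y)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

The approximation error shrinks like one over the square root of the number of features, which is what makes the memory/accuracy trade-off of subsequent quantization interesting.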
arXiv Detail & Related papers (2021-06-04T17:24:47Z) - Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Quantum-enhanced analysis of discrete stochastic processes [0.8057006406834467]
We propose a quantum algorithm for calculating the characteristic function of a discrete stochastic process (DSP).
The characteristic function completely defines the probability distribution, and the number of quantum circuit elements grows only linearly with the number of time steps.
The algorithm takes all trajectories into account and hence eliminates the need of importance sampling.
arXiv Detail & Related papers (2020-08-14T16:07:35Z) - Quantum Sampling Algorithms for Near-Term Devices [0.0]
We introduce a family of quantum algorithms that provide unbiased samples by encoding the entire Gibbs distribution.
We show that this approach leads to a speedup over a classical Markov chain algorithm.
It opens the door to exploring potentially useful sampling algorithms on near-term quantum devices.
arXiv Detail & Related papers (2020-05-28T14:46:20Z) - Learning with Optimized Random Features: Exponential Speedup by Quantum Machine Learning without Sparsity and Low-Rank Assumptions [8.08640000394814]
We develop a quantum algorithm for sampling from an optimized distribution over random features.
Our algorithm achieves an exponential speedup in $D$ compared to any known classical algorithm for this sampling task.
arXiv Detail & Related papers (2020-04-22T18:00:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.