Convergence of the Cumulant Expansion and Polynomial-Time Algorithm for Weakly Interacting Fermions
- URL: http://arxiv.org/abs/2512.12010v1
- Date: Fri, 12 Dec 2025 20:00:39 GMT
- Title: Convergence of the Cumulant Expansion and Polynomial-Time Algorithm for Weakly Interacting Fermions
- Authors: Hongrui Chen, Cambyse Rouzé, Jielun Chen, Jiaqing Jiang, Samuel O. Scalet, Yongtao Zhan, Garnet Kin-Lic Chan, Lexing Ying, Yu Tong
- Abstract summary: We propose a randomized algorithm to compute the log-partition function of weakly interacting fermions with runtime polynomial in both the system size and the precision. A key equation used to analyze the sum of connected Feynman diagrams reveals an underlying tree structure in the summation.
- Score: 12.980370087557622
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a randomized algorithm to compute the log-partition function of weakly interacting fermions with polynomial runtime in both the system size and the precision. Although weakly interacting fermionic systems are considered tractable for many computational methods, such as diagrammatic quantum Monte Carlo, a mathematically rigorous proof of polynomial runtime has been lacking. In this work we first extend the proof techniques developed in previous works for proving the convergence of the cumulant expansion in periodic systems to the non-periodic case. A key equation used to analyze the sum of connected Feynman diagrams, which we call the tree-determinant expansion, reveals an underlying tree structure in the summation. This enables us to design a new randomized algorithm that computes the log-partition function through importance sampling augmented by belief propagation. This approach differs from the traditional method based on Markov chain Monte Carlo, whose efficiency is hard to guarantee, and enables us to obtain an algorithm with a provable polynomial runtime.
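The abstract's algorithm combines importance sampling with belief propagation over the tree structure exposed by the tree-determinant expansion. As a minimal, generic sketch of the importance-sampling half only (not the paper's fermionic algorithm: the spin-chain model, uniform proposal, and parameters below are hypothetical toys), one can estimate a log-partition function by reweighting samples from a tractable proposal distribution:

```python
import itertools
import math
import random

# Hedged toy sketch: estimating log Z = log sum_x exp(-beta * E(x)) by
# importance sampling. This illustrates only the generic idea of replacing
# an exponentially large sum with a reweighted sample average; it is NOT
# the paper's tree-determinant / belief-propagation algorithm.

random.seed(0)

def energy(x):
    # Hypothetical nearest-neighbour coupling on a +/-1 spin chain.
    return -sum(a * b for a, b in zip(x, x[1:]))

def log_partition_exact(n, beta):
    # Brute-force sum over all 2^n configurations (feasible for small n).
    total = sum(math.exp(-beta * energy(x))
                for x in itertools.product((-1, 1), repeat=n))
    return math.log(total)

def log_partition_is(n, beta, num_samples=200_000):
    # Proposal q: uniform over configurations, so q(x) = 2^{-n}.
    # Then Z = E_q[ exp(-beta * E(x)) / q(x) ], estimated by a sample mean.
    acc = 0.0
    for _ in range(num_samples):
        x = [random.choice((-1, 1)) for _ in range(n)]
        acc += math.exp(-beta * energy(x)) * (2 ** n)
    return math.log(acc / num_samples)

n, beta = 8, 0.3
exact = log_partition_exact(n, beta)
approx = log_partition_is(n, beta)
print(f"exact log Z = {exact:.4f}, importance-sampling estimate = {approx:.4f}")
```

The variance of such an estimator depends entirely on how well the proposal matches the Gibbs weights; the paper's contribution can be read as constructing, via belief propagation on the tree structure, a proposal good enough that the variance (and hence the runtime) is provably polynomial.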
Related papers
- Analytic Cancellation of Interference Terms and Closed-Form 1-Mode Marginals in Canonical Boson Sampling [4.653967722732359]
We provide a direct, bottom-up derivation of the exact 1-mode marginal distribution in canonical boson sampling (CBS), computable in $\mathcal{O}(\cdot)$ time. We explicitly bridge this physical derivation with the mathematical theory of rank-1 matrices, proving that multiphoton interference reduces to a symmetric function scaled by a factorial bunching factor.
arXiv Detail & Related papers (2026-03-01T04:26:35Z) - A measurement-driven quantum algorithm for SAT: Performance guarantees via spectral gaps and measurement parallelization [39.146761527401424]
This work presents a rigorous worst-case runtime analysis of a measurement-driven quantum SAT solver. We show that this quantum algorithm does not rely exclusively on Grover-type methods and exhibits promising numerical performance. We also develop a new readout routine that efficiently finds a solution even for instances with multiple satisfying assignments.
arXiv Detail & Related papers (2025-11-12T19:00:13Z) - Are Randomized Quantum Linear Systems Solvers Practical? [0.0]
Randomized quantum algorithms have been proposed in the context of quantum simulation and quantum linear algebra. We provide explicit bounds on all relevant parameters that control the total error for a randomized quantum linear systems solver. Our work serves as a bridge between theoretical algorithmic proposals and efficient hardware implementations.
arXiv Detail & Related papers (2025-10-15T17:12:55Z) - Phase estimation with partially randomized time evolution [36.989845156791525]
Quantum phase estimation combined with Hamiltonian simulation is the most promising algorithmic framework for computing ground-state energies on quantum computers. In this paper we use randomization to speed up product formulas, one of the standard approaches to Hamiltonian simulation. We perform a detailed resource estimate for single-ancilla phase estimation using partially randomized product formulas for benchmark systems in quantum chemistry.
arXiv Detail & Related papers (2025-03-07T18:09:32Z) - Inferring Kernel $ε$-Machines: Discovering Structure in Complex Systems [49.1574468325115]
We introduce causal diffusion components that encode the kernel causal-state estimates as a set of coordinates in a reduced dimension space.
We show how each component extracts predictive features from data and demonstrate their application on four examples.
arXiv Detail & Related papers (2024-10-01T21:14:06Z) - Private graphon estimation via sum-of-squares [10.00024942014117]
We develop the first pure node-differentially private algorithms for learning block models and for graphon estimation with constant running time for any number of blocks.
The statistical utility guarantees match those of the previous best information-theoretic (exponential-time) node-private mechanisms for these problems.
arXiv Detail & Related papers (2024-03-18T19:54:59Z) - Entropic Matching for Expectation Propagation of Markov Jump Processes [31.376561087029454]
We propose a novel, tractable latent inference scheme for Markov jump processes. Our approach is based on an entropic matching framework that can be embedded into the well-known expectation propagation algorithm.
arXiv Detail & Related papers (2023-09-27T12:07:21Z) - Quasi-Monte Carlo Graph Random Features [38.762964164013226]
We present a novel mechanism to improve the accuracy of graph random features (GRFs).
Our method induces negative correlations between the lengths of the algorithm's random walks by imposing antithetic termination.
We derive strong theoretical guarantees on the properties of these quasi-Monte Carlo GRFs.
arXiv Detail & Related papers (2023-05-21T14:12:02Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers. We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles. Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Isotropic Gaussian Processes on Finite Spaces of Graphs [71.26737403006778]
We propose a principled way to define Gaussian process priors on various sets of unweighted graphs.
We go further to consider sets of equivalence classes of unweighted graphs and define the appropriate versions of priors thereon.
Inspired by applications in chemistry, we illustrate the proposed techniques on a real molecular property prediction task in the small data regime.
arXiv Detail & Related papers (2022-11-03T10:18:17Z) - Fermion Sampling Made More Efficient [19.50440110966488]
We propose a fermion sampling algorithm whose time complexity is polynomial in the fermion number and linear in the system size.
This algorithm is about 100% faster than the best previously known algorithms.
We demonstrate its power on several test applications, including sampling fermions in a many-body system and a machine learning task of text summarization.
arXiv Detail & Related papers (2021-09-15T15:11:33Z) - Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
arXiv Detail & Related papers (2021-02-04T02:21:08Z) - Symplectic Gaussian Process Regression of Hamiltonian Flow Maps [0.8029049649310213]
We present an approach to construct appropriate and efficient emulators for Hamiltonian flow maps.
Intended future applications are long-term tracing of fast charged particles in accelerators and magnetic plasma confinement.
arXiv Detail & Related papers (2020-09-11T17:56:35Z) - A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms [67.67377846416106]
We present a distributional approach to theoretical analyses of reinforcement-learning algorithms with constant step sizes.
We show that value-based methods such as TD($\lambda$) and $Q$-Learning have update rules which are contractive in the space of distributions of functions.
arXiv Detail & Related papers (2020-03-27T05:13:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.