Quantum-inspired probability metrics define a complete, universal space for statistical learning
- URL: http://arxiv.org/abs/2508.21086v2
- Date: Sat, 06 Sep 2025 15:48:34 GMT
- Title: Quantum-inspired probability metrics define a complete, universal space for statistical learning
- Authors: Logan S. McCarty
- Abstract summary: We introduce quantum probability metrics (QPMs), derived by embedding probability measures in the space of quantum states. QPMs can significantly improve performance as a drop-in replacement for Maximum Mean Discrepancy (MMD). This approach lays the foundation for powerful tools to analyze and manipulate probability measures.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Comparing probability distributions is a core challenge across the natural, social, and computational sciences. Existing methods, such as Maximum Mean Discrepancy (MMD), struggle in high-dimensional and non-compact domains. Here we introduce quantum probability metrics (QPMs), derived by embedding probability measures in the space of quantum states: positive, unit-trace operators on a Hilbert space. This construction extends kernel-based methods and overcomes the incompleteness of MMD on non-compact spaces. Viewed as an integral probability metric (IPM), QPMs have dual functions that uniformly approximate all bounded, uniformly continuous functions on $\mathbb{R}^n$, offering enhanced sensitivity to subtle distributional differences in high dimensions. For empirical distributions, QPMs are readily calculated using eigenvalue methods, with analytic gradients suited for learning and optimization. Although computationally more intensive for large sample sizes ($O(n^3)$ vs. $O(n^2)$), QPMs can significantly improve performance as a drop-in replacement for MMD, as demonstrated in a classic generative modeling task. By combining the rich mathematical framework of quantum mechanics with classical probability theory, this approach lays the foundation for powerful tools to analyze and manipulate probability measures.
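The $O(n^2)$ baseline mentioned in the abstract is the standard empirical MMD estimator, which can be sketched in a few lines (a minimal illustration with a Gaussian kernel; the QPM itself, which the abstract says replaces this with an $O(n^3)$ eigenvalue computation, is not reproduced here):

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_biased(X, Y, sigma=1.0):
    # Biased empirical estimate of squared MMD: the O(n^2) kernel-based
    # quantity that QPMs are proposed as a drop-in replacement for.
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(0.5, 1.0, size=(200, 2))
same = mmd2_biased(X, X)  # zero for identical samples
diff = mmd2_biased(X, Y)  # positive for the shifted distribution
```

The $O(n^2)$ cost comes from the three pairwise kernel matrices; per the abstract, a QPM would instead require an eigenvalue decomposition at $O(n^3)$ cost.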
Related papers
- Equivariant Evidential Deep Learning for Interatomic Potentials [55.6997213490859]
Uncertainty quantification (UQ) is critical for assessing the reliability of machine learning interatomic potentials (MLIPs) in molecular dynamics simulations. Existing UQ approaches for MLIPs are often limited by high computational cost or suboptimal performance. We propose Equivariant Evidential Deep Learning for Interatomic Potentials (e2IP), a backbone-agnostic framework that models atomic forces and their uncertainty jointly.
arXiv Detail & Related papers (2026-02-11T02:00:25Z) - Against probability: A quantum state is more than a list of probability distributions [0.5729426778193397]
The state $\rho$ of a quantum system can be represented by a vector $\mathbf{P}_{\mathcal{M}}(\rho)$ of probability distributions for a set $\mathcal{M}$ of quantum measurements. Such representations appear throughout physics, for example in correlations and in quantum foundations.
arXiv Detail & Related papers (2026-01-26T19:00:01Z) - DO-EM: Density Operator Expectation Maximization [10.697014584408963]
We develop an Expectation-Maximization framework to learn latent variable models defined through density operator models (DOMs) on classical hardware. We show that the DO-EM algorithm ensures non-decreasing log-likelihood across iterations for a broad class of models. We present Quantum Interleaved Deep Boltzmann Machines (QiDBMs), a DOM that can be trained with the same resources as a DBM.
arXiv Detail & Related papers (2025-07-30T15:51:20Z) - Kernel Trace Distance: Quantum Statistical Metric between Measures through RKHS Density Operators [11.899035547580201]
We introduce a novel distance between measures that compares them through a Schatten norm of their kernel covariance operators. We show that this new distance is an integral probability metric that can be framed between a Maximum Mean Discrepancy (MMD) and a Wasserstein distance.
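For empirical measures, a distance of this kind can be sketched directly: the difference of the two kernel covariance operators acts on the span of the joint sample's feature embeddings, so its nonzero eigenvalues coincide with those of a row-weighted kernel matrix. This is a hedged finite-sample sketch of that idea, not necessarily the paper's exact estimator:

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    # Gaussian kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_trace_distance(X, Y, kernel=rbf):
    # Schatten-1 (trace) norm of the difference of empirical kernel
    # covariance operators,
    #   D = (1/n) sum_i phi(x_i)phi(x_i)^T - (1/m) sum_j phi(y_j)phi(y_j)^T.
    # Writing D = Phi diag(w) Phi^T over the joint sample, its nonzero
    # eigenvalues equal those of diag(w) @ K, so the trace norm is the
    # sum of their absolute values.
    Z = np.vstack([X, Y])
    w = np.concatenate([np.full(len(X), 1.0 / len(X)),
                        np.full(len(Y), -1.0 / len(Y))])
    K = kernel(Z, Z)
    eigs = np.linalg.eigvals(w[:, None] * K)  # w[:, None] * K == diag(w) @ K
    return np.abs(eigs.real).sum()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(60, 2))
Y = rng.normal(0.5, 1.0, size=(80, 2))
d = kernel_trace_distance(X, Y)  # positive for distinct samples
```

With a linear kernel the same quantity reduces to the trace norm of the difference of the ordinary second-moment matrices, which provides a direct sanity check on the eigenvalue identity.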
arXiv Detail & Related papers (2025-07-08T14:56:44Z) - Kernel Quantile Embeddings and Associated Probability Metrics [12.484632369259659]
We introduce the notion of kernel quantile embeddings (KQEs). We use KQEs to construct a family of distances that: (i) are probability metrics under weaker kernel conditions than MMD; (ii) recover a kernelised form of the sliced Wasserstein distance; and (iii) can be efficiently estimated with near-linear cost.
arXiv Detail & Related papers (2025-05-26T18:27:17Z) - Conditional Distribution Quantization in Machine Learning [83.54039134248231]
The conditional expectation $\mathbb{E}(Y \mid X)$ often fails to capture the complexity of multimodal conditional distributions $\mathcal{L}(Y \mid X)$. We propose using $n$-point conditional quantizations, functional mappings of $X$ that are learnable via gradient descent, to approximate $\mathcal{L}(Y \mid X)$.
arXiv Detail & Related papers (2025-02-11T00:28:24Z) - Optimal Scaling for Locally Balanced Proposals in Discrete Spaces [65.14092237705476]
We show that the efficiency of Metropolis-Hastings (M-H) algorithms in discrete spaces can be characterized by an acceptance rate that is independent of the target distribution.
Knowledge of the optimal acceptance rate allows one to automatically tune the neighborhood size of a proposal distribution in a discrete space, directly analogous to step-size control in continuous spaces.
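The tuning idea can be illustrated with a minimal random-walk Metropolis-Hastings sampler on the integers, where the proposal's neighborhood radius plays the role of the step size (a toy target chosen for illustration, not taken from the paper):

```python
import numpy as np

def mh_discrete(log_p, x0, radius, n_steps, rng):
    # Random-walk Metropolis-Hastings on the integers with a symmetric
    # uniform proposal over {-radius, ..., -1, 1, ..., radius}.
    x, accepted, samples = x0, 0, []
    for _ in range(n_steps):
        y = x + int(rng.integers(1, radius + 1)) * int(rng.choice([-1, 1]))
        if rng.random() < np.exp(min(0.0, log_p(y) - log_p(x))):
            x, accepted = y, accepted + 1
        samples.append(x)
    return np.array(samples), accepted / n_steps

# Toy symmetric target pi(x) proportional to 0.8^|x| on the integers.
log_p = lambda x: abs(x) * np.log(0.8)
rng = np.random.default_rng(0)
samples, acc_rate = mh_discrete(log_p, 0, radius=3, n_steps=20000, rng=rng)
```

The measured `acc_rate` is the quantity one would drive toward the optimal value by adjusting `radius`, directly analogous to step-size control in continuous spaces.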
arXiv Detail & Related papers (2022-09-16T22:09:53Z) - A Note on Optimizing Distributions using Kernel Mean Embeddings [94.96262888797257]
Kernel mean embeddings represent probability measures by their infinite-dimensional mean embeddings in a reproducing kernel Hilbert space.
We show that when the kernel is characteristic, distributions with a kernel sum-of-squares density are dense.
We provide algorithms to optimize such distributions in the finite-sample setting.
arXiv Detail & Related papers (2021-06-18T08:33:45Z) - Learning with Density Matrices and Random Features [44.98964870180375]
A density matrix describes the statistical state of a quantum system.
It is a powerful formalism to represent both the quantum and classical uncertainty of quantum systems.
This paper explores how density matrices can be used as a building block for machine learning models.
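One simple way to build such a model, sketched here under the assumption of random Fourier features for a Gaussian kernel, is to average rank-one projectors onto normalized feature embeddings of the samples; the result is a symmetric, positive semidefinite matrix with unit trace, i.e. a valid density matrix:

```python
import numpy as np

def random_fourier_features(X, n_features, sigma, rng):
    # Random Fourier features approximating a Gaussian kernel of width sigma.
    W = rng.normal(0.0, 1.0 / sigma, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def density_matrix(X, n_features=64, sigma=1.0, seed=0):
    # Average of rank-one projectors onto the normalized feature
    # embeddings: symmetric, positive semidefinite, unit trace.
    rng = np.random.default_rng(seed)
    Phi = random_fourier_features(X, n_features, sigma, rng)
    Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)
    return (Phi[:, :, None] * Phi[:, None, :]).mean(axis=0)

rho = density_matrix(np.random.default_rng(1).normal(size=(100, 3)))
```

The eigenvalues of `rho` form a probability distribution over feature-space directions, which is what makes the density-matrix formalism a natural carrier of classical uncertainty.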
arXiv Detail & Related papers (2021-02-08T17:54:59Z) - Random quantum circuits anti-concentrate in log depth [118.18170052022323]
We study the number of gates needed for the distribution over measurement outcomes for typical circuit instances to be anti-concentrated.
Our definition of anti-concentration is that the expected collision probability is only a constant factor larger than if the distribution were uniform.
In both the case where the gates are nearest-neighbor on a 1D ring and the case where gates are long-range, we show that $O(n \log n)$ gates are also sufficient.
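The anti-concentration criterion is easy to state computationally: compare the collision probability $\sum_x p(x)^2$ of a distribution to the uniform baseline $1/D$. The sketch below uses Porter-Thomas-like weights (squared magnitudes of a complex Gaussian vector) as a stand-in for a deep random circuit's output distribution; this is an illustration of the quantity, not a circuit simulation:

```python
import numpy as np

def collision_probability(p):
    # Probability that two independent draws from p coincide: sum_x p(x)^2.
    return float((np.asarray(p) ** 2).sum())

D = 2 ** 10  # number of measurement outcomes
uniform = np.full(D, 1.0 / D)

# Porter-Thomas-like outcome weights as a proxy for a deep random circuit.
rng = np.random.default_rng(0)
amps = rng.normal(size=D) + 1j * rng.normal(size=D)
p = np.abs(amps) ** 2
p /= p.sum()

# Anti-concentration in the paper's sense: this ratio stays bounded by a
# constant (approximately 2 for Porter-Thomas weights at large D).
ratio = collision_probability(p) / collision_probability(uniform)
```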
arXiv Detail & Related papers (2020-11-24T18:44:57Z) - Generalized Sliced Distances for Probability Distributions [47.543990188697734]
We introduce a broad family of probability metrics, coined Generalized Sliced Probability Metrics (GSPMs).
GSPMs are rooted in the generalized Radon transform and come with a unique geometric interpretation.
We consider GSPM-based gradient flows for generative modeling applications and show that under mild assumptions, the gradient flow converges to the global optimum.
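The linear-projection special case of a GSPM, the sliced Wasserstein distance, can be estimated by Monte Carlo over random directions (a generic sketch of the slicing idea, not the paper's generalized Radon transforms):

```python
import numpy as np

def wasserstein2_1d(x, y):
    # 1D 2-Wasserstein distance between equal-size empirical samples:
    # the optimal coupling matches points in sorted order.
    return np.sqrt(((np.sort(x) - np.sort(y)) ** 2).mean())

def sliced_wasserstein(X, Y, n_projections=100, seed=0):
    # Monte-Carlo sliced Wasserstein: average squared 1D distances over
    # uniformly random projection directions, then take the square root.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)
        total += wasserstein2_1d(X @ theta, Y @ theta) ** 2
    return np.sqrt(total / n_projections)

X = np.random.default_rng(2).normal(size=(50, 3))
d_same = sliced_wasserstein(X, X)         # exactly zero
d_shift = sliced_wasserstein(X, X + 1.0)  # positive under a unit shift
```

Each 1D projection costs only a sort, which is why sliced metrics scale to sample sizes where full Wasserstein distances become impractical.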
arXiv Detail & Related papers (2020-02-28T04:18:00Z) - Schoenberg-Rao distances: Entropy-based and geometry-aware statistical Hilbert distances [12.729120803225065]
We study a class of statistical Hilbert distances that we term the Schoenberg-Rao distances.
We derive novel closed-form distances between mixtures of Gaussian distributions.
Our method constitutes a practical alternative to Wasserstein distances and we illustrate its efficiency on a broad range of machine learning tasks.
arXiv Detail & Related papers (2020-02-19T18:48:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.