Locally Purified Maximally Mixed States At Scale: Entanglement Pruning and Symmetries
- URL: http://arxiv.org/abs/2509.16439v1
- Date: Fri, 19 Sep 2025 21:38:29 GMT
- Title: Locally Purified Maximally Mixed States At Scale: Entanglement Pruning and Symmetries
- Authors: Amit Jamadagni, Eugene Dumitrescu
- Abstract summary: Locally Purified Density Operators (LPDOs) represent mixed quantum states at scale. Given their non-uniqueness, their representational complexity is generally sub-optimal in practical computations. We show how, by minimizing the resources required to represent key states of practical interest, the efficiency of tensor network algorithms can be substantially increased.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Locally Purified Density Operators (LPDOs) are state-of-the-art tensor network ansätze that efficiently represent mixed quantum states at scale. However, given their non-uniqueness, their representational complexity is generally sub-optimal in practical computations. In this work we perform a comprehensive numerical and analytical analysis and resolve this issue in the experimentally relevant limit where noise depolarizes the density operator into a maximally mixed state. To resolve the sub-optimality issue, we analyze two numerical tools and one analytic method, and detail the relations between them. The numerical tools are fidelity-preserving truncations and isometric gauge transformations leveraging Riemannian optimization over entropic objective functions. In addition, by invoking the injectivity and symmetry constraints of the maximally mixed LPDO, we also present analytical closed-form expressions for the disentangler and discuss their relation to the numerical optimizers. Our work shows how, by minimizing the resources required to represent key states of practical interest in experiment, the efficiency of tensor network algorithms can be substantially increased. This paves the path for uncovering tensor networks' fundamental scalability limits and latent potential in representing the wide locus of mixed quantum states accessible on near-term quantum devices.
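As a concrete illustration of the limit the abstract targets, the maximally mixed state admits an LPDO with trivial (dimension-1) virtual bonds: each site is purified locally through its Kraus leg alone, which is exactly the minimal-resource representation the paper argues for. The sketch below (qubit count and variable names are illustrative, not taken from the paper) contracts such a minimal LPDO and checks that it reproduces I/2^n.

```python
import numpy as np
from functools import reduce

n = 3  # number of qubits; illustrative value

# LPDO site tensor for one maximally mixed qubit: rows index the physical
# leg, columns index the local Kraus (purification) leg. The virtual bonds
# between sites have dimension 1, so they are omitted entirely.
site = np.eye(2) / np.sqrt(2)

# With trivial virtual bonds, contracting the chain reduces to a Kronecker
# product of the site tensors, giving the global purification operator M.
M = reduce(np.kron, [site] * n)

# rho = M M^dagger is positive semidefinite by construction (the LPDO property).
rho = M @ M.conj().T

# Check: rho is the maximally mixed state I / 2^n with purity Tr(rho^2) = 2^-n.
assert np.allclose(rho, np.eye(2**n) / 2**n)
purity = float(np.trace(rho @ rho).real)
print(purity)  # 2**-n = 0.125 for n = 3
```

Because the virtual bond dimension is 1 and the Kraus dimension is 2 per site, this representation uses the least structure an LPDO can have, which is the sense in which the maximally mixed state is a natural benchmark for entanglement pruning.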
Related papers
- Equivariant Evidential Deep Learning for Interatomic Potentials [55.6997213490859]
Uncertainty quantification (UQ) is critical for assessing the reliability of machine learning interatomic potentials (MLIPs) in molecular dynamics simulations. Existing UQ approaches for MLIPs are often limited by high computational cost or suboptimal performance. We propose Equivariant Evidential Deep Learning for Interatomic Potentials ($e^2$IP), a backbone-agnostic framework that models atomic forces and their uncertainty jointly.
arXiv Detail & Related papers (2026-02-11T02:00:25Z) - 1d-qt-ideal-solver: 1D Idealized Quantum Tunneling Solver with Absorbing Boundaries [0.0]
1d-qt-ideal-solver is an open-source Python library for simulating quantum tunneling dynamics. Numba just-in-time compilation achieves performance comparable to compiled languages.
arXiv Detail & Related papers (2025-12-27T16:13:44Z) - Schrodinger Neural Network and Uncertainty Quantification: Quantum Machine [0.0]
We introduce the Schrodinger Neural Network (SNN), a principled architecture for conditional density estimation and uncertainty.<n>The SNN maps each input to a normalized wave function on the output domain and computes predictive probabilities via the Born rule.
arXiv Detail & Related papers (2025-10-27T15:52:47Z) - Learning Overspecified Gaussian Mixtures Exponentially Fast with the EM Algorithm [5.625796693054093]
We investigate the convergence properties of the EM algorithm when applied to overspecified Gaussian mixture models. We demonstrate that the population EM algorithm converges exponentially fast in terms of the Kullback-Leibler (KL) distance.
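The overspecified setting in this entry can be made concrete with a minimal finite-sample EM loop: fitting a two-component Gaussian mixture to data drawn from a single Gaussian, so the extra component is redundant. Sample size, initialization, and iteration count below are illustrative choices, not the paper's population-EM analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=2000)  # data from a single N(0,1): K=2 is overspecified

K = 2
mu = np.array([-1.0, 1.0])           # symmetric initial means (illustrative)
sigma2 = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

def normal_pdf(x, m, v):
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

for _ in range(200):
    # E-step: posterior responsibility of each component for each point.
    dens = pi * normal_pdf(x[:, None], mu, sigma2)       # shape (N, K)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted updates of weights, means, variances.
    Nk = resp.sum(axis=0)
    pi = Nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / Nk
    sigma2 = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

# The redundant components drift toward the single true mean 0, so the
# fitted mixture density approaches the data distribution.
print(mu, pi)
```

Note the contrast the paper highlights: the fitted *density* (KL distance) converges quickly even though the individual *parameters* of an overspecified mixture are not identifiable.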
arXiv Detail & Related papers (2025-06-13T14:57:57Z) - TensorGRaD: Tensor Gradient Robust Decomposition for Memory-Efficient Neural Operator Training [91.8932638236073]
We introduce TensorGRaD, a novel method that directly addresses the memory challenges associated with large structured weights. We show that sparseGRaD reduces total memory usage by over 50% while maintaining and sometimes even improving accuracy.
arXiv Detail & Related papers (2025-01-04T20:51:51Z) - Gauge-Fixing Quantum Density Operators At Scale [0.0]
We provide theory, algorithms, and simulations of non-equilibrium quantum systems. We analytically and numerically examine the virtual freedoms associated with the representation of quantum density operators.
arXiv Detail & Related papers (2024-11-05T22:56:13Z) - Optimising the relative entropy under semidefinite constraints [0.0]
Finding the minimal relative entropy of two quantum states under semidefinite constraints is a pivotal problem in quantum information theory. We build on a recently introduced integral representation of quantum relative entropy by [Frenkel, Quantum 7, 1102 (2023)] and provide reliable bounds as a sequence of semidefinite programs (SDPs). Our approach ensures provable sublinear convergence in the discretization, while also maintaining resource efficiency in terms of SDP matrix dimensions.
arXiv Detail & Related papers (2024-04-25T20:19:47Z) - Fast Shapley Value Estimation: A Unified Approach [71.92014859992263]
We propose a straightforward and efficient Shapley estimator, SimSHAP, by eliminating redundant techniques.
In our analysis of existing approaches, we observe that estimators can be unified as a linear transformation of randomly summed values from feature subsets.
Our experiments validate the effectiveness of our SimSHAP, which significantly accelerates the computation of accurate Shapley values.
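The "randomly summed values from feature subsets" view in this entry can be grounded in the definition such estimators approximate: Shapley values as marginal contributions averaged over feature orderings. The toy value function and weights below are invented for illustration and are unrelated to SimSHAP's benchmarks.

```python
import numpy as np
from itertools import permutations

# Toy cooperative game on 3 features: v(S) sums per-feature weights, plus a
# 0.5 interaction bonus when features 0 and 1 co-occur (illustrative choice).
w = np.array([1.0, 2.0, 3.0])

def v(S):
    bonus = 0.5 if (0 in S and 1 in S) else 0.0
    return w[list(S)].sum() + bonus

n = 3
phi = np.zeros(n)
perms = list(permutations(range(n)))  # exact enumeration, feasible for small n
for p in perms:
    S = set()
    for i in p:
        phi[i] += v(S | {i}) - v(S)   # marginal contribution of feature i
        S.add(i)
phi /= len(perms)

# Efficiency axiom: attributions sum to v(all features) - v(empty set).
assert np.isclose(phi.sum(), v(set(range(n))) - v(set()))
print(phi)  # [1.25, 2.25, 3.0]: features 0 and 1 split the 0.5 bonus equally
```

Sampling-based estimators (the family this entry unifies) replace the exact enumeration with a random subset of orderings or feature subsets, trading exactness for tractability as n grows.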
arXiv Detail & Related papers (2023-11-02T06:09:24Z) - Posterior Contraction Rates for Matérn Gaussian Processes on Riemannian Manifolds [51.68005047958965]
We show that intrinsic Gaussian processes can achieve better performance in practice.
Our work shows that finer-grained analyses are needed to distinguish between different levels of data-efficiency.
arXiv Detail & Related papers (2023-09-19T20:30:58Z) - D4FT: A Deep Learning Approach to Kohn-Sham Density Functional Theory [79.50644650795012]
We propose a deep learning approach to solve Kohn-Sham Density Functional Theory (KS-DFT).
We prove that such an approach has the same expressivity as the SCF method, yet reduces the computational complexity.
In addition, we show that our approach enables us to explore more complex neural-based wave functions.
arXiv Detail & Related papers (2023-03-01T10:38:10Z) - Operator relaxation and the optimal depth of classical shadows [0.0]
We study the sample complexity of learning the expectation value of Pauli operators via "shallow shadows".
We show that the shadow norm is expressed in terms of properties of the Heisenberg time evolution of operators under the randomizing circuit.
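The Pauli-learning task in this entry can be illustrated at the depth-zero extreme, i.e., ordinary single-qubit Pauli-basis classical shadows; the state, shot count, and the inversion factor of 3 below are the standard textbook protocol, not the paper's shallow-circuit analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Classical shadows for one qubit in state |0>, estimating <Z> = 1.
# Protocol: measure in a uniformly random Pauli basis; the single-shot
# estimator for <Z> inverts the measurement channel: 3 * outcome when the
# basis was Z, and 0 otherwise. Its mean over random bases is exactly <Z>.
shots = 20000  # illustrative shot count
snapshots = np.zeros(shots)
for t in range(shots):
    basis = rng.choice(["X", "Y", "Z"])
    if basis == "Z":
        snapshots[t] = 3 * 1                 # |0> is a Z eigenstate: always +1
    else:
        _ = rng.choice([1, -1])              # X/Y outcomes: uniformly random
        snapshots[t] = 0                     # Tr(Z * snapshot) vanishes here

estimate = snapshots.mean()
print(estimate)  # concentrates near <Z> = 1
```

The per-shot variance here (equal to 2) is a special case of the shadow norm; the entry's point is that this norm, and hence the sample complexity, is governed by how the randomizing circuit evolves the measured operator in the Heisenberg picture.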
arXiv Detail & Related papers (2022-12-22T18:46:46Z) - Sampling with Mollified Interaction Energy Descent [57.00583139477843]
We present a new optimization-based method for sampling called mollified interaction energy descent (MIED).
MIED minimizes a new class of energies on probability measures called mollified interaction energies (MIEs).
We show experimentally that for unconstrained sampling problems our algorithm performs on par with existing particle-based algorithms like SVGD.
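A generic particle-based energy descent conveys the flavor of this approach: particles descend a confining potential while a smoothed pairwise repulsion (a crude stand-in for a mollified interaction kernel) keeps them spread out. The kernel, step size, and mollification width below are illustrative and are not the paper's MIEs.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D toy: particles minimize a quadratic confinement sum_i x_i^2 / 2 plus a
# pairwise repulsion whose gradient uses the mollified kernel d / (d^2 + eps^2),
# so coincident particles produce no singular force (the mollification idea).
N, eps, lr, steps = 50, 0.1, 0.05, 500
x = rng.normal(3.0, 0.1, size=N)  # deliberately poor initialization

for _ in range(steps):
    d = x[:, None] - x[None, :]                       # pairwise differences
    repulsion = (d / (d**2 + eps**2)).sum(axis=1) / N  # smoothed repulsive force
    x -= lr * (x - repulsion)                          # confinement pulls to 0

print(x.mean(), x.std())  # particles re-center near 0 and spread out
```

The particle cloud relaxes toward an equilibrium where confinement and repulsion balance; MIED's contribution is choosing interaction energies whose minimizers provably approximate a target distribution, which this sketch does not attempt.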
arXiv Detail & Related papers (2022-10-24T16:54:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.