Generalized and Unified Equivalences between Hardness and Pseudoentropy
- URL: http://arxiv.org/abs/2507.05972v2
- Date: Thu, 04 Sep 2025 15:11:20 GMT
- Title: Generalized and Unified Equivalences between Hardness and Pseudoentropy
- Authors: Lunjia Hu, Salil Vadhan
- Abstract summary: We prove a unified pseudoentropy characterization that generalizes and strengthens previous results for both uniform and non-uniform models of computation. Our characterization holds for a general family of entropy notions that encompasses the common notions of Shannon entropy and min entropy as special cases.
- Score: 6.683376336762599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pseudoentropy characterizations provide a quantitatively precise demonstration of the close relationship between computational hardness and computational randomness. We prove a unified pseudoentropy characterization that generalizes and strengthens previous results for both uniform and non-uniform models of computation. Our characterization holds for a general family of entropy notions that encompasses the common notions of Shannon entropy and min entropy as special cases. Moreover, we show that the characterizations for different entropy notions can be simultaneously achieved by a single, universal function that simultaneously witnesses computational hardness and computational randomness. A key technical insight of our work is that the notion of weight-restricted calibration from the recent literature on algorithm fairness, along with standard computational indistinguishability (known as multiaccuracy in the fairness literature), suffices for proving pseudoentropy characterizations for general entropy notions. This demonstrates the power of weight-restricted calibration to enhance the classic Complexity-Theoretic Regularity Lemma (Trevisan, Tulsiani, and Vadhan, 2009) and Leakage Simulation Lemma (Jetchev and Pietrzak, 2014) and allows us to achieve an exponential improvement in the complexity dependency on the alphabet size compared to the pseudoentropy characterizations by Casacuberta, Dwork, and Vadhan (2024) based on the much stronger notion of multicalibration. We show that the exponential dependency on the alphabet size is inevitable for multicalibration as well as for the weaker notion of calibrated multiaccuracy.
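The two entropy notions that the characterization covers as special cases can be made concrete. Below is a minimal Python sketch (an illustration, not code from the paper) computing Shannon entropy and min entropy of a distribution over a finite alphabet; the example distribution `p` is an arbitrary choice.

```python
import math

def shannon_entropy(p):
    """Shannon entropy H(X) = -sum_x p(x) log2 p(x), in bits."""
    return -sum(px * math.log2(px) for px in p if px > 0)

def min_entropy(p):
    """Min entropy H_min(X) = -log2 max_x p(x), in bits."""
    return -math.log2(max(p))

# A biased distribution over a 4-letter alphabet.
p = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(p))  # 1.75
print(min_entropy(p))      # 1.0
```

For a uniform distribution the two notions coincide; the more biased the distribution, the larger the gap between them, which is one reason a single characterization covering a whole family of entropy notions is nontrivial.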
Related papers
- Efficient Linear Attention for Multivariate Time Series Modeling via Entropy Equality [30.606567864965243]
We propose a novel linear attention mechanism designed to overcome the quadratic-complexity limitations of standard attention. We develop an efficient algorithm that computes the entropy of dot-product-derived distributions with only linear complexity.
arXiv Detail & Related papers (2025-11-05T05:07:55Z)
- Uniform Additivity of Tripartite Optimized Correlation Measures [6.624754673303328]
We search for optimized linear entropic functions of quantum states whose regularized versions are tractable to compute. We rely on strong subadditivity of the von Neumann entropy to prove that three previously established tripartite optimized correlation measures are additive.
arXiv Detail & Related papers (2024-12-24T18:28:29Z)
- Efficient Pseudomode Representation and Complexity of Quantum Impurity Models [0.7373617024876725]
Out-of-equilibrium fermionic quantum impurity models (QIM) describe a small interacting system coupled to a continuous fermionic bath.
We frame the search for efficient bath representations as the problem of approximating a kernel of the bath's Feynman-Vernon influence functional by a sum of complex exponentials.
To relate our findings to QIM, we derive an explicit Liouvillian that describes the time evolution of the combined impurity-pseudomodes system.
arXiv Detail & Related papers (2024-09-13T13:31:53Z)
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
We present a unifying perspective on recent results on ridge regression, using the basic tools of random matrix theory and free probability, aimed at readers with backgrounds in physics and deep learning. Our results extend earlier models of scaling laws.
arXiv Detail & Related papers (2024-05-01T15:59:00Z)
- Conditional Independence of 1D Gibbs States with Applications to Efficient Learning [0.0]
We show that spin chains in thermal equilibrium have a correlation structure in which individual regions are strongly correlated at most with their near vicinity. We prove that these measures decay superexponentially at every positive temperature.
arXiv Detail & Related papers (2024-02-28T17:28:01Z)
- Complexity in two-point measurement schemes [0.0]
We show that the probability distribution associated with the change of an observable in a two-point measurement protocol with a perturbation can be written as an auto-correlation function.
We probe how the evolved state spreads in the corresponding conjugate space.
We show that the complexity saturates for large values of the parameter only when the pre-quench Hamiltonian is chaotic.
arXiv Detail & Related papers (2023-11-14T04:00:31Z)
- Entanglement and entropy in multipartite systems: a useful approach [0.0]
We show how the notion of concurrence vector, re-expressed in a particularly useful form, provides new insights and computational tools.
The approach is also useful to derive sufficient conditions for genuine entanglement in generic multipartite systems.
arXiv Detail & Related papers (2023-07-11T12:20:30Z)
- Statistical Properties of the Entropy from Ordinal Patterns [55.551675080361335]
Knowing the joint distribution of the pair Entropy-Statistical Complexity for a large class of time series models would allow statistical tests that are unavailable to date.
We characterize the distribution of the empirical Shannon's Entropy for any model under which the true normalized Entropy is neither zero nor one.
We present a bilateral test that verifies if there is enough evidence to reject the hypothesis that two signals produce ordinal patterns with the same Shannon's Entropy.
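The quantity being tested above is the empirical Shannon entropy of ordinal patterns, often called permutation entropy. Here is a minimal Python sketch (an illustration under standard definitions, not the paper's test procedure); the series and the `order` parameter are arbitrary examples.

```python
import math
from collections import Counter

def ordinal_pattern_entropy(x, order=3):
    """Empirical Shannon entropy (bits) of the ordinal patterns of a series.

    Each window of `order` consecutive values is mapped to the permutation
    that sorts it; the entropy of the resulting pattern frequencies is returned.
    """
    patterns = [tuple(sorted(range(order), key=lambda i: x[t + i]))
                for t in range(len(x) - order + 1)]
    counts = Counter(patterns)
    n = len(patterns)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A monotone series produces a single ordinal pattern, hence zero entropy;
# an oscillating series produces a mix of patterns, hence positive entropy.
print(ordinal_pattern_entropy([1, 2, 3, 4, 5]))
print(ordinal_pattern_entropy([1, 3, 2, 4, 3, 5], order=2))
```

A test of the kind summarized above would compare two such empirical entropies against the sampling distribution characterized in the paper.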
arXiv Detail & Related papers (2022-09-15T23:55:58Z)
- Improved Quantum Algorithms for Fidelity Estimation [77.34726150561087]
We develop new and efficient quantum algorithms for fidelity estimation with provable performance guarantees.
Our algorithms use advanced quantum linear algebra techniques, such as the quantum singular value transformation.
We prove that fidelity estimation to any non-trivial constant additive accuracy is hard in general.
arXiv Detail & Related papers (2022-03-30T02:02:16Z)
- The variance of relative surprisal as single-shot quantifier [0.0]
We show that the variance of (relative) surprisal gives sufficient conditions for approximate state-transitions between pairs of quantum states in the single-shot setting.
We further derive a simple and physically appealing axiomatic single-shot characterization of (relative) entropy.
arXiv Detail & Related papers (2020-09-17T16:06:54Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the role of noise in its success is still unclear. We show that heavy-tailed behavior commonly arises in the parameters as a consequence of multiplicative noise and its variance. A detailed analysis describes the influence of key factors, including step size and data, and state-of-the-art neural network models are shown to exhibit similar behavior.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
- Profile Entropy: A Fundamental Measure for the Learnability and Compressibility of Discrete Distributions [63.60499266361255]
We show that for samples of discrete distributions, profile entropy is a fundamental measure unifying the concepts of estimation, inference, and compression.
Specifically, profile entropy a) determines the speed of estimating the distribution relative to the best natural estimator; b) characterizes the rate of inferring all symmetric properties compared with the best estimator over any label-invariant distribution collection; c) serves as the limit of profile compression.
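For concreteness, the "profile" of a sample is the multiset of symbol multiplicities, forgetting the symbol labels. A minimal Python sketch of the profile itself (an illustration of the definition, not the paper's estimators):

```python
from collections import Counter

def profile(sample):
    """Profile of a sample: maps each multiplicity m to the number of
    distinct symbols that appear exactly m times in the sample."""
    multiplicities = Counter(sample)            # symbol -> multiplicity
    return dict(Counter(multiplicities.values()))  # multiplicity -> count

# "abracadabra": 'a' appears 5 times, 'b' and 'r' twice, 'c' and 'd' once.
print(profile("abracadabra"))  # {5: 1, 2: 2, 1: 2}
```

The profile discards symbol identities, which is exactly why it is the right object for symmetric (label-invariant) properties such as entropy and support size.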
arXiv Detail & Related papers (2020-02-26T17:49:04Z)
- Fast approximations in the homogeneous Ising model for use in scene analysis [61.0951285821105]
We provide accurate approximations that make it possible to numerically calculate quantities needed in inference.
We show that our approximation formulae are scalable and unfazed by the size of the Markov Random Field.
The practical import of our approximation formulae is illustrated in performing Bayesian inference in a functional Magnetic Resonance Imaging activation detection experiment, and also in likelihood ratio testing for anisotropy in the spatial patterns of yearly increases in pistachio tree yields.
arXiv Detail & Related papers (2017-12-06T14:24:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.