Coupled Entropy: A Goldilocks Generalization for Nonextensive Statistical Mechanics
- URL: http://arxiv.org/abs/2506.17229v2
- Date: Sun, 13 Jul 2025 22:29:23 GMT
- Title: Coupled Entropy: A Goldilocks Generalization for Nonextensive Statistical Mechanics
- Authors: Kenric P. Nelson
- Abstract summary: Evidence is presented that the accuracy of the Nonextensive Statistical Mechanics framework is improved using the coupled entropy. The training of the coupled variational autoencoder is an example of the unique ability of the coupled entropy to improve the performance of complex systems.
- Score: 2.1756081703276
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Evidence is presented that the accuracy of the Nonextensive Statistical Mechanics (NSM) framework is improved using the coupled entropy, which carefully establishes the physical measures of complex systems. While NSM has developed into a powerful toolset, questions have persisted as to how to evaluate whether its proposed solutions properly characterize the uncertainty of heavy-tailed distributions. The entropy of the generalized Pareto distribution (GPD) is $1+\kappa+\ln\sigma$, where $\kappa$ is the shape or nonlinear coupling and $\sigma$ is the scale. A generalized entropy should retain the uncertainty due to the scale while minimizing the dependence on the nonlinear coupling. The Tsallis entropy of the GPD instead subtracts a function of the inverse scale and converges to one as $\kappa\rightarrow\infty$. Colloquially, the Tsallis entropy is too cold. The normalized Tsallis entropy (NTE) rectifies the positive dependence on the scale but introduces a nonlinear term multiplying the scale and the coupling, making it too hot. The coupled entropy measures the uncertainty of the GPD to be $1+\ln_{\frac{\kappa}{1+\kappa}}\sigma=1+\frac{1+\kappa}{\kappa}\left(\sigma^{\frac{\kappa}{1+\kappa}}-1\right)$, which converges to $\sigma$ as $\kappa\rightarrow\infty$. One could say the coupled entropy allows scientists, engineers, and analysts to eat their porridge, confident that its measure of uncertainty reflects the mathematical physics of the scale of non-exponential distributions while minimizing the dependence on the shape or nonlinear coupling. The training of the coupled variational autoencoder is an example of the unique ability of the coupled entropy to improve the performance of complex systems.
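As a quick numerical check of the formulas quoted in the abstract, the sketch below (illustrative Python, not code from the paper; the function names `coupled_log`, `gpd_shannon_entropy`, and `gpd_coupled_entropy` are ours) evaluates the GPD's Shannon entropy $1+\kappa+\ln\sigma$ and the coupled entropy $1+\ln_{\kappa/(1+\kappa)}\sigma$, confirming that the latter recovers the Shannon value as $\kappa\rightarrow 0$ and converges to the scale $\sigma$ as $\kappa\rightarrow\infty$.

```python
import math

def coupled_log(x: float, alpha: float) -> float:
    """Coupled (generalized) logarithm ln_alpha(x) = (x**alpha - 1) / alpha;
    recovers the natural log as alpha -> 0."""
    if alpha == 0.0:
        return math.log(x)
    return (x ** alpha - 1.0) / alpha

def gpd_shannon_entropy(kappa: float, sigma: float) -> float:
    # Shannon entropy of the generalized Pareto distribution, per the abstract.
    return 1.0 + kappa + math.log(sigma)

def gpd_coupled_entropy(kappa: float, sigma: float) -> float:
    # Coupled entropy of the GPD: 1 + ln_{kappa/(1+kappa)}(sigma).
    return 1.0 + coupled_log(sigma, kappa / (1.0 + kappa))

sigma = 2.5
for kappa in (0.0, 0.5, 1.0, 10.0, 1e6):
    print(f"kappa={kappa:>9.1f}  coupled={gpd_coupled_entropy(kappa, sigma):.4f}  "
          f"shannon={gpd_shannon_entropy(kappa, sigma):.4f}")
# kappa -> 0: the coupled entropy equals the Shannon value 1 + ln(sigma) ~ 1.9163.
# kappa -> infinity: the coupled entropy converges to sigma = 2.5, retaining the
# scale while shedding the coupling; the Shannon entropy grows without bound.
```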
Related papers
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) frameworks, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z)
- Symmetry shapes thermodynamics of macroscopic quantum systems [0.0]
We show that the entropy of a system can be described in terms of group-theoretical quantities.
We apply our technique to generic $N$ identical interacting $d$-level quantum systems.
arXiv Detail & Related papers (2024-02-06T18:13:18Z)
- Microscopic Legendre Transform, Canonical Ensemble and Jaynes' Maximum Entropy Principle [0.0]
The Legendre transform between thermodynamic quantities such as the Helmholtz free energy and entropy plays a key role in the formulation of the canonical ensemble. In this article, we formulate a microscopic version of the transform between the free energy and Shannon entropy of the system. We focus on the exact differential property of Shannon entropy, utilizing it to derive central relations within the canonical ensemble.
arXiv Detail & Related papers (2023-12-21T11:41:01Z)
- Eigenstate entanglement entropy in the integrable spin-$\frac{1}{2}$ XYZ model [0.0]
We study the average and the standard deviation of the entanglement entropy of highly excited eigenstates of the integrable interacting spin-$\frac{1}{2}$ XYZ chain.
We find that the average eigenstate entanglement entropy exhibits a volume-law coefficient that is smaller than the universal one of quantum-chaotic interacting models.
arXiv Detail & Related papers (2023-11-17T19:00:09Z)
- Lower Complexity Adaptation for Empirical Entropic Optimal Transport [0.0]
Entropic optimal transport (EOT) presents an effective and computationally viable alternative to unregularized optimal transport (OT).
We derive novel statistical bounds for empirical plug-in estimators of the EOT cost.
Our techniques employ empirical process theory and rely on a dual formulation of EOT over a single function class.
arXiv Detail & Related papers (2023-06-23T16:06:13Z)
- On Entropy Growth in Perturbative Scattering [0.0]
We study the change in subsystem entropy generated by dynamical unitary evolution of a product state in a bipartite system.
Remarkably, for the case of particle scattering, the circuit diagrams corresponding to $n$-Tsallis entropy are the same as the on-shell diagrams.
arXiv Detail & Related papers (2023-04-25T18:00:01Z)
- Average entanglement entropy of midspectrum eigenstates of quantum-chaotic interacting Hamiltonians [0.0]
We show that the magnitude of the negative $O(1)$ correction is only slightly greater than the one predicted for random pure states.
We derive a simple expression that describes the numerically observed $\nu$ dependence of the $O(1)$ deviation from the prediction for random pure states.
arXiv Detail & Related papers (2023-03-23T18:00:02Z)
- Self-healing of Trotter error in digital adiabatic state preparation [52.77024349608834]
We prove that the first-order Trotterization of a complete adiabatic evolution has a cumulative infidelity that scales as $\mathcal{O}(T^{-2}\delta t^2)$ instead of the $\mathcal{O}(T^2\delta t^2)$ expected from general Trotter error bounds.
This result suggests a self-healing mechanism and explains why, despite increasing $T$, infidelities for fixed-$\delta t$ digitized evolutions still decrease for a wide variety of Hamiltonians.
arXiv Detail & Related papers (2022-09-13T18:05:07Z)
- Maximum Entropy Reinforcement Learning with Mixture Policies [54.291331971813364]
We construct a tractable approximation of the mixture entropy using MaxEnt algorithms.
We show that it is closely related to the sum of marginal entropies.
We derive an algorithmic variant of Soft Actor-Critic (SAC) to the mixture policy case and evaluate it on a series of continuous control tasks.
arXiv Detail & Related papers (2021-03-18T11:23:39Z)
- Shannon entropy in confined He-like ions within a density functional formalism [0.0]
The Shannon entropies in position ($S_{\vec{r}}$) and momentum ($S_{\vec{p}}$) space, along with their sum ($S_t$), are presented.
The radial Kohn-Sham equation is solved using an optimal spatial discretization scheme via the generalized pseudospectral (GPS) method.
arXiv Detail & Related papers (2021-02-26T16:29:07Z)
- Debiased Sinkhorn barycenters [110.79706180350507]
Entropy regularization in optimal transport (OT) has driven much recent interest in Wasserstein metrics and barycenters in machine learning.
We show how this bias is tightly linked to the reference measure that defines the entropy regularizer.
We propose debiased Wasserstein barycenters that preserve the best of both worlds: fast Sinkhorn-like iterations without entropy smoothing.
arXiv Detail & Related papers (2020-06-03T23:06:02Z)
- Generalized Entropy Regularization or: There's Nothing Special about Label Smoothing [83.78668073898001]
We introduce a family of entropy regularizers, which includes label smoothing as a special case.
We find that variance in model performance can be explained largely by the resulting entropy of the model.
We advise the use of other entropy regularization methods in place of label smoothing.
arXiv Detail & Related papers (2020-05-02T12:46:28Z)
- Anisotropy-mediated reentrant localization [62.997667081978825]
We consider a 2d dipolar system, $d=2$, with the generalized dipole-dipole interaction $\sim r^{-a}$, and the power $a$ controlled experimentally in trapped-ion or Rydberg-atom systems.
We show that the spatially homogeneous tilt $\beta$ of the dipoles giving rise to the anisotropic dipole exchange leads to the non-trivial reentrant localization beyond the locator expansion.
arXiv Detail & Related papers (2020-01-31T19:00:01Z)