Limit of the Maximum Random Permutation Set Entropy
- URL: http://arxiv.org/abs/2403.06206v1
- Date: Sun, 10 Mar 2024 13:04:09 GMT
- Title: Limit of the Maximum Random Permutation Set Entropy
- Authors: Jiefeng Zhou, Zhen Li, Kang Hao Cheong, Yong Deng
- Abstract summary: The entropy of Random Permutation Set (RPS) and its corresponding maximum entropy have been proposed.
A new concept, the envelope of entropy function, is defined.
Numerical examples validate the efficiency and conciseness of the proposed envelope.
- Score: 16.83953425640319
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Random Permutation Set (RPS) is a recently proposed type of set,
which can be regarded as a generalization of evidence theory. To measure the
uncertainty of RPS, the entropy of RPS and its corresponding maximum entropy
have been proposed. Exploring the maximum entropy provides a possible way of
understanding the physical meaning of RPS. In this paper, a new concept, the
envelope of entropy function, is defined. In addition, the limit of the
envelope of RPS entropy is derived and proved. Compared with the existing
method, the computational complexity of the proposed method to calculate the
envelope of RPS entropy decreases greatly. The result shows that when $N \to
\infty$, the limit form of the envelope of the entropy of RPS converges to $e
\times (N!)^2$, which is closely connected to the constant $e$ and the factorial.
Finally, numerical examples validate the efficiency and conciseness of the
proposed envelope, which provides a new insight into the maximum entropy
function.
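The pairing of the constant $e$ with factorials typically traces back to partial sums of the exponential series $\sum_{k} 1/k!$. As an illustrative numerical check only (this is not the paper's exact envelope formula, which is derived in the paper itself), the following sketch shows how an expression of the form $(N!)^2 \sum_{k=0}^{N} 1/k!$ rapidly approaches the claimed limit $e \times (N!)^2$:

```python
import math

def envelope_sketch(N):
    # Partial sum of the exponential series, the classic source of the constant e
    partial_e = sum(1 / math.factorial(k) for k in range(N + 1))
    # Candidate envelope-style quantity: (N!)^2 times the partial sum
    return (math.factorial(N) ** 2) * partial_e

def limit_value(N):
    # Limit form stated in the abstract: e * (N!)^2
    return math.e * (math.factorial(N) ** 2)

for N in (2, 5, 10, 15):
    ratio = envelope_sketch(N) / limit_value(N)
    print(f"N={N:2d}  ratio to e*(N!)^2 = {ratio:.12f}")
```

The ratio converges to 1 extremely quickly because the tail of $\sum 1/k!$ beyond $k = N$ decays factorially; this is the mechanism by which a closed-form limit like $e \times (N!)^2$ can replace a costly exact computation.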
Related papers
- The Entropy Mechanism of Reinforcement Learning for Reasoning Language Models [99.98293908799731]
This paper aims to overcome a major obstacle in scaling RL for reasoning with LLMs, namely the collapse of policy entropy.
In practice, we establish a transformation equation $R = -a e^{H} + b$ between the policy entropy $H$ and downstream performance $R$.
We propose two simple yet effective techniques, namely Clip-Cov and KL-Cov, which clip and apply a KL penalty to tokens with high covariances, respectively.
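The transformation equation above can be sketched directly; note that the coefficients `a` and `b` below are hypothetical placeholders chosen purely for illustration (the paper fits them empirically):

```python
import math

def downstream_performance(H, a, b):
    # Transformation equation R = -a * exp(H) + b relating
    # policy entropy H to downstream performance R.
    return -a * math.exp(H) + b

# Hypothetical coefficients; real values would be fit to observed (H, R) pairs.
a, b = 0.5, 2.0
for H in (0.2, 0.5, 1.0):
    print(f"H={H:.1f} -> predicted R={downstream_performance(H, a, b):.3f}")
```

The functional form makes the exponential trade-off explicit: small changes in entropy produce exponentially amplified changes in predicted performance.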
arXiv Detail & Related papers (2025-05-28T17:38:45Z) - On the Minimax Regret of Sequential Probability Assignment via Square-Root Entropy [70.10668953625247]
We show that the minimax regret for the case of no side information can be upper bounded in terms of sequential square-root entropy.
For the problem of sequential probability assignment with side information, we develop both upper and lower bounds based on the aforementioned entropy.
arXiv Detail & Related papers (2025-03-22T17:26:34Z) - Susceptibility of entanglement entropy: a universal indicator of quantum criticality [7.342677574855651]
A measure of how sensitive the entanglement entropy is in a quantum system has been proposed, and its information origin is discussed.
It has been demonstrated for two exactly solvable spin systems that thermodynamic criticality is directly indicated by finite-size scaling of the global maxima.
arXiv Detail & Related papers (2024-12-03T08:04:58Z) - Bounding the conditional von-Neumann entropy for device independent cryptography and randomness extraction [0.0]
This paper introduces a numerical framework for establishing lower bounds on the conditional von-Neumann entropy in device-independent quantum cryptography and randomness extraction scenarios.
The framework offers an adaptable tool for practical quantum cryptographic protocols, expanding secure communication in untrusted environments.
arXiv Detail & Related papers (2024-11-07T16:48:49Z) - Generalized quantum asymptotic equipartition [11.59751616011475]
AEP states that in the limit of a large number of independent and identically distributed (i.i.d.) random experiments, the output sequence is virtually certain to come from the typical set.
We prove a generalized quantum AEP beyond the i.i.d. framework, where the random samples are drawn from two sets of quantum states.
We propose a new framework for quantum resource theory in which state transformations are performed without requiring precise characterization of the states being manipulated.
arXiv Detail & Related papers (2024-11-06T16:33:16Z) - Generalized Rényi entropy accumulation theorem and generalized quantum probability estimation [0.0]
The entropy accumulation theorem is a powerful tool in the security analysis of many device-dependent and device-independent cryptography protocols.
It relies on the construction of an affine min-tradeoff function, which can often be challenging to construct optimally in practice.
We derive a new entropy accumulation bound, which yields significantly better finite-size performance.
arXiv Detail & Related papers (2024-05-09T17:11:00Z) - Fast Rates for Maximum Entropy Exploration [52.946307632704645]
We address the challenge of exploration in reinforcement learning (RL) when the agent operates in an unknown environment with sparse or no rewards.
We study two variants of the maximum entropy exploration problem.
For visitation entropy, we propose a game-theoretic algorithm that has $\widetilde{\mathcal{O}}(H^3 S^2 A/\varepsilon^2)$ sample complexity.
For the trajectory entropy, we propose a simple algorithm with a sample complexity of order $\widetilde{\mathcal{O}}(\mathrm{poly}(S,$
arXiv Detail & Related papers (2023-03-14T16:51:14Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Entropic Herding [3.039568795810294]
Herding is a deterministic algorithm used to generate data points that can be regarded as random samples.
We propose an extension of the herding algorithm, called entropic herding, which generates a sequence of distributions instead of points.
arXiv Detail & Related papers (2021-12-22T01:47:21Z) - Maximum Entropy of Random Permutation Set [9.327920030279586]
A new type of set, named random permutation set (RPS), is proposed by considering all the permutations of elements in a certain set.
To measure the uncertainty of RPS, the entropy of RPS is presented.
The maximum entropy principle of RPS entropy has not previously been discussed.
arXiv Detail & Related papers (2021-12-16T12:44:21Z) - Tight Exponential Analysis for Smoothing the Max-Relative Entropy and for Quantum Privacy Amplification [56.61325554836984]
The max-relative entropy together with its smoothed version is a basic tool in quantum information theory.
We derive the exact exponent for the decay of the small modification of the quantum state in smoothing the max-relative entropy based on purified distance.
arXiv Detail & Related papers (2021-11-01T16:35:41Z) - Maximum Entropy Reinforcement Learning with Mixture Policies [54.291331971813364]
We construct a tractable approximation of the mixture entropy using MaxEnt algorithms.
We show that it is closely related to the sum of marginal entropies.
We derive an algorithmic variant of Soft Actor-Critic (SAC) to the mixture policy case and evaluate it on a series of continuous control tasks.
arXiv Detail & Related papers (2021-03-18T11:23:39Z) - Action Redundancy in Reinforcement Learning [54.291331971813364]
We show that transition entropy can be described by two terms; namely, model-dependent transition entropy and action redundancy.
Our results suggest that action redundancy is a fundamental problem in reinforcement learning.
arXiv Detail & Related papers (2021-02-22T19:47:26Z) - Information bound for entropy production from the detailed fluctuation theorem [0.0]
We show that entropy produced by heat transfer using a bosonic mode at weak coupling reproduces the maximal distribution in a limiting case.
A composition of qubit swap engines satisfies a particular case of the maximal distribution regardless of its size.
arXiv Detail & Related papers (2020-10-09T22:42:35Z) - Profile Entropy: A Fundamental Measure for the Learnability and Compressibility of Discrete Distributions [63.60499266361255]
We show that for samples of discrete distributions, profile entropy is a fundamental measure unifying the concepts of estimation, inference, and compression.
Specifically, profile entropy a) determines the speed of estimating the distribution relative to the best natural estimator; b) characterizes the rate of inferring all symmetric properties compared with the best estimator over any label-invariant distribution collection; c) serves as the limit of profile compression.
arXiv Detail & Related papers (2020-02-26T17:49:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.