Shannon Entropy Rate of Hidden Markov Processes
- URL: http://arxiv.org/abs/2008.12886v1
- Date: Sat, 29 Aug 2020 00:48:17 GMT
- Title: Shannon Entropy Rate of Hidden Markov Processes
- Authors: Alexandra M. Jurgens and James P. Crutchfield
- Abstract summary: We show how to calculate entropy rates for hidden Markov chains.
We also show how this method gives the minimal set of infinite predictive features.
A sequel addresses the challenge's second part on structure.
- Score: 77.34726150561087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hidden Markov chains are widely applied statistical models of stochastic
processes, from fundamental physics and chemistry to finance, health, and
artificial intelligence. The hidden Markov processes they generate are
notoriously complicated, however, even if the chain is finite state: no finite
expression for their Shannon entropy rate exists, as the set of their
predictive features is generically infinite. As such, to date one cannot make
general statements about how random they are nor how structured. Here, we
address the first part of this challenge by showing how to efficiently and
accurately calculate their entropy rates. We also show how this method gives
the minimal set of infinite predictive features. A sequel addresses the
challenge's second part on structure.
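As a rough illustration of the underlying idea (tracking the distribution over hidden states, the "mixed state" or belief, while scoring each emitted symbol), here is a minimal Monte Carlo sketch in Python. It is not the authors' exact algorithm, and the labeled transition matrices below are illustrative:

```python
import numpy as np

# Labeled transition matrices T[x][i, j] = Pr(go to state j and emit x | state i).
# Illustrative values; the sum over x must be row stochastic.
T = {
    0: np.array([[0.50, 0.00],
                 [0.25, 0.00]]),
    1: np.array([[0.00, 0.50],
                 [0.25, 0.50]]),
}
rng = np.random.default_rng(0)

def entropy_rate_mc(T, n_steps=200_000, burn_in=1_000):
    """Monte Carlo estimate of the entropy rate: average -log2 Pr(x_t | belief)."""
    symbols = list(T)
    n = T[symbols[0]].shape[0]
    eta = np.full(n, 1.0 / n)            # belief over hidden states
    total, count = 0.0, 0
    for t in range(n_steps):
        p = np.array([(eta @ T[x]).sum() for x in symbols])
        i = rng.choice(len(symbols), p=p / p.sum())
        x, px = symbols[i], p[i]
        if t >= burn_in:
            total += -np.log2(px)
            count += 1
        eta = (eta @ T[x]) / px          # Bayesian belief (mixed-state) update
    return total / count

print(f"estimated entropy rate: {entropy_rate_mc(T):.4f} bits per symbol")
```

By the Shannon-McMillan-Breiman theorem, the running average of -log2 Pr(x_t | belief) converges to the entropy rate for ergodic processes; the burn-in discards the transient before the belief settles onto its stationary, generically infinite, set of predictive features.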
Related papers
- Entropy augmentation through subadditive excess: information theory in irreversible processes [0.0]
The Boltzmann equation seems unique in its capacity to accurately describe the transition from almost any initial state to a self-equilibrated thermal state.
An increase of the Gibbs-Shannon-von Neumann entropy results without the usual coarse-graining.
The mathematical structure of the ansatz also provides avenues for efficient computation and simulation.
arXiv Detail & Related papers (2024-07-24T14:47:14Z)
- Patterns in the jump-channel statistics of open quantum systems [0.0]
A continuously measured quantum system with multiple jump channels gives rise to a process described by random jump times and random emitted symbols.
We provide a full characterization of the resulting process, including efficient ways of simulating it, as well as determining the underlying memory structure.
We show how to unveil patterns in the evolution: some systems support closed patterns, in which the evolution runs over a finite set of states, or at least over recurring states.
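A standard way to generate such jump records numerically is the Monte Carlo wavefunction (quantum-jump) method. The sketch below simulates one trajectory of a driven qubit with two jump channels, decay and dephasing; the Hamiltonian, rates, and step size are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Qubit operators; basis ordering (|g>, |e>) assumed.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)    # lowering operator
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Omega, g1, g2 = 1.0, 0.5, 0.2                     # drive and jump rates (illustrative)
H = 0.5 * Omega * sx
L = [np.sqrt(g1) * sm, np.sqrt(g2) * sz]          # channel 0: decay, channel 1: dephasing
Heff = H - 0.5j * sum(Lk.conj().T @ Lk for Lk in L)

def trajectory(T=100.0, dt=1e-3):
    """One quantum-jump trajectory; returns the (jump time, channel) record."""
    psi = np.array([1, 0], dtype=complex)
    record, t = [], 0.0
    while t < T:
        p = np.array([dt * np.vdot(Lk @ psi, Lk @ psi).real for Lk in L])
        if rng.random() < p.sum():                # a jump occurs
            k = rng.choice(len(L), p=p / p.sum())
            psi = L[k] @ psi
            record.append((t, k))
        else:                                     # no-jump, first-order evolution
            psi = psi - 1j * dt * (Heff @ psi)
        psi /= np.linalg.norm(psi)
        t += dt
    return record

jumps = trajectory()
print(f"{len(jumps)} jumps; first few: {jumps[:5]}")
```

The resulting sequence of jump times and channel symbols is exactly the kind of process whose memory structure the paper characterizes.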
arXiv Detail & Related papers (2023-05-13T16:13:50Z)
- On the Algorithmic Information Between Probabilities [6.5268245109828005]
We extend algorithmic conservation inequalities to probability measures.
The self-information of a probability measure cannot increase when subjected to randomized processing.
We show that given a quantum measurement, for an overwhelming majority of pure states, no meaningful information is produced.
arXiv Detail & Related papers (2023-03-13T17:20:27Z)
- PAPAL: A Provable PArticle-based Primal-Dual ALgorithm for Mixed Nash Equilibrium [58.26573117273626]
We consider two-player zero-sum continuous games with a nonconvex-nonconcave objective function.
We present novel insights into particle-based algorithms for continuous distribution strategies, as sketched below.
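To make the particle-based idea concrete, here is a generic mean-field Langevin descent-ascent sketch: each player's mixed strategy is a cloud of particles, gradients are averaged over the opponent's cloud, and injected noise plays the role of entropic regularization. The payoff f and all hyperparameters are assumptions for the demo; this is a caricature of this family of methods, not the PAPAL algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, y):
    # Illustrative nonconvex-nonconcave payoff (an assumption for the demo).
    return np.sin(3 * x) * np.cos(3 * y) + 0.5 * x * y

def mean_grads(X, Y, eps=1e-4):
    """Per-particle finite-difference gradients, averaged over the opponent's cloud."""
    Xg, Yg = np.meshgrid(X, Y, indexing="ij")
    gx = (f(Xg + eps, Yg) - f(Xg - eps, Yg)) / (2 * eps)
    gy = (f(Xg, Yg + eps) - f(Xg, Yg - eps)) / (2 * eps)
    return gx.mean(axis=1), gy.mean(axis=0)

X = rng.normal(size=200)          # min player's particles (mixed strategy over R)
Y = rng.normal(size=200)          # max player's particles
step, tau = 0.05, 0.1             # step size and regularization temperature
for _ in range(500):
    gx, gy = mean_grads(X, Y)
    noise = np.sqrt(2 * step * tau)
    X = X - step * gx + noise * rng.normal(size=X.size)   # descend
    Y = Y + step * gy + noise * rng.normal(size=Y.size)   # ascend
```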
arXiv Detail & Related papers (2023-03-02T05:08:15Z)
- Information Theory Inspired Pattern Analysis for Time-series Data [60.86880787242563]
We propose a highly generalizable method that uses information theory-based features to identify and learn from patterns in time-series data.
For applications with state transitions, features are developed from the Shannon entropy, entropy rate, and von Neumann entropy of Markov chains.
The results show the proposed information theory-based features improve the recall rate, F1 score, and accuracy on average by up to 23.01% compared with the baseline models.
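A plausible rendering of such Markov-chain features, sketched under assumptions about the exact constructions (in particular, the von Neumann entropy here uses the normalized Laplacian of the symmetrized transition graph, one common convention that may differ from the paper's):

```python
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def stationary(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return pi / pi.sum()

def entropy_rate(P):
    """h = -sum_i pi_i sum_j P_ij log2 P_ij for an ergodic chain."""
    pi = stationary(P)
    return sum(pi[i] * shannon(P[i]) for i in range(len(P)))

def von_neumann(P):
    """Spectral entropy of a density matrix built from the chain; the
    normalized-Laplacian construction below is an assumed convention."""
    W = (P + P.T) / 2                              # symmetrized weights
    d = W.sum(axis=1)
    L = np.eye(len(P)) - W / np.sqrt(np.outer(d, d))
    rho = L / np.trace(L)
    return shannon(np.linalg.eigvalsh(rho))

P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])                    # illustrative 3-state chain
print(shannon(stationary(P)), entropy_rate(P), von_neumann(P))
```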
arXiv Detail & Related papers (2023-02-22T21:09:35Z)
- A Simplistic Model of Neural Scaling Laws: Multiperiodic Santa Fe Processes [0.0]
It was observed that large language models exhibit a power-law decay of cross entropy with respect to the number of parameters and training tokens.
When extrapolated literally, this decay implies that the entropy rate of natural language is zero.
We construct a simple stationary process and a memory-based predictor for it that exhibit a power-law decay of cross entropy with a vanishing entropy rate.
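The flavor of the construction can be reproduced with the classic Santa Fe process (the paper's variant replaces the random key sequence with a deterministic multiperiodic one). A memorizing predictor pays one bit for each key it has not yet seen, so its excess cross entropy decays as a power law in the sample size. A toy sketch with assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def santa_fe(n, alpha=1.5, kmax=10**6):
    """Classic Santa Fe process: pairs (K_t, Z_{K_t}) with Zipf-distributed
    keys K_t and a fixed random bit Z_k per key."""
    ranks = np.arange(1, kmax + 1)
    pk = ranks ** -alpha
    pk /= pk.sum()
    K = rng.choice(ranks, size=n, p=pk)
    Z = rng.integers(0, 2, size=kmax + 1)
    return K, Z[K]

K, bits = santa_fe(200_000)

# Memory-based predictor: probability 1/2 for the bit of an unseen key
# (1 bit of loss), probability 1 for the memorized bit of a seen key (0 bits).
seen, loss = set(), np.empty(len(K))
for t, k in enumerate(K):
    loss[t] = 0.0 if k in seen else 1.0
    seen.add(k)

# The average loss on the bit part decays roughly like n**(1/alpha - 1).
for n in (10**3, 10**4, 10**5):
    print(n, loss[:n].mean())
```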
arXiv Detail & Related papers (2023-02-17T18:27:27Z)
- Statistical Properties of the Entropy from Ordinal Patterns [55.551675080361335]
Knowing the joint distribution of the pair Entropy-Statistical Complexity for a large class of time series models would allow statistical tests that are unavailable to date.
We characterize the distribution of the empirical Shannon's Entropy for any model under which the true normalized Entropy is neither zero nor one.
We present a bilateral (two-sided) test of whether there is enough evidence to reject the hypothesis that two signals produce ordinal patterns with the same Shannon's Entropy.
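In code, the ingredients look roughly as follows: estimate the ordinal-pattern distribution, compute its Shannon entropy, and compare two signals with a two-sided z-test. The plug-in (delta-method) variance below is a standard approximation, not necessarily the paper's exact asymptotics, and the embedding dimension d = 3 is an assumption:

```python
import numpy as np
from itertools import permutations
from math import factorial
from scipy.stats import norm

def ordinal_probs(x, d=3):
    """Empirical distribution of ordinal patterns of embedding dimension d."""
    index = {p: i for i, p in enumerate(permutations(range(d)))}
    counts = np.zeros(factorial(d))
    for t in range(len(x) - d + 1):
        counts[index[tuple(np.argsort(x[t:t + d]))]] += 1
    return counts / counts.sum(), len(x) - d + 1

def entropy_and_var(x, d=3):
    """Plug-in Shannon entropy and its delta-method variance estimate."""
    p, n = ordinal_probs(x, d)
    nz = p[p > 0]
    H = -np.sum(nz * np.log(nz))
    V = (np.sum(nz * np.log(nz) ** 2) - H ** 2) / n
    return H, V

def bilateral_test(x, y, d=3):
    """Two-sided z-test for equal ordinal-pattern entropy of two signals."""
    H1, V1 = entropy_and_var(x, d)
    H2, V2 = entropy_and_var(y, d)
    z = (H1 - H2) / np.sqrt(V1 + V2)
    return z, 2 * (1 - norm.cdf(abs(z)))

rng = np.random.default_rng(0)
print(bilateral_test(rng.normal(size=5000), np.cumsum(rng.normal(size=5000))))
```

Here white noise is compared against a random walk, whose ordinal patterns are markedly less entropic, so the test should reject.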
arXiv Detail & Related papers (2022-09-15T23:55:58Z)
- Action Redundancy in Reinforcement Learning [54.291331971813364]
We show that transition entropy can be decomposed into two terms: model-dependent transition entropy and action redundancy.
Our results suggest that action redundancy is a fundamental problem in reinforcement learning.
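One way to see a decomposition of this kind, sketched with an assumed tabular example and the standard chain-rule split H(S'|S) = H(S'|S,A) + I(A;S'|S) (the paper's precise definitions may differ):

```python
import numpy as np

def H(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Illustrative tabular MDP: P[s, a, s'] transition probabilities, uniform policy.
P = np.array([[[0.9, 0.1], [0.9, 0.1]],    # state 0: both actions act identically
              [[0.2, 0.8], [0.8, 0.2]]])   # state 1: actions act differently
pi = np.full((2, 2), 0.5)                  # pi[s, a]
rho = np.full(2, 0.5)                      # assumed state distribution

# Chain rule: H(S'|S) = H(S'|S,A) + I(A;S'|S). The mutual-information term
# vanishes exactly where actions are redundant (same next-state distribution).
H_sp_s = sum(rho[s] * H(pi[s] @ P[s]) for s in range(2))
H_sp_sa = sum(rho[s] * pi[s, a] * H(P[s, a]) for s in range(2) for a in range(2))
print("H(S'|S)   =", H_sp_s)
print("H(S'|S,A) =", H_sp_sa)
print("I(A;S'|S) =", H_sp_s - H_sp_sa)
```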
arXiv Detail & Related papers (2021-02-22T19:47:26Z)
- Entropy and reversible catalysis [0.0]
I show that non-decreasing entropy provides a necessary and sufficient condition to convert the state of a physical system into a different state.
I show how they can be used to obtain a quantitative single-shot characterization of Gibbs states in quantum statistical mechanics.
arXiv Detail & Related papers (2020-12-10T10:42:44Z)
- Generalized Entropy Regularization or: There's Nothing Special about Label Smoothing [83.78668073898001]
We introduce a family of entropy regularizers, which includes label smoothing as a special case.
We find that variance in model performance can be explained largely by the resulting entropy of the model.
We advise the use of other entropy regularization methods in its place.
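The special-case claim can be checked numerically: cross entropy against smoothed targets equals, up to scale and an additive constant, cross entropy plus a KL(u || p) regularizer toward the uniform distribution u. A minimal sketch (the regularizer family in the paper is broader than this single KL term):

```python
import numpy as np

def log_softmax(z):
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

def label_smoothing_loss(z, y, eps):
    """Cross entropy against targets smoothed toward the uniform distribution."""
    logp = log_softmax(z)
    q = (1 - eps) * np.eye(len(z))[y] + eps / len(z)
    return -np.sum(q * logp)

def ce_plus_kl_loss(z, y, beta):
    """Cross entropy plus beta * KL(u || p), with u uniform."""
    logp = log_softmax(z)
    u = np.full(len(z), 1.0 / len(z))
    return -logp[y] + beta * np.sum(u * (np.log(u) - logp))

z, y, eps = np.array([2.0, 0.5, -1.0]), 0, 0.1
lhs = label_smoothing_loss(z, y, eps)
# With beta = eps / (1 - eps), the two losses agree up to the constant eps * H(u).
rhs = (1 - eps) * ce_plus_kl_loss(z, y, eps / (1 - eps)) + eps * np.log(len(z))
print(lhs, rhs)   # identical up to floating-point error
```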
arXiv Detail & Related papers (2020-05-02T12:46:28Z)