Notes on Generalizing the Maximum Entropy Principle to Uncertain Data
- URL: http://arxiv.org/abs/2109.04530v1
- Date: Thu, 9 Sep 2021 19:43:28 GMT
- Title: Notes on Generalizing the Maximum Entropy Principle to Uncertain Data
- Authors: Kenneth Bogert
- Abstract summary: We generalize the principle of maximum entropy for computing a distribution with the least amount of information possible.
We show that our technique generalizes the principle of maximum entropy and latent maximum entropy.
We discuss a generally applicable regularization technique for adding error terms to feature expectation constraints in the event of limited data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The principle of maximum entropy is a broadly applicable technique for
computing a distribution with the least amount of information possible while
commonly constrained to match empirically estimated feature expectations. We
seek to generalize this principle to scenarios where the empirical feature
expectations cannot be computed because the model variables are only partially
observed, which introduces a dependency on the learned model. Extending and
generalizing the principle of latent maximum entropy, we introduce uncertain
maximum entropy and describe an expectation-maximization based solution to
approximately solve these problems. We show that our technique generalizes the
principle of maximum entropy and latent maximum entropy and discuss a generally
applicable regularization technique for adding error terms to feature
expectation constraints in the event of limited data.
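The abstract describes two ingredients: classic maximum entropy (fit the least-informative distribution whose feature expectations match empirical estimates) and an expectation-maximization loop for the partially observed case, where the feature expectations themselves depend on the learned model. The following is a minimal illustrative sketch under simplifying assumptions: a finite state space, a linear feature map, and a known observation (noise) model. All function names and the toy setup are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def maxent_fit(features, target, lr=0.5, iters=2000):
    """Classic MaxEnt over a finite state space: find p(x) proportional to
    exp(lam . phi(x)) whose feature expectations match `target`, by gradient
    ascent on the concave dual. features: (n_states, n_features) array."""
    lam = np.zeros(features.shape[1])
    for _ in range(iters):
        logits = features @ lam
        p = np.exp(logits - logits.max())        # subtract max for stability
        p /= p.sum()
        lam += lr * (target - p @ features)      # dual gradient: target - E_p[phi]
    return p, lam

def uncertain_maxent(features, obs_model, obs_counts, em_iters=50):
    """EM-style loop for the partially observed case: states x are seen only
    through observations o with obs_model[x, o] = P(o | x).
    E-step: posterior feature expectations under the current model.
    M-step: refit MaxEnt to those expectations."""
    n_states = features.shape[0]
    p = np.full(n_states, 1.0 / n_states)        # uniform initial model
    obs_freq = obs_counts / obs_counts.sum()     # empirical P(o)
    lam = np.zeros(features.shape[1])
    for _ in range(em_iters):
        joint = p[:, None] * obs_model                   # P(x, o) = p(x) P(o|x)
        post = joint / joint.sum(axis=0, keepdims=True)  # P(x | o)
        expected_phi = (post @ obs_freq) @ features      # E_o[ E[phi(x) | o] ]
        p, lam = maxent_fit(features, expected_phi)
    return p, lam

# Toy run (illustrative): 3 states, one feature phi(x) = x, a noisy 3-symbol sensor.
phi = np.array([[0.0], [1.0], [2.0]])
sensor = np.array([[0.8, 0.1, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.1, 0.1, 0.8]])             # rows: P(o | x)
counts = np.array([250.0, 300.0, 450.0])         # observed symbol counts
p_hat, lam_hat = uncertain_maxent(phi, sensor, counts)
```

At a fixed point, the model's feature expectations agree with the posterior feature expectations computed under the model itself, which is the self-consistency the abstract points to when it notes that partial observation "introduces a dependency on the learned model".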
Related papers
- Asymptotically Optimal Change Detection for Unnormalized Pre- and Post-Change Distributions [65.38208224389027]
This paper addresses the problem of detecting changes when only unnormalized pre- and post-change distributions are accessible.
Our approach is based on estimating the Cumulative Sum (CUSUM) statistic, which is known to produce optimal performance.
arXiv Detail & Related papers (2024-10-18T17:13:29Z) - The Limits of Pure Exploration in POMDPs: When the Observation Entropy is Enough [40.82741665804367]
We study a simple approach of maximizing the entropy over observations in place of the true latent states.
We show how knowledge of the latter can be exploited to compute a regularization of the observation entropy that improves performance in a principled way.
arXiv Detail & Related papers (2024-06-18T17:00:13Z) - The Principle of Uncertain Maximum Entropy [0.0]
We present a new principle we call uncertain maximum entropy that generalizes the classic principle and provides interpretable solutions.
We introduce a convex approximation and expectation-maximization based algorithm for finding solutions to our new principle.
arXiv Detail & Related papers (2023-05-17T00:45:41Z) - IRL with Partial Observations using the Principle of Uncertain Maximum
Entropy [8.296684637620553]
We introduce the principle of uncertain maximum entropy and present an expectation-maximization based solution.
We experimentally demonstrate the improved robustness to noisy data offered by our technique in a maximum causal entropy inverse reinforcement learning domain.
arXiv Detail & Related papers (2022-08-15T03:22:46Z) - Entropy-based Characterization of Modeling Constraints [0.0]
In most data-scientific approaches, the principle of Maximum Entropy (MaxEnt) is used to justify some parametric model.
We derive the distribution over all viable distributions that satisfy the provided set of constraints.
The appropriate parametric model which is supported by the data can always be deduced at the end of model selection.
arXiv Detail & Related papers (2022-06-27T17:25:49Z) - Maximum entropy quantum state distributions [58.720142291102135]
We go beyond traditional thermodynamics and condition on the full distribution of the conserved quantities.
The result is quantum state distributions whose deviations from thermal states get more pronounced in the limit of wide input distributions.
arXiv Detail & Related papers (2022-03-23T17:42:34Z) - Tight Exponential Analysis for Smoothing the Max-Relative Entropy and
for Quantum Privacy Amplification [56.61325554836984]
The max-relative entropy together with its smoothed version is a basic tool in quantum information theory.
We derive the exact exponent for the decay of the small modification of the quantum state in smoothing the max-relative entropy based on purified distance.
arXiv Detail & Related papers (2021-11-01T16:35:41Z) - Maximum Entropy Reinforcement Learning with Mixture Policies [54.291331971813364]
We construct a tractable approximation of the mixture entropy using MaxEnt algorithms.
We show that it is closely related to the sum of marginal entropies.
We derive an algorithmic variant of Soft Actor-Critic (SAC) to the mixture policy case and evaluate it on a series of continuous control tasks.
arXiv Detail & Related papers (2021-03-18T11:23:39Z) - Generalized Maximum Entropy for Supervised Classification [26.53901315716557]
The maximum entropy principle advocates evaluating events' probabilities using a distribution that maximizes entropy.
This paper establishes a framework for supervised classification based on the generalized maximum entropy principle.
arXiv Detail & Related papers (2020-07-10T15:41:17Z) - A maximum-entropy approach to off-policy evaluation in average-reward
MDPs [54.967872716145656]
This work focuses on off-policy evaluation (OPE) with function approximation in infinite-horizon undiscounted Markov decision processes (MDPs).
We provide the first finite-sample OPE error bound, extending existing results beyond the episodic and discounted cases.
We show that this results in an exponential-family distribution whose sufficient statistics are the features, paralleling maximum-entropy approaches in supervised learning.
arXiv Detail & Related papers (2020-06-17T18:13:37Z) - Polynomial-Time Exact MAP Inference on Discrete Models with Global
Dependencies [83.05591911173332]
The junction tree algorithm is the most general solution for exact MAP inference with run-time guarantees.
We propose a new graph transformation technique via node cloning which ensures a run-time for solving our target problem independently of the form of a corresponding clique tree.
arXiv Detail & Related papers (2019-12-27T13:30:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.