An Information-Theoretic Approach for Automatically Determining the
Number of States when Aggregating Markov Chains
- URL: http://arxiv.org/abs/2107.01799v1
- Date: Mon, 5 Jul 2021 05:36:04 GMT
- Title: An Information-Theoretic Approach for Automatically Determining the
Number of States when Aggregating Markov Chains
- Authors: Isaac J. Sledge and Jose C. Principe
- Abstract summary: We show that an augmented value-of-information-based approach to aggregating Markov chains facilitates the determination of the number of state groups.
The optimal state-group count coincides with the case where the complexity of the reduced-order chain is balanced against the mutual dependence between the original- and reduced-order chain dynamics.
- Score: 12.716429755564821
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A fundamental problem when aggregating Markov chains is the specification of
the number of state groups. Too few state groups may fail to sufficiently
capture the pertinent dynamics of the original, high-order Markov chain. Too
many state groups may lead to a non-parsimonious, reduced-order Markov chain
whose complexity rivals that of the original. In this paper, we show that an
augmented value-of-information-based approach to aggregating Markov chains
facilitates the determination of the number of state groups. The optimal
state-group count coincides with the case where the complexity of the
reduced-order chain is balanced against the mutual dependence between the
original- and reduced-order chain dynamics.
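To make the dependence-versus-complexity trade-off concrete, below is a minimal, hedged sketch, not the authors' augmented value-of-information algorithm: for each candidate group count k, the chain is aggregated by clustering the rows of its transition matrix, and k is scored by the mutual dependence between the aggregated current state and the next original state minus a weighted entropy (complexity) term. The k-means aggregation step and the trade-off weight `beta` are assumptions introduced purely for illustration.

```python
# Illustrative sketch only (assumed aggregation via k-means on transition rows,
# hand-tuned trade-off weight `beta`); not the paper's augmented
# value-of-information procedure.
import numpy as np
from sklearn.cluster import KMeans

def stationary_distribution(P):
    """Left eigenvector of P with eigenvalue 1, normalized to a probability vector."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = np.abs(pi)
    return pi / pi.sum()

def aggregation_score(P, k, beta=1.0):
    """Mutual dependence I(Y_t; X_{t+1}) minus beta * H(Y_t) for a k-group aggregation."""
    n = P.shape[0]
    pi = stationary_distribution(P)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(P)  # group similar rows
    # Joint distribution of (aggregated current state Y_t, next original state X_{t+1}).
    joint = np.zeros((k, n))
    for i in range(n):
        joint[labels[i]] += pi[i] * P[i]
    p_y = joint.sum(axis=1)       # marginal of Y_t
    p_next = joint.sum(axis=0)    # marginal of X_{t+1}
    nz = joint > 0
    dependence = np.sum(joint[nz] * np.log(joint[nz] / np.outer(p_y, p_next)[nz]))
    complexity = -np.sum(p_y[p_y > 0] * np.log(p_y[p_y > 0]))  # entropy of the reduced chain
    return dependence - beta * complexity

def choose_num_groups(P, k_max, beta=1.0):
    """Pick the group count whose dependence/complexity trade-off score is largest."""
    scores = {k: aggregation_score(P, k, beta) for k in range(2, k_max + 1)}
    return max(scores, key=scores.get), scores
```

In the paper the balance point emerges from the augmented value-of-information criterion itself rather than from a hand-tuned beta; the sketch only illustrates the kind of dependence-versus-complexity comparison the abstract describes.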
Related papers
- Ito Diffusion Approximation of Universal Ito Chains for Sampling, Optimization and Boosting [64.0722630873758]
We consider a rather general and broad class of Markov chains, Ito chains, that resemble the Euler-Maruyama discretization of some stochastic differential equation (a minimal Euler-Maruyama sketch appears after this list).
We prove a bound in $W_2$-distance between the laws of our Ito chain and the corresponding differential equation.
arXiv Detail & Related papers (2023-10-09T18:38:56Z)
- Generative Flow Networks: a Markov Chain Perspective [93.9910025411313]
We propose a new perspective for GFlowNets using Markov chains, showing a unifying view for GFlowNets regardless of the nature of the state space.
Positioning GFlowNets under the same theoretical framework as MCMC methods also allows us to identify the similarities between both frameworks.
arXiv Detail & Related papers (2023-07-04T01:28:02Z)
- A Geometric Reduction Approach for Identity Testing of Reversible Markov Chains [25.33133112984769]
We consider the problem of testing the identity of a reversible Markov chain against a reference from a single trajectory of observations.
We show that, at least in a mildly restricted setting, testing identity to a reversible chain reduces to testing to a symmetric chain over a larger state space.
arXiv Detail & Related papers (2023-02-16T03:41:39Z)
- Offline Estimation of Controlled Markov Chains: Minimaxity and Sample Complexity [8.732260277121547]
We develop sample complexity bounds for the estimator and establish conditions for minimaxity.
We show that achieving a particular statistical risk bound involves a subtle and interesting trade-off between the strength of the mixing properties and the number of samples.
arXiv Detail & Related papers (2022-11-14T03:39:59Z)
- Enabling Quantum Speedup of Markov Chains using a Multi-level Approach [0.0]
Quantum speedup for mixing a Markov chain can be achieved based on the construction of $r$ slowly-varying Markov chains.
We show that the density function of a low-resolution Markov chain can be used to warm start the Markov chain with high resolution.
arXiv Detail & Related papers (2022-10-25T15:17:52Z)
- A Three-phase Augmented Classifiers Chain Approach Based on Co-occurrence Analysis for Multi-Label Classification [0.0]
Existing classifier chains methods struggle to model and exploit the underlying dependency in the label space.
We present a three-phase augmented classifier chains approach based on co-occurrence analysis for multi-label classification.
arXiv Detail & Related papers (2022-04-13T02:10:14Z)
- Optimal Clustering with Bandit Feedback [57.672609011609886]
This paper considers the problem of online clustering with bandit feedback.
It includes a novel stopping rule for sequential testing that circumvents the need to solve any NP-hard weighted clustering problem as its subroutines.
We show through extensive simulations on synthetic and real-world datasets that BOC's performance matches the lower bound asymptotically and significantly outperforms a non-adaptive baseline algorithm.
arXiv Detail & Related papers (2022-02-09T06:05:05Z)
- Semi-Supervised Clustering via Markov Chain Aggregation [9.475039534437332]
We introduce Constrained Markov Clustering (CoMaC) for semi-supervised clustering.
Our results indicate that CoMaC is competitive with the state-of-the-art.
arXiv Detail & Related papers (2021-12-17T09:07:43Z)
- Spectrum of localized states in fermionic chains with defect and adiabatic charge pumping [68.8204255655161]
We study the localized states of a generic quadratic fermionic chain with finite-range couplings.
We analyze the robustness of the connection between bands against perturbations of the Hamiltonian.
arXiv Detail & Related papers (2021-07-20T18:44:06Z)
- Pretty good quantum state transfer on isotropic and anisotropic Heisenberg spin chains with tailored site dependent exchange couplings [68.8204255655161]
We consider chains with isotropic and anisotropic Heisenberg Hamiltonian with up to 100 spins.
We consider short transferred times, in particular shorter than those achievable with known time-dependent control schemes.
arXiv Detail & Related papers (2021-01-08T19:32:10Z)
- Non-equilibrium non-Markovian steady-states in open quantum many-body systems: Persistent oscillations in Heisenberg quantum spin chains [68.8204255655161]
We investigate the effect of a non-Markovian, structured reservoir on an open Heisenberg spin chain.
We establish a coherent self-feedback mechanism as the reservoir couples to the spin chain in a frequency-dependent manner.
arXiv Detail & Related papers (2020-06-05T09:16:28Z)
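As referenced in the Ito chains entry above, the following is a minimal sketch of the standard Euler-Maruyama scheme, which turns a stochastic differential equation into a Markov chain of the kind that entry discusses. The Ornstein-Uhlenbeck drift and constant diffusion below are illustrative assumptions, not taken from that paper.

```python
# Euler-Maruyama discretization of dX_t = b(X_t) dt + sigma(X_t) dW_t:
# X_{k+1} = X_k + b(X_k) h + sigma(X_k) sqrt(h) xi_k, with i.i.d. Gaussian xi_k,
# which defines a Markov chain. Drift/diffusion choices here are illustrative only.
import numpy as np

def euler_maruyama(b, sigma, x0, h, n_steps, rng=None):
    """Simulate one trajectory of the Markov chain obtained by Euler-Maruyama."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        xi = rng.standard_normal()  # i.i.d. Gaussian increment
        x[k + 1] = x[k] + b(x[k]) * h + sigma(x[k]) * np.sqrt(h) * xi
    return x

# Example (assumed for illustration): Ornstein-Uhlenbeck SDE dX_t = -theta X_t dt + s dW_t.
theta, s = 1.0, 0.5
trajectory = euler_maruyama(b=lambda x: -theta * x, sigma=lambda x: s,
                            x0=2.0, h=0.01, n_steps=1000)
```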
This list is automatically generated from the titles and abstracts of the papers on this site.