Comparing Labeled Markov Chains: A Cantor-Kantorovich Approach
- URL: http://arxiv.org/abs/2511.18103v1
- Date: Sat, 22 Nov 2025 16:02:56 GMT
- Title: Comparing Labeled Markov Chains: A Cantor-Kantorovich Approach
- Authors: Adrien Banse, Alessandro Abate, Raphaël M. Jungers
- Abstract summary: We study the recently introduced Cantor-Kantorovich (or CK) distance. In particular, we show that the latter can be framed as a discounted sum of finite-horizon Total Variation distances. More precisely, we show that the exact computation of the CK distance is #P-hard.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Labeled Markov Chains (or LMCs for short) are useful mathematical objects to model complex probabilistic languages. A central challenge is to compare two LMCs, for example to assess the accuracy of an abstraction or to quantify the effect of model perturbations. In this work, we study the recently introduced Cantor-Kantorovich (or CK) distance. In particular we show that the latter can be framed as a discounted sum of finite-horizon Total Variation distances, making it an instance of discounted linear distance, but arising from the natural Cantor topology. Building on the latter observation, we analyze the properties of the CK distance along three dimensions: computational complexity, continuity properties and approximation. More precisely, we show that the exact computation of the CK distance is #P-hard. We also provide an upper bound on the CK distance as a function of the approximation relation between the two LMCs, and show that a bounded CK distance implies a bounded error between probabilities of finite-horizon traces. Finally, we provide a computable approximation scheme, and show that the latter is also #P-hard. Altogether, our results provide a rigorous theoretical foundation for the CK distance and clarify its relationship with existing distances.
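The abstract frames the CK distance as a discounted sum of finite-horizon Total Variation distances between the trace distributions of two LMCs. Below is a minimal, illustrative Python sketch of that framing, assuming a definition of the form d(M1, M2) = Σ_{k≥1} λ^k · TV(P1^k, P2^k), where P^k is the distribution over length-k label traces; the paper's exact discount factor and normalization may differ, and the names `trace_dist`, `tv`, and `ck_approx` are illustrative, not taken from the paper.

```python
from itertools import product

def trace_dist(init, trans, labels, horizon):
    """Distribution over length-`horizon` label traces of an LMC.

    init: initial state probabilities, init[i] = P(start in state i)
    trans: transition matrix, trans[i][j] = P(i -> j)
    labels: labels[i] = label emitted in state i
    """
    dist = {}
    n = len(init)
    # Exponential enumeration of state paths: fine for toy examples only,
    # consistent with the #P-hardness results discussed in the abstract.
    for path in product(range(n), repeat=horizon):
        p = init[path[0]]
        for a, b in zip(path, path[1:]):
            p *= trans[a][b]
        if p > 0.0:
            trace = tuple(labels[s] for s in path)
            dist[trace] = dist.get(trace, 0.0) + p
    return dist

def tv(d1, d2):
    """Total Variation distance between two finite distributions."""
    keys = set(d1) | set(d2)
    return 0.5 * sum(abs(d1.get(k, 0.0) - d2.get(k, 0.0)) for k in keys)

def ck_approx(lmc1, lmc2, discount=0.5, horizon=6):
    """Truncated discounted sum of finite-horizon TV distances."""
    return sum(
        discount ** k * tv(trace_dist(*lmc1, k), trace_dist(*lmc2, k))
        for k in range(1, horizon + 1)
    )

# Two 2-state LMCs over labels {a, b}, differing only in transition noise.
m1 = ([1.0, 0.0], [[0.9, 0.1], [0.1, 0.9]], ["a", "b"])
m2 = ([1.0, 0.0], [[0.8, 0.2], [0.2, 0.8]], ["a", "b"])
```

Truncating the sum at a finite horizon yields a computable approximation, and the brute-force path enumeration above scales exponentially in the horizon, in line with the hardness results the paper establishes for both exact computation and approximation.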
Related papers
- Kernel Trace Distance: Quantum Statistical Metric between Measures through RKHS Density Operators
We introduce a novel distance between measures that compares them through a Schatten norm of their kernel covariance operators. We show that this new distance is an integral probability metric that can be framed between a Maximum Mean Discrepancy (MMD) and a Wasserstein distance.
arXiv Detail & Related papers (2025-07-08T14:56:44Z) - Feature Subset Weighting for Distance-based Supervised Learning through Choquet Integration
This paper introduces feature subset weighting using monotone measures for distance-based supervised learning. The Choquet integral is used to define a distance metric that incorporates these weights. We show how this approach ensures that the distances remain unaffected by the addition of duplicate and strongly correlated features.
arXiv Detail & Related papers (2025-04-01T10:23:01Z) - A distance function for stochastic matrices
Starting with sequences of Markov chains, the Bhattacharyya angle is advocated as the natural tool for comparing both short- and long-term Markov chain runs. It is shown that considering either the Bhattacharyya angle on Markov sequences or the new matrix distance leads to the same distance between models.
arXiv Detail & Related papers (2024-10-16T15:49:25Z) - KPZ scaling from the Krylov space
Recently, a superdiffusion exhibiting the Kardar-Parisi-Zhang scaling in late-time correlators and autocorrelators has been reported.
Inspired by these results, we explore the KPZ scaling in correlation functions using their realization in the Krylov operator basis.
arXiv Detail & Related papers (2024-06-04T20:57:59Z) - Computing distances and means on manifolds with a metric-constrained Eikonal approach
We introduce the metric-constrained Eikonal solver to obtain continuous, differentiable representations of distance functions.
The differentiable nature of these representations allows for the direct computation of globally length-minimising paths on the manifold.
arXiv Detail & Related papers (2024-04-12T18:26:32Z) - Markovian Sliced Wasserstein Distances: Beyond Independent Projections
We introduce a new family of SW distances, named Markovian sliced Wasserstein (MSW) distance, which imposes a first-order Markov structure on projecting directions.
We compare distances with previous SW variants in various applications such as flows, color transfer, and deep generative modeling to demonstrate the favorable performance of MSW.
arXiv Detail & Related papers (2023-01-10T01:58:15Z) - Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency
We study reinforcement learning for partially observed decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z) - Tangent Space and Dimension Estimation with the Wasserstein Distance
Consider a set of points sampled independently near a smooth compact submanifold of Euclidean space.
We provide mathematically rigorous bounds on the number of sample points required to estimate both the dimension and the tangent spaces of that manifold.
arXiv Detail & Related papers (2021-10-12T21:02:06Z) - Kernel distance measures for time series, random fields and other structured data
kdiff is a novel kernel-based measure for estimating distances between instances of structured data.
It accounts for both self and cross similarities across the instances and is defined using a lower quantile of the distance distribution.
Some theoretical results are provided for separability conditions using kdiff as a distance measure for clustering and classification problems.
arXiv Detail & Related papers (2021-09-29T22:54:17Z) - $\gamma$-ABC: Outlier-Robust Approximate Bayesian Computation Based on a Robust Divergence Estimator
We propose to use a nearest-neighbor-based $\gamma$-divergence estimator as a data discrepancy measure.
Our method achieves significantly higher robustness than existing discrepancy measures.
arXiv Detail & Related papers (2020-06-13T06:09:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.