A computationally efficient framework for vector representation of
persistence diagrams
- URL: http://arxiv.org/abs/2109.08239v1
- Date: Thu, 16 Sep 2021 22:02:35 GMT
- Title: A computationally efficient framework for vector representation of
persistence diagrams
- Authors: Kit C. Chan, Umar Islambekov, Alexey Luchinsky, Rebecca Sanders
- Abstract summary: We propose a framework to convert a persistence diagram (PD) into a vector in $\mathbb{R}^n$, called a vectorized persistence block (VPB).
Our representation possesses many of the desired properties of vector-based summaries such as stability with respect to input noise, low computational cost and flexibility.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Topological Data Analysis, a common way of quantifying the shape of data
is to use a persistence diagram (PD). PDs are multisets of points in
$\mathbb{R}^2$ computed using tools of algebraic topology. However, this
multi-set structure limits the utility of PDs in applications. Therefore, in
recent years efforts have been directed towards extracting informative and
efficient summaries from PDs to broaden the scope of their use for machine
learning tasks. We propose a computationally efficient framework to convert a
PD into a vector in $\mathbb{R}^n$, called a vectorized persistence block
(VPB). We show that our representation possesses many of the desired properties
of vector-based summaries such as stability with respect to input noise, low
computational cost and flexibility. Through simulation studies, we demonstrate
the effectiveness of VPBs in terms of performance and computational cost within
various learning tasks, namely clustering, classification and change point
detection.
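As an illustrative aside, the abstract does not spell out the VPB construction itself; vectorized summaries of this kind typically bin the points of a PD over a fixed grid in the birth-persistence plane. The sketch below is a minimal stand-in under that assumption: the function name vectorize_pd, the grid binning, and the persistence weighting are illustrative choices, not the paper's actual VPB definition.

```python
import numpy as np

def vectorize_pd(pd_points, grid_size=10, bounds=None):
    """Illustrative grid-binning vectorization of a persistence diagram.

    pd_points: iterable of (birth, death) pairs.
    Returns a flat vector in R^(grid_size**2).
    """
    pd_points = np.asarray(pd_points, dtype=float)
    birth = pd_points[:, 0]
    pers = pd_points[:, 1] - pd_points[:, 0]  # persistence = death - birth
    if bounds is None:
        bounds = (birth.min(), birth.max(), 0.0, pers.max())
    b_min, b_max, p_min, p_max = bounds
    # Accumulate total persistence per grid cell -- a simple stand-in for
    # the paper's block-based weighting.
    vec, _, _ = np.histogram2d(
        birth, pers,
        bins=grid_size,
        range=[[b_min, b_max], [p_min, p_max]],
        weights=pers,
    )
    return vec.ravel()

# Toy diagram with three (birth, death) features:
diagram = [(0.0, 1.0), (0.2, 0.9), (0.5, 2.0)]
print(vectorize_pd(diagram, grid_size=4))  # vector in R^16
```

Grid summaries of this family are cheap (one pass over the diagram) and fixed-length by construction, which is what makes them convenient inputs for the clustering, classification and change point tasks the paper evaluates.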
Related papers
- Efficient Fairness-Performance Pareto Front Computation [51.558848491038916]
We show that optimal fair representations possess several useful structural properties.
We then show that these approximation problems can be solved efficiently via concave programming methods.
arXiv Detail & Related papers (2024-09-26T08:46:48Z)
- Chebyshev approximation and composition of functions in matrix product states for quantum-inspired numerical analysis [0.0]
It proposes an algorithm that employs iterative Chebyshev expansions and Clenshaw evaluations to represent analytic and highly differentiable functions as MPS Chebyshev interpolants.
It demonstrates rapid convergence for highly differentiable functions, aligning with theoretical predictions, and generalizes efficiently to multidimensional scenarios; a plain (non-MPS) Clenshaw evaluation is sketched below.
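As a point of reference for the Clenshaw evaluations mentioned above, here is a minimal plain-vector Clenshaw recurrence for a Chebyshev series; the paper works with MPS-compressed representations, which this sketch makes no attempt to model.

```python
import numpy as np

def clenshaw_chebyshev(x, coeffs):
    """Evaluate sum_k coeffs[k] * T_k(x) with the Clenshaw recurrence."""
    b1, b2 = 0.0, 0.0
    # Sweep from the highest coefficient down to k = 1.
    for c in coeffs[:0:-1]:
        b1, b2 = c + 2.0 * x * b1 - b2, b1
    return coeffs[0] + x * b1 - b2

# Sanity check against NumPy's reference implementation.
coeffs = [0.5, -0.2, 0.3, 0.1]
x = 0.7
print(clenshaw_chebyshev(x, coeffs))
print(np.polynomial.chebyshev.chebval(x, coeffs))  # should agree
```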
arXiv Detail & Related papers (2024-07-12T18:00:06Z)
- Self-Supervised Learning with Lie Symmetries for Partial Differential Equations [25.584036829191902]
We learn general-purpose representations of PDEs by implementing joint embedding methods for self-supervised learning (SSL); a toy joint-embedding objective is sketched below.
Our representation outperforms baseline approaches to invariant tasks, such as regressing the coefficients of a PDE, while also improving the time-stepping performance of neural solvers.
We hope that our proposed methodology will prove useful in the eventual development of general-purpose foundation models for PDEs.
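For readers unfamiliar with joint embedding SSL, the toy objective below is one common instantiation (an InfoNCE-style contrastive loss in NumPy). It is an assumption for illustration only: the summary does not specify this loss, and the encoder that would produce z1/z2 from symmetry-augmented PDE solutions is omitted.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Toy InfoNCE loss: rows of z1 and z2 embed two augmented views
    (e.g., Lie-symmetry transforms) of the same batch of PDE solutions."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # pairwise cosine similarities
    # Matching rows are positive pairs, so targets lie on the diagonal.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z1 = rng.standard_normal((8, 16))
z2 = z1 + 0.01 * rng.standard_normal((8, 16))  # a nearby second view
print(info_nce(z1, z2))
```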
arXiv Detail & Related papers (2023-07-11T16:52:22Z)
- Provably Efficient Representation Learning with Tractable Planning in Low-Rank POMDP [81.00800920928621]
We study representation learning in partially observable Markov Decision Processes (POMDPs).
We first present an algorithm for decodable POMDPs that combines maximum likelihood estimation (MLE) and optimism in the face of uncertainty (OFU).
We then show how to adapt this algorithm to also work in the broader class of $\gamma$-observable POMDPs.
arXiv Detail & Related papers (2023-06-21T16:04:03Z)
- A Multi-Resolution Framework for U-Nets with Applications to Hierarchical VAEs [29.995904718691204]
We formulate a multi-resolution framework which identifies U-Nets as finite-dimensional truncations of models on an infinite-dimensional function space.
We then leverage our framework to identify state-of-the-art hierarchical VAEs (HVAEs) which have a U-Net architecture.
arXiv Detail & Related papers (2023-01-19T17:33:48Z)
- Provable General Function Class Representation Learning in Multitask Bandits and MDPs [58.624124220900306]
Multitask representation learning is a popular approach in reinforcement learning to boost sample efficiency.
In this work, we extend the analysis to general function class representations.
We theoretically validate the benefit of multitask representation learning within general function classes for bandits and linear MDPs.
arXiv Detail & Related papers (2022-05-31T11:36:42Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- PnP-DETR: Towards Efficient Visual Analysis with Transformers [146.55679348493587]
Recently, DETR pioneered the solution of vision tasks with transformers; it directly translates the image feature map into the object detection result.
The recent transformer-based image recognition model ViT shows a consistent efficiency gain.
arXiv Detail & Related papers (2021-09-15T01:10:30Z)
- The Interconnectivity Vector: A Finite-Dimensional Vector Representation of Persistent Homology [2.741266294612776]
Persistent Homology (PH) is a useful tool to study the underlying structure of a data set.
Persistence Diagrams (PDs) are a concise summary of the information found by studying the PH of a data set.
We propose a new finite-dimensional vector representation of a PD, called the interconnectivity vector, adapted from the Bag-of-Words (BoW) model; a generic BoW-style binning is sketched below.
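Hedged sketch of the generic Bag-of-Words recipe the summary alludes to: assign each PD point to its nearest codeword (e.g., a k-means center learned from training diagrams) and count assignments. The codebook values and function name below are invented for illustration and are not the interconnectivity-vector construction itself.

```python
import numpy as np

def bow_vector(pd_points, codebook):
    """Histogram of nearest-codeword assignments for the points of a PD.

    codebook: (m, 2) array of codewords, e.g. k-means centers fit on
    PD points pooled across a training set.
    """
    pd_points = np.asarray(pd_points, dtype=float)
    # Distance from every PD point to every codeword.
    d = np.linalg.norm(pd_points[:, None, :] - codebook[None, :, :], axis=2)
    assignments = d.argmin(axis=1)
    return np.bincount(assignments, minlength=len(codebook))

codebook = np.array([[0.0, 0.5], [0.5, 1.5], [1.0, 3.0]])
diagram = np.array([[0.0, 1.0], [0.2, 0.9], [0.5, 2.0]])
print(bow_vector(diagram, codebook))  # counts per codeword
```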
arXiv Detail & Related papers (2020-11-23T17:43:06Z)
- Sample-Efficient Reinforcement Learning of Undercomplete POMDPs [91.40308354344505]
This work shows that these hardness barriers do not preclude efficient reinforcement learning for rich and interesting subclasses of Partially Observable Markov Decision Processes (POMDPs).
We present a sample-efficient algorithm, OOM-UCB, for episodic finite undercomplete POMDPs, where the number of observations is larger than the number of latent states and where exploration is essential for learning, thus distinguishing our results from prior works.
arXiv Detail & Related papers (2020-06-22T17:58:54Z)
- Physically interpretable machine learning algorithm on multidimensional non-linear fields [0.0]
Polynomial Chaos Expansion (PCE) has long been employed as a robust representation for probabilistic input-to-output mapping.
Dimensionality Reduction (DR) techniques are increasingly used for pattern recognition and data compression.
The goal of the present paper is to combine Proper Orthogonal Decomposition (POD) and PCE for field-measurement-based forecasting; a toy PCE fit is sketched below.
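As a concrete, deliberately tiny illustration of PCE, the sketch below fits a 1-D expansion of a toy response in probabilists' Hermite polynomials of a standard normal input by least squares; the paper's POD+PCE pipeline for multidimensional fields is substantially richer, and the toy model y = exp(0.3*xi) is an assumption made up for this example.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Toy 1-D polynomial chaos expansion: expand y = f(xi) over probabilists'
# Hermite polynomials of a standard normal germ xi.
rng = np.random.default_rng(0)
xi = rng.standard_normal(500)   # samples of the germ
y = np.exp(0.3 * xi)            # toy model response

degree = 4
Psi = He.hermevander(xi, degree)  # design matrix, column k = He_k(xi)
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Evaluate the PCE surrogate at new inputs and compare with the truth.
xi_new = np.linspace(-2.0, 2.0, 5)
print(He.hermeval(xi_new, coeffs))
print(np.exp(0.3 * xi_new))
```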
arXiv Detail & Related papers (2020-05-28T11:26:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.