Non-isotropic Persistent Homology: Leveraging the Metric Dependency of
PH
- URL: http://arxiv.org/abs/2310.16437v1
- Date: Wed, 25 Oct 2023 08:03:17 GMT
- Title: Non-isotropic Persistent Homology: Leveraging the Metric Dependency of
PH
- Authors: Vincent P. Grande and Michael T. Schaub
- Abstract summary: We show that information on the point cloud is lost when restricting persistent homology to a single distance function.
We numerically show that non-isotropic persistent homology can extract information on orientation, orientational variance, and scaling of randomly generated point clouds.
- Score: 5.70896453969985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Persistent Homology is a widely used topological data analysis tool that
creates a concise description of the topological properties of a point cloud
based on a specified filtration. Most filtrations used for persistent homology
depend (implicitly) on a chosen metric, which is typically agnostically chosen
as the standard Euclidean metric on $\mathbb{R}^n$. Recent work has tried to
uncover the 'true' metric on the point cloud using distance-to-measure
functions, in order to obtain more meaningful persistent homology results. Here
we propose an alternative look at this problem: we posit that information on
the point cloud is lost when restricting persistent homology to a single
(correct) distance function. Instead, we show how by varying the distance
function on the underlying space and analysing the corresponding shifts in the
persistence diagrams, we can extract additional topological and geometrical
information. Finally, we numerically show that non-isotropic persistent
homology can extract information on orientation, orientational variance, and
scaling of randomly generated point clouds with good accuracy and conduct some
experiments on real-world data.
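To make the idea above concrete, here is a minimal, hedged sketch (not the authors' implementation): a one-parameter family of non-isotropic metrics $d_A(x,y) = \|A(x-y)\|$ is probed by linearly transforming the point cloud and computing ordinary Vietoris-Rips persistence, and the orientation of the cloud is read off from how the persistence diagram shifts with the probe direction. The noisy-ellipse example, the shrink-by-3 probe, the max-H1-persistence statistic, and the use of the `ripser` package are all illustrative assumptions.

```python
# Sketch: probe a point cloud with a family of non-isotropic metrics,
# realised by transforming the points and running standard Rips persistence.
import numpy as np
from ripser import ripser  # assumed available: pip install ripser

rng = np.random.default_rng(0)

# Noisy ellipse with hidden orientation theta_true and 3:1 axis ratio.
theta_true = 0.6
angles = rng.uniform(0, 2 * np.pi, 250)
ellipse = np.c_[3.0 * np.cos(angles), 1.0 * np.sin(angles)]
R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
              [np.sin(theta_true),  np.cos(theta_true)]])
X = ellipse @ R.T + 0.05 * rng.normal(size=ellipse.shape)

def max_h1_persistence(points, theta, scale=3.0):
    """Largest H1 persistence under a metric that shrinks the direction
    at angle `theta` by `scale` (one non-isotropic probe)."""
    c, s = np.cos(theta), np.sin(theta)
    A = np.array([[c, s], [-s, c]])                # rotate probe direction onto the x-axis
    Y = (points @ A.T) / np.array([scale, 1.0])    # shrink the probed axis
    dgm1 = ripser(Y, maxdim=1)['dgms'][1]
    return float((dgm1[:, 1] - dgm1[:, 0]).max()) if len(dgm1) else 0.0

# Sweep probe directions: the loop is most circular, hence most persistent,
# when the probe aligns with the hidden major axis. The response curve over
# theta carries orientation information that a single fixed metric discards.
thetas = np.linspace(0, np.pi, 18, endpoint=False)
response = [max_h1_persistence(X, t) for t in thetas]
print("estimated orientation:", thetas[int(np.argmax(response))])  # ~ theta_true
```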
Related papers
- Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifolds, named soft manifolds, that can address this problem.
Using soft manifolds for graph embedding, we can provide continuous spaces suitable for any data analysis task over complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z) - Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that appropriate weight normalization that is reminiscent of batch normalization can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
arXiv Detail & Related papers (2023-09-07T16:55:50Z) - Adaptive Topological Feature via Persistent Homology: Filtration
Learning for Point Clouds [13.098609653951893]
We propose a framework that learns a filtration adaptively with the use of neural networks.
We show a theoretical result on a finite-dimensional approximation of filtration functions, which justifies the proposed network architecture.
arXiv Detail & Related papers (2023-07-18T13:43:53Z) - Convolutional Filtering on Sampled Manifolds [122.06927400759021]
We show that convolutional filtering on a sampled manifold converges to continuous manifold filtering.
Our findings are further demonstrated empirically on a problem of navigation control.
arXiv Detail & Related papers (2022-11-20T19:09:50Z) - On topological data analysis for SHM; an introduction to persistent
homology [0.0]
The main tool within topological data analysis is persistent homology.
Persistent homology is a representation of how the homological features of the data persist over an interval.
These results allow for topological inference and the ability to deduce features in higher-dimensional data.
arXiv Detail & Related papers (2022-09-12T12:02:39Z) - The Manifold Hypothesis for Gradient-Based Explanations [55.01671263121624]
Gradient-based explanation algorithms provide perceptually-aligned explanations.
We show that the more a feature attribution is aligned with the tangent space of the data, the more perceptually-aligned it tends to be.
We suggest that explanation algorithms should actively strive to align their explanations with the data manifold.
arXiv Detail & Related papers (2022-06-15T08:49:24Z) - Tight basis cycle representatives for persistent homology of large data
sets [0.0]
Persistent homology (PH) is a popular tool for topological data analysis that has found applications across diverse areas of research.
Although powerful in theory, PH suffers from high computation cost that precludes its application to large data sets.
We provide a strategy and algorithms to compute tight representative boundaries around nontrivial robust features in large data sets.
arXiv Detail & Related papers (2022-06-06T22:00:42Z) - Robust Topological Inference in the Presence of Outliers [18.6112824677157]
The distance function to a compact set plays a crucial role in the paradigm of topological data analysis.
Despite its stability to perturbations in the Hausdorff distance, persistent homology is highly sensitive to outliers.
We propose a $\textit{median-of-means}$ variant of the distance function ($\textsf{MoM Dist}$), and establish its statistical properties.
arXiv Detail & Related papers (2022-06-03T19:45:43Z) - Time-inhomogeneous diffusion geometry and topology [69.55228523791897]
Diffusion condensation is a time-inhomogeneous process where each step first computes and then applies a diffusion operator to the data (a minimal sketch of one such step appears after this list).
We theoretically analyze the convergence and evolution of this process from geometric, spectral, and topological perspectives.
Our work gives theoretical insights into the convergence of diffusion condensation, and shows that it provides a link between topological and geometric data analysis.
arXiv Detail & Related papers (2022-03-28T16:06:17Z) - Quantum Persistent Homology [0.9023847175654603]
Persistent homology is a powerful mathematical tool that summarizes useful information about the shape of data.
We develop an efficient quantum computation of persistent Betti numbers, which track topological features of data across different scales.
Our approach employs a persistent Dirac operator whose square yields the persistent Laplacian, and in turn the underlying persistent Betti numbers.
arXiv Detail & Related papers (2022-02-25T20:52:03Z) - GPCO: An Unsupervised Green Point Cloud Odometry Method [64.86292006892093]
A lightweight point cloud odometry solution is proposed and named the green point cloud odometry (GPCO) method.
GPCO is an unsupervised learning method that predicts object motion by matching features of consecutive point cloud scans.
It is observed that GPCO outperforms benchmark deep learning methods in accuracy while having a significantly smaller model size and shorter training time.
arXiv Detail & Related papers (2021-12-08T00:24:03Z)
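The "Time-inhomogeneous diffusion geometry and topology" entry above describes an iterative scheme in which each step computes a diffusion operator from the current data and then applies it. The following is a minimal sketch of one plausible reading of such a step; the Gaussian affinity kernel, the fixed bandwidth, and the iteration count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def diffusion_condensation_step(X, epsilon=0.5):
    """One step: build a Markov diffusion operator from the current
    points, then apply it so points move toward local weighted means."""
    # Pairwise squared Euclidean distances
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Gaussian affinity kernel with bandwidth epsilon (illustrative choice)
    K = np.exp(-D2 / epsilon)
    # Row-normalise to obtain a row-stochastic diffusion operator P
    P = K / K.sum(axis=1, keepdims=True)
    return P @ X

# Time-inhomogeneous: P is recomputed from the condensed data at each step.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
for _ in range(50):
    X = diffusion_condensation_step(X)
```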
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.