Robust Persistence Diagrams using Reproducing Kernels
- URL: http://arxiv.org/abs/2006.10012v2
- Date: Fri, 3 Jun 2022 19:56:03 GMT
- Title: Robust Persistence Diagrams using Reproducing Kernels
- Authors: Siddharth Vishwanath and Kenji Fukumizu and Satoshi Kuriki and Bharath Sriperumbudur
- Abstract summary: We develop a framework for constructing robust persistence diagrams from superlevel filtrations of robust density estimators constructed using kernels.
We demonstrate the superiority of the proposed approach on benchmark datasets.
- Score: 15.772439913138161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Persistent homology has become an important tool for extracting geometric and
topological features from data, whose multi-scale features are summarized in a
persistence diagram. From a statistical perspective, however, persistence
diagrams are very sensitive to perturbations in the input space. In this work,
we develop a framework for constructing robust persistence diagrams from
superlevel filtrations of robust density estimators constructed using
reproducing kernels. Using an analogue of the influence function on the space
of persistence diagrams, we establish the proposed framework to be less
sensitive to outliers. The robust persistence diagrams are shown to be
consistent estimators in bottleneck distance, with the convergence rate
controlled by the smoothness of the kernel. This, in turn, allows us to
construct uniform confidence bands in the space of persistence diagrams.
Finally, we demonstrate the superiority of the proposed approach on benchmark
datasets.
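The pipeline described in the abstract (fit a robust kernel density estimate, then read off the persistence diagram of its superlevel filtration) can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a Gaussian kernel, a Huber-type iteratively reweighted scheme for the robust estimator (in the spirit of the robust KDE literature the paper builds on), and gudhi's CubicalComplex for computing persistence on a grid. All function names, parameters, and defaults here are illustrative.

```python
import numpy as np
import gudhi  # pip install gudhi

def gaussian_kernel(X, Y, sigma):
    """Pairwise Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def robust_kde_weights(X, sigma, huber_scale=1.0, n_iter=50):
    """Huber-type iteratively reweighted scheme for a robust KDE: points far
    from the bulk (in RKHS norm) get down-weighted. This is a simplified
    sketch of the robust-KDE idea, not the paper's exact estimator."""
    K = gaussian_kernel(X, X, sigma)
    w = np.full(len(X), 1.0 / len(X))
    for _ in range(n_iter):
        # RKHS distance of each k(., x_i) to the current estimate
        # f = sum_j w_j k(., x_j):  ||k(., x_i) - f||^2 = K_ii - 2(Kw)_i + w^T K w
        d = np.sqrt(np.maximum(np.diag(K) - 2.0 * K @ w + w @ K @ w, 0.0))
        # Huber weights: constant inside huber_scale, decaying as 1/d outside.
        psi = np.where(d <= huber_scale, 1.0, huber_scale / np.maximum(d, 1e-12))
        w = psi / psi.sum()
    return w

def robust_superlevel_persistence(X, sigma, grid_res=64):
    """Persistence of the superlevel filtration of the robust KDE, computed on
    a regular grid. gudhi's cubical complex filters by sublevel sets, so we
    negate the density; births/deaths in the output carry flipped signs."""
    w = robust_kde_weights(X, sigma)
    lo, hi = X.min(axis=0) - 3 * sigma, X.max(axis=0) + 3 * sigma
    axes = [np.linspace(lo[k], hi[k], grid_res) for k in range(X.shape[1])]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, X.shape[1])
    density = gaussian_kernel(grid, X, sigma) @ w  # robust density on the grid
    cubical = gudhi.CubicalComplex(
        dimensions=[grid_res] * X.shape[1],
        top_dimensional_cells=(-density).tolist(),
    )
    return cubical.persistence()
```

Because outlying points receive small weights w_i, the spurious high-persistence features they would induce in an ordinary KDE diagram are damped, which is roughly the effect the paper's influence-function analysis quantifies.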
Related papers
- Deep End-to-End Survival Analysis with Temporal Consistency [49.77103348208835]
We present a novel survival analysis algorithm designed to efficiently handle large-scale longitudinal data.
A central idea in our method is temporal consistency, a hypothesis that past and future outcomes in the data evolve smoothly over time.
Our framework uniquely incorporates temporal consistency into large datasets by providing a stable training signal.
arXiv Detail & Related papers (2024-10-09T11:37:09Z) - Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z) - Regulating Model Reliance on Non-Robust Features by Smoothing Input Marginal Density [93.32594873253534]
Trustworthy machine learning requires meticulous regulation of model reliance on non-robust features.
We propose a framework to delineate and regulate such features by attributing model predictions to the input.
arXiv Detail & Related papers (2024-07-05T09:16:56Z) - Robust Graph Neural Networks via Unbiased Aggregation [20.40814320483077]
The adversarial robustness of Graph Neural Networks (GNNs) has been questioned due to the false sense of security uncovered by strong adaptive attacks.
We provide a unified robust estimation point of view to understand their robustness and limitations.
arXiv Detail & Related papers (2023-11-25T05:34:36Z) - $k$-Means Clustering for Persistent Homology [0.0]
We prove convergence of the $k$-means clustering algorithm on persistence diagram space.
We also establish theoretical properties of the solution to the optimization problem in the Karush--Kuhn--Tucker framework.
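That paper works directly in diagram space; a common practical stand-in is to first embed diagrams in a vector space and cluster the embeddings. The sketch below uses persistence images (a swapped-in vectorization, not that paper's method) with synthetic placeholder diagrams:

```python
import numpy as np
from sklearn.cluster import KMeans
from gudhi.representations import PersistenceImage

# Placeholder diagrams: list of (n_i, 2) arrays of (birth, death) pairs.
rng = np.random.default_rng(0)
diagrams = [rng.uniform(0, 1, (30, 2)) for _ in range(20)]
diagrams = [np.column_stack([d[:, 0], d[:, 0] + d[:, 1]]) for d in diagrams]  # death >= birth

# Vectorize each diagram as a persistence image, then run ordinary k-means.
vectors = PersistenceImage(bandwidth=0.1, resolution=[20, 20]).fit_transform(diagrams)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(vectors)
```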
arXiv Detail & Related papers (2022-10-18T17:18:51Z) - Robust Topological Inference in the Presence of Outliers [18.6112824677157]
The distance function to a compact set plays a crucial role in the paradigm of topological data analysis.
Despite its stability to perturbations in the Hausdorff distance, persistent homology is highly sensitive to outliers.
We propose a $\textit{median-of-means}$ variant of the distance function ($\textsf{MoM Dist}$), and establish its statistical properties.
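The median-of-means construction is simple enough to sketch directly: randomly split the sample into Q blocks, compute the distance from each query point to each block, and take the pointwise median. This is a hedged illustration of the general MoM idea, not that paper's exact estimator; the block count Q and function names are made up for the example.

```python
import numpy as np

def mom_dist(query, X, Q=10, seed=None):
    """Median-of-means distance function: split X into Q random blocks,
    take dist(query, block) = min over the block's points, then the
    median over blocks. Illustrative sketch of the MoM Dist idea."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(len(X)), Q)
    # dists[b, q] = distance from query point q to block b.
    dists = np.stack([
        np.linalg.norm(query[:, None, :] - X[idx][None, :, :], axis=-1).min(axis=1)
        for idx in blocks
    ])
    return np.median(dists, axis=0)

# Example: robust distances from the origin to a noisy circle with outliers.
theta = np.random.default_rng(0).uniform(0, 2 * np.pi, 300)
sample = np.column_stack([np.cos(theta), np.sin(theta)])
outliers = np.random.default_rng(1).uniform(-2, 2, (30, 2))
values = mom_dist(np.zeros((1, 2)), np.vstack([sample, outliers]), Q=15)
```

Evaluating mom_dist on a grid and feeding the values into a sublevel filtration (for instance, the cubical-complex route sketched earlier) then yields a robust persistence diagram in that paper's sense.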
arXiv Detail & Related papers (2022-06-03T19:45:43Z) - Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z) - Stability of Neural Networks on Manifolds to Relative Perturbations [118.84154142918214]
Graph Neural Networks (GNNs) show impressive performance in many practical scenarios.
GNNs are expected to scale to large graphs, yet existing stability bounds grow with the number of nodes.
arXiv Detail & Related papers (2021-10-10T04:37:19Z) - A Domain-Oblivious Approach for Learning Concise Representations of Filtered Topological Spaces [7.717214217542406]
We propose a persistence diagram hashing framework that learns a binary code representation of persistence diagrams.
This framework is built upon a generative adversarial network (GAN) with a diagram distance loss function to steer the learning process.
Our proposed method is directly applicable to various datasets without the need to retrain the model.
arXiv Detail & Related papers (2021-05-25T20:44:28Z) - A Fast and Robust Method for Global Topological Functional Optimization [70.11080854486953]
We introduce a novel backpropagation scheme that is significantly faster, more stable, and produces more robust optima.
This scheme can also be used to produce a stable visualization of dots in a persistence diagram as a distribution over critical, and near-critical, simplices in the data structure.
arXiv Detail & Related papers (2020-09-17T18:46:16Z) - Understanding the Power of Persistence Pairing via Permutation Test [13.008323851750442]
We carry out a range of experiments on both graph data and shape data, aiming to decouple and inspect the effects of different factors involved.
For graph classification tasks, we note that while persistence pairing yields consistent improvement over various benchmark datasets, most of the discriminative power comes from the critical values.
For shape segmentation and classification, however, we note that persistence pairing shows significant power on most of the benchmark datasets.
arXiv Detail & Related papers (2020-01-16T20:13:20Z)