Convolutional Persistence Transforms
- URL: http://arxiv.org/abs/2208.02107v2
- Date: Thu, 25 Jan 2024 09:38:55 GMT
- Title: Convolutional Persistence Transforms
- Authors: Elchanan Solomon, Paul Bendich
- Abstract summary: We consider topological featurizations of data defined over simplicial complexes, like images and labeled graphs, obtained by convolving the data with filters before computing persistence.
Viewing a filter as a local motif, the persistence diagram of the resulting convolution describes how the motif is distributed across the simplicial complex.
- Score: 0.6526824510982802
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this paper, we consider topological featurizations of data defined over
simplicial complexes, like images and labeled graphs, obtained by convolving
this data with various filters before computing persistence. Viewing a
convolution filter as a local motif, the persistence diagram of the resulting
convolution describes the way the motif is distributed across the simplicial
complex. This pipeline, which we call convolutional persistence, extends the
capacity of topology to observe patterns in such data. Moreover, we prove that
(generically speaking) for any two labeled complexes one can find some filter
for which they produce different persistence diagrams, so that the collection
of all possible convolutional persistence diagrams is an injective invariant.
This is proven by showing convolutional persistence to be a special case of
another topological invariant, the Persistent Homology Transform. Other
advantages of convolutional persistence are improved stability, greater
flexibility for data-dependent vectorizations, and reduced computational
complexity for certain data types. Additionally, we present a suite of
experiments showing that convolutions greatly improve the predictive power of
persistence on a host of classification tasks, even when one uses random
filters and vectorizes the resulting diagrams by recording only their total
persistences.
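
To make the pipeline concrete, here is a minimal sketch for image data, assuming GUDHI for cubical (sublevel-set) persistence and SciPy for convolution; the paper does not tie the method to any particular library, and the random-filter, total-persistence featurization below mirrors only the simplest experiment mentioned in the abstract.

```python
# Minimal sketch of convolutional persistence for images (assumptions:
# GUDHI for cubical persistence, SciPy for convolution, random filters,
# and total persistence as the diagram vectorization).
import numpy as np
from scipy.ndimage import convolve
import gudhi


def total_persistence(diagram):
    # Sum of bar lengths (death - birth), skipping infinite bars.
    return sum(d - b for b, d in diagram if np.isfinite(d))


def convolutional_persistence_features(image, n_filters=8, size=3, seed=0):
    # Convolve the image with random filters, compute the sublevel-set
    # persistence diagram of each filtered image, and record only its
    # total persistence.
    rng = np.random.default_rng(seed)
    features = []
    for _ in range(n_filters):
        kernel = rng.standard_normal((size, size))
        filtered = convolve(image.astype(float), kernel, mode="reflect")
        cc = gudhi.CubicalComplex(top_dimensional_cells=filtered)
        diagram = [pair for _, pair in cc.persistence()]
        features.append(total_persistence(diagram))
    return np.array(features)


# Example: featurize a random 28x28 "image" with 8 random filters.
features = convolutional_persistence_features(np.random.rand(28, 28))
print(features.shape)  # (8,)
```

Richer vectorizations (e.g., persistence images or landscapes) can be swapped in for `total_persistence` without changing the rest of the pipeline.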
Related papers
- Unsupervised Representation Learning from Sparse Transformation Analysis [79.94858534887801]
We propose to learn representations from sequence data by factorizing the transformations of the latent variables into sparse components.
Input data are first encoded as distributions of latent activations and subsequently transformed using a probability flow model.
arXiv Detail & Related papers (2024-10-07T23:53:25Z)
- Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that appropriate weight normalization that is reminiscent of batch normalization can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
arXiv Detail & Related papers (2023-09-07T16:55:50Z)
- ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce a powerful model class, "Denoising Diffusion Probabilistic Models" (DDPMs), for chirographic data.
Our model, named "ChiroDiff", is non-autoregressive: it learns to capture holistic concepts and therefore remains resilient to higher temporal sampling rates.
arXiv Detail & Related papers (2023-04-07T15:17:48Z)
- DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion [66.21290235237808]
We introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states.
We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs.
Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks.
arXiv Detail & Related papers (2023-01-23T15:18:54Z)
- Convolutional Filtering on Sampled Manifolds [122.06927400759021]
We show that convolutional filtering on a sampled manifold converges to continuous manifold filtering.
Our findings are further demonstrated empirically on a problem of navigation control.
arXiv Detail & Related papers (2022-11-20T19:09:50Z)
- Approximating Persistent Homology for Large Datasets [0.0]
Persistent homology produces a statistical summary in the form of a persistence diagram.
Despite its widespread use, persistent homology becomes computationally infeasible for very large datasets.
We show that the mean of the persistence diagrams of subsamples is a valid approximation of the true persistent homology of the larger dataset (a simplified sketch follows this list).
arXiv Detail & Related papers (2022-04-19T23:07:27Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
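
As a toy illustration of the subsampling idea from "Approximating Persistent Homology for Large Datasets" above: the sketch below computes Vietoris-Rips diagrams on random subsamples but averages a simple scalar summary (total persistence) rather than the diagrams themselves, which that paper handles with Fréchet means of diagrams; the function names and parameters here are illustrative, not taken from the paper.

```python
# Toy subsample-and-average proxy for large-dataset persistence (assumption:
# GUDHI's Rips complex; averaging a scalar summary instead of diagrams).
import numpy as np
import gudhi


def subsampled_total_persistence(points, n_subsamples=10,
                                 subsample_size=100, seed=0):
    rng = np.random.default_rng(seed)
    summaries = []
    for _ in range(n_subsamples):
        # Draw a small random subsample and compute its Rips persistence.
        idx = rng.choice(len(points), size=subsample_size, replace=False)
        rips = gudhi.RipsComplex(points=points[idx], max_edge_length=2.0)
        st = rips.create_simplex_tree(max_dimension=2)
        diagram = st.persistence()
        # Total persistence of the subsample's diagram, ignoring infinite bars.
        summaries.append(sum(d - b for _, (b, d) in diagram if np.isfinite(d)))
    # Mean over subsamples: a cheap stand-in for the full computation.
    return float(np.mean(summaries))


# Example: a noisy circle with 5,000 points, summarized from subsamples.
theta = np.random.rand(5000) * 2 * np.pi
cloud = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.randn(5000, 2)
print(subsampled_total_persistence(cloud))
```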