Can neural networks learn persistent homology features?
- URL: http://arxiv.org/abs/2011.14688v1
- Date: Mon, 30 Nov 2020 10:58:53 GMT
- Title: Can neural networks learn persistent homology features?
- Authors: Guido Montúfar, Nina Otter, Yuguang Wang
- Abstract summary: Topological data analysis uses tools from topology to create representations of data.
In our work, we explore the possibility of learning several types of features extracted from persistence diagrams using neural networks.
- Score: 1.1816942730023885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Topological data analysis uses tools from topology -- the mathematical area
that studies shapes -- to create representations of data. In particular, in
persistent homology, one studies one-parameter families of spaces associated
with data, and persistence diagrams describe the lifetime of topological
invariants, such as connected components or holes, across the one-parameter
family. In many applications, one is interested in working with features
associated with persistence diagrams rather than the diagrams themselves. In
our work, we explore the possibility of learning several types of features
extracted from persistence diagrams using neural networks.
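As a minimal, self-contained sketch of the objects the abstract describes (not code from the paper), the following pure-Python function computes the 0-dimensional persistence diagram, the birth and death scales of connected components, for the Vietoris-Rips filtration of a point set, using a Kruskal-style union-find and the elder rule. The function name and the example points are illustrative assumptions.

```python
import math
from itertools import combinations

def h0_persistence(points):
    """0-dimensional persistence diagram of a Vietoris-Rips filtration.

    Every point is born at scale 0; when two components merge at edge
    length d, the younger one dies at d (elder rule). The component
    that survives all merges gets an infinite death time.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Union-find root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Edges of the filtration, sorted by pairwise distance.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    diagram = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri          # merge: one component dies at scale d
            diagram.append((0.0, d))
    diagram.append((0.0, math.inf))  # the last component never dies
    return diagram

# Two well-separated pairs on a line: each pair merges at distance 1,
# then the two pairs merge at the gap of length 9.
pts = [(0, 0), (1, 0), (10, 0), (11, 0)]
print(h0_persistence(pts))
# → [(0.0, 1.0), (0.0, 1.0), (0.0, 9.0), (0.0, inf)]
```

Features of the kind the paper studies would then be computed from such diagrams rather than from the raw point cloud.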
Related papers
- A Survey of Geometric Graph Neural Networks: Data Structures, Models and Applications [67.33002207179923]
This paper presents a survey of data structures, models, and applications related to geometric GNNs.
We provide a unified view of existing models from the geometric message passing perspective.
We also summarize the applications as well as the related datasets to facilitate later research for methodology development and experimental evaluation.
arXiv Detail & Related papers (2024-03-01T12:13:04Z)
- Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifolds, named soft manifolds, that can address this mismatch.
Using soft manifold for graph embedding, we can provide continuous spaces to pursue any task in data analysis over complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z)
- Synthetic Data Generation and Deep Learning for the Topological Analysis of 3D Data [0.0]
This research uses deep learning to estimate the topology of sparse, unordered point cloud scenes in 3D.
The experimental results of this pilot study support the hypothesis that, with the aid of sophisticated synthetic data generation, neural networks can perform segmentation-based topological data analysis.
arXiv Detail & Related papers (2023-09-29T04:37:35Z)
- Persistence-based operators in machine learning [62.997667081978825]
We introduce a class of persistence-based neural network layers.
Persistence-based layers allow the users to easily inject knowledge about symmetries respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
arXiv Detail & Related papers (2022-12-28T18:03:41Z)
- Neural Graphical Models [2.6842860806280058]
We introduce Neural Graphical Models (NGMs) to represent complex feature dependencies with reasonable computational costs.
We capture the dependency structure between the features along with their complex function representations by using a neural network as a multi-task learning framework.
NGMs can fit generic graph structures including directed, undirected and mixed-edge graphs as well as support mixed input data types.
arXiv Detail & Related papers (2022-10-02T07:59:51Z)
- Rethinking Persistent Homology for Visual Recognition [27.625893409863295]
This paper performs a detailed analysis of the effectiveness of topological properties for image classification in various training scenarios.
We identify the scenarios that benefit the most from topological features, e.g., training simple networks on small datasets.
arXiv Detail & Related papers (2022-07-09T08:01:11Z)
- Hyperbolic Graph Neural Networks: A Review of Methods and Applications [55.5502008501764]
Graph neural networks generalize conventional neural networks to graph-structured data.
The performance of Euclidean models in graph-related learning remains limited by the representation capacity of Euclidean geometry.
Recently, hyperbolic space has gained increasing popularity in processing graph data with tree-like structure and power-law distribution.
arXiv Detail & Related papers (2022-02-28T15:08:48Z)
- Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z)
- Smart Vectorizations for Single and Multiparameter Persistence [8.504400925390296]
We introduce two new topological summaries for single and multiparameter persistence, namely, saw functions and multi-persistence grid functions.
These new topological summaries can be regarded as the complexity measures of the evolving subspaces determined by the filtration.
We derive theoretical guarantees on the stability of the new saw and multi-persistence grid functions and illustrate their applicability for graph classification tasks.
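Saw functions and multi-persistence grid functions are that paper's own constructions; as a generic illustration of how a persistence diagram is vectorized into a fixed feature, here is a minimal sketch of persistence entropy, a standard summary unrelated to the paper above (the function name is an assumption).

```python
import math

def persistence_entropy(diagram):
    """Shannon entropy of the normalized finite bar lengths.

    A scalar feature of a persistence diagram: diagrams dominated by
    one long bar score low, diagrams with many equal bars score high.
    Illustrative only; not the saw/grid functions of the cited paper.
    """
    lengths = [d - b for b, d in diagram if math.isfinite(d) and d > b]
    total = sum(lengths)
    probs = [length / total for length in lengths]
    return -sum(p * math.log(p) for p in probs)

# Two equal finite bars (the infinite bar is ignored): entropy log(2).
print(persistence_entropy([(0.0, 1.0), (0.0, 1.0), (0.0, math.inf)]))
```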
arXiv Detail & Related papers (2021-04-10T15:09:31Z)
- Graph Laplacians, Riemannian Manifolds and their Machine-Learning [2.258160413679475]
We apply some of the latest techniques in data science such as supervised and unsupervised machine-learning and topological data analysis to the Wolfram database of some 8000 finite graphs.
We find that neural classifiers, regressors and networks can perform, with high efficiency and accuracy, a multitude of tasks ranging from recognizing graph Ricci-flatness, to predicting the spectral gap, to detecting the presence of Hamiltonian cycles.
arXiv Detail & Related papers (2020-06-30T09:16:56Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup allowing for a neural network to learn both its size and topology during the course of a gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.