Rethinking Persistent Homology for Visual Recognition
- URL: http://arxiv.org/abs/2207.04220v1
- Date: Sat, 9 Jul 2022 08:01:11 GMT
- Title: Rethinking Persistent Homology for Visual Recognition
- Authors: Ekaterina Khramtsova, Guido Zuccon, Xi Wang, Mahsa Baktashmotlagh
- Abstract summary: This paper performs a detailed analysis of the effectiveness of topological properties for image classification in various training scenarios.
We identify the scenarios that benefit the most from topological features, e.g., training simple networks on small datasets.
- Score: 27.625893409863295
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Persistent topological properties of an image serve as an additional
descriptor, providing insights that traditional neural networks might not
discover. Existing research in this area focuses primarily on efficiently
integrating topological properties of the data into the learning process in
order to enhance performance. However, no existing study demonstrates the full
range of scenarios in which introducing topological properties can boost or
harm performance. This paper performs a detailed analysis of the effectiveness
of topological properties for image classification across training scenarios
defined by the number of training samples, the complexity of the training data,
and the complexity of the backbone network. We identify the scenarios that
benefit the most from topological features, e.g., training simple networks on
small datasets. Additionally, we discuss the problem of topological consistency
of datasets, one of the major bottlenecks for using topological features in
classification. We further demonstrate how topological inconsistency can harm
performance in certain scenarios.
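The abstract leaves the feature-extraction pipeline unspecified. As a minimal sketch of what "persistent topological properties of an image" can mean in practice, the code below computes the 0-dimensional persistence diagram of a grayscale image under the sublevel-set filtration (union-find with the elder rule) and vectorizes the finite bars into a few summary statistics. All names here are illustrative, not the authors' code; a real pipeline would more likely use a library such as GUDHI or giotto-tda and include higher-dimensional homology.

```python
import numpy as np

def zero_dim_persistence(img):
    """0-dimensional persistence pairs of a grayscale image under the
    sublevel-set filtration with 4-connectivity (union-find, elder rule)."""
    h, w = img.shape
    flat = img.ravel()
    order = np.argsort(flat, kind="stable")      # pixels by increasing intensity
    parent = np.full(h * w, -1, dtype=np.int64)  # -1: pixel not yet in the filtration
    birth = np.zeros(h * w)                      # birth value stored at each root
    pairs = []

    def find(x):                                 # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for idx in order:
        v = flat[idx]
        parent[idx] = idx                        # pixel enters as its own component
        birth[idx] = v
        r, c = divmod(idx, w)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < h and 0 <= nc < w):
                continue
            n = nr * w + nc
            if parent[n] == -1:                  # neighbour not born yet
                continue
            a, b = find(idx), find(n)
            if a == b:
                continue
            if birth[a] > birth[b]:              # elder rule: the older root survives
                a, b = b, a
            pairs.append((birth[b], v))          # younger component dies at value v
            parent[b] = a
    pairs.append((flat.min(), np.inf))           # one essential component never dies
    return np.array(pairs)

# Illustrative vectorization of the diagram into fixed-size features.
img = np.random.rand(28, 28)                     # stand-in for a real grayscale image
diag = zero_dim_persistence(img)
finite = diag[np.isfinite(diag[:, 1])]
life = finite[:, 1] - finite[:, 0]               # lifetimes of the finite bars
features = np.array([life.sum(), life.max(), (life > 0.1).sum()])
```

Such fixed-size summaries are one common way to feed persistence information to a classifier alongside (or instead of) raw pixels.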
Related papers
- Topograph: An efficient Graph-Based Framework for Strictly Topology Preserving Image Segmentation [78.54656076915565]
Topological correctness plays a critical role in many image segmentation tasks.
Most networks are trained using pixel-wise loss functions, such as Dice, neglecting topological accuracy.
We propose a novel, graph-based framework for topologically accurate image segmentation.
arXiv Detail & Related papers (2024-11-05T16:20:14Z)
- Topological Learning in Multi-Class Data Sets [0.3050152425444477]
We study the impact of topological complexity on learning in feedforward deep neural networks (DNNs).
We evaluate our topological classification algorithm on multiple constructed and open source data sets.
arXiv Detail & Related papers (2023-01-23T21:54:25Z)
- Do Neural Networks Trained with Topological Features Learn Different Internal Representations? [1.418465438044804]
We investigate whether a model trained with topological features learns internal representations of data that are fundamentally different than those learned by a model trained with the original raw data.
We find that structurally, the hidden representations of models trained and evaluated on topological features differ substantially compared to those trained and evaluated on the corresponding raw data.
We conjecture that this means neural networks trained on raw data may extract only limited topological features in the process of making predictions.
arXiv Detail & Related papers (2022-11-14T19:19:04Z)
- Topological Data Analysis of Neural Network Layer Representations [0.0]
The topological features of a simple feedforward neural network's layer representations of a modified torus with a Klein-bottle-like twist were computed.
The resulting noise hampered the ability of persistent homology to compute these features.
arXiv Detail & Related papers (2022-07-01T00:51:19Z)
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- Activation Landscapes as a Topological Summary of Neural Network Performance [0.0]
We study how data transforms as it passes through successive layers of a deep neural network (DNN).
We compute the persistent homology of the activation data for each layer of the network and summarize this information using persistence landscapes (a grid-sampled sketch of this summary appears after this list).
The resulting feature map provides both an informative visualization of the network and a kernel for statistical analysis and machine learning.
arXiv Detail & Related papers (2021-10-19T17:45:36Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural Summarization Systems [121.78477833009671]
We investigate the performance of different summarization models under a cross-dataset setting.
A comprehensive study of 11 representative summarization systems on 5 datasets from different domains reveals the effects of model architecture and generation approach.
arXiv Detail & Related papers (2020-10-11T02:19:15Z)
- A Topological Framework for Deep Learning [0.7310043452300736]
We show that the classification problem in machine learning is always solvable under very mild conditions.
In particular, we show that a softmax classification network acts on an input topological space by a finite sequence of topological moves to achieve the classification task.
arXiv Detail & Related papers (2020-08-31T15:56:42Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup allowing for a neural network to learn both its size and topology during the course of a gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
- A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference (a Sinkhorn-based sketch of this aggregation appears after this list).
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
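The "Activation Landscapes" entry above summarizes per-layer persistence diagrams with persistence landscapes: the k-th landscape function at t is the k-th largest value of max(0, min(t - b, d - t)) over the bars (b, d). A rough, grid-sampled illustration follows; the function name and sampling grid are assumptions, not the paper's implementation.

```python
import numpy as np

def persistence_landscape(diagram, ts, k=3):
    """Sample the first k persistence-landscape functions on the grid ts.
    diagram: (n, 2) array of finite (birth, death) pairs."""
    b = diagram[:, 0][:, None]                    # births, shape (n, 1)
    d = diagram[:, 1][:, None]                    # deaths, shape (n, 1)
    # Tent function of each bar, evaluated at every grid point.
    tents = np.maximum(0.0, np.minimum(ts[None, :] - b, d - ts[None, :]))
    tents = -np.sort(-tents, axis=0)              # sort descending per grid point
    out = np.zeros((k, ts.size))
    m = min(k, tents.shape[0])
    out[:m] = tents[:m]                           # row k-1 = k-th largest tent
    return out

# Example: two landscape levels of a small three-bar diagram.
diag = np.array([[0.1, 0.9], [0.2, 0.5], [0.4, 0.8]])
ts = np.linspace(0.0, 1.0, 101)
L = persistence_landscape(diag, ts, k=2)          # shape (2, 101)
```

Stacking such per-layer landscape samples yields a fixed-size feature map of the kind that entry describes.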
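Similarly, the optimal-transport entry directly above describes aggregating a set according to the transport plan between the set and a trainable reference. Below is a minimal NumPy sketch of that aggregation, assuming uniform marginals, entropic regularization, and plain Sinkhorn iterations; the names and scaling are illustrative rather than the paper's exact parametrization.

```python
import numpy as np

def sinkhorn_plan(K, n_iter=50):
    """Entropic-OT transport plan between uniform marginals.
    K: (n, p) Gibbs kernel, K[i, j] = exp(<x_i, z_j> / eps)."""
    n, p = K.shape
    a, b = np.full(n, 1.0 / n), np.full(p, 1.0 / p)  # uniform marginals
    u, v = np.ones(n), np.ones(p)
    for _ in range(n_iter):                  # alternating marginal scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]       # plan P: rows sum to a, cols to b

def ot_embedding(X, Z, eps=1.0):
    """Embed-and-aggregate a set X (n, d) onto a reference Z (p, d)."""
    P = sinkhorn_plan(np.exp(X @ Z.T / eps))
    # Columns of P sum to 1/p, so scaling by p makes each output row an
    # OT-weighted convex combination of the input elements.
    return Z.shape[0] * (P.T @ X)            # fixed-size (p, d) representation

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))                 # a set of 10 input features
Z = rng.normal(size=(3, 4))                  # trainable reference (random here)
emb = ot_embedding(X, Z)                     # (3, 4) output, independent of set size
```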
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.