Topological Data Analysis for Neural Network Analysis: A Comprehensive Survey
- URL: http://arxiv.org/abs/2312.05840v2
- Date: Wed, 3 Jan 2024 15:58:21 GMT
- Title: Topological Data Analysis for Neural Network Analysis: A Comprehensive Survey
- Authors: Rubén Ballester, Carles Casacuberta, Sergio Escalera
- Abstract summary: This survey provides a comprehensive exploration of applications of Topological Data Analysis (TDA) within neural network analysis.
We discuss different strategies to obtain topological information from data and neural networks by means of TDA.
We explore practical implications of deep learning, specifically focusing on areas like adversarial detection and model selection.
- Score: 35.29334376503123
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This survey provides a comprehensive exploration of applications of
Topological Data Analysis (TDA) within neural network analysis. Using TDA tools
such as persistent homology and Mapper, we delve into the intricate structures
and behaviors of neural networks and their datasets. We discuss different
strategies to obtain topological information from data and neural networks by
means of TDA. Additionally, we review how topological information can be
leveraged to analyze properties of neural networks, such as their
generalization capacity or expressivity. We explore practical implications of
deep learning, specifically focusing on areas like adversarial detection and
model selection. Our survey organizes the examined works into four broad
domains: 1. Characterization of neural network architectures; 2. Analysis of
decision regions and boundaries; 3. Study of internal representations,
activations, and parameters; 4. Exploration of training dynamics and loss
functions. Within each category, we discuss several articles, offering
background information to aid in understanding the various methodologies. We
conclude with a synthesis of key insights gained from our study, accompanied by
a discussion of challenges and potential advancements in the field.
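The persistent homology that the survey builds on can be illustrated in its simplest, 0-dimensional form: as a distance threshold grows, connected components of the data merge, and each merge event defines the death of a topological feature. The sketch below is a minimal pure-Python illustration of this idea, not a method from the survey; the function name `persistence_h0` is a hypothetical choice, and practical analyses would use dedicated libraries such as GUDHI or Ripser.

```python
import math
from itertools import combinations

def persistence_h0(points):
    """0-dimensional persistent homology of a Vietoris-Rips filtration.

    Every point is born as its own component at scale 0; a component dies
    when it merges into another one. Merges are found Kruskal-style with a
    union-find over edges sorted by Euclidean length.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Find the root of i's component, with path compression.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
            bars.append((0.0, d))  # one component dies at this merge scale
    bars.append((0.0, math.inf))  # the final component never dies
    return bars
```

On two well-separated pairs of points such as `[(0, 0), (0, 1), (10, 0), (10, 1)]`, the barcode contains two short bars dying at scale 1.0, one long bar dying at 10.0, and one infinite bar; the long and infinite bars reflect the two clusters.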
Related papers
- Ontology Embedding: A Survey of Methods, Applications and Resources [54.3453925775069]
Ontologies are widely used for representing domain knowledge and meta data.
One straightforward solution is to integrate statistical analysis and machine learning.
Numerous papers have been published on embedding, but a lack of systematic reviews hinders researchers from gaining a comprehensive understanding of this field.
arXiv Detail & Related papers (2024-06-16T14:49:19Z)
- Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds [12.037840490243603]
We investigate the internal mechanisms of neural networks through the lens of neural population geometry.
We quantitatively characterize how different learning objectives lead to differences in the organizational strategies of these models.
These analyses present a strong direction for bridging mechanistic and normative theories in neural networks through neural population geometry.
arXiv Detail & Related papers (2023-12-21T20:40:51Z)
- Deep neural networks architectures from the perspective of manifold learning [0.0]
This paper is a comprehensive comparison and description of neural network architectures in terms of geometry and topology.
We focus on the internal representation of neural networks and on the dynamics of changes in the topology and geometry of a data manifold on different layers.
arXiv Detail & Related papers (2023-06-06T04:57:39Z)
- Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks [107.8565143456161]
We investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks.
Results show that synergy increases as neural networks learn multiple diverse tasks.
Randomly turning off neurons during training through dropout increases network redundancy, corresponding to an increase in robustness.
arXiv Detail & Related papers (2022-10-06T15:36:27Z)
- Topology and geometry of data manifold in deep learning [0.0]
This article describes and substantiates the geometric and topological view of the learning process of neural networks.
We present a wide range of experiments on different datasets and different configurations of convolutional neural network architectures.
Our work is a contribution to the development of an important area of explainable and interpretable AI through the example of computer vision.
arXiv Detail & Related papers (2022-04-19T02:57:47Z)
- Activation Landscapes as a Topological Summary of Neural Network Performance [0.0]
We study how data transforms as it passes through successive layers of a deep neural network (DNN)
We compute the persistent homology of the activation data for each layer of the network and summarize this information using persistence landscapes.
The resulting feature map provides both an informative visualization of the network and a kernel for statistical analysis and machine learning.
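The persistence landscapes mentioned above turn a persistence diagram into a function that supports averaging and kernel methods. The following is a hedged pure-Python sketch of the standard landscape definition, not the paper's implementation; the name `landscape` and the grid-sampling interface are illustrative choices, and optimized versions exist in libraries such as GUDHI.

```python
def landscape(diagram, ts, k=1):
    """Sample the k-th persistence landscape lambda_k at the values in ts.

    Each finite (birth, death) bar contributes a 'tent' function
    max(0, min(t - birth, death - t)); lambda_k(t) is the k-th largest
    tent value at t, or 0 if fewer than k bars cover t.
    """
    out = []
    for t in ts:
        tents = sorted(
            (max(0.0, min(t - b, d - t)) for b, d in diagram),
            reverse=True,
        )
        out.append(tents[k - 1] if len(tents) >= k else 0.0)
    return out
```

For the two nested bars `[(0, 4), (1, 3)]`, the first landscape sampled on `[0, 1, 2, 3, 4]` is the outer tent `[0, 1, 2, 1, 0]`, while the second landscape `[0, 0, 1, 0, 0]` records where both bars overlap.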
arXiv Detail & Related papers (2021-10-19T17:45:36Z)
- A Survey of Community Detection Approaches: From Statistical Modeling to Deep Learning [95.27249880156256]
We develop and present a unified architecture of network community-finding methods.
We introduce a new taxonomy that divides the existing methods into two categories, namely probabilistic graphical model and deep learning.
We conclude with discussions of the challenges of the field and suggestions of possible directions for future research.
arXiv Detail & Related papers (2021-01-03T02:32:45Z)
- Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the loss gradient function flow.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z)
- Inter-layer Information Similarity Assessment of Deep Neural Networks Via Topological Similarity and Persistence Analysis of Data Neighbour Dynamics [93.4221402881609]
The quantitative analysis of information structure through a deep neural network (DNN) can unveil new insights into the theoretical performance of DNN architectures.
Inspired by both LS and ID strategies for quantitative information structure analysis, we introduce two novel complementary methods for inter-layer information similarity assessment.
We demonstrate their efficacy in this study by performing analysis on a deep convolutional neural network architecture on image data.
arXiv Detail & Related papers (2020-12-07T15:34:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.