Activation Landscapes as a Topological Summary of Neural Network Performance
- URL: http://arxiv.org/abs/2110.10136v1
- Date: Tue, 19 Oct 2021 17:45:36 GMT
- Title: Activation Landscapes as a Topological Summary of Neural Network Performance
- Authors: Matthew Wheeler, Jose Bouza, Peter Bubenik
- Abstract summary: We study how data transforms as it passes through successive layers of a deep neural network (DNN).
We compute the persistent homology of the activation data for each layer of the network and summarize this information using persistence landscapes.
The resulting feature map provides both an informative visualization of the network and a kernel for statistical analysis and machine learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We use topological data analysis (TDA) to study how data transforms as it
passes through successive layers of a deep neural network (DNN). We compute the
persistent homology of the activation data for each layer of the network and
summarize this information using persistence landscapes. The resulting feature
map provides both an informative visualization of the network and a kernel
for statistical analysis and machine learning. We observe that the topological
complexity often increases with training and that the topological complexity
does not decrease with each layer.
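To make the pipeline concrete, here is a minimal sketch, assuming the gudhi library and random stand-in activations (the function and its parameters are illustrative, not the authors' released code): compute the persistent homology of each layer's activation point cloud, then vectorize it as a persistence landscape.

```python
import numpy as np
import gudhi
from gudhi.representations import Landscape

def layer_landscape(activations, dim=1, num_landscapes=5, resolution=100):
    """Persistence landscape of one layer's activation point cloud."""
    rips = gudhi.RipsComplex(points=activations)
    st = rips.create_simplex_tree(max_dimension=dim + 1)
    st.persistence()  # compute all persistence intervals
    bars = st.persistence_intervals_in_dimension(dim)
    # Vectorize the degree-`dim` diagram as a fixed-length landscape vector.
    return Landscape(num_landscapes=num_landscapes,
                     resolution=resolution).fit_transform([bars])[0]

# Hypothetical stand-ins for the activations of three successive layers
# (rows = input samples, columns = neurons of that layer).
rng = np.random.default_rng(0)
landscapes = [layer_landscape(rng.standard_normal((100, 8))) for _ in range(3)]

# The norm of a landscape is one measure of topological complexity; the paper
# observes that this quantity need not decrease from layer to layer.
print([round(float(np.linalg.norm(pl)), 3) for pl in landscapes])
```

Because each layer is summarized by a vector of the same length, the layers can be compared directly, averaged, or fed to a kernel method, which is what makes the summary usable for statistical analysis.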
Related papers
- Deep neural networks architectures from the perspective of manifold learning [0.0]
This paper is a comprehensive comparison and description of neural network architectures in terms of geometry and topology.
We focus on the internal representation of neural networks and on the dynamics of changes in the topology and geometry of a data manifold on different layers.
arXiv Detail & Related papers (2023-06-06T04:57:39Z)
- Persistence-based operators in machine learning [62.997667081978825]
We introduce a class of persistence-based neural network layers.
Persistence-based layers allow users to easily inject knowledge about symmetries respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
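As a rough, hypothetical illustration (not the paper's construction), the sketch below maps a persistence diagram through a DeepSets-style PyTorch module: sum-pooling over diagram points makes the layer permutation-invariant, mirroring the symmetry of diagrams, while the learnable weights let it compose with standard architectures.

```python
import torch
import torch.nn as nn

class DiagramLayer(nn.Module):
    """Permutation-invariant, learnable map from a persistence diagram to R^out."""
    def __init__(self, hidden=16, out=8):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(2, hidden), nn.ReLU())  # per-point map
        self.rho = nn.Linear(hidden, out)                          # post-pooling map

    def forward(self, diagram):
        # diagram: (n_points, 2) tensor of (birth, death) pairs
        pooled = self.phi(diagram).sum(dim=0)  # sum-pooling => permutation invariance
        return self.rho(pooled)

layer = DiagramLayer()
diag = torch.tensor([[0.0, 1.2], [0.3, 0.9]])  # a toy persistence diagram
features = layer(diag)                          # 8-dim feature vector, differentiable
```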
arXiv Detail & Related papers (2022-12-28T18:03:41Z)
- Rethinking Persistent Homology for Visual Recognition [27.625893409863295]
This paper performs a detailed analysis of the effectiveness of topological properties for image classification in various training scenarios.
We identify the scenarios that benefit the most from topological features, e.g., training simple networks on small datasets.
arXiv Detail & Related papers (2022-07-09T08:01:11Z)
- Topological Data Analysis of Neural Network Layer Representations [0.0]
The topological features of a simple feedforward neural network's layer representations of a modified torus with a Klein bottle-like twist were computed.
Noise in the resulting representations hampered the ability of persistent homology to compute these features.
arXiv Detail & Related papers (2022-07-01T00:51:19Z)
- Decomposing neural networks as mappings of correlation functions [57.52754806616669]
We study the mapping between probability distributions implemented by a deep feed-forward network.
We identify essential statistics in the data, as well as different information representations that can be used by neural networks.
arXiv Detail & Related papers (2022-02-10T09:30:31Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the loss gradient function flow.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z)
- Inter-layer Information Similarity Assessment of Deep Neural Networks Via Topological Similarity and Persistence Analysis of Data Neighbour Dynamics [93.4221402881609]
The quantitative analysis of information structure through a deep neural network (DNN) can unveil new insights into the theoretical performance of DNN architectures.
Inspired by both LS and ID strategies for quantitative information structure analysis, we introduce two novel complementary methods for inter-layer information similarity assessment.
We demonstrate their efficacy by analyzing a deep convolutional neural network architecture on image data.
arXiv Detail & Related papers (2020-12-07T15:34:58Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup that allows a neural network to learn both its size and topology during gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
- PLLay: Efficient Topological Layer based on Persistence Landscapes [24.222495922671442]
PLLay is a novel topological layer for general deep learning models based on persistence landscapes.
We show differentiability with respect to layer inputs, for a general persistent homology with arbitrary filtration (see the sketch below).
arXiv Detail & Related papers (2020-02-07T13:34:22Z)
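A minimal sketch of the mechanism, assuming the standard definition of persistence landscapes (the k-th largest "tent" function over the birth-death pairs) rather than the official PLLay code: tents are piecewise linear and top-k is subdifferentiable, so gradients flow from the layer output back to the diagram coordinates.

```python
import torch
import torch.nn as nn

class LandscapeLayer(nn.Module):
    """Differentiable persistence-landscape features with learnable weights."""
    def __init__(self, k=3, resolution=50, t_min=0.0, t_max=2.0):
        super().__init__()
        self.k = k
        self.register_buffer("grid", torch.linspace(t_min, t_max, resolution))
        self.weight = nn.Parameter(torch.ones(k))  # learnable per-landscape weights

    def forward(self, diagram):
        # diagram: (n, 2) tensor of (birth, death) pairs, n >= k
        b, d = diagram[:, 0:1], diagram[:, 1:2]
        t = self.grid.unsqueeze(0)                             # (1, resolution)
        tents = torch.clamp(torch.min(t - b, d - t), min=0.0)  # (n, resolution)
        # k-th largest tent value at each grid point = k-th landscape function
        lams = torch.topk(tents, self.k, dim=0).values         # (k, resolution)
        return (self.weight.unsqueeze(1) * lams).flatten()

layer = LandscapeLayer()
diag = torch.tensor([[0.0, 1.5], [0.2, 0.8], [0.5, 1.0]], requires_grad=True)
out = layer(diag)
out.sum().backward()  # gradients reach the birth/death coordinates
```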