Exploring the Manifold of Neural Networks Using Diffusion Geometry
- URL: http://arxiv.org/abs/2411.12626v1
- Date: Tue, 19 Nov 2024 16:34:45 GMT
- Title: Exploring the Manifold of Neural Networks Using Diffusion Geometry
- Authors: Elliott Abel, Peyton Crevasse, Yvan Grinspan, Selma Mazioud, Folu Ogundipe, Kristof Reimann, Ellie Schueler, Andrew J. Steindl, Ellen Zhang, Dhananjay Bhaskar, Siddharth Viswanath, Yanlei Zhang, Tim G. J. Rudner, Ian Adelstein, Smita Krishnaswamy
- Abstract summary: We learn a manifold whose datapoints are neural networks by introducing a distance between the hidden layer representations of the networks.
These distances are then fed to the non-linear dimensionality reduction algorithm PHATE to create a manifold of neural networks.
Our analysis reveals that high-performing networks cluster together in the manifold, displaying consistent embedding patterns.
- Score: 7.038126249994092
- Abstract: Drawing motivation from the manifold hypothesis, which posits that most high-dimensional data lies on or near low-dimensional manifolds, we apply manifold learning to the space of neural networks. We learn manifolds where datapoints are neural networks by introducing a distance between the hidden layer representations of the neural networks. These distances are then fed to the non-linear dimensionality reduction algorithm PHATE to create a manifold of neural networks. We characterize this manifold using features of the representation, including class separation, hierarchical cluster structure, spectral entropy, and topological structure. Our analysis reveals that high-performing networks cluster together in the manifold, displaying consistent embedding patterns across all these features. Finally, we demonstrate the utility of this approach for guiding hyperparameter optimization and neural architecture search by sampling from the manifold.
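A minimal, hypothetical sketch of the pipeline described in the abstract: compute a pairwise distance between hidden-layer representations of several networks on shared probe inputs, then embed the resulting distance matrix with PHATE. The Frobenius distance and the random stand-in activations are placeholders (the paper's actual representation distance is not detailed here), and the call assumes the `phate` package accepts precomputed distances via `knn_dist="precomputed_distance"`.

```python
# Hypothetical sketch: embed a population of networks with PHATE using
# pairwise distances between their hidden-layer representations.
import numpy as np
import phate  # pip install phate

rng = np.random.default_rng(0)
n_networks, n_probe, d_hidden = 30, 100, 64

# Stand-ins for hidden-layer activations of each network on shared probe inputs.
reps = [rng.standard_normal((n_probe, d_hidden)) for _ in range(n_networks)]

D = np.zeros((n_networks, n_networks))
for i in range(n_networks):
    for j in range(i + 1, n_networks):
        D[i, j] = D[j, i] = np.linalg.norm(reps[i] - reps[j])  # placeholder metric

embedding = phate.PHATE(n_components=2, knn_dist="precomputed_distance").fit_transform(D)
print(embedding.shape)  # (30, 2): one point per network on the learned manifold
```

Each row of the embedding is one network, so clusters in the 2-D coordinates correspond to groups of networks with similar hidden representations.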
Related papers
- A Theoretical Study of Neural Network Expressive Power via Manifold Topology [9.054396245059555]
A prevalent assumption regarding real-world data is that it lies on or close to a low-dimensional manifold.
In this study, we investigate network expressive power in terms of the latent data manifold.
We present an upper bound on the size of ReLU neural networks.
arXiv Detail & Related papers (2024-10-21T22:10:24Z)
- Convergence Analysis for Deep Sparse Coding via Convolutional Neural Networks [7.956678963695681]
We introduce a novel class of Deep Sparse Coding (DSC) models.
We derive convergence rates for CNNs in their ability to extract sparse features.
Inspired by the strong connection between sparse coding and CNNs, we explore training strategies to encourage neural networks to learn more sparse features.
arXiv Detail & Related papers (2024-08-10T12:43:55Z)
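As a hedged illustration of the sparsity-encouraging training mentioned above (not necessarily the strategy used in the paper), one common approach is to add an L1 penalty on hidden activations to the task loss:

```python
# Generic sketch: encourage sparse hidden features with an L1 activation penalty.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.head = nn.Linear(16 * 28 * 28, 10)

    def forward(self, x):
        z = torch.relu(self.conv(x))  # features we would like to be sparse
        return self.head(z.flatten(1)), z

model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))  # dummy batch

logits, z = model(x)
loss = nn.functional.cross_entropy(logits, y) + 1e-4 * z.abs().mean()  # L1 sparsity term
loss.backward()
opt.step()
```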
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
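A simplified, hypothetical illustration of treating a network as a graph of parameters: nodes are neurons and edges carry the connecting weights. The paper's computational-graph construction and equivariant GNN encoder are more involved than this sketch.

```python
# Build a weighted directed graph from an MLP's weight matrices:
# one node per neuron, one edge per weight.
import numpy as np
import networkx as nx

layer_sizes = [4, 8, 3]
rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

G = nx.DiGraph()
offset = np.cumsum([0] + layer_sizes)  # global node ids per layer
for l, W in enumerate(weights):
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            G.add_edge(int(offset[l] + i), int(offset[l + 1] + j), weight=float(W[i, j]))

print(G.number_of_nodes(), G.number_of_edges())  # 15 nodes, 4*8 + 8*3 = 56 edges
```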
- Riemannian Residual Neural Networks [58.925132597945634]
We show how to extend the residual neural network (ResNet) to general Riemannian manifolds.
ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and easy-to-incorporate nature when building varied neural networks.
arXiv Detail & Related papers (2023-10-16T02:12:32Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Data Topology-Dependent Upper Bounds of Neural Network Widths [52.58441144171022]
We first show that a three-layer neural network can be designed to approximate an indicator function over a compact set.
This is then extended to a simplicial complex, deriving width upper bounds based on its topological structure.
We prove the universal approximation property of three-layer ReLU networks using our topological approach.
arXiv Detail & Related papers (2023-05-25T14:17:15Z)
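For intuition, here is a concrete one-dimensional instance of this kind of construction: a tiny ReLU network (four hidden units and a linear readout) that approximates the indicator of an interval. The paper's result over compact sets and simplicial complexes is far more general; this only illustrates the mechanism.

```python
# Approximate the indicator of [a, b] with four ReLU units and a linear readout.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def ramp(x, c, eps):
    # rises linearly from 0 at x = c to 1 at x = c + eps, built from two ReLUs
    return relu((x - c) / eps) - relu((x - c) / eps - 1.0)

def soft_indicator(x, a=0.2, b=0.7, eps=0.01):
    # ~1 on [a, b], ~0 outside, with transition width eps
    return ramp(x, a - eps, eps) - ramp(x, b, eps)

x = np.linspace(0, 1, 11)
print(np.round(soft_indicator(x), 2))  # ~1 inside [0.2, 0.7], ~0 outside
```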
- Effects of Data Geometry in Early Deep Learning [16.967930721746672]
Deep neural networks can approximate functions on different types of data, from images to graphs, with varied underlying structure.
We study how a randomly initialized neural network with piece-wise linear activations splits the data manifold into regions where the network behaves as a linear function.
arXiv Detail & Related papers (2022-12-29T17:32:05Z)
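A hypothetical sketch of this phenomenon: sample points from a one-dimensional data manifold (a circle), push them through a randomly initialized two-layer ReLU network, and count distinct activation patterns; each pattern corresponds to a region on which the network is linear.

```python
# Count linear regions crossed by a circle under a random ReLU network.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((2, 32)), rng.standard_normal(32)
W2, b2 = rng.standard_normal((32, 32)), rng.standard_normal(32)

theta = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # points on the circle

h1 = X @ W1 + b1
h2 = np.maximum(h1, 0) @ W2 + b2
patterns = np.concatenate([h1 > 0, h2 > 0], axis=1)  # joint activation pattern per point

n_regions = len({tuple(p) for p in patterns.astype(np.uint8)})
print("distinct activation patterns along the circle:", n_regions)
```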
- A Convergence Rate for Manifold Neural Networks [6.428026202398116]
We introduce a method for constructing manifold neural networks using the spectral decomposition of the Laplace-Beltrami operator.
We build upon this result by establishing a rate of convergence that depends on the intrinsic dimension of the manifold.
We also discuss how the rate of convergence depends on the depth of the network and the number of filters used in each layer.
arXiv Detail & Related papers (2022-12-23T22:44:25Z)
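As a hedged discrete analogue of the spectral construction mentioned above, the Laplace-Beltrami operator of a sampled manifold is often approximated by a graph Laplacian on a k-nearest-neighbor graph, with filtering done in its eigenbasis. This is a standard surrogate, not the paper's exact construction or its convergence analysis.

```python
# Spectral filtering of a signal on a point cloud sampled from a manifold,
# using a k-NN graph Laplacian as a stand-in for the Laplace-Beltrami operator.
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 300)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # samples from the circle

A = kneighbors_graph(X, n_neighbors=8, mode="connectivity").toarray()
A = np.maximum(A, A.T)                       # symmetrize the k-NN graph
L = np.diag(A.sum(1)) - A                    # combinatorial graph Laplacian

evals, evecs = np.linalg.eigh(L)
signal = rng.standard_normal(len(X))
h = np.exp(-2.0 * evals)                     # a smooth spectral filter h(lambda)
filtered = evecs @ (h * (evecs.T @ signal))  # one spectral "convolution", pre-nonlinearity
print(filtered.shape)
```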
- Experimental Observations of the Topology of Convolutional Neural Network Activations [2.4235626091331737]
Topological data analysis provides compact, noise-robust representations of complex structures.
Deep neural networks (DNNs) learn millions of parameters associated with a series of transformations defined by the model architecture.
In this paper, we apply cutting-edge techniques from TDA with the goal of gaining insight into the interpretability of convolutional neural networks used for image classification.
arXiv Detail & Related papers (2022-12-01T02:05:44Z)
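A minimal sketch of applying persistent homology to a point cloud of layer activations using the `ripser` package; random stand-ins replace real CNN activations, and the paper's TDA pipeline is more elaborate than this.

```python
# Persistence diagrams (H0, H1) of an activation point cloud via ripser.py.
import numpy as np
from ripser import ripser  # pip install ripser

rng = np.random.default_rng(0)
activations = rng.standard_normal((200, 64))  # stand-in: 200 samples x 64 channels

dgms = ripser(activations, maxdim=1)["dgms"]  # H0 and H1 persistence diagrams
print("H0 features:", len(dgms[0]), "H1 features:", len(dgms[1]))
```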
- Convolutional Neural Networks on Manifolds: From Graphs and Back [122.06927400759021]
We propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and point-wise nonlinearities.
We focus on the manifold model as the limit of large graphs and construct MNNs, while graph neural networks can be recovered by discretizing MNNs.
arXiv Detail & Related papers (2022-10-01T21:17:39Z)
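A hedged sketch of the graph-side discretization: a bank of polynomial graph filters (powers of a normalized Laplacian) followed by a pointwise nonlinearity, the finite-graph analogue of a manifold convolutional layer. The filter taps and sizes here are illustrative, not taken from the paper.

```python
# One graph-convolutional layer built from powers of a normalized Laplacian.
import numpy as np

rng = np.random.default_rng(0)
n = 100
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T                   # random undirected graph
d = A.sum(1); d[d == 0] = 1.0                    # avoid division by zero for isolated nodes
L = np.eye(n) - A / np.sqrt(np.outer(d, d))      # normalized Laplacian as shift operator

def graph_conv_layer(L, x, taps):
    """One layer: sum_k taps[f, k] * L^k x for each output feature f, then ReLU."""
    powers = [x]
    for _ in range(taps.shape[1] - 1):
        powers.append(L @ powers[-1])
    P = np.stack(powers, axis=1)                 # (n, K)
    return np.maximum(P @ taps.T, 0.0)           # (n, F)

x = rng.standard_normal(n)                       # one input signal on the graph
taps = rng.standard_normal((4, 3))               # 4 output features, filter order 3
print(graph_conv_layer(L, x, taps).shape)        # (100, 4)
```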
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)