Effects of Data Geometry in Early Deep Learning
- URL: http://arxiv.org/abs/2301.00008v1
- Date: Thu, 29 Dec 2022 17:32:05 GMT
- Title: Effects of Data Geometry in Early Deep Learning
- Authors: Saket Tiwari and George Konidaris
- Abstract summary: Deep neural networks can approximate functions on different types of data, from images to graphs, with varied underlying structure.
We study how a randomly initialized neural network with piece-wise linear activations splits the data manifold into regions where the neural network behaves as a linear function.
- Score: 16.967930721746672
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks can approximate functions on different types of data,
from images to graphs, with varied underlying structure. This underlying
structure can be viewed as the geometry of the data manifold. By extending
recent advances in the theoretical understanding of neural networks, we study
how a randomly initialized neural network with piece-wise linear activation
splits the data manifold into regions where the neural network behaves as a
linear function. We derive bounds on the density of boundary of linear regions
and the distance to these boundaries on the data manifold. This leads to
insights into the expressivity of randomly initialized deep neural networks on
non-Euclidean data sets. We empirically corroborate our theoretical results
using a toy supervised learning problem. Our experiments demonstrate that the
number of linear regions varies across manifolds and that the results hold across
different neural network architectures. We further demonstrate how the
complexity of linear regions differs on the low-dimensional manifold of
images as compared to Euclidean space, using the MetFaces dataset.
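For intuition, below is a minimal sketch (not the authors' code) of how the number of linear regions crossed along a curve on a data manifold can be estimated empirically: sample points densely along the curve, record each point's ReLU activation pattern (which is constant within a single linear region), and count the distinct patterns. The layer widths, He initialization, and the circle-versus-segment comparison are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(dims):
    """Random (He-initialized) weights for a fully connected ReLU network."""
    return [(rng.normal(0.0, np.sqrt(2.0 / m), size=(n, m)), np.zeros(n))
            for m, n in zip(dims[:-1], dims[1:])]

def activation_pattern(params, x):
    """On/off pattern of every ReLU unit; constant within one linear region."""
    pattern, h = [], x
    for W, b in params[:-1]:          # last layer is linear, no ReLU
        pre = W @ h + b
        pattern.append(pre > 0)
        h = np.maximum(pre, 0.0)
    return tuple(np.concatenate(pattern))

def count_regions(params, points):
    """Distinct linear regions hit by a sequence of input points."""
    return len({activation_pattern(params, p) for p in points})

# Toy comparison: a 1-D circle embedded in R^10 versus a straight segment.
d, n_samples = 10, 2000
params = init_mlp([d, 64, 64, 1])

theta = np.linspace(0.0, 2 * np.pi, n_samples)
circle = np.zeros((n_samples, d))
circle[:, 0], circle[:, 1] = np.cos(theta), np.sin(theta)

segment = np.zeros((n_samples, d))
segment[:, 0] = np.linspace(0.0, 2 * np.pi, n_samples)  # same length, flat geometry

print("regions along circle :", count_regions(params, circle))
print("regions along segment:", count_regions(params, segment))
```

Because the count relies on finite sampling, it only lower-bounds the true number of regions crossed; denser sampling tightens the estimate.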
Related papers
- Exploring the Manifold of Neural Networks Using Diffusion Geometry [7.038126249994092]
We learn a manifold whose datapoints are neural networks by introducing a distance between the networks' hidden-layer representations.
These distances are then fed to the non-linear dimensionality reduction algorithm PHATE to create a manifold of neural networks.
Our analysis reveals that high-performing networks cluster together in the manifold, displaying consistent embedding patterns.
arXiv Detail & Related papers (2024-11-19T16:34:45Z) - Deep Learning as Ricci Flow [38.27936710747996]
Deep neural networks (DNNs) are powerful tools for approximating the distribution of complex data.
We show that the transformations performed by DNNs during classification tasks have parallels to those expected under Hamilton's Ricci flow.
Our findings motivate the use of tools from differential and discrete geometry to the problem of explainability in deep learning.
arXiv Detail & Related papers (2024-04-22T15:12:47Z) - Riemannian Residual Neural Networks [58.925132597945634]
We show how to extend the residual neural network (ResNet) to general Riemannian manifolds.
ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and easy-to-incorporate nature when building varied neural networks.
arXiv Detail & Related papers (2023-10-16T02:12:32Z) - Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - Deep neural networks architectures from the perspective of manifold
learning [0.0]
This paper is a comprehensive comparison and description of neural network architectures in terms of geometry and topology.
We focus on the internal representation of neural networks and on the dynamics of changes in the topology and geometry of a data manifold on different layers.
arXiv Detail & Related papers (2023-06-06T04:57:39Z) - Bayesian Interpolation with Deep Linear Networks [92.1721532941863]
Characterizing how neural network depth, width, and dataset size jointly impact model quality is a central problem in deep learning theory.
We show that linear networks make provably optimal predictions at infinite depth.
We also show that with data-agnostic priors, Bayesian model evidence in wide linear networks is maximized at infinite depth.
arXiv Detail & Related papers (2022-12-29T20:57:46Z) - Convolutional Neural Networks on Manifolds: From Graphs and Back [122.06927400759021]
We propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and point-wise nonlinearities.
In sum, we treat the manifold model as the limit of large graphs and construct MNNs, while graph neural networks can be recovered by discretizing MNNs.
arXiv Detail & Related papers (2022-10-01T21:17:39Z) - Side-effects of Learning from Low Dimensional Data Embedded in an
Euclidean Space [3.093890460224435]
We study the potential regularization effects associated with the network's depth and with noise in the codimension of the data manifold.
We also present additional side effects in training due to the presence of noise.
arXiv Detail & Related papers (2022-03-01T16:55:51Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Dive into Layers: Neural Network Capacity Bounding using Algebraic
Geometry [55.57953219617467]
We show that the learnability of a neural network is directly related to its size.
We use Betti numbers to measure the topological geometric complexity of input data and the neural network.
We perform the experiments on a real-world dataset MNIST and the results verify our analysis and conclusion.
arXiv Detail & Related papers (2021-09-03T11:45:51Z) - Bounding The Number of Linear Regions in Local Area for Neural Networks
with ReLU Activations [6.4817648240626005]
We present the first method to estimate an upper bound on the number of linear regions within any sphere in the input space of a given ReLU neural network.
Our experiments showed that, while training a neural network, the boundaries of the linear regions tend to move away from the training data points.
arXiv Detail & Related papers (2020-07-14T04:06:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.