Applicability of Random Matrix Theory in Deep Learning
- URL: http://arxiv.org/abs/2102.06740v1
- Date: Fri, 12 Feb 2021 19:49:19 GMT
- Title: Applicability of Random Matrix Theory in Deep Learning
- Authors: Nicholas P Baskerville and Diego Granziol and Jonathan P Keating
- Abstract summary: We investigate the local spectral statistics of the loss surface Hessians of artificial neural networks.
Our results shed new light on the applicability of Random Matrix Theory to modelling neural networks.
We propose a novel model for the true loss surfaces of neural networks.
- Score: 0.966840768820136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the local spectral statistics of the loss surface Hessians of
artificial neural networks, where we discover excellent agreement with Gaussian
Orthogonal Ensemble statistics across several network architectures and
datasets. These results shed new light on the applicability of Random Matrix
Theory to modelling neural networks and suggest a previously unrecognised role
for it in the study of loss surfaces in deep learning. Inspired by these
observations, we propose a novel model for the true loss surfaces of neural
networks, consistent with our observations, which allows for Hessian spectral
densities with rank degeneracy and outliers, extensively observed in practice,
and predicts a growing independence of loss gradients as a function of distance
in weight-space. We further investigate the importance of the true loss surface
in neural networks and find, in contrast to previous work, that the exponential
hardness of locating the global minimum has practical consequences for
achieving state of the art performance.
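The comparison the abstract describes, testing whether local spectral statistics of loss-surface Hessians follow Gaussian Orthogonal Ensemble (GOE) statistics, can be illustrated with consecutive eigenvalue spacing ratios. The sketch below is a generic, hedged example and not the authors' code: the random symmetric stand-in matrix, the sizes, and the spacing-ratio statistic are illustrative assumptions; in practice one would plug in eigenvalues of a measured network Hessian.

```python
# Hedged sketch: compare local spectral statistics against the Gaussian
# Orthogonal Ensemble (GOE) using consecutive eigenvalue spacing ratios.
# The matrix below is a random symmetric stand-in, not a network Hessian.
import numpy as np

def spacing_ratios(eigvals):
    """Ratios r_i = min(s_i, s_{i+1}) / max(s_i, s_{i+1}) of consecutive spacings."""
    s = np.diff(np.sort(eigvals))
    s = s[s > 0]                      # drop exact degeneracies
    r = s[1:] / s[:-1]
    return np.minimum(r, 1.0 / r)     # fold each ratio into [0, 1]

rng = np.random.default_rng(0)
n = 1000
a = rng.standard_normal((n, n))
goe = (a + a.T) / np.sqrt(2 * n)      # GOE reference matrix

# Replace the eigenvalues below with those of an actual Hessian to test agreement.
mean_ratio = spacing_ratios(np.linalg.eigvalsh(goe)).mean()
print(f"mean spacing ratio: {mean_ratio:.3f}")
```

Spacing ratios require no unfolding of the spectrum; for GOE their mean is about 4 - 2*sqrt(3) ≈ 0.536, versus roughly 0.386 for independent (Poisson) eigenvalues, so the statistic separates random-matrix level repulsion from uncorrelated spectra.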
Related papers
- A Subsampling Based Neural Network for Spatial Data [0.0]
This article proposes a consistent localized two-layer deep neural network-based regression for spatial data.
We empirically observe that the rate of convergence of discrepancy measures between the empirical probability distributions of the observed and predicted data becomes faster for a less smooth spatial surface.
This application is an effective showcase of non-linear spatial regression.
arXiv Detail & Related papers (2024-11-06T02:37:43Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of a neural network's feature space may jointly serve as discriminants of the network's performance (see the sketch after this list).
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
arXiv Detail & Related papers (2022-03-30T21:47:32Z)
- A Local Geometric Interpretation of Feature Extraction in Deep Feedforward Neural Networks [13.159994710917022]
In this paper, we present a local geometric analysis to interpret how deep feedforward neural networks extract low-dimensional features from high-dimensional data.
Our study shows that, in a local geometric region, the optimal weight in one layer of the neural network and the optimal feature generated by the previous layer comprise a low-rank approximation of a matrix that is determined by the Bayes action of this layer.
arXiv Detail & Related papers (2022-02-09T18:50:00Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- With Greater Distance Comes Worse Performance: On the Perspective of Layer Utilization and Model Generalization [3.6321778403619285]
Generalization of deep neural networks remains one of the main open problems in machine learning.
Early layers generally learn representations relevant to performance on both training data and testing data.
Deeper layers only minimize training risks and fail to generalize well with testing or mislabeled data.
arXiv Detail & Related papers (2022-01-28T05:26:32Z)
- The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks [51.1848572349154]
Neural network models that perfectly fit noisy data can generalize well to unseen test data.
We consider interpolating two-layer linear neural networks trained with gradient flow on the squared loss and derive bounds on the excess risk.
arXiv Detail & Related papers (2021-08-25T22:01:01Z)
- Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the loss gradient function flow.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z)
- Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z)
- The Loss Surfaces of Neural Networks with General Activation Functions [0.0]
We chart a new path through the spin glass complexity calculations using supersymmetric methods in Random Matrix Theory.
Our results shed new light on both the strengths and the weaknesses of spin glass models in this context.
arXiv Detail & Related papers (2020-04-08T12:19:25Z)
- Avoiding Spurious Local Minima in Deep Quadratic Networks [0.0]
We characterize the landscape of the mean squared error loss for neural networks with nonlinear activation functions.
We prove that deep over-parameterized neural networks with quadratic activations benefit from similar landscape properties.
arXiv Detail & Related papers (2019-12-31T22:31:11Z)
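The quasi-orthogonality entry above refers to the following sketch. It is a minimal, hedged illustration under generic assumptions (a Gaussian random feature matrix and pairwise cosine similarity as the measure), not the metric used in that paper: in high dimension, randomly initialised feature vectors are nearly orthogonal, with cosine similarities concentrating around zero.

```python
# Hedged sketch: quasi-orthogonality of a randomly initialised feature space,
# measured here as pairwise cosine similarities between feature vectors.
import numpy as np

rng = np.random.default_rng(0)
n_samples, dim = 200, 512
features = rng.standard_normal((n_samples, dim))  # stand-in for network features

unit = features / np.linalg.norm(features, axis=1, keepdims=True)
cosines = unit @ unit.T
off_diag = cosines[~np.eye(n_samples, dtype=bool)]

# Random high-dimensional vectors are nearly orthogonal: |cos| is on the
# order of 1/sqrt(dim), so the off-diagonal cosines cluster near zero.
print(f"mean |cos| = {np.abs(off_diag).mean():.3f}, std = {off_diag.std():.3f}")
```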