Complexity for deep neural networks and other characteristics of deep
feature representations
- URL: http://arxiv.org/abs/2006.04791v2
- Date: Wed, 17 Mar 2021 14:50:33 GMT
- Title: Complexity for deep neural networks and other characteristics of deep
feature representations
- Authors: Romuald A. Janik, Przemek Witaszczyk
- Abstract summary: We define a notion of complexity, which quantifies the nonlinearity of the computation of a neural network.
We investigate these observables for trained networks and also explore their dynamics during training.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We define a notion of complexity, which quantifies the nonlinearity of the
computation of a neural network, as well as a complementary measure of the
effective dimension of feature representations. We investigate these observables for networks trained on various datasets and explore their dynamics during training, uncovering in particular power-law scaling.
These observables can be understood in a dual way as uncovering hidden internal
structure of the datasets themselves as a function of scale or depth. The
entropic character of the proposed notion of complexity should allow the transfer of modes of analysis from neuroscience and statistical physics to the
domain of artificial neural networks. The introduced observables can be applied
without any change to the analysis of biological neuronal systems.
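As an illustration of the kind of observable the abstract refers to, the following is a minimal sketch of an entropy-based effective dimension of a layer's feature representation, computed as the exponential of the Shannon entropy of the normalized eigenvalue spectrum of the feature covariance. This is one common way to operationalize an entropic effective dimension and is an assumption for illustration only; the function name, shapes, and formula are not necessarily the exact definitions used in the paper.

```python
import numpy as np

def effective_dimension(features: np.ndarray) -> float:
    """Entropy-based effective dimension of a batch of feature vectors.

    features: array of shape (n_samples, n_units) holding one layer's
    activations. Illustrative measure only (exponential of the Shannon
    entropy of the normalized covariance spectrum); not necessarily the
    observable defined in the paper.
    """
    # Center the activations and form the feature covariance matrix.
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / max(len(features) - 1, 1)

    # Eigenvalues of the symmetric PSD covariance; clip tiny negative
    # values caused by numerical noise.
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    total = eigvals.sum()
    if total == 0.0:
        return 0.0

    # Normalize the spectrum into a probability distribution and take
    # the exponential of its Shannon entropy.
    p = eigvals / total
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Roughly isotropic features: effective dimension close to 64.
    iid = rng.normal(size=(1000, 64))
    # Strongly correlated features: effective dimension close to 1.
    correlated = iid[:, :1] + 0.05 * rng.normal(size=(1000, 64))
    print(effective_dimension(iid))
    print(effective_dimension(correlated))
```

Such a quantity can be evaluated layer by layer and tracked over training epochs, which is the spirit in which the abstract discusses the scale/depth dependence and training dynamics of its observables.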
Related papers
- Dynamic neurons: A statistical physics approach for analyzing deep neural networks [1.9662978733004601]
We treat neurons as additional degrees of freedom in interactions, simplifying the structure of deep neural networks.
By utilizing translational symmetry and renormalization group transformations, we can analyze critical phenomena.
This approach may open new avenues for studying deep neural networks using statistical physics.
arXiv Detail & Related papers (2024-10-01T04:39:04Z) - Statistical tuning of artificial neural network [0.0]
This study introduces methods to enhance the understanding of neural networks, focusing specifically on models with a single hidden layer.
We propose statistical tests to assess the significance of input neurons and introduce algorithms for dimensionality reduction.
This research advances the field of Explainable Artificial Intelligence by presenting robust statistical frameworks for interpreting neural networks.
arXiv Detail & Related papers (2024-09-24T19:47:03Z) - Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - The semantic landscape paradigm for neural networks [0.0]
We introduce the semantic landscape paradigm, a conceptual and mathematical framework that describes the training dynamics of neural networks.
Specifically, we show that grokking and emergence with scale are associated with percolation phenomena, and that neural scaling laws are explainable in terms of the statistics of random walks on graphs.
arXiv Detail & Related papers (2023-07-18T18:48:54Z) - Persistence-based operators in machine learning [62.997667081978825]
We introduce a class of persistence-based neural network layers.
Persistence-based layers allow users to easily inject knowledge about symmetries respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
arXiv Detail & Related papers (2022-12-28T18:03:41Z) - On the Approximation and Complexity of Deep Neural Networks to Invariant
Functions [0.0]
We study the approximation and complexity of deep neural networks to invariant functions.
We show that a broad range of invariant functions can be approximated by various types of neural network models.
We provide a feasible application that connects the parameter estimation and forecasting of high-resolution signals with our theoretical conclusions.
arXiv Detail & Related papers (2022-10-27T09:19:19Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)