Entropy Maximization with Depth: A Variational Principle for Random
Neural Networks
- URL: http://arxiv.org/abs/2205.13076v1
- Date: Wed, 25 May 2022 23:00:26 GMT
- Title: Entropy Maximization with Depth: A Variational Principle for Random
Neural Networks
- Authors: Amir Joudaki, Hadi Daneshmand, Francis Bach
- Abstract summary: We prove that random neural networks equipped with batch normalization maximize the differential entropy of representations with depth up to constant factors.
Our variational formulation for neural representations characterizes the interplay between representation entropy and architectural components.
- Score: 1.864159622659575
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To understand the essential role of depth in neural networks, we investigate
a variational principle for depth: Does increasing depth perform an implicit
optimization for the representations in neural networks? We prove that random
neural networks equipped with batch normalization maximize the differential
entropy of representations with depth up to constant factors, assuming that the
representations are contractive. Thus, representations inherently obey the
principle of maximum entropy at initialization, in the absence of
information about the learning task. Our variational formulation for neural
representations characterizes the interplay between representation entropy and
architectural components, including depth, width, and non-linear activations,
thereby potentially inspiring the design of neural architectures.
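As a rough illustration of the claim above (not the authors' experiment or code), the sketch below propagates a random Gaussian batch through a randomly initialized multilayer perceptron with batch normalization and monitors a Gaussian proxy for the differential entropy of the hidden representations, namely the entropy of a Gaussian with the same empirical covariance. The width, depth, batch size, tanh activation, and the choice of proxy are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

# Minimal sketch (not the authors' code): track a Gaussian proxy for the
# differential entropy of hidden representations, h(N(0, C)) = 0.5 * logdet(2*pi*e*C),
# as depth grows in a random batch-normalized MLP. All hyperparameters below
# are illustrative assumptions.

rng = np.random.default_rng(0)
width, depth, batch = 128, 50, 512

def batch_norm(h, eps=1e-5):
    # Normalize each coordinate to zero mean and unit variance over the batch.
    return (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps)

def gaussian_entropy_proxy(h):
    # Entropy of a Gaussian with the representation's empirical covariance.
    cov = np.cov(h, rowvar=False) + 1e-6 * np.eye(h.shape[1])
    _, logdet = np.linalg.slogdet(2 * np.pi * np.e * cov)
    return 0.5 * logdet

h = rng.standard_normal((batch, width))  # random input batch
for layer in range(1, depth + 1):
    W = rng.standard_normal((width, width)) / np.sqrt(width)  # i.i.d. Gaussian weights
    h = np.tanh(batch_norm(h @ W))                             # linear map, then BN, then nonlinearity
    if layer % 10 == 0:
        print(f"depth {layer:3d}  entropy proxy {gaussian_entropy_proxy(h):.2f}")
```

Running the same loop without the batch_norm step gives a natural baseline: since the abstract's claim concerns batch-normalized networks, one would expect the proxy to remain closer to its maximum with normalization than without.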
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Bayesian Interpolation with Deep Linear Networks [92.1721532941863]
Characterizing how neural network depth, width, and dataset size jointly impact model quality is a central problem in deep learning theory.
We show that linear networks make provably optimal predictions at infinite depth.
We also show that with data-agnostic priors, Bayesian model evidence in wide linear networks is maximized at infinite depth.
arXiv Detail & Related papers (2022-12-29T20:57:46Z)
- On the Approximation and Complexity of Deep Neural Networks to Invariant Functions [0.0]
We study the approximation and complexity of deep neural networks to invariant functions.
We show that a broad range of invariant functions can be approximated by various types of neural network models.
We provide a feasible application that connects the parameter estimation and forecasting of high-resolution signals with our theoretical conclusions.
arXiv Detail & Related papers (2022-10-27T09:19:19Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Training Integrable Parameterizations of Deep Neural Networks in the Infinite-Width Limit [0.0]
Large-width dynamics has emerged as a fruitful viewpoint and led to practical insights on real-world deep networks.
For two-layer neural networks, it has been understood that the nature of the trained model radically changes depending on the scale of the initial random weights.
We propose various methods to avoid this trivial behavior and analyze in detail the resulting dynamics.
arXiv Detail & Related papers (2021-10-29T07:53:35Z)
- Neural Network Gaussian Processes by Increasing Depth [0.6091702876917281]
We show that increasing the depth of a neural network can give rise to a Gaussian process.
We also theoretically characterize its uniform tightness property and the smallest eigenvalue of its associated kernel.
These characterizations can not only enhance our understanding of the proposed depth-induced Gaussian processes, but also pave the way for future applications.
arXiv Detail & Related papers (2021-08-29T15:37:26Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to addressing the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Depth-Width Trade-offs for Neural Networks via Topological Entropy [0.0]
We show a new connection between the expressivity of deep neural networks and topological entropy from dynamical systems.
We discuss the relationship between topological entropy, the number of oscillations, periods, and the Lipschitz constant.
arXiv Detail & Related papers (2020-10-15T08:14:44Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Understanding Generalization in Deep Learning via Tensor Methods [53.808840694241]
We advance the understanding of the relationship between a network's architecture and its generalizability from a compression perspective.
We propose a series of intuitive, data-dependent and easily-measurable properties that tightly characterize the compressibility and generalizability of neural networks.
arXiv Detail & Related papers (2020-01-14T22:26:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality or accuracy of this information and is not responsible for any consequences of its use.