Space of Functions Computed by Deep-Layered Machines
- URL: http://arxiv.org/abs/2004.08930v3
- Date: Wed, 14 Oct 2020 01:36:46 GMT
- Title: Space of Functions Computed by Deep-Layered Machines
- Authors: Alexander Mozeika and Bo Li and David Saad
- Abstract summary: We study the space of functions computed by random-layered machines, including deep neural networks and Boolean circuits.
Investigating the distribution of Boolean functions computed by recurrent and layer-dependent architectures, we find that the distribution is the same in both models.
- Score: 74.13735716675987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the space of functions computed by random-layered machines,
including deep neural networks and Boolean circuits. Investigating the
distribution of Boolean functions computed by recurrent and layer-dependent
architectures, we find that it is the same in both models. Depending on the
initial conditions and computing elements used, we characterize the space of
functions computed in the large-depth limit and show that the macroscopic
entropy of Boolean functions is either monotonically increasing or decreasing
with growing depth.
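To make the depth dependence concrete, below is a minimal numerical sketch, not the authors' code or their analytical formalism: it samples small random layered Boolean machines built from random 2-input gates and estimates, by counting truth tables, how the empirical entropy of the computed functions varies with depth. The number of inputs, layer width, gate family, and sample size are all illustrative assumptions.

```python
# Minimal sketch (assumption-laden, not the paper's code): estimate how the
# diversity of Boolean functions computed by a random layered machine changes
# with depth. Architecture, gate family, and sample sizes are illustrative.
import itertools
import math
import random
from collections import Counter

N_IN = 3          # number of Boolean inputs (toy scale, so truth tables are enumerable)
WIDTH = 3         # nodes per layer
N_SAMPLES = 2000  # random machines sampled per depth

def random_gate():
    # A random 2-input Boolean gate, stored as its length-4 truth table.
    return [random.randint(0, 1) for _ in range(4)]

def sample_machine(depth):
    # Each layer: WIDTH nodes, each wired to two random nodes of the layer below.
    layers, prev_width = [], N_IN
    for _ in range(depth):
        layers.append([(random.randrange(prev_width), random.randrange(prev_width),
                        random_gate()) for _ in range(WIDTH)])
        prev_width = WIDTH
    return layers

def run(machine, x):
    # Propagate an input vector through the layers; read off the first output node.
    state = list(x)
    for layer in machine:
        state = [gate[2 * state[i] + state[j]] for i, j, gate in layer]
    return state[0]

def function_entropy(depth):
    # Shannon entropy (bits) of the empirical distribution of computed truth tables.
    inputs = list(itertools.product([0, 1], repeat=N_IN))
    counts = Counter()
    for _ in range(N_SAMPLES):
        machine = sample_machine(depth)
        counts[tuple(run(machine, x) for x in inputs)] += 1
    return -sum((c / N_SAMPLES) * math.log2(c / N_SAMPLES) for c in counts.values())

if __name__ == "__main__":
    for depth in (1, 2, 4, 8, 16):
        print(f"depth {depth:2d}: entropy ~ {function_entropy(depth):.2f} bits")
```

Sweeping the depth gives a rough, sample-based picture of the quantity the abstract calls the macroscopic entropy of Boolean functions; whether the estimate grows or shrinks with depth can then be compared against the dichotomy stated in the abstract for different choices of computing elements.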
Related papers
- Spherical Analysis of Learning Nonlinear Functionals [10.785977740158193]
In this paper, we consider functionals defined on sets of functions on spheres.
The approximation ability of deep ReLU neural networks is investigated using an encoder-decoder framework.
arXiv Detail & Related papers (2024-10-01T20:10:00Z)
- Approximation of RKHS Functionals by Neural Networks [30.42446856477086]
We study the approximation of functionals on reproducing kernel Hilbert spaces (RKHSs) using neural networks.
We derive explicit error bounds for those induced by inverse multiquadric, Gaussian, and Sobolev kernels.
We apply our findings to functional regression, proving that neural networks can accurately approximate the regression maps.
arXiv Detail & Related papers (2024-03-18T18:58:23Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Provable Data Subset Selection For Efficient Neural Network Training [73.34254513162898]
We introduce the first algorithm to construct coresets for RBFNNs, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network.
We then perform empirical evaluations on function approximation and dataset subset selection on popular network architectures and data sets.
arXiv Detail & Related papers (2023-03-09T10:08:34Z)
- Spontaneous Emergence of Computation in Network Cascades [0.7734726150561089]
We show that computation of complex Boolean functions arises spontaneously in threshold networks as a function of connectivity and antagonism (inhibition).
We also show that the optimal fraction of inhibition observed here is consistent with results in computational neuroscience on optimal information processing.
arXiv Detail & Related papers (2022-04-25T20:35:09Z)
- Neural Network Approximation of Refinable Functions [8.323468006516018]
We show that refinable functions are approximated by the outputs of deep ReLU networks of fixed width and increasing depth, with accuracy that improves exponentially with the depth.
Our results apply to functions used in the standard construction of wavelets as well as to functions constructed via subdivision algorithms in Computer Aided Geometric Design.
arXiv Detail & Related papers (2021-07-28T06:45:36Z)
- Compressing Deep ODE-Nets using Basis Function Expansions [105.05435207079759]
We consider formulations of the weights as continuous-depth functions using linear combinations of basis functions.
This perspective allows us to compress the weights through a change of basis, without retraining, while maintaining near state-of-the-art performance.
In turn, both inference time and the memory footprint are reduced, enabling quick and rigorous adaptation between computational environments (a toy sketch of this change-of-basis idea appears after this list).
arXiv Detail & Related papers (2021-06-21T03:04:51Z)
- Representation Theorem for Matrix Product States [1.7894377200944511]
We investigate the universal representation capacity of Matrix Product States (MPS) from the perspective of Boolean functions and continuous functions.
We show that MPS can accurately realize arbitrary Boolean functions by providing a construction of the corresponding MPS structure for an arbitrarily given Boolean gate.
We study the relation between MPS and neural networks and show that the MPS with a scale-invariant sigmoidal function is equivalent to a one-hidden-layer neural network.
arXiv Detail & Related papers (2021-03-15T11:06:54Z)
- On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the learning problem.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z)
- Hyperbolic Neural Networks++ [66.16106727715061]
We generalize the fundamental components of neural networks in a single hyperbolic geometry model, namely, the Poincaré ball model.
Experiments show the superior parameter efficiency of our methods compared to conventional hyperbolic components, as well as greater stability and better performance than their Euclidean counterparts.
arXiv Detail & Related papers (2020-06-15T08:23:20Z)
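As referenced in the Compressing Deep ODE-Nets entry above, the following is a hedged toy sketch of the change-of-basis idea: a weight treated as a function of continuous depth, w(t) = Σ_k c_k φ_k(t), is re-expressed with fewer coefficients by a least-squares projection onto a smaller basis. The Chebyshev basis, the synthetic weight curve, and all sizes below are assumptions for illustration only, not the paper's implementation.

```python
# Hedged toy illustration (not the paper's implementation): compress a
# depth-indexed weight curve w(t) by projecting it onto a smaller basis.
# Basis choice (Chebyshev), coefficient decay, and sizes are assumptions.
import numpy as np

t = np.linspace(0.0, 1.0, 64)        # sampled "continuous depth" locations
rng = np.random.default_rng(0)

# Stand-in for a trained continuous-depth weight: a smooth curve expressed
# with K_BIG basis coefficients whose magnitudes decay with the order.
K_BIG, K_SMALL = 16, 6
Phi_big = np.polynomial.chebyshev.chebvander(2 * t - 1, K_BIG - 1)      # (64, 16)
c_big = rng.normal(size=K_BIG) / (1.0 + np.arange(K_BIG)) ** 2
w = Phi_big @ c_big                   # weight value at each depth t

# Change of basis: least-squares projection onto K_SMALL basis functions,
# i.e. compression without retraining the weight curve itself.
Phi_small = np.polynomial.chebyshev.chebvander(2 * t - 1, K_SMALL - 1)  # (64, 6)
c_small, *_ = np.linalg.lstsq(Phi_small, w, rcond=None)

rel_err = np.linalg.norm(Phi_small @ c_small - w) / np.linalg.norm(w)
print(f"compressed {K_BIG} -> {K_SMALL} coefficients, relative error {rel_err:.3f}")
```

The point of the toy is only that smooth weight-versus-depth curves can be captured by far fewer coefficients after a change of basis; the paper's actual basis choices, training procedure, and evaluation are not reproduced here.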
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.