Neural networks with linear threshold activations: structure and
algorithms
- URL: http://arxiv.org/abs/2111.08117v4
- Date: Wed, 18 Oct 2023 18:40:09 GMT
- Title: Neural networks with linear threshold activations: structure and
algorithms
- Authors: Sammy Khalife, Hongyu Cheng, Amitabh Basu
- Abstract summary: We show that 2 hidden layers are necessary and sufficient to represent any function representable in the class.
We also give precise bounds on the sizes of the neural networks required to represent any function in the class.
We propose a new class of neural networks that we call shortcut linear threshold networks.
- Score: 1.795561427808824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article we present new results on neural networks with linear
threshold activation functions. We precisely characterize the class of
functions that are representable by such neural networks and show that 2 hidden
layers are necessary and sufficient to represent any function representable in
the class. This is a surprising result in the light of recent exact
representability investigations for neural networks using other popular
activation functions like rectified linear units (ReLU). We also give precise
bounds on the sizes of the neural networks required to represent any function
in the class. Finally, we design an algorithm to solve the empirical risk
minimization (ERM) problem to global optimality for these neural networks with
a fixed architecture. The algorithm's running time is polynomial in the size of
the data sample, if the input dimension and the size of the network
architecture are considered fixed constants. The algorithm is unique in the
sense that it works for any architecture with any number of layers, whereas
previous polynomial time globally optimal algorithms work only for very
restricted classes of architectures. Using these insights, we propose a new
class of neural networks that we call shortcut linear threshold networks. To
the best of our knowledge, this way of designing neural networks has not been
explored before in the literature. We show that these neural networks have
several desirable theoretical properties.
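To make the setting concrete, here is a minimal NumPy sketch of a network whose hidden units use the linear threshold (Heaviside) activation z -> 1[z > 0], with the two hidden layers the abstract shows to be necessary and sufficient, followed by a linear output layer. All function names, dimensions and weights below are illustrative assumptions, not taken from the paper.
```python
import numpy as np

def linear_threshold(z):
    """Linear threshold (Heaviside) activation: 1 if z > 0, else 0."""
    return (z > 0).astype(float)

def two_hidden_layer_ltn(x, W1, b1, W2, b2, w_out, b_out):
    """Forward pass of a network with two hidden linear-threshold layers
    and a linear output layer. Shapes: W1 (h1, d), W2 (h2, h1), w_out (h2,)."""
    h1 = linear_threshold(W1 @ x + b1)   # first hidden layer: binary outputs
    h2 = linear_threshold(W2 @ h1 + b2)  # second hidden layer: binary outputs
    return w_out @ h2 + b_out            # affine output, no activation

# Illustrative parameters: input dimension 2, hidden widths 3 and 2.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 2)), rng.standard_normal(3)
W2, b2 = rng.standard_normal((2, 3)), rng.standard_normal(2)
w_out, b_out = rng.standard_normal(2), 0.0

print(two_hidden_layer_ltn(np.array([0.5, -1.0]), W1, b1, W2, b2, w_out, b_out))
```
Because every hidden unit outputs 0 or 1, the output is constant on each cell of the hyperplane arrangement cut out by the first layer, so such a network computes a piecewise constant function of its input.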
Related papers
- Lipschitz constant estimation for general neural network architectures using control tools [0.05120567378386613]
This paper is devoted to the estimation of the Lipschitz constant of general neural network architectures using semidefinite programming.
We interpret neural networks as time-varying dynamical systems, where the $k$th layer corresponds to the dynamics at time $k$.
arXiv Detail & Related papers (2024-05-02T09:38:16Z)
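As a point of reference only (the SDP-based estimator from the entry above is not reproduced here), the sketch below computes the classical, generally much looser, upper bound on a feedforward network's Lipschitz constant given by the product of the layers' spectral norms, assuming 1-Lipschitz activations such as ReLU; the function name and example shapes are illustrative assumptions.
```python
import numpy as np

def naive_lipschitz_upper_bound(weight_matrices):
    """Crude Lipschitz upper bound for a feedforward net with 1-Lipschitz
    activations (e.g. ReLU): the product of the layers' spectral norms.
    This is a baseline, not the SDP estimate from the paper above."""
    bound = 1.0
    for W in weight_matrices:
        bound *= np.linalg.norm(W, 2)  # largest singular value of W
    return bound

# Illustrative two-layer example with random weights.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((16, 8)), rng.standard_normal((1, 16))]
print(naive_lipschitz_upper_bound(layers))
```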
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- When Deep Learning Meets Polyhedral Theory: A Survey [6.899761345257773]
In the past decade, deep learning became the prevalent methodology for predictive modeling thanks to the remarkable accuracy of deep neural networks.
Meanwhile, the structure of neural networks converged back to simpler piecewise and linear functions.
arXiv Detail & Related papers (2023-04-29T11:46:53Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Dynamical systems' based neural networks [0.7874708385247353]
We build neural networks using a suitable, structure-preserving, numerical time-discretisation.
The structure of the neural network is then inferred from the properties of the ODE vector field.
We present two universal approximation results and demonstrate how to impose some particular properties on the neural networks.
arXiv Detail & Related papers (2022-10-05T16:30:35Z)
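To illustrate the ODE viewpoint in the entry above, here is a generic sketch that builds residual blocks by applying the explicit Euler step x <- x + h*f(x) to a chosen vector field f; the tanh vector field, step size and dimensions are assumptions for illustration, not the structure-preserving discretisations studied in the paper.
```python
import numpy as np

def tanh_vector_field(x, W, b):
    """An illustrative ODE vector field f(x) = tanh(W x + b)."""
    return np.tanh(W @ x + b)

def euler_resnet(x, params, h=0.1):
    """Explicit Euler discretisation of dx/dt = f(x): each step
    x <- x + h * f(x) acts like one residual block. Generic sketch only."""
    for W, b in params:
        x = x + h * tanh_vector_field(x, W, b)
    return x

# Illustrative network: 5 residual (Euler) steps in dimension 4.
rng = np.random.default_rng(0)
params = [(rng.standard_normal((4, 4)), rng.standard_normal(4)) for _ in range(5)]
print(euler_resnet(rng.standard_normal(4), params))
```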
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
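The compression idea in the entry above can be pictured with plain vector quantization: each feature vector is stored as the index of its nearest codebook entry, so only integer indices and a small codebook are kept. The sketch below is a generic encode/decode round trip under that assumption; it is not the paper's learned vector-quantized auto-decoder, and all sizes are illustrative.
```python
import numpy as np

def vq_encode(features, codebook):
    """Assign each feature vector (row of `features`) to its nearest
    codebook entry and return the integer indices."""
    # Pairwise squared distances between features (n, d) and codebook (k, d).
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct approximate features from the stored indices."""
    return codebook[indices]

# Illustrative numbers: 1000 feature vectors of dim 8, codebook of size 16,
# so storage drops from 1000*8 floats to 1000 small integers + 16*8 floats.
rng = np.random.default_rng(0)
features = rng.standard_normal((1000, 8))
codebook = rng.standard_normal((16, 8))
idx = vq_encode(features, codebook)
print(vq_decode(idx, codebook).shape)
```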
- Wide and Deep Neural Networks Achieve Optimality for Classification [23.738242876364865]
We identify and construct an explicit set of neural network classifiers that achieve optimality.
In particular, we provide explicit activation functions that can be used to construct networks that achieve optimality.
Our results highlight the benefit of using deep networks for classification tasks, in contrast to regression tasks, where excessive depth is harmful.
arXiv Detail & Related papers (2022-04-29T14:27:42Z)
- Optimal Learning Rates of Deep Convolutional Neural Networks: Additive Ridge Functions [19.762318115851617]
We consider the mean squared error analysis for deep convolutional neural networks.
We show that, for additive ridge functions, convolutional neural networks followed by one fully connected layer with ReLU activation functions can reach optimal mini-max rates.
arXiv Detail & Related papers (2022-02-24T14:22:32Z)
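For readers unfamiliar with the function class in the entry above, one standard notion of an additive ridge function is a sum of univariate functions composed with one-dimensional projections; the formulation below is stated as a general definition and may differ in details (e.g. normalisation of the directions) from the one used in the paper.
```latex
% A common definition of an additive ridge function on R^d:
% univariate functions f_j composed with linear projections xi_j . x.
\[
  f(x) = \sum_{j=1}^{J} f_j(\xi_j \cdot x),
  \qquad x \in \mathbb{R}^d,\quad \xi_j \in \mathbb{R}^d,\quad f_j : \mathbb{R} \to \mathbb{R}.
\]
```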
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.