Analytical aspects of non-differentiable neural networks
- URL: http://arxiv.org/abs/2011.01858v1
- Date: Tue, 3 Nov 2020 17:20:43 GMT
- Title: Analytical aspects of non-differentiable neural networks
- Authors: Gian Paolo Leonardi and Matteo Spallanzani
- Abstract summary: We discuss the expressivity of quantized neural networks and approximation techniques for non-differentiable networks.
We show that QNNs have the same expressivity as DNNs in terms of approximation of Lipschitz functions in the $L^{\infty}$ norm.
We also consider networks defined by means of Heaviside-type activation functions, and prove for them a pointwise approximation result by means of smooth networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research in computational deep learning has directed considerable efforts
towards hardware-oriented optimisations for deep neural networks, via the
simplification of the activation functions, or the quantization of both
activations and weights. The resulting non-differentiability (or even
discontinuity) of the networks poses some challenging problems, especially in
connection with the learning process. In this paper, we address several
questions regarding both the expressivity of quantized neural networks and
approximation techniques for non-differentiable networks. First, we answer in
the affirmative the question of whether QNNs have the same expressivity as DNNs
in terms of approximation of Lipschitz functions in the $L^{\infty}$ norm.
Then, considering a continuous but not necessarily differentiable network, we
describe a layer-wise stochastic regularisation technique to produce
differentiable approximations, and we show how this approach to regularisation
provides elegant quantitative estimates. Finally, we consider networks defined
by means of Heaviside-type activation functions, and prove for them a pointwise
approximation result by means of smooth networks under suitable assumptions on
the regularised activations.
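To make the second and third contributions concrete, here is a minimal sketch (not the paper's layer-wise construction) of how a Heaviside-type activation can be regularised stochastically: averaging the step function over Gaussian noise yields a smooth surrogate (the Gaussian CDF) whose derivative is available in closed form. The noise level `sigma` is an illustrative smoothing parameter, not a quantity taken from the paper.

```python
import numpy as np
from scipy.special import erf

def heaviside(x):
    # Non-differentiable step activation: 0 for x < 0, 1 for x >= 0.
    return (np.asarray(x) >= 0.0).astype(float)

def smoothed_heaviside(x, sigma=0.1):
    # E[heaviside(x + eps)] with eps ~ N(0, sigma^2) equals the Gaussian
    # CDF Phi(x / sigma): a smooth, differentiable surrogate of the step.
    return 0.5 * (1.0 + erf(np.asarray(x) / (sigma * np.sqrt(2.0))))

def smoothed_heaviside_grad(x, sigma=0.1):
    # Closed-form derivative of the surrogate: the density phi(x/sigma) / sigma.
    x = np.asarray(x)
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

if __name__ == "__main__":
    xs = np.linspace(-0.3, 0.3, 7)
    rng = np.random.default_rng(0)
    eps = rng.normal(scale=0.1, size=(200_000, 1))
    mc = heaviside(xs[None, :] + eps).mean(axis=0)       # Monte Carlo estimate
    print(np.round(mc, 3))
    print(np.round(smoothed_heaviside(xs, sigma=0.1), 3))  # closed form
    print(np.round(smoothed_heaviside_grad(xs, sigma=0.1), 3))
```

The Monte Carlo average and the closed form agree, and the surrogate's gradient concentrates around the jump as `sigma` shrinks, which is the regime in which a quantitative approximation estimate becomes relevant.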
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- GD doesn't make the cut: Three ways that non-differentiability affects neural network training [5.439020425819001]
This paper critically examines the distinctions between gradient-descent-style methods applied to non-differentiable functions (NGDMs) and classical gradient descent (GD) for differentiable functions.
Our work identifies critical misunderstandings of algorithms in influential literature, stemming from an overreliance on strong assumptions.
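As an illustration of the underlying issue (not the paper's analysis): at a kink such as ReLU at 0 the derivative does not exist, and the "gradient" step actually taken depends on which subgradient convention the implementation picks. A small pure-Python sketch:

```python
def relu(x):
    return max(x, 0.0)

def relu_deriv(x, at_zero=0.0):
    # ReLU is non-differentiable at 0; `at_zero` is the convention chosen there.
    # Any value in [0, 1] is a valid subgradient at x = 0.
    if x > 0.0:
        return 1.0
    if x < 0.0:
        return 0.0
    return at_zero

# A single "gradient" step from x = 0 gives different iterates depending on the
# convention, even though relu is the same function in both cases.
lr = 0.1
for convention in (0.0, 1.0):
    x = 0.0
    x -= lr * relu_deriv(x, at_zero=convention)
    print(convention, x)   # 0.0 -> stays at 0.0; 1.0 -> moves to -0.1
```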
arXiv Detail & Related papers (2024-01-16T15:11:29Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
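For context, the following is a minimal sketch of standard interval bound propagation through one affine layer followed by a (monotone) uniform quantizer; the quantization-aware training details of QA-IBP are in the paper and are not reproduced here. The step size `delta`, the layer shapes, and the perturbation radius are illustrative assumptions.

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    # Propagate an elementwise interval [lower, upper] through x -> W @ x + b
    # using the center/radius form, which is exact for affine maps.
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def quantize(x, delta=0.25):
    # Uniform quantizer; it is monotone, so applying it to the interval
    # endpoints yields sound bounds on the quantized output.
    return delta * np.round(x / delta)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x = rng.normal(size=4)
eps = 0.1  # input perturbation radius
low, up = ibp_affine(x - eps, x + eps, W, b)
print(quantize(low), quantize(up))
```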
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Sinusoidal neural networks, i.e., networks with sinusoidal activation functions, have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
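As a rough illustration (a SIREN-style layer, which may differ from the paper's exact parametrization), the frequency scale `omega_0` below plays the role of the adjustable bandwidth: larger values let the network represent higher-frequency content.

```python
import numpy as np

def sinusoidal_layer(x, W, b, omega_0=30.0):
    # A sine-activated layer: larger omega_0 widens the effective bandwidth.
    return np.sin(omega_0 * (x @ W + b))

rng = np.random.default_rng(0)
W = rng.uniform(-1.0, 1.0, size=(2, 16)) / 2.0   # illustrative initialization
b = rng.uniform(-1.0, 1.0, size=16)
x = rng.uniform(-1.0, 1.0, size=(8, 2))
print(sinusoidal_layer(x, W, b).shape)  # (8, 16)
```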
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
- On the Approximation and Complexity of Deep Neural Networks to Invariant Functions [0.0]
We study the approximation and complexity of deep neural networks to invariant functions.
We show that a broad range of invariant functions can be approximated by various types of neural network models.
We provide a feasible application that connects the parameter estimation and forecasting of high-resolution signals with our theoretical conclusions.
arXiv Detail & Related papers (2022-10-27T09:19:19Z)
- Approximation Power of Deep Neural Networks: an explanatory mathematical survey [0.0]
The goal of this survey is to present an explanatory review of the approximation properties of deep neural networks.
We aim at understanding how and why deep neural networks outperform other classical linear and nonlinear approximation methods.
arXiv Detail & Related papers (2022-07-19T18:47:44Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- An Overview of Uncertainty Quantification Methods for Infinite Neural Networks [0.0]
We review methods for quantifying uncertainty in infinite-width neural networks.
We make use of several equivalence results along the way to obtain exact closed-form solutions for predictive uncertainty.
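For reference, when an infinite-width network is treated as a Gaussian process with kernel $k$ (the NNGP viewpoint), the exact closed-form predictive uncertainty alluded to above is the standard GP posterior; the notation below is generic and not taken from the paper:
$$\mu_*(x) = k_*^{\top}(K + \sigma^2 I)^{-1} y, \qquad \sigma_*^2(x) = k(x, x) - k_*^{\top}(K + \sigma^2 I)^{-1} k_*,$$
where $K_{ij} = k(x_i, x_j)$, $(k_*)_i = k(x_i, x)$, and $\sigma^2$ is the observation-noise variance.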
arXiv Detail & Related papers (2022-01-13T00:03:22Z)
- Training Integrable Parameterizations of Deep Neural Networks in the Infinite-Width Limit [0.0]
Large-width dynamics has emerged as a fruitful viewpoint and led to practical insights on real-world deep networks.
For two-layer neural networks, it has been understood that the nature of the trained model radically changes depending on the scale of the initial random weights.
We propose various methods to avoid this trivial behavior and analyze in detail the resulting dynamics.
arXiv Detail & Related papers (2021-10-29T07:53:35Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
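A minimal sketch of this idea (generic, not the paper's exact formulation): treat candidate connections between layers as edges of a complete graph, attach a learnable scalar to each edge, and aggregate incoming features as a soft weighted sum so the connectivity itself is differentiable. The sigmoid gating below is an illustrative choice.

```python
import numpy as np

def aggregate(features, edge_logits):
    # features: outputs of the nodes feeding the current node, each of shape (d,).
    # edge_logits: one learnable scalar per incoming edge; a sigmoid gate turns
    # each logit into a soft connection strength, keeping the mix differentiable.
    gates = 1.0 / (1.0 + np.exp(-np.asarray(edge_logits)))
    return sum(g * f for g, f in zip(gates, features))

feats = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
print(aggregate(feats, edge_logits=[0.0, -2.0, 2.0]))
```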
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.