Function Approximation with Randomly Initialized Neural Networks for
Approximate Model Reference Adaptive Control
- URL: http://arxiv.org/abs/2303.16251v2
- Date: Wed, 5 Apr 2023 16:15:38 GMT
- Title: Function Approximation with Randomly Initialized Neural Networks for
Approximate Model Reference Adaptive Control
- Authors: Tyler Lekang and Andrew Lamperski
- Abstract summary: Recent results have demonstrated that for specialized activation functions, such as ReLUs, high accuracy can be achieved via linear combinations of randomly initialized activations.
This paper defines mollified integral representations, which provide a means to form integral representations of target functions using activations for which no direct integral representation is currently known.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classical results in neural network approximation theory show how arbitrary
continuous functions can be approximated by networks with a single hidden
layer, under mild assumptions on the activation function. However, the
classical theory does not give a constructive means to generate the network
parameters that achieve a desired accuracy. Recent results have demonstrated
that for specialized activation functions, such as ReLUs and some classes of
analytic functions, high accuracy can be achieved via linear combinations of
randomly initialized activations. These recent works utilize specialized
integral representations of target functions that depend on the specific
activation functions used. This paper defines mollified integral
representations, which provide a means to form integral representations of
target functions using activations for which no direct integral representation
is currently known. The new construction enables approximation guarantees for
randomly initialized networks for a variety of widely used activation
functions.
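Because the abstract's key mechanism is fitting only the outer linear coefficients on top of randomly initialized hidden-layer activations, a minimal random-features sketch of that idea is given below. The target function, sampling distributions, and feature count are illustrative assumptions, and the least-squares fit stands in for whatever coefficient rule the paper's guarantees actually prescribe; it is not the paper's mollified-integral construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function to approximate on [-1, 1]; chosen only for illustration.
def f(x):
    return np.sin(3.0 * x) + 0.5 * np.abs(x)

# Single hidden layer of randomly initialized ReLU units: the inner weights and
# biases are sampled once and never trained; only the outer linear coefficients
# are fit, here by ordinary least squares.
n_features = 200
w = rng.normal(size=n_features)               # random input weights
b = rng.uniform(-1.0, 1.0, size=n_features)   # random biases

def features(x):
    # phi_j(x) = ReLU(w_j * x + b_j), one column per random unit
    return np.maximum(w * x[:, None] + b, 0.0)

x_train = rng.uniform(-1.0, 1.0, size=500)
coef, *_ = np.linalg.lstsq(features(x_train), f(x_train), rcond=None)

x_test = np.linspace(-1.0, 1.0, 1000)
err = np.max(np.abs(features(x_test) @ coef - f(x_test)))
print(f"sup-norm error of the random-feature fit on [-1, 1]: {err:.4f}")
```

In this sketch the approximation error is controlled only by the number of random units and the least-squares fit; the paper's contribution is to give such guarantees for activations beyond ReLU via mollified integral representations.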
Related papers
- Neural Control Variates with Automatic Integration [49.91408797261987]
This paper proposes a novel approach to construct learnable parametric control variates functions from arbitrary neural network architectures.
We use the network to approximate the anti-derivative of the integrand; a minimal sketch of this idea appears after the related-papers list.
We apply our method to solve partial differential equations using the Walk-on-sphere algorithm.
arXiv Detail & Related papers (2024-09-23T06:04:28Z) - Approximation and interpolation of deep neural networks [0.0]
In the overparametrized regime, deep neural networks provide universal approximation and can interpolate any data set.
In the last section, we provide a practical probabilistic method of finding such a point under general conditions on the activation function.
arXiv Detail & Related papers (2023-04-20T08:45:16Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - Approximation of Nonlinear Functionals Using Deep ReLU Networks [7.876115370275732]
We investigate the approximation power of functional deep neural networks associated with the rectified linear unit (ReLU) activation function.
In addition, we establish rates of approximation of the proposed functional deep ReLU networks under mild regularity conditions.
arXiv Detail & Related papers (2023-04-10T08:10:11Z) - Data-aware customization of activation functions reduces neural network
error [0.35172332086962865]
We show that data-aware customization of activation functions can result in striking reductions in neural network error.
A simple substitution with the "seagull" activation function in an already-refined neural network can lead to an order-of-magnitude reduction in error.
arXiv Detail & Related papers (2023-01-16T23:38:37Z) - Consensus Function from an $L_p^q-$norm Regularization Term for its Use
as Adaptive Activation Functions in Neural Networks [0.0]
We propose the definition and utilization of an implicit, parametric, non-linear activation function that adapts its shape during the training process.
This increases the space of parameters to optimize within the network, but it allows greater flexibility and generalizes the concept of neural networks.
Preliminary results show that using neural networks with this type of adaptive activation function reduces the error in regression and classification examples.
arXiv Detail & Related papers (2022-06-30T04:48:14Z) - Benefits of Overparameterized Convolutional Residual Networks: Function
Approximation under Smoothness Constraint [48.25573695787407]
We prove that large ConvResNets can not only approximate a target function in terms of function value, but also exhibit sufficient first-order smoothness.
Our theory partially justifies the benefits of using deep and wide networks in practice.
arXiv Detail & Related papers (2022-06-09T15:35:22Z) - Provable General Function Class Representation Learning in Multitask
Bandits and MDPs [58.624124220900306]
Multitask representation learning is a popular approach in reinforcement learning for boosting sample efficiency.
In this work, we extend the analysis to general function class representations.
We theoretically validate the benefit of multitask representation learning within general function class for bandits and linear MDP.
arXiv Detail & Related papers (2022-05-31T11:36:42Z) - Otimizacao de pesos e funcoes de ativacao de redes neurais aplicadas na previsao de series temporais (Optimization of weights and activation functions of neural networks applied to time series forecasting) [0.0]
We propose the use of a family of free parameter asymmetric activation functions for neural networks.
We show that this family of defined activation functions satisfies the requirements of the universal approximation theorem.
A methodology is used for the global optimization of this family of free-parameter activation functions together with the weights of the connections between the processing units of the neural network.
arXiv Detail & Related papers (2021-07-29T23:32:15Z) - A Functional Perspective on Learning Symmetric Functions with Neural
Networks [48.80300074254758]
We study the learning and representation of neural networks defined on measures.
We establish approximation and generalization bounds under different choices of regularization.
The resulting models can be learned efficiently and enjoy generalization guarantees that extend across input sizes.
arXiv Detail & Related papers (2020-08-16T16:34:33Z) - UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions upon each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z)
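As a companion to the Neural Control Variates entry above, the following is a minimal sketch of the anti-derivative idea: a small network G is trained so that its derivative G' matches an integrand f, which makes g = G' a control variate whose integral G(1) - G(0) is known exactly. The integrand, network size, and training schedule below are illustrative assumptions, not the paper's setup (which also covers PDE solving via Walk-on-Spheres).

```python
import torch
import torch.nn as nn

# Integrand whose integral over [0, 1] we want; chosen only for illustration.
def f(x):
    return torch.exp(-x) * torch.sin(4.0 * x)

# Small MLP G(x) intended to approximate an anti-derivative of f.
G = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                  nn.Linear(32, 32), nn.Tanh(),
                  nn.Linear(32, 1))
opt = torch.optim.Adam(G.parameters(), lr=1e-2)

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)                       # samples in [0, 1]
    dG = torch.autograd.grad(G(x).sum(), x, create_graph=True)[0]    # dG/dx via autodiff
    loss = ((dG - f(x)) ** 2).mean()                                 # fit G' to the integrand
    opt.zero_grad()
    loss.backward()
    opt.step()

# The control variate g = G' integrates exactly to G(1) - G(0); Monte Carlo
# only has to estimate the (hopefully small) residual f - g.
a, b = torch.zeros(1, 1), torch.ones(1, 1)
x = torch.rand(10000, 1, requires_grad=True)
g = torch.autograd.grad(G(x).sum(), x)[0]
with torch.no_grad():
    estimate = (G(b) - G(a)).item() + (f(x) - g).mean().item()
print("control-variate estimate of the integral:", estimate)
```

The variance reduction comes from the final line: the exact part G(1) - G(0) carries most of the integral, so the Monte Carlo average is only over the residual between the integrand and the learned derivative.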