Algebraic function based Banach space valued ordinary and fractional
neural network approximations
- URL: http://arxiv.org/abs/2202.07425v1
- Date: Fri, 11 Feb 2022 20:08:52 GMT
- Title: Algebraic function based Banach space valued ordinary and fractional
neural network approximations
- Authors: George A Anastassiou
- Abstract summary: The approximations are pointwise and in the uniform norm.
The related Banach space valued feed-forward neural networks have one hidden layer.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Here we study the univariate quantitative approximation, ordinary and
fractional, of Banach space valued continuous functions on a compact interval
or on the whole real line by quasi-interpolation Banach space valued neural network
operators. These approximations are derived by establishing Jackson type
inequalities involving the modulus of continuity of the engaged function or of its
Banach space valued high order derivative or fractional derivatives. Our
operators are defined using a density function generated by an algebraic
sigmoid function. The approximations are pointwise and in the uniform norm. The
related Banach space valued feed-forward neural networks have one hidden
layer.
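The abstract does not reproduce the operator definitions, so the following is only a minimal Python sketch of the standard construction used in this line of work, under assumed definitions: the algebraic sigmoid phi(x) = x / (1 + x^(2m))^(1/(2m)), the density theta(x) = (phi(x+1) - phi(x-1)) / 4, and a normalized quasi-interpolation operator A_n(f)(x) = sum_k f(k/n) theta(nx - k) / sum_k theta(nx - k). A real-valued target stands in for the Banach space valued case; the paper's exact formulas may differ.

```python
import numpy as np

def phi(x, m=1):
    """Algebraic sigmoid: odd, increasing, with limits -1 and +1 at -/+ infinity."""
    return x / (1.0 + x ** (2 * m)) ** (1.0 / (2 * m))

def theta(x, m=1):
    """Density generated by the sigmoid; it is >= 0 and integrates to 1 over R."""
    return 0.25 * (phi(x + 1.0, m) - phi(x - 1.0, m))

def A_n(f, x, n, a=0.0, b=1.0, m=1):
    """Normalized quasi-interpolation operator on [a, b]:
    A_n(f)(x) = sum_k f(k/n) theta(n x - k) / sum_k theta(n x - k)."""
    ks = np.arange(np.ceil(n * a), np.floor(n * b) + 1.0)
    w = theta(n * x - ks, m)  # bell-shaped weights concentrated near k = n x
    return np.dot(w, f(ks / n)) / w.sum()

# Convergence as n grows, for a smooth test function.
f = lambda t: np.sin(2.0 * np.pi * t)
for n in (10, 100, 1000):
    print(n, abs(A_n(f, 0.3, n) - f(0.3)))
```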
Related papers
- Universal approximation results for neural networks with non-polynomial activation function over non-compact domains [3.3379026542599934]
We derive universal approximation results for neural networks within function spaces over non-compact subsets of a Euclidean space.
We provide some dimension-independent rates for approximating a function with sufficiently regular and integrable Fourier transform by neural networks with non-polynomial activation function.
arXiv Detail & Related papers (2024-10-18T09:53:20Z)
- Universal approximation property of Banach space-valued random feature models including random neural networks [3.3379026542599934]
We introduce a Banach space-valued extension of random feature learning.
By randomly initializing the feature maps, only the linear readout needs to be trained.
We derive approximation rates and an explicit algorithm to learn an element of the given Banach space.
arXiv Detail & Related papers (2023-12-13T11:27:15Z)
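To make the random feature idea concrete (a frozen random feature map with only the linear readout trained), here is a minimal numpy sketch; the vector-valued target is a finite-dimensional stand-in for a Banach space valued element, and all names and hyperparameters are illustrative, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized (and then frozen) feature map: x -> relu(W x + b).
d_in, d_feat = 2, 500
W = rng.normal(size=(d_feat, d_in))
b = rng.normal(size=d_feat)
features = lambda X: np.maximum(X @ W.T + b, 0.0)

# Synthetic vector-valued targets, standing in for a Banach space valued problem.
X = rng.uniform(-1.0, 1.0, size=(200, d_in))
Y = np.stack([np.sin(X[:, 0]), np.cos(X[:, 1]), X[:, 0] * X[:, 1]], axis=1)

# Only the linear readout C is trained, via ridge-regularized least squares.
Phi = features(X)
lam = 1e-3
C = np.linalg.solve(Phi.T @ Phi + lam * np.eye(d_feat), Phi.T @ Y)

X_test = rng.uniform(-1.0, 1.0, size=(3, d_in))
print(features(X_test) @ C)  # predicted vector-valued outputs
```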
- Convex Bounds on the Softmax Function with Applications to Robustness Verification [69.09991317119679]
The softmax function is a ubiquitous component at the output of neural networks and increasingly in intermediate layers as well.
This paper provides convex lower bounds and concave upper bounds on the softmax function, which are compatible with convex optimization formulations for characterizing neural networks and other ML models.
arXiv Detail & Related papers (2023-03-03T05:07:02Z)
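The paper's specific bounds are not reproduced here, but the structural fact such bounds can exploit is easy to check numerically: each log-softmax coordinate, log softmax_i(x) = x_i - logsumexp(x), is affine minus convex and hence concave. A small sketch of the midpoint test:

```python
import numpy as np

rng = np.random.default_rng(0)

def logsumexp(x):
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

def log_softmax(x, i):
    # log softmax_i(x) = x_i - logsumexp(x): affine minus convex, hence concave.
    return x[i] - logsumexp(x)

# Midpoint test of concavity: g((x + y) / 2) >= (g(x) + g(y)) / 2.
for _ in range(5):
    x, y = rng.normal(size=4), rng.normal(size=4)
    mid = log_softmax((x + y) / 2.0, 0)
    avg = 0.5 * (log_softmax(x, 0) + log_softmax(y, 0))
    print(mid >= avg - 1e-12)  # prints True on every draw
```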
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Tangent Bundle Filters and Neural Networks: from Manifolds to Cellular Sheaves and Back [114.01902073621577]
We use a convolution operation over the tangent bundle to define tangent bundle filters and tangent bundle neural networks (TNNs).
We discretize TNNs both in space and time domains, showing that their discrete counterpart is a principled variant of the recently introduced Sheaf Neural Networks.
We numerically evaluate the effectiveness of the proposed architecture on a denoising task of a tangent vector field over the unit 2-sphere.
arXiv Detail & Related papers (2022-10-26T21:55:45Z)
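For intuition about the experimental setting only (this is not the paper's code or architecture), here is a minimal numpy sketch of a tangent vector field over the unit 2-sphere: ambient vectors at sample points are projected onto the tangent plane at each point, and a denoising task starts from a noisy such field.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points on the unit 2-sphere.
P = rng.normal(size=(100, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)

# Build a tangent vector field by projecting ambient vectors onto the
# tangent plane at each point p:  v_tan = v - <v, p> p.
V = rng.normal(size=(100, 3))
V_tan = V - np.sum(V * P, axis=1, keepdims=True) * P

# A noisy version of the field is what a denoising task would start from.
V_noisy = V_tan + 0.1 * rng.normal(size=V_tan.shape)

# Sanity check: tangency means <v_tan, p> = 0 at every point.
print(np.abs(np.sum(V_tan * P, axis=1)).max())  # ~ 1e-16
```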
- Neural and spectral operator surrogates: unified construction and expression rate bounds [0.46040036610482665]
We study approximation rates for deep surrogates of maps between infinite-dimensional function spaces.
Operator in- and outputs from function spaces are assumed to be parametrized by stable, affine representation systems.
arXiv Detail & Related papers (2022-07-11T15:35:14Z)
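To make "parametrized by affine representation systems" concrete (a generic sketch, not the paper's construction): encode input and output functions by their leading coefficients in a fixed basis and map between the coefficient vectors. Here a cosine basis and a random linear map G stand in for the trained surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 512)
dx = xs[1] - xs[0]
K = 8  # number of retained coefficients

# Representation system: the first K cosine modes on [0, 1].
B = np.cos(np.pi * np.outer(np.arange(K), xs))  # shape (K, 512)

def encode(f_vals):
    # Approximate L2 projection of f onto the cosine modes (Riemann sum).
    c = (B @ f_vals) * dx
    c[1:] *= 2.0  # ||cos(pi k x)||^2 = 1/2 on [0, 1] for k >= 1
    return c

def decode(c):
    return c @ B

# The surrogate maps input coefficients to output coefficients; a random
# linear map G stands in here for the trained network.
G = rng.normal(size=(K, K)) / K

f = np.exp(-xs)                # an input function, sampled on the grid
u = decode(G @ encode(f))      # surrogate output, again a function on the grid
print(np.abs(decode(encode(f)) - f).max())  # basis truncation error
```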
- Sobolev-type embeddings for neural network approximation spaces [5.863264019032882]
We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated.
We prove embedding theorems between these spaces for different values of $p$.
We find that, analogous to the case of classical function spaces, it is possible to trade "smoothness" (i.e., approximation rate) for increased integrability.
arXiv Detail & Related papers (2021-10-28T17:11:38Z)
- Deep neural network approximation of analytic functions [91.3755431537592]
We provide an entropy bound for the spaces of neural networks with piecewise linear activation functions.
We derive an oracle inequality for the expected error of the considered penalized deep neural network estimators.
arXiv Detail & Related papers (2021-04-05T18:02:04Z)
- Approximation with Neural Networks in Variable Lebesgue Spaces [1.0152838128195465]
This paper concerns the universal approximation property with neural networks in variable Lebesgue spaces.
We show that, whenever the exponent function of the space is bounded, every function can be approximated with shallow neural networks with any desired accuracy.
arXiv Detail & Related papers (2020-07-08T14:52:48Z)
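As a toy illustration of shallow-network approximation (a generic demo with illustrative hyperparameters, unrelated to the variable-exponent machinery): a one-hidden-layer ReLU network fit by plain gradient descent, where larger widths typically drive the sup-norm error on a target function lower.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function and training grid on [0, 1].
f = lambda t: np.abs(t - 0.5)
x = np.linspace(0.0, 1.0, 200)[:, None]
y = f(x)

def fit_shallow(width, steps=5000, lr=0.05):
    """Fit a one-hidden-layer ReLU network by full-batch gradient descent."""
    W = rng.normal(size=(1, width))
    b = rng.normal(size=width)
    c = np.zeros((width, 1))
    n = len(x)
    for _ in range(steps):
        H = np.maximum(x @ W + b, 0.0)      # hidden layer, shape (n, width)
        r = H @ c - y                       # residuals, shape (n, 1)
        gH = (r @ c.T) * (H > 0.0)          # backprop through the ReLU
        c -= lr * (H.T @ r) / n
        W -= lr * (x.T @ gH) / n
        b -= lr * gH.mean(axis=0)
    H = np.maximum(x @ W + b, 0.0)
    return np.abs(H @ c - y).max()

for width in (4, 16, 64):
    print(width, fit_shallow(width))  # sup-norm error typically shrinks with width
```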
- Space of Functions Computed by Deep-Layered Machines [74.13735716675987]
We study the space of functions computed by random-layered machines, including deep neural networks and Boolean circuits.
Investigating the distribution of Boolean functions computed by recurrent and layer-dependent architectures, we find that it is the same in both models.
arXiv Detail & Related papers (2020-04-19T18:31:03Z)
- Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
This work aims to generalize neural networks so that they can learn mappings between infinite-dimensional spaces (operators).
We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators.
Experiments confirm that the proposed graph kernel network has the desired properties and shows competitive performance compared to state-of-the-art solvers.
arXiv Detail & Related papers (2020-03-07T01:56:20Z)
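A minimal sketch of the building block described above, composing a nonlinear activation with a kernel integral operator: v <- sigma(W v(x) + integral of kappa(x, y) v(y) dy). The kernel network, lifting layers, and graph structure of the actual model are omitted, and all parameter values here are illustrative.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 64)       # discretization grid on [0, 1]
dx = xs[1] - xs[0]
v = np.sin(2.0 * np.pi * xs)         # input function sampled on the grid

# A fixed kernel kappa(x, y); in the actual model it is itself a small
# learned network, and the sum runs over graph neighbourhoods.
kappa = lambda a, c: np.exp(-np.abs(a[:, None] - c[None, :]))

W = 0.5            # pointwise linear term (single channel, illustrative value)
sigma = np.tanh    # nonlinear activation

# One kernel integral layer: v <- sigma(W v(x) + int kappa(x, y) v(y) dy),
# with the integral discretized as a Riemann sum over the grid.
Kmat = kappa(xs, xs)                 # (64, 64) kernel matrix
v_next = sigma(W * v + (Kmat @ v) * dx)
print(v_next.shape)                  # the output is again a function on the grid
```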
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.