Global universal approximation of functional input maps on weighted
spaces
- URL: http://arxiv.org/abs/2306.03303v3
- Date: Fri, 1 Mar 2024 02:17:43 GMT
- Title: Global universal approximation of functional input maps on weighted
spaces
- Authors: Christa Cuchiero, Philipp Schmocker, Josef Teichmann
- Abstract summary: We introduce so-called functional input neural networks defined on a possibly infinite dimensional weighted space with values also in a possibly infinite dimensional output space.
We prove a global universal approximation result on weighted spaces for continuous functions going beyond the usual approximation on compact sets.
We emphasize that the reproducing kernel Hilbert spaces of the signature kernels are Cameron-Martin spaces of certain Gaussian processes.
- Score: 3.8059763597999012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce so-called functional input neural networks defined on a possibly
infinite dimensional weighted space with values also in a possibly infinite
dimensional output space. To this end, we use an additive family to map the
input weighted space to the hidden layer, on which a non-linear scalar
activation function is applied to each neuron, and finally return the output
via some linear readouts. Relying on Stone-Weierstrass theorems on weighted
spaces, we can prove a global universal approximation result on weighted spaces
for continuous functions going beyond the usual approximation on compact sets.
This then applies in particular to approximation of (non-anticipative) path
space functionals via functional input neural networks. As a further
application of the weighted Stone-Weierstrass theorem we prove a global
universal approximation result for linear functions of the signature. We also
introduce the viewpoint of Gaussian process regression in this setting and
emphasize that the reproducing kernel Hilbert spaces of the signature kernels
are Cameron-Martin spaces of certain Gaussian processes. This paves a way
towards uncertainty quantification for signature kernel regression.
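As a rough illustration of the architecture described in the abstract (not the authors' implementation), the following Python sketch evaluates a scalar-valued functional input neural network on a discretized path: each hidden neuron applies an integral functional of the input path plus a bias, a scalar activation acts neuron-wise, and a linear readout returns the output. The grid size, number of neurons, tanh activation, and random weights are all illustrative assumptions.

    import numpy as np

    # Minimal sketch of a functional input neural network on discretized paths.
    # Hidden neuron i computes tanh(<a_i, x> + b_i), where <a_i, x> is an
    # integral of the path x against a weight function a_i; the output is a
    # linear readout of the hidden layer. All sizes and weights are illustrative.
    rng = np.random.default_rng(0)
    n_grid, n_hidden = 100, 32
    t = np.linspace(0.0, 1.0, n_grid)
    dt = t[1] - t[0]

    A = rng.normal(size=(n_hidden, n_grid))   # weight functions a_i on the time grid
    b = rng.normal(size=n_hidden)             # biases
    w = rng.normal(size=n_hidden)             # linear readout weights

    def functional_input_nn(x):
        """Evaluate the network on a path x sampled on the grid t."""
        linear_part = (A * x).sum(axis=1) * dt + b   # Riemann sums for <a_i, x> + b_i
        hidden = np.tanh(linear_part)                # neuron-wise scalar activation
        return float(w @ hidden)                     # linear readout

    # Example: evaluate on the path x(t) = sin(2*pi*t).
    print(functional_input_nn(np.sin(2.0 * np.pi * t)))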
Related papers
- Kernel Operator-Theoretic Bayesian Filter for Nonlinear Dynamical Systems [25.922732994397485]
We propose a machine-learning alternative based on a functional Bayesian perspective for operator-theoretic modeling.
This formulation is done directly in an infinite-dimensional space of linear operators or a Hilbert space with the universal approximation property.
We demonstrate that this practical approach can obtain accurate results and outperform finite-dimensional Koopman decomposition.
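As a loose, generic illustration of kernel-based operator-theoretic modeling (plain kernel ridge regression of the one-step transition map, not the paper's Bayesian filter), the Python sketch below predicts successor states as a linear functional of Gaussian-kernel features of the current state; the toy dynamics, kernel bandwidth, and ridge parameter are all assumptions.

    import numpy as np

    # Generic sketch: model one-step dynamics by a linear operator acting on
    # kernel features of the state (kernel ridge regression). This only
    # illustrates the operator-theoretic idea, not the paper's filter.
    rng = np.random.default_rng(1)

    def step(x):
        return np.sin(2.0 * x) + 0.05 * rng.normal(size=x.shape)  # toy dynamics

    X = rng.uniform(-2.0, 2.0, size=200)   # states x_t
    Y = step(X)                            # successor states x_{t+1}

    def gram(A, B, bandwidth=0.5):
        """Gaussian kernel Gram matrix between 1-d point sets A and B."""
        return np.exp(-(A[:, None] - B[None, :]) ** 2 / (2.0 * bandwidth ** 2))

    K = gram(X, X)
    alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), Y)   # ridge-regularised fit

    x_query = np.array([0.3, 1.1])
    print(gram(x_query, X) @ alpha)        # predicted successor states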
arXiv Detail & Related papers (2024-10-31T20:31:31Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Universal approximation property of Banach space-valued random feature models including random neural networks [3.3379026542599934]
We introduce a Banach space-valued extension of random feature learning.
By randomly initializing the feature maps, only the linear readout needs to be trained.
We derive approximation rates and an explicit algorithm to learn an element of the given Banach space.
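A minimal scalar-valued Python sketch of the random feature recipe summarised here (the paper itself treats Banach space-valued outputs): the feature map is randomly initialized and kept fixed, and only the linear readout is fitted by least squares. The dimensions, ReLU features, and toy target are assumptions.

    import numpy as np

    # Random feature learning: random, untrained feature map + trained linear
    # readout. Scalar-valued toy version; all sizes and the target are made up.
    rng = np.random.default_rng(2)
    d, n_features, n_samples = 5, 200, 500

    W = rng.normal(size=(n_features, d))          # random feature weights (fixed)
    b = rng.uniform(-1.0, 1.0, size=n_features)   # random feature biases (fixed)

    def features(X):
        return np.maximum(X @ W.T + b, 0.0)       # fixed random ReLU features

    X_train = rng.normal(size=(n_samples, d))
    y_train = np.sin(X_train.sum(axis=1))         # toy target function

    # Only the linear readout is trained (ridge-regularised least squares).
    Phi = features(X_train)
    readout = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(n_features), Phi.T @ y_train)

    X_test = rng.normal(size=(10, d))
    y_pred = features(X_test) @ readout           # predictions from the fitted readout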
arXiv Detail & Related papers (2023-12-13T11:27:15Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Continuous percolation in a Hilbert space for a large system of qubits [58.720142291102135]
The percolation transition is defined through the appearance of the infinite cluster.
We show that the exponentially increasing dimensionality of the Hilbert space makes its covering by finite-size hyperspheres inefficient.
Our approach to the percolation transition in compact metric spaces may prove useful for its rigorous treatment in other contexts.
arXiv Detail & Related papers (2022-10-15T13:53:21Z)
- Computationally Efficient PAC RL in POMDPs with Latent Determinism and Conditional Embeddings [97.12538243736705]
We study reinforcement learning with function approximation for large-scale Partially Observable Markov Decision Processes (POMDPs).
Our algorithm provably scales to large-scale POMDPs.
arXiv Detail & Related papers (2022-06-24T05:13:35Z)
- Deep neural network approximation of analytic functions [91.3755431537592]
We provide an entropy bound for the spaces of neural networks with piecewise linear activation functions.
We derive an oracle inequality for the expected error of the considered penalized deep neural network estimators.
arXiv Detail & Related papers (2021-04-05T18:02:04Z)
- Large-width functional asymptotics for deep Gaussian neural networks [2.7561479348365734]
We consider fully connected feed-forward deep neural networks where weights and biases are independent and identically distributed according to Gaussian distributions.
Our results contribute to recent theoretical studies on the interplay between infinitely wide deep neural networks and Gaussian processes.
arXiv Detail & Related papers (2021-02-20T10:14:37Z)
- Quantitative Rates and Fundamental Obstructions to Non-Euclidean Universal Approximation with Deep Narrow Feed-Forward Networks [3.8073142980733]
We quantify the number of narrow layers required for "deep geometric feed-forward neural networks".
We find that both the global and local universal approximation guarantees can only coincide when approximating null-homotopic functions.
arXiv Detail & Related papers (2021-01-13T23:29:40Z)
- UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions upon each event.
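A rough Python sketch of the idea as summarised above (not the authors' implementation): a small recurrent cell updates a hidden state at each event, and that state parameterises a sum of exponential basis functions whose softplus-transformed sum gives the conditional intensity until the next event. The sizes, tanh cell, and exponential basis are assumptions.

    import numpy as np

    # Sketch of an RNN-parameterised point-process intensity: after each event
    # the hidden state is updated, and it parameterises a sum of basis
    # functions of the elapsed time. All sizes and choices are illustrative.
    rng = np.random.default_rng(3)
    hidden_dim, n_basis = 8, 4
    W_h = 0.1 * rng.normal(size=(hidden_dim, hidden_dim))
    W_x = 0.1 * rng.normal(size=hidden_dim)
    W_out = 0.1 * rng.normal(size=(2 * n_basis, hidden_dim))   # -> (alpha_k, beta_k)

    def softplus(z):
        return np.log1p(np.exp(z))

    def intensity(h, dt):
        """Conditional intensity dt time units after the last event."""
        params = W_out @ h
        alpha, beta = params[:n_basis], softplus(params[n_basis:])   # beta_k > 0
        return softplus(np.sum(alpha * np.exp(-beta * dt)))

    h = np.zeros(hidden_dim)
    for gap in [0.5, 1.2, 0.3]:            # toy inter-event times
        h = np.tanh(W_h @ h + W_x * gap)   # recurrent update at each event
        print(intensity(h, 0.1))           # intensity shortly after the event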
arXiv Detail & Related papers (2020-07-28T09:31:56Z)
- Approximation with Neural Networks in Variable Lebesgue Spaces [1.0152838128195465]
This paper concerns the universal approximation property with neural networks in variable Lebesgue spaces.
We show that, whenever the exponent function of the space is bounded, every function can be approximated with shallow neural networks with any desired accuracy.
arXiv Detail & Related papers (2020-07-08T14:52:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.