On shallow feedforward neural networks with inputs from a topological space
- URL: http://arxiv.org/abs/2504.02321v1
- Date: Thu, 03 Apr 2025 06:48:46 GMT
- Title: On shallow feedforward neural networks with inputs from a topological space
- Authors: Vugar Ismailov
- Abstract summary: We study feedforward neural networks with inputs from a topological space (TFNNs). We prove a universal approximation theorem for shallow TFNNs, which demonstrates their capacity to approximate any continuous function defined on this topological space.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study feedforward neural networks with inputs from a topological space (TFNNs). We prove a universal approximation theorem for shallow TFNNs, which demonstrates their capacity to approximate any continuous function defined on this topological space. As an application, we obtain an approximative version of Kolmogorov's superposition theorem for compact metric spaces.
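As a hedged sketch only (the exact network form and the hypotheses on the activation function and on the space are those stated in the paper and are not reproduced here), universal approximation results of this kind typically concern shallow sums in which the affine inner layers of a classical network are replaced by continuous real-valued functions on the input space:

```latex
% Hedged sketch, not the paper's statement: the hypotheses on X, \sigma and g_i are assumptions.
% X is the topological space of inputs; the usual affine maps w_i \cdot x + b_i are replaced
% by continuous real-valued functions g_i on X, shifted by thresholds \theta_i.
\[
  N(x) \;=\; \sum_{i=1}^{k} c_i \,\sigma\!\bigl(g_i(x) - \theta_i\bigr),
  \qquad x \in X,\quad g_i \in C(X),\quad c_i,\theta_i \in \mathbb{R}.
\]
% A universal approximation theorem for such networks asserts that, for a suitable
% activation \sigma, these finite sums are dense in C(X): for every continuous
% f : X \to \mathbb{R} (with X compact) and every \varepsilon > 0 there is a network N with
% \sup_{x \in X} |f(x) - N(x)| < \varepsilon.
```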
Related papers
- Universal approximation theorem for neural networks with inputs from a topological vector space [0.0]
We study feedforward neural networks with inputs from a topological vector space (TVS-FNNs).
Unlike traditional feedforward neural networks, TVS-FNNs can process a broader range of inputs, including sequences, matrices, functions and more.
arXiv Detail & Related papers (2024-09-19T17:10:14Z) - Neural reproducing kernel Banach spaces and representer theorems for deep networks [16.279502878600184]
We show that deep neural networks define suitable reproducing kernel Banach spaces.
We derive representer theorems that justify the finite architectures commonly employed in applications.
arXiv Detail & Related papers (2024-03-13T17:51:02Z) - On Neural Networks as Infinite Tree-Structured Probabilistic Graphical Models [44.676210493587256]
We propose an innovative solution by constructing infinite tree-structured PGMs that correspond exactly to neural networks.
Our research reveals that DNNs, during forward propagation, indeed perform approximations of PGM inference that are precise in this alternative PGM structure.
arXiv Detail & Related papers (2023-05-27T21:32:28Z) - Universal Approximation and the Topological Neural Network [0.0]
A topological neural network (TNN) takes data from a Tychonoff topological space instead of the usual finite dimensional space.
A distributional neural network (DNN) that takes Borel measures as data is also introduced.
arXiv Detail & Related papers (2023-05-26T05:28:10Z) - Data Topology-Dependent Upper Bounds of Neural Network Widths [52.58441144171022]
We first show that a three-layer neural network can be designed to approximate an indicator function over a compact set.
This is then extended to a simplicial complex, deriving width upper bounds based on its topological structure.
We prove the universal approximation property of three-layer ReLU networks using our topological approach.
arXiv Detail & Related papers (2023-05-25T14:17:15Z) - Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z) - Tangent Bundle Filters and Neural Networks: from Manifolds to Cellular Sheaves and Back [114.01902073621577]
We use a convolution operation over the tangent bundle to define tangent bundle filters and tangent bundle neural networks (TNNs).
We discretize TNNs both in space and time domains, showing that their discrete counterpart is a principled variant of the recently introduced Sheaf Neural Networks.
We numerically evaluate the effectiveness of the proposed architecture on a denoising task of a tangent vector field over the unit 2-sphere.
arXiv Detail & Related papers (2022-10-26T21:55:45Z) - A singular Riemannian geometry approach to Deep Neural Networks I. Theoretical foundations [77.86290991564829]
Deep Neural Networks are widely used for solving complex problems in several scientific areas, such as speech recognition, machine translation, image analysis.
We study a particular sequence of maps between manifolds, with the last manifold of the sequence equipped with a Riemannian metric.
We investigate the theoretical properties of the maps in this sequence, eventually focusing on the case of maps implementing neural networks of practical interest.
arXiv Detail & Related papers (2021-12-17T11:43:30Z) - Universal Approximation Theorem for Neural Networks [0.0]
The "Universal Approximation Theorem for Neural Networks" states that a neural network is dense in a certain function space under an appropriate setting.
This paper is a comprehensive explanation of the universal approximation theorem for feedforward neural networks.
arXiv Detail & Related papers (2021-02-19T08:25:24Z) - A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks whose width is quadratic in the sample size and linear in their depth, in a time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z) - A function space analysis of finite neural networks with insights from sampling theory [41.07083436560303]
We show that the function space generated by multi-layer networks with non-expansive activation functions is smooth.
Under the assumption that the input is band-limited, we provide novel error bounds.
We analyze both deterministic uniform and random sampling showing the advantage of the former.
arXiv Detail & Related papers (2020-04-15T10:25:18Z) - Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
The goal of this work is to generalize neural networks so that they can learn mappings between infinite-dimensional spaces (operators).
We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators.
Experiments confirm that the proposed graph kernel network has the desired properties and shows competitive performance compared to state-of-the-art solvers (a minimal sketch of such a kernel-integral layer follows this list).
arXiv Detail & Related papers (2020-03-07T01:56:20Z)
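For the last entry above, here is a minimal, self-contained sketch of a kernel-integral layer of the kind a graph kernel network composes with pointwise nonlinearities. It is an illustration under stated assumptions, not the paper's implementation: the fixed Gaussian kernel, the weight matrix `W`, and the toy data are stand-ins, whereas in a graph kernel network the kernel is itself a small learned neural network of the node pairs.

```python
import numpy as np

def kernel_integral_layer(x, v, W, lengthscale=0.5):
    """One layer of the form v_new(x_i) = ReLU(W v(x_i) + (1/n) * sum_j k(x_i, x_j) v(x_j)).

    The Gaussian kernel k is a fixed stand-in used purely for illustration;
    in a graph kernel network the kernel would be a learned function of
    (x_i, x_j) and the input coefficients at those points.
    """
    # Pairwise squared distances between the n sample points.
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * lengthscale ** 2))   # (n, n) kernel matrix
    integral = K @ v / len(x)                    # Monte Carlo estimate of the integral operator
    return np.maximum(v @ W + integral, 0.0)     # pointwise linear term + ReLU nonlinearity

# Toy usage: 200 points sampled from the unit square, 8 channels, 3 stacked layers.
rng = np.random.default_rng(0)
x = rng.uniform(size=(200, 2))              # discretisation of the spatial domain
v = rng.normal(size=(200, 8))               # initial (lifted) function values at those points
W = rng.normal(size=(8, 8)) / np.sqrt(8)
for _ in range(3):
    v = kernel_integral_layer(x, v, W)
print(v.shape)                               # (200, 8): still a function sampled pointwise
```

The point the sketch is meant to show is that the layer acts on function values sampled at arbitrary points, so the same weights could be reused on a finer or coarser discretisation; the (1/n) sum is just a Monte Carlo discretisation of the integral operator.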
This list is automatically generated from the titles and abstracts of the papers on this site.