Revolutionizing Fractional Calculus with Neural Networks: Voronovskaya-Damasclin Theory for Next-Generation AI Systems
- URL: http://arxiv.org/abs/2504.03751v1
- Date: Tue, 01 Apr 2025 21:03:00 GMT
- Title: Revolutionizing Fractional Calculus with Neural Networks: Voronovskaya-Damasclin Theory for Next-Generation AI Systems
- Authors: Rômulo Damasclin Chaves dos Santos, Jorge Henrique de Oliveira Sales
- Abstract summary: This work introduces rigorous convergence rates for neural network operators activated by symmetrized and perturbed hyperbolic tangent functions. We extend classical approximation theory to fractional calculus via Caputo derivatives.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work introduces rigorous convergence rates for neural network operators activated by symmetrized and perturbed hyperbolic tangent functions, utilizing novel Voronovskaya-Damasclin asymptotic expansions. We analyze basic, Kantorovich, and quadrature-type operators over infinite domains, extending classical approximation theory to fractional calculus via Caputo derivatives. Key innovations include parameterized activation functions with asymmetry control, symmetrized density operators, and fractional Taylor expansions for error analysis. The main theorem demonstrates that Kantorovich operators achieve \(o(n^{-\beta(N-\varepsilon)})\) convergence rates, while basic operators exhibit \(\mathcal{O}(n^{-\beta N})\) error decay. For deep networks, we prove \(\mathcal{O}(L^{-\beta(N-\varepsilon)})\) approximation bounds. Stability results under parameter perturbations highlight operator robustness. By integrating neural approximation theory with fractional calculus, this work provides foundational mathematical insights and deployable engineering solutions, with potential applications in complex system modeling and signal processing.
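To make the construction above concrete, the following is a minimal Python sketch of a parameterized (q-deformed, lambda-scaled) hyperbolic tangent activation, the symmetrized density built from it, and truncated basic and Kantorovich operators on a uniform grid. The specific forms \(g_{q,\lambda}(x) = (e^{\lambda x} - q e^{-\lambda x})/(e^{\lambda x} + q e^{-\lambda x})\) and \(M_{q,\lambda}(x) = \tfrac{1}{4}[g_{q,\lambda}(x+1) - g_{q,\lambda}(x-1)]\) follow the standard Anastassiou-style construction that this line of work builds on; the paper's exact definitions and normalizations may differ, so treat the names and parameter choices here as illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def g(x, q=1.5, lam=2.0):
    """Parameterized, perturbed hyperbolic tangent (assumed form):
    g_{q,lam}(x) = (e^{lam x} - q e^{-lam x}) / (e^{lam x} + q e^{-lam x}),
    rewritten as tanh(lam*x - ln(q)/2) for numerical stability.
    q controls the asymmetry; q = lam = 1 recovers tanh(x)."""
    return np.tanh(lam * x - 0.5 * np.log(q))

def density(x, q=1.5, lam=2.0):
    """Bell-shaped density M_{q,lam}(x) = (1/4)[g(x+1) - g(x-1)];
    its integer translates sum to 1 (a partition of unity)."""
    return 0.25 * (g(x + 1.0, q, lam) - g(x - 1.0, q, lam))

def sym_density(x, q=1.5, lam=2.0):
    """Symmetrized density: average of the q- and 1/q-deformed bumps."""
    return 0.5 * (density(x, q, lam) + density(x, 1.0 / q, lam))

def basic_operator(f, x, n, q=1.5, lam=2.0, K=60):
    """Truncated basic operator  B_n(f)(x) ~ sum_k f(k/n) M(n x - k).
    The sum is truncated to |k - n x| <= K because M decays exponentially;
    dividing by the (near-unit) weight sum compensates for the truncation."""
    k = np.arange(np.floor(n * x) - K, np.floor(n * x) + K + 1)
    w = sym_density(n * x - k, q, lam)
    return np.sum(f(k / n) * w) / np.sum(w)

def kantorovich_operator(f, x, n, q=1.5, lam=2.0, K=60, m=16):
    """Truncated Kantorovich operator: the sample f(k/n) is replaced by the
    cell average  n * integral of f over [k/n, (k+1)/n]  (midpoint rule)."""
    k = np.arange(np.floor(n * x) - K, np.floor(n * x) + K + 1)
    midpoints = (k[:, None] + (np.arange(m) + 0.5) / m) / n
    cell_avg = f(midpoints).mean(axis=1)
    w = sym_density(n * x - k, q, lam)
    return np.sum(cell_avg * w) / np.sum(w)

if __name__ == "__main__":
    f, x0 = np.sin, 0.7
    for n in (10, 100, 1000):
        print(f"n={n:5d}  basic |error|={abs(basic_operator(f, x0, n) - f(x0)):.2e}  "
              f"kantorovich |error|={abs(kantorovich_operator(f, x0, n) - f(x0)):.2e}")
```

The sketch only illustrates the operator definitions; the rates quoted in the abstract, such as \(\mathcal{O}(n^{-\beta N})\) for the basic operator, concern smooth targets and the parameter \(0 < \beta < 1\) entering the error analysis, not anything computed by this code.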
Related papers
- Extension of Symmetrized Neural Network Operators with Fractional and Mixed Activation Functions [0.0]
We propose a novel extension to symmetrized neural network operators by incorporating fractional and mixed activation functions. Our framework introduces a fractional exponent in the activation functions, allowing adaptive non-linear approximations with improved accuracy.
arXiv Detail & Related papers (2025-01-17T14:24:25Z) - Approximation Rates in Fréchet Metrics: Barron Spaces, Paley-Wiener Spaces, and Fourier Multipliers [1.4732811715354452]
We study some general approximation capabilities for linear differential operators by approximating the corresponding symbol in the Fourier domain. In that sense, we measure the approximation error in terms of a Fréchet metric. We then focus on a natural extension of our main theorem, in which we manage to reduce the assumptions on the sequence of semi-norms.
arXiv Detail & Related papers (2024-12-27T20:16:04Z) - Convergence analysis of wide shallow neural operators within the framework of Neural Tangent Kernel [4.313136216120379]
We conduct a convergence analysis of gradient descent for wide shallow neural operators and physics-informed shallow neural operators within the framework of the Neural Tangent Kernel (NTK). Under the setting of over-parametrization, gradient descent can find the global minimum regardless of whether it is in continuous time or discrete time.
arXiv Detail & Related papers (2024-12-07T05:47:28Z) - A Mathematical Analysis of Neural Operator Behaviors [0.0]
This paper presents a rigorous framework for analyzing the behaviors of neural operators.
We focus on their stability, convergence, clustering dynamics, universality, and generalization error.
We aim to offer clear and unified guidance in a single setting for the future design of neural operator-based methods.
arXiv Detail & Related papers (2024-10-28T19:38:53Z) - DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [60.58067866537143]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis. To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers. Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observed Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z) - Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
arXiv Detail & Related papers (2021-08-19T03:56:49Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded based on the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Fourier Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
We formulate a new neural operator by parameterizing the integral kernel directly in Fourier space (a minimal sketch of this idea follows the list below).
We perform experiments on Burgers' equation, Darcy flow, and the Navier-Stokes equation.
It is up to three orders of magnitude faster than traditional PDE solvers.
arXiv Detail & Related papers (2020-10-18T00:34:21Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
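As flagged in the Fourier Neural Operator entry above, here is a minimal NumPy sketch of a single 1-D spectral convolution layer in the spirit of that paper: transform the input to Fourier space, multiply the lowest `modes` frequencies by learned complex weights, and transform back. The class name, shapes, mode count, and random initialization are illustrative assumptions rather than the reference implementation; a full FNO would also add a pointwise linear path and a nonlinearity around each such layer.

```python
import numpy as np

class SpectralConv1d:
    """One Fourier layer in the spirit of the FNO: the integral kernel is
    parameterized directly in Fourier space by complex weights on the lowest
    `modes` frequencies (illustrative sketch, not the reference implementation)."""

    def __init__(self, in_channels, out_channels, modes, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / (in_channels * out_channels)
        # Complex weights: one (in_channels x out_channels) matrix per retained mode.
        self.weights = scale * (rng.standard_normal((in_channels, out_channels, modes))
                                + 1j * rng.standard_normal((in_channels, out_channels, modes)))
        self.modes = modes

    def __call__(self, x):
        # x: (in_channels, n_grid) real-valued function samples on a uniform grid.
        n = x.shape[-1]
        x_hat = np.fft.rfft(x, axis=-1)                      # (in_channels, n//2 + 1)
        out_hat = np.zeros((self.weights.shape[1], x_hat.shape[-1]), dtype=complex)
        m = min(self.modes, x_hat.shape[-1])
        # Mode-wise linear mixing of channels; higher frequencies are truncated to zero.
        out_hat[:, :m] = np.einsum("ik,iok->ok", x_hat[:, :m], self.weights[:, :, :m])
        return np.fft.irfft(out_hat, n=n, axis=-1)           # back to physical space

# Tiny usage example: map a 2-channel input function to a 4-channel output.
layer = SpectralConv1d(in_channels=2, out_channels=4, modes=8)
grid = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.stack([np.sin(grid), np.cos(3 * grid)])
v = layer(u)
print(v.shape)  # (4, 256)
```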
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.