Convergence Analysis of Max-Min Exponential Neural Network Operators in Orlicz Space
- URL: http://arxiv.org/abs/2508.10248v1
- Date: Thu, 14 Aug 2025 00:30:56 GMT
- Title: Convergence Analysis of Max-Min Exponential Neural Network Operators in Orlicz Space
- Authors: Satyaranjan Pradhan, Madan Mohan Soren
- Abstract summary: We propose a max-min approach for approximating functions using exponential neural network operators. We study both pointwise and uniform convergence for univariate functions. We provide graphical representations to illustrate the approximation error for suitable kernels and sigmoidal activation functions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose a max-min approach to approximating functions with exponential neural network operators. We extend this framework to develop Max-Min Kantorovich-type exponential neural network operators and investigate their approximation properties. We study both pointwise and uniform convergence for univariate functions. To analyze the order of convergence, we use the logarithmic modulus of continuity and estimate the corresponding rate of convergence. Furthermore, we examine the convergence behavior of the Max-Min Kantorovich-type exponential neural network operators in the Orlicz space setting. We provide graphical representations to illustrate the approximation error for suitable kernels and sigmoidal activation functions.
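The abstract describes the operators only in words. As a rough numerical illustration, the sketch below implements one plausible max-min, Kantorovich-type exponential operator; the kernel phi built from a logistic sigmoid, the exponential nodes e^{k/n} on [1, e], the per-point normalization, and the assumption that f takes values in [0, 1] are all illustrative choices, not the paper's definitions.

```python
import numpy as np

# Hedged sketch (not the paper's exact construction): a max-min,
# Kantorovich-type operator built from exponential sampling cells
# [k/n, (k+1)/n] on [1, e] and a bell-shaped kernel generated by the
# logistic sigmoid.  Sums are replaced by max and products by min; the
# kernel is normalized per point so that its maximum is 1, and f is
# assumed to take values in [0, 1].

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def phi(t):
    # Bell-shaped density generated by the sigmoidal activation.
    return 0.5 * (sigmoid(t + 1.0) - sigmoid(t - 1.0))

def max_min_kantorovich(f, x, n, quad=16):
    """sup_k [ K_k(x) ^ (mean of f over the k-th exponential cell) ],
    where K_k(x) = phi(n*log x - k) / sup_j phi(n*log x - j)."""
    ks = np.arange(n)                       # cells [k/n, (k+1)/n], k = 0..n-1
    weights = phi(n * np.log(x) - ks)
    kernel = weights / np.max(weights)      # normalized so sup_k K_k(x) = 1
    # Kantorovich-type cell means of f at exponentially spaced samples.
    means = np.array([
        f(np.exp(np.linspace(k / n, (k + 1) / n, quad))).mean() for k in ks
    ])
    return np.max(np.minimum(kernel, means))

# Usage: approximate f(x) = log x (values in [0, 1]) and watch the
# sup-norm error on [1, e] shrink as n grows.
if __name__ == "__main__":
    xs = np.linspace(1.0, np.e, 200)
    for n in (10, 40, 160):
        err = max(abs(max_min_kantorovich(np.log, x, n) - np.log(x)) for x in xs)
        print(f"n = {n:4d}   sup error ~ {err:.4f}")
```

With this normalization the toy operator reproduces constants in [0, 1] exactly, and the printed sup-norm errors shrink as n grows, loosely mirroring the pointwise and uniform convergence studied in the paper.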
Related papers
- Max-Min Neural Network Operators For Approximation of Multivariate Functions [4.5657111459153885]
We develop a framework for approximation by max-min neural network operators. We establish pointwise and uniform convergence theorems and derive estimates for the order of approximation.
arXiv Detail & Related papers (2026-01-12T06:14:05Z)
- Approximation Capabilities of Feedforward Neural Networks with GELU Activations [6.488575826304024]
We derive an approximation error bound that holds simultaneously for a function and all its derivatives up to any prescribed order. The bounds apply to elementary functions, including multivariates, the exponential function, and the reciprocal function. We report the network size, weight magnitudes, and behavior at infinity.
arXiv Detail & Related papers (2025-12-25T17:56:44Z)
- Approximation Rates of Shallow Neural Networks: Barron Spaces, Activation Functions and Optimality Analysis [7.106210679849991]
It focuses on the dependence of the approximation rate on the dimension and the smoothness of the function being approximated within the Barron function space. We establish optimal approximation rates in various norms for functions in Barron spaces and Sobolev spaces, confirming the curse of dimensionality.
arXiv Detail & Related papers (2025-10-21T08:08:35Z)
- Revolutionizing Fractional Calculus with Neural Networks: Voronovskaya-Damasclin Theory for Next-Generation AI Systems [0.0]
This work introduces rigorous convergence rates for neural network operators activated by symmetrized and hyperbolic perturbed functions. We extend classical approximation theory to fractional calculus via Caputo derivatives.
arXiv Detail & Related papers (2025-04-01T21:03:00Z)
- Extension of Symmetrized Neural Network Operators with Fractional and Mixed Activation Functions [0.0]
We propose a novel extension to symmetrized neural network operators by incorporating fractional and mixed activation functions. Our framework introduces a fractional exponent in the activation functions, allowing adaptive non-linear approximations with improved accuracy.
arXiv Detail & Related papers (2025-01-17T14:24:25Z)
- Operator Learning of Lipschitz Operators: An Information-Theoretic Perspective [2.375038919274297]
This work addresses the complexity of neural operator approximations for the general class of Lipschitz continuous operators.
Our main contribution establishes lower bounds on the metric entropy of Lipschitz operators in two approximation settings.
It is shown that, regardless of the activation function used, neural operator architectures attaining an approximation accuracy $\epsilon$ must have a size that is exponentially large in $\epsilon^{-1}$.
arXiv Detail & Related papers (2024-06-26T23:36:46Z)
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparametrized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Convex Bounds on the Softmax Function with Applications to Robustness Verification [69.09991317119679]
The softmax function is a ubiquitous component at the output of neural networks and increasingly in intermediate layers as well.
This paper provides convex lower bounds and concave upper bounds on the softmax function, which are compatible with convex optimization formulations for characterizing neural networks and other ML models.
arXiv Detail & Related papers (2023-03-03T05:07:02Z)
- Convex Analysis of the Mean Field Langevin Dynamics
A convergence rate analysis of the mean field Langevin dynamics is presented. The proximal Gibbs distribution $p_q$ associated with the dynamics allows us to develop a convergence theory parallel to classical results in convex optimization.
arXiv Detail & Related papers (2022-01-25T17:13:56Z)
- The Representation Power of Neural Networks: Breaking the Curse of Dimensionality [0.0]
We prove upper bounds on the number of parameters that shallow and deep neural networks need to approximate Korobov functions.
We further prove that these bounds nearly match the minimal number of parameters any continuous function approximator needs to approximate Korobov functions.
arXiv Detail & Related papers (2020-12-10T04:44:07Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs). We formulate the linear operator equation as a min-max game in which both players are parameterized by neural networks (NNs), and we learn the parameters of these networks using gradient descent (a generic sketch of this descent-ascent pattern appears after this list). For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Space of Functions Computed by Deep-Layered Machines [74.13735716675987]
We study the space of functions computed by random-layered machines, including deep neural networks and Boolean circuits.
Investigating the distribution of Boolean functions computed by recurrent and layer-dependent architectures, we find that it is the same in both models.
arXiv Detail & Related papers (2020-04-19T18:31:03Z)
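The SEM entry above describes learning both players of a min-max game with neural networks via gradient descent. As a generic illustration of that descent-ascent training pattern (not that paper's estimator), the sketch below trains a small "function" network against a small adversarial "test function" network on a toy regularized moment objective; the objective, architectures, learning rates, and data are all assumptions made for this example.

```python
import torch
from torch import nn

# Hedged, generic gradient descent-ascent sketch: f_net minimizes and
# u_net maximizes a toy objective L(f, u) = E[u * (y - f(x))] - 0.5 * E[u^2].
# Everything below (objective, widths, learning rates, data) is illustrative.

torch.manual_seed(0)

def mlp(d_in):
    return nn.Sequential(nn.Linear(d_in, 32), nn.Tanh(), nn.Linear(32, 1))

f_net = mlp(1)   # minimizing player: the function being estimated
u_net = mlp(1)   # maximizing player: the adversarial test function

opt_f = torch.optim.SGD(f_net.parameters(), lr=1e-2)
opt_u = torch.optim.SGD(u_net.parameters(), lr=5e-2)

# Toy data: y = 2*x + noise.
x = torch.randn(512, 1)
y = 2.0 * x + 0.1 * torch.randn(512, 1)

for step in range(5000):
    residual = y - f_net(x)
    u = u_net(x)
    loss = (u * residual).mean() - 0.5 * (u ** 2).mean()

    opt_f.zero_grad()
    opt_u.zero_grad()
    loss.backward()
    # Descent for f (minimize), ascent for u (flip the sign of its gradients).
    for p in u_net.parameters():
        p.grad.neg_()
    opt_f.step()
    opt_u.step()

# Rough check: f(1) - f(0) should drift toward the true slope 2 as training proceeds.
print("approximate learned slope:",
      (f_net(torch.ones(1, 1)) - f_net(torch.zeros(1, 1))).item())
```

For this toy objective the adversary's best response is the residual itself, so the outer minimization reduces to least squares, which is why the learned function should track the conditional mean; the actual SEM formulation and guarantees are those of the cited paper, not this sketch.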
This list is automatically generated from the titles and abstracts of the papers on this site.