Max-Min Neural Network Operators For Approximation of Multivariate Functions
- URL: http://arxiv.org/abs/2601.07886v1
- Date: Mon, 12 Jan 2026 06:14:05 GMT
- Title: Max-Min Neural Network Operators For Approximation of Multivariate Functions
- Authors: Abhishek Yadav, Uaday Singh, Feng Dai
- Abstract summary: We develop a framework for approximation by max-min neural network operators. We establish pointwise and uniform convergence theorems and derive estimates for the order of approximation.
- Score: 4.5657111459153885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we develop a multivariate framework for approximation by max-min neural network operators. Building on recent advances in approximation theory for neural network operators, particularly the univariate max-min operators, we propose and analyze new multivariate operators activated by sigmoidal functions. We establish pointwise and uniform convergence theorems and derive quantitative estimates for the order of approximation via the modulus of continuity and the multivariate generalized absolute moment. Our results demonstrate that the multivariate max-min structure of these operators, beyond its algebraic elegance, provides efficient and stable approximation tools in both theoretical and applied settings.
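To make the construction concrete, here is a minimal numerical sketch. It assumes the operator takes the max-min form F_n(f)(x) = max_k min(n·Ψ(nx − k), f(k/n)) over nodes k ∈ {0, …, n}^d, with Ψ a product of univariate kernels built from a sigmoid; the paper's exact kernel, scaling, and normalization may differ.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def psi(t):
    # Bell-shaped univariate kernel built from a sigmoidal function,
    # a standard construction in the NN-operator literature.
    return sigmoid(4.0 * (t + 1.0)) - sigmoid(4.0 * (t - 1.0))

def max_min(f, x, n):
    """Hypothetical multivariate max-min operator on [0,1]^d:
    F_n(f)(x) = max_k min( n * Psi(n*x - k), f(k/n) ),  k in {0,...,n}^d,
    with Psi(u) = prod_i psi(u_i). The factor n makes the kernel dominate
    f near the closest node, mimicking approximate-identity behaviour."""
    d = len(x)
    grids = np.meshgrid(*[np.arange(n + 1)] * d, indexing="ij")
    k = np.stack([g.ravel() for g in grids], axis=1)        # all nodes k
    Psi = np.prod(psi(n * np.asarray(x) - k), axis=1)       # kernel values
    samples = np.array([f(row / n) for row in k])           # samples f(k/n)
    return np.max(np.minimum(n * Psi, samples))

# Example: approximate f(x, y) = x*y on [0,1]^2 at a grid of test points.
f = lambda p: p[0] * p[1]
pts = [(a, b) for a in np.linspace(0, 1, 9) for b in np.linspace(0, 1, 9)]
for n in (10, 40, 160):
    print(n, max(abs(max_min(f, p, n) - f(p)) for p in pts))
```

The printed maximum error shrinks as n grows, though only slowly for this crude kernel; the sharper rates in the paper, stated via the modulus of continuity, depend on its specific kernel choices.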
Related papers
- Approximation Capabilities of Feedforward Neural Networks with GELU Activations [6.488575826304024]
We derive an approximation error bound that holds simultaneously for a function and all its derivatives up to any prescribed order. The bounds apply to elementary functions, including multivariate functions, the exponential function, and the reciprocal function. We report the network size, weight magnitudes, and behavior at infinity.
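As an empirical companion (not the paper's explicit construction), one can check how well a one-hidden-layer GELU network with random inner weights and least-squares output weights fits the exponential function; the width, weight scales, and target below are arbitrary choices:

```python
import numpy as np

def gelu(x):
    # Tanh approximation of GELU (the exact form uses the Gaussian CDF).
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

rng = np.random.default_rng(0)
width = 200
W = rng.normal(scale=2.0, size=width)    # random inner weights (kept fixed)
b = rng.uniform(-2.0, 2.0, size=width)   # random biases (kept fixed)

def features(x):
    return gelu(np.outer(x, W) + b)      # hidden-layer outputs, shape (len(x), width)

# Fit only the output layer by least squares to approximate exp on [-1, 1].
x_train = np.linspace(-1.0, 1.0, 400)
c, *_ = np.linalg.lstsq(features(x_train), np.exp(x_train), rcond=None)

x_test = np.linspace(-1.0, 1.0, 1001)
print("max |error| on [-1,1]:", np.max(np.abs(features(x_test) @ c - np.exp(x_test))))
```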
arXiv Detail & Related papers (2025-12-25T17:56:44Z)
- A Deep Learning Framework for Multi-Operator Learning: Architectures and Approximation Theory [2.2731895181875346]
We study the problem of learning collections of operators and provide both theoretical and empirical advances. We distinguish between two regimes: (i) multiple operator learning, where a single network represents a continuum of operators parameterized by a parametric function, and (ii) learning several distinct single operators, where each operator is learned independently. Overall, this work establishes a unified theoretical and practical foundation for scalable operator learning across multiple operators.
arXiv Detail & Related papers (2025-10-29T10:52:02Z)
- Neural Optimal Transport Meets Multivariate Conformal Prediction [58.43397908730771]
We propose a framework for conditional vector quantile regression (CVQR). CVQR combines neural optimal transport with quantile regression and applies it to multivariate conformal prediction.
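For orientation, here is plain split conformal prediction for a scalar response; the paper's CVQR machinery generalizes this to multivariate responses via neural optimal transport, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y = x + heteroscedastic noise; any fitted regressor would do.
x = rng.uniform(0, 1, 2000)
y = x + rng.normal(scale=0.1 + 0.2 * x)
predict = lambda x: x                      # stand-in point predictor

# Split conformal: calibrate a residual quantile on held-out data.
x_cal, y_cal = x[:1000], y[:1000]
scores = np.abs(y_cal - predict(x_cal))    # nonconformity scores
alpha = 0.1
q = np.quantile(scores, np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores))

# Prediction interval for a new point: [predict(x) - q, predict(x) + q].
x_new = 0.5
print(f"90% interval at x={x_new}: [{predict(x_new)-q:.3f}, {predict(x_new)+q:.3f}]")

# Check marginal coverage on the other half of the data.
x_te, y_te = x[1000:], y[1000:]
print("empirical coverage:", np.mean(np.abs(y_te - predict(x_te)) <= q))
```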
arXiv Detail & Related papers (2025-09-29T19:50:19Z)
- Convergence Analysis of Max-Min Exponential Neural Network Operators in Orlicz Space [0.0]
We propose a max-min approach for approximating functions using exponential neural network operators. We study both pointwise and uniform convergence for univariate functions. We provide graphical representations to illustrate the approximation error for suitable kernels and sigmoidal activation functions.
arXiv Detail & Related papers (2025-08-14T00:30:56Z)
- Extension of Symmetrized Neural Network Operators with Fractional and Mixed Activation Functions [0.0]
We propose a novel extension of symmetrized neural network operators by incorporating fractional and mixed activation functions. Our framework introduces a fractional exponent in the activation functions, allowing adaptive non-linear approximations with improved accuracy.
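A fractional exponent can be grafted onto a sigmoid in several ways; one hypothetical form (not necessarily the paper's parameterization) is sigma(x)**alpha, which reshapes the activation's saturation behavior:

```python
import numpy as np

def frac_sigmoid(x, alpha=0.5):
    """Hypothetical fractional-exponent sigmoid: sigmoid(x)**alpha.
    Illustration only; the paper's exact activation is not reproduced here."""
    return (1.0 / (1.0 + np.exp(-x))) ** alpha

x = np.linspace(-6, 6, 7)
for a in (0.5, 1.0, 2.0):
    print(a, np.round(frac_sigmoid(x, a), 3))  # alpha < 1 lifts, alpha > 1 flattens
```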
arXiv Detail & Related papers (2025-01-17T14:24:25Z)
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation [55.80276145563105]
We investigate the statistical properties of Temporal Difference learning with Polyak-Ruppert averaging. We make three theoretical contributions that improve upon the current state-of-the-art results.
arXiv Detail & Related papers (2024-10-21T15:34:44Z)
- Chebyshev approximation and composition of functions in matrix product states for quantum-inspired numerical analysis [0.0]
The paper proposes an algorithm that employs iterative Chebyshev expansions and Clenshaw evaluations to represent analytic and highly differentiable functions as MPS Chebyshev interpolants. The method demonstrates rapid convergence for highly differentiable functions, in line with theoretical predictions, and generalizes efficiently to multidimensional scenarios.
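Stripped of the MPS machinery, the Chebyshev-plus-Clenshaw core looks as follows: build interpolation coefficients from samples at Chebyshev points, then evaluate the series with Clenshaw's backward recurrence.

```python
import numpy as np

def cheb_coeffs(f, n):
    """Coefficients c_k of the degree-n Chebyshev interpolant of f on [-1,1],
    computed from samples at the Chebyshev extreme points (discrete cosine sum)."""
    j = np.arange(n + 1)
    x = np.cos(np.pi * j / n)                  # Chebyshev extreme points
    fx = f(x)
    c = np.zeros(n + 1)
    for k in range(n + 1):
        terms = fx * np.cos(np.pi * k * j / n)
        terms[0] *= 0.5                        # halve endpoint terms (primed sum)
        terms[-1] *= 0.5
        c[k] = (2.0 / n) * terms.sum()
    c[0] *= 0.5                                # fold the halving into c_0, c_n
    c[-1] *= 0.5
    return c

def clenshaw(c, x):
    """Evaluate sum_k c_k T_k(x) with Clenshaw's backward recurrence."""
    b1 = b2 = 0.0
    for ck in c[:0:-1]:                        # c_n down to c_1
        b1, b2 = 2.0 * x * b1 - b2 + ck, b1
    return x * b1 - b2 + c[0]

c = cheb_coeffs(np.exp, 16)
xs = np.linspace(-1, 1, 5)
print(np.max(np.abs(np.array([clenshaw(c, x) for x in xs]) - np.exp(xs))))
```

For a smooth function like exp, the degree-16 interpolant is already accurate to near machine precision, which is the rapid convergence the paper exploits.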
arXiv Detail & Related papers (2024-07-12T18:00:06Z)
- Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs [93.82811501035569]
We introduce a new data-efficient and highly parallelizable operator learning approach with reduced memory requirements and better generalization.
MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena.
We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over 150x compression.
arXiv Detail & Related papers (2023-09-29T20:18:52Z)
- Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite-dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
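A minimal single-channel, 1-D sketch of the spectral convolution at the heart of Fourier-type neural operators (a simplification; real architectures add channel mixing, nonlinearities, and, in MG-TFNO, tensorized multi-grid weights):

```python
import numpy as np

def fourier_layer(v, weights, modes):
    """One spectral-convolution layer: FFT the input function samples, act on
    the lowest `modes` frequencies with learned complex weights, FFT back.
    Single channel, 1-D, for clarity."""
    v_hat = np.fft.rfft(v)
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes] = weights * v_hat[:modes]     # truncate and multiply
    return np.fft.irfft(out_hat, n=len(v))

rng = np.random.default_rng(0)
n, modes = 128, 16
weights = rng.normal(size=modes) + 1j * rng.normal(size=modes)  # "learned" params

x = np.linspace(0, 1, n, endpoint=False)
v = np.sin(2 * np.pi * 3 * x)                     # a sampled input function
print(fourier_layer(v, weights, modes).shape)     # resolution-preserving: (128,)
```

Because the layer acts on frequencies rather than grid points, the same weights apply at any sampling resolution, which is what makes these operators maps between function spaces rather than between fixed-size vectors.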
arXiv Detail & Related papers (2021-08-19T03:56:49Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded in terms of the 'complexity' of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale problems with a deep neural network as the predictive model. Our method requires far fewer communication rounds while retaining theoretical convergence guarantees. Experiments on several datasets demonstrate the effectiveness of our method and confirm our theory.
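The communication-saving idea can be illustrated with generic local SGD plus periodic model averaging on a toy least-squares problem; the paper's actual algorithm targets the AUC objective and differs in detail:

```python
import numpy as np

rng = np.random.default_rng(0)
d, workers, local_steps, rounds, lr = 5, 4, 20, 10, 0.05

# Each worker holds its own data for the objective 0.5*||A_i w - b_i||^2,
# with a shared ground truth w* = (1, ..., 1).
A = [rng.normal(size=(50, d)) for _ in range(workers)]
b = [a @ np.ones(d) + 0.1 * rng.normal(size=50) for a in A]

w = np.zeros(d)
for r in range(rounds):
    local = []
    for Ai, bi in zip(A, b):
        wi = w.copy()
        for _ in range(local_steps):                  # local updates, no communication
            i = rng.integers(len(bi))
            wi -= lr * (Ai[i] @ wi - bi[i]) * Ai[i]   # stochastic gradient step
        local.append(wi)
    w = np.mean(local, axis=0)                        # one communication: average models
print("distance to w*:", np.linalg.norm(w - np.ones(d)))
```

Communication happens once per round rather than once per gradient step, which is the round-count saving such methods formalize.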
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, normalizing flows and metric learning.
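As a baseline for what optimization on O(d) means (the paper's stochastic-flow methods are more sophisticated), here is plain Riemannian gradient descent with a QR retraction on an orthogonal Procrustes toy problem:

```python
import numpy as np

def riemannian_step(X, euclid_grad, lr):
    """One gradient step on the orthogonal group O(d): project the Euclidean
    gradient onto the tangent space at X, then retract back to O(d) via QR."""
    sym = 0.5 * (X.T @ euclid_grad + euclid_grad.T @ X)
    tangent = euclid_grad - X @ sym            # Riemannian gradient at X
    Q, R = np.linalg.qr(X - lr * tangent)      # QR retraction
    return Q * np.sign(np.diag(R))             # fix column signs so diag(R) > 0

# Example: minimize f(X) = ||X - M||_F^2 over O(3); the minimizer is the
# orthogonal polar factor of M.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
if np.linalg.det(M) < 0:                       # keep the target in SO(3) for a clean demo
    M[:, 0] = -M[:, 0]
X = np.eye(3)
for _ in range(200):
    X = riemannian_step(X, 2.0 * (X - M), lr=0.1)
U, _, Vt = np.linalg.svd(M)
print("distance to polar factor:", np.linalg.norm(X - U @ Vt))
```

Every iterate stays exactly orthogonal by construction, which is the constraint the paper's geometric flows also preserve.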
arXiv Detail & Related papers (2020-03-30T15:37:50Z)