Kantorovich-Type Stochastic Neural Network Operators for the Mean-Square Approximation of Certain Second-Order Stochastic Processes
- URL: http://arxiv.org/abs/2601.03634v1
- Date: Wed, 07 Jan 2026 06:25:40 GMT
- Title: Kantorovich-Type Stochastic Neural Network Operators for the Mean-Square Approximation of Certain Second-Order Stochastic Processes
- Authors: Sachin Saini, Uaday Singh
- Abstract summary: We construct a new class of Kantorovich-type Stochastic Neural Network Operators (K-SNNOs) in which randomness is incorporated not at the coefficient level, but through stochastic neurons driven by stochastic integrators. This framework enables the operator to inherit the probabilistic structure of the underlying process, making it suitable for modeling and approximating stochastic signals.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial neural network operators (ANNOs) have been widely used for approximating deterministic input-output functions; however, their extension to random dynamics remains comparatively unexplored. In this paper, we construct a new class of \textbf{Kantorovich-type Stochastic Neural Network Operators (K-SNNOs)} in which randomness is incorporated not at the coefficient level, but through \textbf{stochastic neurons} driven by stochastic integrators. This framework enables the operator to inherit the probabilistic structure of the underlying process, making it suitable for modeling and approximating stochastic signals. We establish mean-square convergence of K-SNNOs to the target stochastic process and derive quantitative error estimates expressing the rate of approximation in terms of the modulus of continuity. Numerical simulations further validate the theoretical results by demonstrating accurate reconstruction of sample paths and rapid decay of the mean square error (MSE). Graphical results, including sample-wise approximations and empirical MSE behaviour, illustrate the robustness and effectiveness of the proposed stochastic-neuron-based operator.
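As a rough illustration of the construction and the empirical MSE behaviour described in the abstract, the sketch below applies a classical Kantorovich-type operator with a sigmoidal kernel pathwise to a toy second-order process and estimates the mean-square error by Monte Carlo. The kernel, the toy process, and the midpoint quadrature are all assumptions for illustration; in particular, the paper's stochastic neurons are driven by stochastic integrators, which this sketch does not model.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def phi(x):
    # Bell-shaped kernel built from the logistic sigmoid, a standard choice
    # in the neural-network-operator literature (an assumption here; the
    # paper's exact activation may differ). Integrates to 1 over the reals.
    return 0.5 * (sigmoid(x + 1.0) - sigmoid(x - 1.0))

def kantorovich_operator(path, n, x):
    """Kantorovich-type NN operator applied pathwise on [0, 1].

    The coefficient for node k is the local average
    n * integral_{k/n}^{(k+1)/n} path(t) dt, approximated by a midpoint rule.
    """
    num = den = 0.0
    for k in range(n + 1):
        mid = min((k + 0.5) / n, 1.0)   # midpoint of [k/n, (k+1)/n], clamped
        w = phi(n * x - k)
        num += path(mid) * w
        den += w
    return num / den

def sample_path(seed):
    """Toy second-order process: a sinusoid with random amplitude and phase."""
    rng = random.Random(seed)
    a, b = rng.gauss(1.0, 0.1), rng.uniform(0.0, 0.3)
    return lambda t: a * math.sin(2.0 * math.pi * (t + b))

def empirical_mse(n, num_paths=200, num_points=21):
    """Monte Carlo estimate of E|X(x) - K_n(X)(x)|^2, averaged over x."""
    total = 0.0
    for seed in range(num_paths):
        p = sample_path(seed)
        for j in range(num_points):
            x = j / (num_points - 1)
            total += (p(x) - kantorovich_operator(p, n, x)) ** 2
    return total / (num_paths * num_points)

# The mean-square error should shrink as the operator resolution n grows.
print(empirical_mse(8) > empirical_mse(64))
```

The decaying empirical MSE mirrors the paper's quantitative estimates, in which the rate of approximation is controlled by the modulus of continuity of the process.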
Related papers
- Constructive Approximation of Random Process via Stochastic Interpolation Neural Network Operators [0.0]
We construct a class of Stochastic Interpolation Neural Network Operators (SINNOs) with random coefficients activated by sigmoidal functions. We establish their boundedness, accuracy, and approximation capabilities in the mean sense, in probability, as well as path-wise within the space of second-order (random) processes. The results highlight the effectiveness of SINNOs for approximating random processes, with potential applications in COVID-19 case prediction.
arXiv Detail & Related papers (2025-12-30T09:30:18Z) - Interpretable Neural Approximation of Stochastic Reaction Dynamics with Guaranteed Reliability [4.736119820998459]
We introduce DeepSKA, a neural framework that achieves interpretability, guaranteed reliability, and substantial computational gains. DeepSKA yields mathematically transparent representations that generalise across states, times, and output functions, and it integrates this structure with a small number of simulations to produce unbiased, provably convergent estimates with dramatically lower variance than classical Monte Carlo.
arXiv Detail & Related papers (2025-12-06T04:45:31Z) - A Simple Approximate Bayesian Inference Neural Surrogate for Stochastic Petri Net Models [0.0]
We introduce a neural-network-based framework for approximating the posterior distribution. Our model employs a lightweight 1D convolutional residual network trained end-to-end on Gillespie-simulated SPN realizations. On synthetic SPNs with 20% missing events, our surrogate recovers rate-function coefficients with an RMSE of 0.108 and runs substantially faster than traditional Bayesian approaches.
arXiv Detail & Related papers (2025-07-14T18:31:19Z) - Stochastic Fractional Neural Operators: A Symmetrized Approach to Modeling Turbulence in Complex Fluid Dynamics [0.0]
We introduce a new class of neural network operators designed to handle problems where memory effects and randomness play a central role. Our approach provides theoretical guarantees for the approximation quality and suggests that these neural operators can serve as effective tools in the analysis and simulation of complex systems.
arXiv Detail & Related papers (2025-05-12T13:11:08Z) - Probabilistic neural operators for functional uncertainty quantification [14.08907045605149]
We introduce the probabilistic neural operator (PNO), a framework for learning probability distributions over the output function space of neural operators. PNO extends neural operators with generative modeling based on strictly proper scoring rules, integrating uncertainty information directly into the training process.
arXiv Detail & Related papers (2025-02-18T14:42:11Z) - Variational Sampling of Temporal Trajectories [39.22854981703244]
We introduce a mechanism to learn the distribution of trajectories by parameterizing the transition function $f$ explicitly as an element in a function space.
Our framework allows efficient synthesis of novel trajectories, while also directly providing a convenient tool for inference.
arXiv Detail & Related papers (2024-03-18T02:12:12Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach to optimizing the negative log-likelihood of unnormalized models.
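The logistic-loss formulation can be made concrete with a small self-contained sketch (the toy densities, sample sizes, and names below are illustrative, not from the paper): an unnormalized Gaussian model whose log-partition is treated as a free parameter is scored against Gaussian noise, and the NCE objective is smallest near the correctly normalized model.

```python
import math
import random

SQRT2PI = math.sqrt(2.0 * math.pi)

def sigmoid(z):
    if z < -60.0:                 # guard against overflow in exp
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

def log_noise(x):
    # Known noise density: N(0, 2^2), deliberately wider than the data.
    return -0.5 * (x / 2.0) ** 2 - math.log(2.0 * SQRT2PI)

def make_log_model(c):
    # Unnormalized Gaussian model exp(-x^2/2 + c); c plays the role of
    # -log Z and is treated as an ordinary parameter, as NCE allows.
    return lambda x: -0.5 * x * x + c

def nce_loss(data, noise, log_model):
    """Logistic loss that classifies real samples against artificial noise."""
    loss = 0.0
    for x in data:    # real samples: push the classifier towards 1
        loss -= math.log(sigmoid(log_model(x) - log_noise(x)) + 1e-300)
    for x in noise:   # noise samples: push the classifier towards 0
        loss -= math.log(1.0 - sigmoid(log_model(x) - log_noise(x)) + 1e-300)
    return loss / (len(data) + len(noise))

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(5000)]    # real samples ~ N(0, 1)
noise = [rng.gauss(0.0, 2.0) for _ in range(5000)]   # artificial noise

c_true = -math.log(SQRT2PI)   # the correct log-normalizer for N(0, 1)
# Mis-normalizing the model in either direction should increase the loss.
print(nce_loss(data, noise, make_log_model(c_true))
      < min(nce_loss(data, noise, make_log_model(c_true + 1.0)),
            nce_loss(data, noise, make_log_model(c_true - 1.0))))
```

Because the log-partition enters the logits as an ordinary shift, the logistic loss penalizes any mis-normalization, which is what lets NCE sidestep computing the partition function explicitly.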
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Adversarially Contrastive Estimation of Conditional Neural Processes [39.77675002999259]
Conditional Neural Processes (CNPs) formulate distributions over functions and generate function observations with exact conditional likelihoods.
We propose calibrating CNPs with an adversarial training scheme besides regular maximum likelihood estimates.
From generative function reconstruction to downstream regression and classification tasks, we demonstrate that our method applies to mainstream CNP members.
arXiv Detail & Related papers (2023-03-23T02:58:14Z) - PAC-Bayesian Learning of Aggregated Binary Activated Neural Networks with Probabilities over Representations [2.047424180164312]
We study the expectation of a probabilistic neural network as a predictor by itself, focusing on the aggregation of binary activated neural networks with normal distributions over real-valued weights.
We show that the exact computation remains tractable for deep but narrow neural networks, thanks to a dynamic programming approach.
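The tractability claim can be illustrated in miniature. The sketch below (a single hidden layer, sign activations, independent normal weights; all concrete numbers are illustrative, and brute-force enumeration of the 2^d hidden patterns stands in for the paper's dynamic program) computes the exact expected output of such an aggregated binary activated network in closed form.

```python
import itertools
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_output(x, mu_hidden, mu_out, sigma=1.0):
    """Exact E[sign(v . h)] for a one-hidden-layer sign-activated network
    with weights w_i ~ N(mu_i, sigma^2 I) and v ~ N(mu_out, sigma^2 I).

    Given the input x, unit i fires +1 with probability
    Phi(mu_i . x / (sigma * |x|)) and the units are independent, so the
    expectation is an explicit sum over the 2^d binary hidden patterns.
    """
    norm_x = math.sqrt(sum(t * t for t in x))
    d = len(mu_hidden)
    p_fire = [norm_cdf(sum(m * t for m, t in zip(mu, x)) / (sigma * norm_x))
              for mu in mu_hidden]
    total = 0.0
    for pattern in itertools.product((-1.0, 1.0), repeat=d):
        prob = 1.0
        for h, p in zip(pattern, p_fire):
            prob *= p if h > 0 else 1.0 - p
        # Given the pattern h, v . h ~ N(mu_out . h, sigma^2 * d), so
        # E[sign(v . h) | h] = 2 * Phi(mu_out . h / (sigma * sqrt(d))) - 1.
        mean_out = sum(v * h for v, h in zip(mu_out, pattern))
        total += prob * (2.0 * norm_cdf(mean_out / (sigma * math.sqrt(d))) - 1.0)
    return total

x = (0.8, -0.3)
mu_h = [(1.0, 0.5), (-0.4, 1.2), (0.7, -0.9)]
mu_o = (0.6, -0.2, 1.1)
# The predictor is exactly antisymmetric in the input: flipping x flips
# every firing probability, hence the sign of the expectation.
neg_x = tuple(-t for t in x)
print(abs(expected_output(x, mu_h, mu_o) + expected_output(neg_x, mu_h, mu_o)) < 1e-9)
```

For a deep network the same per-layer quantities can be propagated forward, which is presumably where the paper's dynamic-programming approach keeps the exact computation tractable for deep but narrow networks.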
arXiv Detail & Related papers (2021-10-28T14:11:07Z) - A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a stochastic Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate stochastic gradients of the same population objective.
We show that our method can reduce the number, and frequency, of required communication rounds compared to existing methods without hurting performance.
arXiv Detail & Related papers (2021-10-07T17:51:10Z) - Multi-Sample Online Learning for Spiking Neural Networks based on Generalized Expectation Maximization [42.125394498649015]
Spiking Neural Networks (SNNs) capture some of the efficiency of biological brains by processing information through binary dynamic neural activations.
This paper proposes to leverage multiple compartments that sample independent spiking signals while sharing synaptic weights.
The key idea is to use these signals to obtain more accurate statistical estimates of the log-likelihood training criterion, as well as of its gradient.
arXiv Detail & Related papers (2021-02-05T16:39:42Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the role of noise in its success is still unclear.
We show that heavy-tailed behaviour commonly arises in the parameters as a consequence of multiplicative noise.
A detailed analysis is conducted of key factors, including step size and data, with similar behaviour exhibited across state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z) - Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.