Constructive Approximation of Random Process via Stochastic Interpolation Neural Network Operators
- URL: http://arxiv.org/abs/2512.24106v1
- Date: Tue, 30 Dec 2025 09:30:18 GMT
- Title: Constructive Approximation of Random Process via Stochastic Interpolation Neural Network Operators
- Authors: Sachin Saini, Uaday Singh
- Abstract summary: We construct a class of stochastic interpolation neural network operators (SINNOs) with random coefficients activated by sigmoidal functions. We establish their boundedness, interpolation accuracy, and approximation capabilities in the mean square sense, in probability, as well as path-wise within the space of second-order stochastic (random) processes. The results highlight the effectiveness of SINNOs for approximating stochastic processes, with potential applications in COVID-19 case prediction.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we construct a class of stochastic interpolation neural network operators (SINNOs) with random coefficients activated by sigmoidal functions. We establish their boundedness, interpolation accuracy, and approximation capabilities in the mean square sense, in probability, as well as path-wise within the space of second-order stochastic (random) processes \( L^2(\Omega, \mathcal{F}, \mathbb{P}) \). Additionally, we provide quantitative error estimates using the modulus of continuity of the processes. These results highlight the effectiveness of SINNOs for approximating stochastic processes with potential applications in COVID-19 case prediction.
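The abstract does not state the operator's explicit form. Interpolation neural network operators of this type are often built from bell-shaped kernels obtained by differencing shifted sigmoidals; the sketch below is a hypothetical minimal illustration of that idea (the kernel `bump`, the node count `n`, and the normalization are assumptions, not the paper's construction). It approximates one sample path of a process from its values at the equispaced nodes k/n.

```python
import numpy as np

def sigma(x):
    """Logistic sigmoidal activation."""
    return 1.0 / (1.0 + np.exp(-x))

def bump(x):
    """Bell-shaped kernel from shifted sigmoidals (a common construction)."""
    return 0.5 * (sigma(x + 1.0) - sigma(x - 1.0))

def interp_operator(samples, t, n):
    """Approximate one sample path at time t in [0, 1].

    samples[k] holds one realization of the process at the node k/n;
    the output is a normalized, kernel-weighted combination of node values.
    """
    k = np.arange(n + 1)
    w = bump(n * t - k)          # weights localized around k ~ n*t
    return np.sum(samples * w) / np.sum(w)
```

Applied independently to each realization, this yields a path-wise approximation; averaging the squared error over many realizations would correspond to the mean square sense discussed in the paper.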
Related papers
- Kantorovich-Type Stochastic Neural Network Operators for the Mean-Square Approximation of Certain Second-Order Stochastic Processes [0.0]
We construct a new class of Kantorovich-type stochastic neural network operators (K-SNNOs) in which randomness is incorporated not at the coefficient level, but through stochastic neurons. This framework enables the operator to inherit the probabilistic structure of the underlying process, making it suitable for modeling and approximating signals.
arXiv Detail & Related papers (2026-01-07T06:25:40Z) - Approximate Bayesian Inference via Bitstring Representations [15.754246455963907]
We propose performing probabilistic inference in the quantized, discrete parameter space created by bitstring representations. This work advances scalable, interpretable machine learning by utilizing discrete approximations for probabilistic computations.
arXiv Detail & Related papers (2025-08-19T08:08:17Z) - Stochastic Process Learning via Operator Flow Matching [10.549183548149196]
We develop operator flow matching (OFM) for learning stochastic process priors on function spaces. OFM provides the probability density of the values of any collection of points. Our method outperforms state-of-the-art models in stochastic process learning, functional regression, and prior learning.
arXiv Detail & Related papers (2025-01-07T20:12:56Z) - Finite Neural Networks as Mixtures of Gaussian Processes: From Provable Error Bounds to Prior Selection [12.605866515202948]
We present an algorithmic framework to approximate a neural network of finite width and depth. We iteratively approximate the output distribution of each layer of the neural network as a mixture of Gaussian processes. Our results can represent an important step towards understanding neural network predictions.
arXiv Detail & Related papers (2024-07-26T12:45:53Z) - Entropic Matching for Expectation Propagation of Markov Jump Processes [31.376561087029454]
We propose a novel, tractable latent inference scheme for Markov jump processes. Our approach is based on an entropic matching framework that can be embedded into the well-known expectation propagation algorithm.
arXiv Detail & Related papers (2023-09-27T12:07:21Z) - Noise-Free Sampling Algorithms via Regularized Wasserstein Proximals [3.4240632942024685]
We consider the problem of sampling from a distribution governed by a potential function.
This work proposes an explicit score-based MCMC method that is deterministic, yielding a noise-free evolution for the particles.
arXiv Detail & Related papers (2023-08-28T23:51:33Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - Interacting Particle Langevin Algorithm for Maximum Marginal Likelihood Estimation [2.365116842280503]
We develop a class of interacting particle systems for implementing a maximum marginal likelihood estimation procedure. In particular, we prove that the parameter marginal of the stationary measure of this diffusion has the form of a Gibbs measure. Using a particular rescaling, we then prove geometric ergodicity of this system and bound the discretisation error in a manner that is uniform in time and does not increase with the number of particles.
arXiv Detail & Related papers (2023-03-23T16:50:08Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers. We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles. Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
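The probabilistic representation mentioned in this entry is, in its simplest form, the Feynman-Kac formula: for the heat equation u_t = 0.5 u_xx with initial data g, the solution is u(x, t) = E[g(x + W_t)]. A minimal Monte Carlo sketch of this connection, with function names and sample counts of my own choosing rather than the paper's setup:

```python
import numpy as np

def heat_mc(g, x, t, n_paths=200_000, seed=0):
    """Monte Carlo solution of u_t = 0.5 * u_xx, u(x, 0) = g(x),
    via the probabilistic representation u(x, t) = E[g(x + W_t)]."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)       # W_t has the law sqrt(t) * N(0, 1)
    return float(np.mean(g(x + np.sqrt(t) * z)))
```

For g(x) = x^2 the exact solution is u(x, t) = x^2 + t, which the estimator reproduces to Monte Carlo accuracy; a neural solver can then be fit to such particle-ensemble estimates.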
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate gradients of the same population objective.
We show that our method can reduce the number and frequency of required communication rounds compared to existing methods without hurting performance.
arXiv Detail & Related papers (2021-10-07T17:51:10Z) - The Connection between Discrete- and Continuous-Time Descriptions of Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of 'invariance under coarse-graining'.
This result explains why combining differencing schemes for derivative reconstruction with local-in-time inference approaches does not work for time series analysis of second- or higher-order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z) - Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z) - Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
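The last entry's idea, computing features only at sparsely sampled locations and densely reconstructing the feature map, can be illustrated in one dimension (the feature function and sampling mask here are illustrative placeholders, not the paper's network):

```python
import numpy as np

def sparse_then_interp(feature_fn, xs, mask):
    """Evaluate an expensive feature function only where mask is True,
    then linearly interpolate to reconstruct the dense feature map."""
    known = np.flatnonzero(mask)
    return np.interp(xs, xs[known], feature_fn(xs[known]))
```

For smooth features, evaluating at roughly 10% of the locations and interpolating already reconstructs the dense map closely, which is the computation-saving trade-off the paper exploits (there with learned, content-aware sampling rather than a fixed stride).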
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.