Supervised Learning of Random Neural Architectures Structured by Latent Random Fields on Compact Boundaryless Multiply-Connected Manifolds
- URL: http://arxiv.org/abs/2512.10407v1
- Date: Thu, 11 Dec 2025 08:17:12 GMT
- Title: Supervised Learning of Random Neural Architectures Structured by Latent Random Fields on Compact Boundaryless Multiply-Connected Manifolds
- Authors: Christian Soize
- Abstract summary: This paper introduces a new probabilistic framework for supervised learning in neural systems. It is designed to model complex, uncertain systems whose random outputs are strongly non-Gaussian given deterministic inputs. The goal is to establish a novel conceptual and mathematical framework in which neural architectures are realizations of a geometry-aware, field-driven generative process.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a new probabilistic framework for supervised learning in neural systems. It is designed to model complex, uncertain systems whose random outputs are strongly non-Gaussian given deterministic inputs. The architecture itself is a random object stochastically generated by a latent anisotropic Gaussian random field defined on a compact, boundaryless, multiply-connected manifold. The goal is to establish a novel conceptual and mathematical framework in which neural architectures are realizations of a geometry-aware, field-driven generative process. Both the neural topology and synaptic weights emerge jointly from a latent random field. A reduced-order parameterization governs the spatial intensity of an inhomogeneous Poisson process on the manifold, from which neuron locations are sampled. Input and output neurons are identified via extremal evaluations of the latent field, while connectivity is established through geodesic proximity and local field affinity. Synaptic weights are conditionally sampled from the field realization, inducing stochastic output responses even for deterministic inputs. To ensure scalability, the architecture is sparsified via percentile-based diffusion masking, yielding geometry-aware sparse connectivity without ad hoc structural assumptions. Supervised learning is formulated as inference on the generative hyperparameters of the latent field, using a negative log-likelihood loss estimated through Monte Carlo sampling from single-observation-per-input datasets. The paper initiates a mathematical analysis of the model, establishing foundational properties such as well-posedness, measurability, and a preliminary analysis of the expressive variability of the induced stochastic mappings, which support its internal coherence and lay the groundwork for a broader theory of geometry-driven stochastic learning.
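The generative pipeline described in the abstract (latent field, inhomogeneous Poisson sampling of neuron locations, extremal input/output selection, geodesic connectivity, conditional weights, percentile sparsification) can be made concrete with a toy. The sketch below is a minimal illustration on a flat 2-torus, not the paper's construction: the Fourier parameterization of the field, the distance and affinity thresholds, and the plain percentile mask standing in for the paper's diffusion masking are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Latent anisotropic Gaussian field on a flat 2-torus [0, 2*pi)^2 ---
# A truncated random Fourier series with integer frequencies is periodic,
# hence well defined on the torus; anisotropy enters via per-axis decay.
K = 6
freqs = np.array([(k1, k2) for k1 in range(-K, K + 1) for k2 in range(-K, K + 1)])
decay = np.exp(-0.5 * (freqs[:, 0] / 2.0) ** 2 - 0.5 * (freqs[:, 1] / 4.0) ** 2)
coef = 0.2 * (rng.normal(size=len(freqs)) + 1j * rng.normal(size=len(freqs))) * decay

def field(x):
    """Evaluate one realization of the latent field at points x, shape (n, 2)."""
    phases = x @ freqs.T                          # (n, n_freq)
    return np.real(np.exp(1j * phases) @ coef)

# --- Neuron locations: inhomogeneous Poisson process sampled by thinning ---
lam0 = 3.0                                        # reduced-order intensity scale
grid = np.stack(np.meshgrid(np.linspace(0, 2 * np.pi, 64),
                            np.linspace(0, 2 * np.pi, 64)), -1).reshape(-1, 2)
lam_max = lam0 * np.exp(field(grid)).max()        # approximate bound on a grid
area = (2 * np.pi) ** 2
n_cand = rng.poisson(lam_max * area)
cand = rng.uniform(0, 2 * np.pi, size=(n_cand, 2))
keep = rng.uniform(size=n_cand) < lam0 * np.exp(field(cand)) / lam_max
pts = cand[keep]
f_vals = field(pts)

# --- Input/output neurons via extremal field evaluations (illustrative) ---
n_io = 3
inputs = np.argsort(f_vals)[-n_io:]               # field maxima -> inputs
outputs = np.argsort(f_vals)[:n_io]               # field minima -> outputs

# --- Connectivity: geodesic proximity plus local field affinity ---
d = np.abs(pts[:, None, :] - pts[None, :, :])
d = np.minimum(d, 2 * np.pi - d)                  # wrap-around torus distance
geo = np.linalg.norm(d, axis=-1)
adj = (geo < 1.0) & (np.abs(f_vals[:, None] - f_vals[None, :]) < 0.8)
np.fill_diagonal(adj, False)

# --- Synaptic weights conditionally sampled from the field realization ---
w = rng.normal(loc=0.5 * (f_vals[:, None] + f_vals[None, :]), scale=0.1) * adj

# --- Sparsification: a plain percentile threshold as a stand-in for the
# --- paper's percentile-based diffusion masking.
thresh = np.percentile(np.abs(w[adj]), 70)
w *= np.abs(w) >= thresh

print(f"{len(pts)} neurons, {int((w != 0).sum())} synapses after masking")
print("input neurons:", inputs, "output neurons:", outputs)
```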
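Similarly, the learning formulation (a negative log-likelihood over the generative hyperparameters, estimated by Monte Carlo from a single observation per input) can be sketched on a scalar toy model. The stochastic mapping `g`, the kernel bandwidth `h`, and the grid search over a single hyperparameter `theta` are illustrative stand-ins for the paper's latent-field hyperparameters and architecture sampler.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x, theta, xi):
    """Toy stochastic mapping: deterministic input x, latent noise xi,
    and a hyperparameter theta playing the role of the field's
    generative hyperparameters."""
    return np.sin(theta * x) + 0.3 * xi * x

# Single-observation-per-input dataset generated at the "true" theta.
theta_true = 1.7
xs = np.linspace(0.5, 3.0, 40)
ys = g(xs, theta_true, rng.normal(size=xs.shape))

def neg_log_lik(theta, n_mc=400, h=0.15):
    """Monte Carlo NLL: for each input, draw n_mc realizations of the
    stochastic output and score the single observation under a Gaussian
    kernel density built from those samples."""
    xi = rng.normal(size=(n_mc, 1))
    samples = g(xs[None, :], theta, xi)            # (n_mc, n_inputs)
    kern = np.exp(-0.5 * ((ys[None, :] - samples) / h) ** 2)
    dens = kern.mean(axis=0) / (h * np.sqrt(2 * np.pi))
    return -np.log(dens + 1e-12).sum()

# Gradient-free hyperparameter inference: coarse grid search over theta.
grid = np.linspace(0.5, 3.0, 51)
nlls = [neg_log_lik(t) for t in grid]
print("estimated theta:", grid[int(np.argmin(nlls))])
```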
Related papers
- Kantorovich-Type Stochastic Neural Network Operators for the Mean-Square Approximation of Certain Second-Order Stochastic Processes [0.0]
We construct a new class of Kantorovich-Type Stochastic Neural Network Operators (K-SNNOs) in which randomness is incorporated not at the coefficient level, but through stochastic neurons driven by the underlying process. This framework enables the operator to inherit the probabilistic structure of the underlying process, making it suitable for modeling and approximating stochastic signals (a toy operator sketch appears after this list).
arXiv Detail & Related papers (2026-01-07T06:25:40Z)
- Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting [0.0]
We propose a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. We demonstrate the phenomenon of "Deep Variography", where the network successfully recovers the true spatial parameters of the underlying process end-to-end via backpropagation (a minimal attention-bias sketch appears after this list).
arXiv Detail & Related papers (2025-12-19T15:32:24Z)
- Interpretable Neural Approximation of Stochastic Reaction Dynamics with Guaranteed Reliability [4.736119820998459]
We introduce DeepSKA, a neural framework that achieves interpretability, guaranteed reliability, and substantial computational gains. DeepSKA yields mathematically transparent representations that generalise across states, times, and output functions, and it integrates this structure with a small number of simulations to produce unbiased, provably convergent estimates with dramatically lower variance than classical Monte Carlo.
arXiv Detail & Related papers (2025-12-06T04:45:31Z)
- On the Probabilistic Learnability of Compact Neural Network Preimage Bounds [71.59148070050212]
In this work, we adopt a novel probabilistic perspective, aiming to deliver solutions with high-confidence guarantees and bounded error. We introduce the Random Forest Property Verifier (RF-ProVe), a method that exploits an ensemble of randomized decision trees to generate candidate input regions satisfying a desired output property. Our theoretical derivations offer formal statistical guarantees on region purity and global coverage, providing a practical, scalable solution for computing compact preimage bounds.
arXiv Detail & Related papers (2025-11-10T16:56:51Z)
- Wasserstein Regression as a Variational Approximation of Probabilistic Trajectories through the Bernstein Basis [41.99844472131922]
Existing approaches often ignore the geometry of the probability space or are computationally expensive. A new method is proposed that combines the parameterization of probability trajectories in a Bernstein basis with the minimization of the Wasserstein distance between distributions. The developed approach combines geometric accuracy, computational practicality, and interpretability (a toy Bernstein-Wasserstein fit appears after this list).
arXiv Detail & Related papers (2025-10-30T15:36:39Z)
- Probabilistic Neural Networks (PNNs) for Modeling Aleatoric Uncertainty in Scientific Machine Learning [2.348041867134616]
This paper investigates the use of probabilistic neural networks (PNNs) to model aleatoric uncertainty.
PNNs generate probability distributions for the target variable, allowing the determination of both predicted means and intervals in regression scenarios.
In a real-world scientific machine learning context, PNNs yield remarkably accurate output mean estimates, with R-squared scores approaching 0.97, and their predicted intervals exhibit a high correlation coefficient of nearly 0.80 (a minimal PNN sketch appears after this list).
arXiv Detail & Related papers (2024-02-21T17:15:47Z)
- Universal approximation property of Banach space-valued random feature models including random neural networks [3.3379026542599934]
We introduce a Banach space-valued extension of random feature learning.
Because the feature maps are randomly initialized, only the linear readout needs to be trained.
We derive approximation rates and an explicit algorithm to learn an element of the given Banach space (a minimal random-feature sketch appears after this list).
arXiv Detail & Related papers (2023-12-13T11:27:15Z)
- Inferring Inference [7.11780383076327]
We develop a framework for inferring canonical distributed computations from large-scale neural activity patterns.
We simulate recordings for a model brain that implicitly implements an approximate inference algorithm on a probabilistic graphical model.
Overall, this framework provides a new tool for discovering interpretable structure in neural recordings.
arXiv Detail & Related papers (2023-10-04T22:12:11Z)
- Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that, under these conditions, the generative functional model admits the same symmetries.
arXiv Detail & Related papers (2023-07-11T16:51:38Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which can then be applied in real time to multi-dimensional scattering data (a toy surrogate-inversion sketch appears after this list).
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but its role in its success is still unclear.
We show that multiplicative noise, which commonly arises from the variance of minibatch sampling, results in heavy-tailed stationary behaviour in the parameters.
A detailed analysis describes how key factors, including step size, batch size, and data, all produce similar heavy-tailed behaviour in state-of-the-art neural network models (a toy heavy-tail simulation appears after this list).
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider bipartite graphs and formalize their representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
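For the Kantorovich-type stochastic operator entry above, a minimal sketch: a classical Kantorovich neural-network operator with a hat-function kernel, applied pathwise to realizations of a random process and scored in mean square. The hat kernel, the toy process, and the error metric are assumptions for illustration; the paper's K-SNNOs place the randomness inside the neurons themselves.

```python
import numpy as np

rng = np.random.default_rng(2)

def phi(t):
    """Hat-function kernel: a simple admissible neuron profile that forms a
    partition of unity when centered at the integers."""
    return np.maximum(0.0, 1.0 - np.abs(t))

def kantorovich(f_vals, grid, n, x):
    """K_n(f)(x) = sum_k ( n * int_{k/n}^{(k+1)/n} f ) * phi(n x - k)."""
    out = np.zeros_like(x)
    for k in range(-1, n + 1):
        lo, hi = k / n, (k + 1) / n
        mask = (grid >= lo) & (grid <= hi)
        if mask.sum() < 2:
            continue
        # On a uniform grid the interval mean approximates n * integral.
        mean_val = f_vals[mask].mean()
        out += mean_val * phi(n * x - k)
    return out

# Toy second-order process: random-amplitude sinusoid plus rough noise.
grid = np.linspace(0.0, 1.0, 2001)
x = np.linspace(0.05, 0.95, 200)
n, errs = 32, []
for _ in range(100):                    # Monte Carlo over sample paths
    a = rng.normal()
    path = a * np.sin(2 * np.pi * grid) + 0.05 * rng.normal(size=grid.shape)
    approx = kantorovich(path, grid, n, x)
    exact = a * np.sin(2 * np.pi * x)
    errs.append(np.mean((approx - exact) ** 2))
print("mean-square error:", np.mean(errs))
```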
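For the spatially-informed transformer entry, a forward-pass sketch of the core idea: add a covariance term, built from a geostatistical kernel over station coordinates, to the attention logits. The exponential kernel, the single head, and the fixed gate `lam` and range `rho` are illustrative assumptions; in the paper these would be trained end to end, which is presumably what recovering "the true spatial parameters via backpropagation" refers to.

```python
import numpy as np

rng = np.random.default_rng(3)

n, d = 8, 16                              # stations, model width
coords = rng.uniform(0, 10, size=(n, 2))  # station locations
X = rng.normal(size=(n, d))               # token features

Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
Q, K_, V = X @ Wq, X @ Wk, X @ Wv

# Geostatistical covariance bias: exponential kernel over pairwise
# distances, with range `rho` and gate `lam` (learnable in the paper).
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
rho, lam = 3.0, 1.0
C = np.exp(-dist / rho)

logits = Q @ K_.T / np.sqrt(d) + lam * C   # content term + spatial prior
attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)   # row-wise softmax
out = attn @ V
print(out.shape)                           # (8, 16)
```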
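For the Bernstein-basis Wasserstein regression entry, a toy special case: when the trajectory consists of one-dimensional Gaussians, the squared 2-Wasserstein distance is (Δμ)² + (Δσ)², so fitting Bernstein control points by minimizing the summed W2² decouples into two ordinary least-squares problems. The Gaussian restriction and the degree-3 basis are simplifying assumptions.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(4)

def bernstein_basis(t, deg):
    """Design matrix B[i, k] = C(deg, k) t_i^k (1 - t_i)^(deg - k)."""
    return np.stack([comb(deg, k) * t**k * (1 - t)**(deg - k)
                     for k in range(deg + 1)], axis=1)

# Observed trajectory of 1-D Gaussians N(mu_t, sigma_t^2), noisy.
t = np.linspace(0, 1, 30)
mu_obs = np.sin(2 * t) + 0.05 * rng.normal(size=t.shape)
sig_obs = 0.5 + 0.3 * t + 0.02 * rng.normal(size=t.shape)

# W2^2 between 1-D Gaussians = (mu1-mu2)^2 + (sig1-sig2)^2, so minimizing
# sum_t W2^2 reduces to least squares for the mu and sigma curves.
deg = 3
B = bernstein_basis(t, deg)
c_mu, *_ = np.linalg.lstsq(B, mu_obs, rcond=None)
c_sig, *_ = np.linalg.lstsq(B, sig_obs, rcond=None)

fit_mu, fit_sig = B @ c_mu, B @ c_sig
w2_sq = np.mean((fit_mu - mu_obs) ** 2 + (fit_sig - sig_obs) ** 2)
print("mean squared W2 along trajectory:", w2_sq)
```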
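For the PNN entry, a minimal heteroscedastic sketch in PyTorch: the network outputs a mean and a log-variance and is trained with the Gaussian negative log-likelihood, one standard way to model aleatoric uncertainty. The architecture and synthetic data are illustrative, not the paper's.

```python
import torch

torch.manual_seed(0)

# Toy data with input-dependent noise (aleatoric uncertainty).
x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(x) + (0.1 + 0.2 * x.abs()) * torch.randn_like(x)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 2),            # outputs: [mean, log-variance]
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    mu, log_var = net(x).chunk(2, dim=1)
    # Gaussian negative log-likelihood (up to an additive constant).
    loss = 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    mu, log_var = net(x).chunk(2, dim=1)
    std = (0.5 * log_var).exp()
# Predicted std should grow with |x|, tracking the true noise level.
print("avg predicted std, |x|>2.5 vs |x|<0.5:",
      std[x.abs() > 2.5].mean().item(), std[x.abs() < 0.5].mean().item())
```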
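For the Banach space-valued random feature entry, a scalar-valued special case: random ReLU features are sampled once and frozen, and only the linear readout is fit, here by ridge-regularized least squares. The feature width, ridge penalty, and target function are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Random feature map: weights/biases are sampled once and never trained.
n_feat, d = 512, 1
W = rng.normal(size=(n_feat, d))
b = rng.uniform(-np.pi, np.pi, size=n_feat)

def features(x):
    return np.maximum(0.0, x @ W.T + b)     # random ReLU features

# Training data for a smooth target.
x = rng.uniform(-3, 3, size=(200, d))
y = np.sin(2 * x[:, 0]) + 0.05 * rng.normal(size=200)

# Only the linear readout is learned (closed-form ridge regression).
Phi = features(x)
lam = 1e-3
readout = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_feat), Phi.T @ y)

x_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(features(x_test) @ readout)           # approximates sin(2x)
```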
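For the implicit-neural-representation entry, a toy version of the "train a differentiable surrogate once, then recover parameters by gradient descent" workflow. The damped-oscillator simulator and the small MLP below are stand-ins for the model Hamiltonian and the scattering data.

```python
import torch

torch.manual_seed(1)
t = torch.linspace(0, 5, 100)

def simulate(theta):
    """Toy 'simulator': damped oscillator, theta = (frequency, damping)."""
    w, g = theta[..., 0:1], theta[..., 1:2]
    return torch.exp(-g * t) * torch.cos(w * t)

# 1) Train a differentiable surrogate of the simulator, once.
thetas = torch.rand(2048, 2) * torch.tensor([4.0, 1.0]) + torch.tensor([1.0, 0.1])
signals = simulate(thetas)
net = torch.nn.Sequential(torch.nn.Linear(2, 128), torch.nn.Tanh(),
                          torch.nn.Linear(128, 100))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(3000):
    loss = torch.nn.functional.mse_loss(net(thetas), signals)
    opt.zero_grad(); loss.backward(); opt.step()

# 2) Recover unknown parameters from 'experimental' data via autodiff
#    through the frozen surrogate -- cheap enough for real-time use.
theta_true = torch.tensor([3.0, 0.4])
data = simulate(theta_true) + 0.01 * torch.randn(100)
theta = torch.tensor([2.0, 0.6], requires_grad=True)
opt2 = torch.optim.Adam([theta], lr=1e-2)
for _ in range(500):
    loss = torch.nn.functional.mse_loss(net(theta), data)
    opt2.zero_grad(); loss.backward(); opt2.step()
print("recovered:", theta.detach())
```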
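For the multiplicative-noise entry, a toy Kesten-type recurrence: when parameter updates are contracting on average but occasionally expansive, the stationary law develops heavy (power-law) tails. The linear recurrence and the kurtosis diagnostic are illustrative; they are not the paper's SGD analysis.

```python
import numpy as np

rng = np.random.default_rng(6)
n, T = 20000, 400

def simulate(mult_std):
    """x_{t+1} = a_t * x_t + b_t with multiplicative noise in a_t."""
    x = np.zeros(n)
    for _ in range(T):
        a = 0.9 + mult_std * rng.normal(size=n)   # E[a] < 1: contracting
        b = 0.1 * rng.normal(size=n)              # additive noise
        x = a * x + b
    return x

# Purely additive noise gives a Gaussian stationary law (excess kurtosis
# near 0); multiplicative noise produces heavy tails (large kurtosis).
for s in (0.0, 0.3):
    x = simulate(s)
    kurt = np.mean((x - x.mean()) ** 4) / x.var() ** 2
    print(f"mult. noise std={s}: excess kurtosis={kurt - 3:.1f}")
```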