Neural Brownian Motion
- URL: http://arxiv.org/abs/2507.14499v1
- Date: Sat, 19 Jul 2025 06:09:52 GMT
- Title: Neural Brownian Motion
- Authors: Qian Qi
- Abstract summary: This paper introduces the Neural-Brownian Motion (NBM), a new class of processes for modeling dynamics under learned uncertainty. The volatility function $\nu_\theta$ is not postulated a priori but is implicitly defined by the constraint $g_\theta(t, M_t, \nu_\theta(t, M_t)) = 0$. The character of this measure, whether pessimistic or optimistic, is endogenously determined by the learned $\theta$, providing a rigorous foundation for models where the attitude towards uncertainty is a discoverable feature.
- Score: 2.1756081703276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces the Neural-Brownian Motion (NBM), a new class of stochastic processes for modeling dynamics under learned uncertainty. The NBM is defined axiomatically by replacing the classical martingale property with respect to linear expectation with one relative to a non-linear Neural Expectation Operator, $\varepsilon^\theta$, generated by a Backward Stochastic Differential Equation (BSDE) whose driver $f_\theta$ is parameterized by a neural network. Our main result is a representation theorem for a canonical NBM, which we define as a continuous $\varepsilon^\theta$-martingale with zero drift under the physical measure. We prove that, under a key structural assumption on the driver, such a canonical NBM exists and is the unique strong solution to a stochastic differential equation of the form ${\rm d} M_t = \nu_\theta(t, M_t) {\rm d} W_t$. Crucially, the volatility function $\nu_\theta$ is not postulated a priori but is implicitly defined by the algebraic constraint $g_\theta(t, M_t, \nu_\theta(t, M_t)) = 0$, where $g_\theta$ is a specialization of the BSDE driver. We develop the stochastic calculus for this process and prove a Girsanov-type theorem for the quadratic case, showing that an NBM acquires a drift under a new, learned measure. The character of this measure, whether pessimistic or optimistic, is endogenously determined by the learned parameters $\theta$, providing a rigorous foundation for models where the attitude towards uncertainty is a discoverable feature.
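To make the construction concrete, here is a minimal simulation sketch, not the paper's implementation: a hand-written quadratic specialization $g_\theta$ (a hypothetical stand-in for a trained network, echoing the paper's quadratic case) is root-solved for the volatility $\nu_\theta(t, m)$ at each step, and the SDE ${\rm d} M_t = \nu_\theta(t, M_t) {\rm d} W_t$ is integrated by Euler-Maruyama.

```python
import numpy as np
from scipy.optimize import brentq

# Toy stand-in for the learned driver specialization g_theta(t, m, nu).
# A real model would use a trained neural network; this quadratic form
# is hypothetical and chosen only for illustration.
def g_theta(t, m, nu, kappa=0.5, sigma0=0.2):
    return kappa * nu**2 + nu - sigma0 * (1.0 + 0.1 * np.tanh(m))

def implicit_vol(t, m, nu_max=10.0):
    """Solve the algebraic constraint g_theta(t, m, nu) = 0 for nu_theta(t, m)."""
    return brentq(lambda nu: g_theta(t, m, nu), 0.0, nu_max)

def simulate_nbm(m0=1.0, T=1.0, n=1000, seed=0):
    """Euler-Maruyama path of dM_t = nu_theta(t, M_t) dW_t."""
    rng = np.random.default_rng(seed)
    dt = T / n
    m = np.empty(n + 1)
    m[0] = m0
    for k in range(n):
        nu = implicit_vol(k * dt, m[k])
        m[k + 1] = m[k] + nu * np.sqrt(dt) * rng.standard_normal()
    return m

path = simulate_nbm()
print(path[-1])
```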
Related papers
- A Mean-Field Theory of $\Theta$-Expectations [2.1756081703276]
We develop a new class of calculus for such non-linear models. The $\Theta$-Expectation is shown to be consistent with the axiom of subadditivity.
arXiv Detail & Related papers (2025-07-30T11:08:56Z)
- A Theory of $\theta$-Expectations [2.1756081703276]
We develop a framework for a class of differential equations where the driver is defined by a pointwise maximization. The system's tractability is predicated on the global existence of a unique and globally Lipschitz maximizer map for the driver function.
arXiv Detail & Related papers (2025-07-27T16:56:01Z)
- Neural Expectation Operators [2.1756081703276]
This paper introduces Measure Learning, a paradigm for modeling ambiguity via non-linear expectations. We define Neural Expectation Operators as solutions to Backward Stochastic Differential Equations (BSDEs) whose drivers are parameterized by neural networks. We provide constructive methods for enforcing key axiomatic properties, such as convexity, by architectural design.
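One standard way to obtain convexity by architectural design is an input-convex network (nonnegative hidden-to-hidden weights with convex, nondecreasing activations); whether this matches the paper's exact construction is an assumption, so the sketch below is illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InputConvexDriver(nn.Module):
    """Driver f_theta(z) that is convex in z by construction.

    Sketch of one standard recipe: hidden-to-hidden weights are kept
    nonnegative and activations are convex and nondecreasing, so each
    layer preserves convexity in the input z. This is an assumed, generic
    construction, not necessarily the paper's architecture.
    """
    def __init__(self, dim_z, width=64):
        super().__init__()
        self.Wx0 = nn.Linear(dim_z, width)             # first layer: any sign
        self.Wz = nn.Linear(width, width, bias=False)  # constrained >= 0
        self.Wx1 = nn.Linear(dim_z, width)             # skip path: any sign
        self.out_z = nn.Linear(width, 1, bias=False)   # constrained >= 0
        self.out_x = nn.Linear(dim_z, 1)

    def forward(self, z):
        h = F.softplus(self.Wx0(z))  # softplus is convex and nondecreasing
        # Nonnegative weights via softplus reparameterization.
        h = F.softplus(F.linear(h, F.softplus(self.Wz.weight)) + self.Wx1(z))
        return F.linear(h, F.softplus(self.out_z.weight)) + self.out_x(z)

f = InputConvexDriver(dim_z=3)
print(f(torch.randn(5, 3)).shape)  # torch.Size([5, 1])
```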
arXiv Detail & Related papers (2025-07-13T06:19:28Z)
- Neural Networks for Tamed Milstein Approximation of SDEs with Additive Symmetric Jump Noise Driven by a Poisson Random Measure [2.845817138242963]
We propose a framework that integrates the Tamed-Milstein scheme with neural networks employed as non-parametric function approximators. The proposed methodology constitutes a flexible alternative for inference in systems with state-dependent noise and discontinuities driven by Lévy processes.
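A minimal sketch of the underlying scheme, with hand-written coefficients standing in for the paper's neural-network approximators: one tamed-Milstein step per iteration, with a compound-Poisson term playing the role of the additive symmetric jump noise.

```python
import numpy as np

# Hand-written coefficients stand in for the neural-network approximators
# used in the paper; the stepping rule itself is a standard tamed-Milstein step.
b = lambda x: -x**3          # drift (superlinear, hence the taming)
s = lambda x: 0.4 + 0.1 * x  # diffusion
ds = lambda x: 0.1           # diffusion derivative for the Milstein term

def tamed_milstein_jump(x0=1.0, T=1.0, n=2000, lam=2.0, jump_scale=0.1, seed=0):
    """Tamed Milstein for dX = b dt + s dW + dJ, with J a compound Poisson
    process with rate lam and symmetric (zero-mean Gaussian) jump sizes."""
    rng = np.random.default_rng(seed)
    h = T / n
    x = x0
    for _ in range(n):
        dw = np.sqrt(h) * rng.standard_normal()
        drift = b(x) * h / (1.0 + h * abs(b(x)))              # taming
        milstein = 0.5 * s(x) * ds(x) * (dw**2 - h)           # Milstein correction
        n_jumps = rng.poisson(lam * h)
        dj = jump_scale * rng.standard_normal(n_jumps).sum()  # symmetric jumps
        x = x + drift + s(x) * dw + milstein + dj
    return x

print(tamed_milstein_jump())
```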
arXiv Detail & Related papers (2025-07-06T15:13:31Z)
- $p$-Adic Polynomial Regression as Alternative to Neural Network for Approximating $p$-Adic Functions of Many Variables [55.2480439325792]
A regression model is constructed that allows approximating continuous functions with any degree of accuracy. The proposed model can be considered a simple alternative to $p$-adic models based on neural-network architectures.
arXiv Detail & Related papers (2025-03-30T15:42:08Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- A Unified Framework for Uniform Signal Recovery in Nonlinear Generative Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z)
- Effective Minkowski Dimension of Deep Nonparametric Regression: Function Approximation and Statistical Theories [70.90012822736988]
Existing theories on deep nonparametric regression have shown that when the input data lie on a low-dimensional manifold, deep neural networks can adapt to intrinsic data structures.
This paper introduces a relaxed assumption that input data are concentrated around a subset of $\mathbb{R}^d$ denoted by $\mathcal{S}$, and that the intrinsic dimension of $\mathcal{S}$ can be characterized by a new complexity notion, the effective Minkowski dimension.
arXiv Detail & Related papers (2023-06-26T17:13:31Z)
- Neural Implicit Manifold Learning for Topology-Aware Density Estimation [15.878635603835063]
Current generative models learn $\mathcal{M}$ by mapping an $m$-dimensional latent variable through a neural network.
We show that our model can learn manifold-supported distributions with complex topologies more accurately than pushforward models.
arXiv Detail & Related papers (2022-06-22T18:00:00Z)
- $O(N^2)$ Universal Antisymmetry in Fermionic Neural Networks [107.86545461433616]
We propose permutation-equivariant architectures, on which a Slater determinant is applied to induce antisymmetry.
FermiNet is proved to have universal approximation capability with a single determinant, namely, a single determinant suffices to represent any antisymmetric function.
We substitute the Slater determinant with a pairwise antisymmetry construction, which is easy to implement and can reduce the computational cost to $O(N^2)$.
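The determinant mechanism behind the first claim can be shown in a few lines: any permutation-equivariant feature map composed with a determinant is antisymmetric, since swapping two particles swaps two rows. The features below are a toy choice, not FermiNet's.

```python
import torch

def equivariant_features(x, W):
    # x: (N, d) particle coords; the mean-pooled context makes each row's
    # features depend symmetrically on the others (permutation-equivariant).
    ctx = x.mean(dim=0, keepdim=True).expand_as(x)
    return torch.tanh(torch.cat([x, ctx], dim=-1) @ W)  # (N, N)

def slater_wavefunction(x, W):
    """Antisymmetric under particle exchange: swapping two rows of x swaps
    two rows of the feature matrix, flipping the determinant's sign."""
    return torch.det(equivariant_features(x, W))

N, d = 4, 2
W = torch.randn(2 * d, N)
x = torch.randn(N, d)
x_swapped = x[[1, 0, 2, 3]]
print(slater_wavefunction(x, W), slater_wavefunction(x_swapped, W))  # opposite signs
```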
arXiv Detail & Related papers (2022-05-26T07:44:54Z)
- Single Trajectory Nonparametric Learning of Nonlinear Dynamics [8.438421942654292]
Given a single trajectory of a dynamical system, we analyze the performance of the nonparametric least squares estimator (LSE).
We leverage recently developed information-theoretic methods to establish the optimality of the LSE for non-parametric hypothesis classes.
We specialize our results to a number of scenarios of practical interest, such as Lipschitz dynamics, generalized linear models, and dynamics described by functions in certain classes of Reproducing Kernel Hilbert Spaces (RKHS).
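In practice, the LSE over an RKHS ball amounts to kernel ridge regression on the one-step pairs $(x_t, x_{t+1})$; a minimal sketch on a hypothetical toy system:

```python
import numpy as np

def rbf(a, b, gamma=5.0):
    """RBF Gram matrix between 1-D sample arrays a and b."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# One trajectory of a toy nonlinear system (hypothetical, for illustration).
rng = np.random.default_rng(0)
x = np.empty(300)
x[0] = 0.5
for t in range(299):
    x[t + 1] = 0.9 * np.sin(x[t]) + 0.05 * rng.standard_normal()

# Nonparametric least squares over an RKHS ball, computed as kernel ridge
# regression on the one-step pairs (x_t, x_{t+1}).
X, Y = x[:-1], x[1:]
lam = 1e-2
alpha = np.linalg.solve(rbf(X, X) + lam * np.eye(len(X)), Y)
f_hat = lambda q: rbf(np.atleast_1d(q), X) @ alpha

print(f_hat(0.3), 0.9 * np.sin(0.3))  # estimate vs. truth
```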
arXiv Detail & Related papers (2022-02-16T19:38:54Z)
- Implicit Bias of MSE Gradient Optimization in Underparameterized Neural Networks [0.0]
We study the dynamics of a neural network in function space when optimizing the mean squared error via gradient flow.
We show that the network learns eigenfunctions of an integral operator $T_{K^\infty}$ determined by the Neural Tangent Kernel (NTK).
We conclude that damped deviations offer a simple and unifying perspective on the dynamics when optimizing the squared error.
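The operator can be probed empirically: the Gram matrix of parameter gradients is the empirical NTK, and its eigenvectors approximate the eigenfunctions of $T_{K^\infty}$ on the sample. A minimal sketch for a toy two-layer network (illustrative, not the paper's setup):

```python
import numpy as np

# Empirical NTK of a toy two-layer network f(x) = v . tanh(W x) at init:
# K(x, x') = grad_theta f(x) . grad_theta f(x').
rng = np.random.default_rng(0)
n, width = 200, 512
x = np.linspace(-1, 1, n)[:, None]             # 1-D inputs
W = rng.standard_normal((width, 1))
v = rng.standard_normal(width) / np.sqrt(width)

act = np.tanh(x @ W.T)                         # (n, width)
dact = 1 - act**2

# Gradient w.r.t. v is act; gradient w.r.t. W is (v * dact) * x.
grad_v = act                                   # (n, width)
grad_W = (v * dact) * x                        # (n, width), broadcasts x
ntk = grad_v @ grad_v.T + grad_W @ grad_W.T    # (n, n) empirical NTK Gram

eigvals, eigvecs = np.linalg.eigh(ntk)
print(eigvals[-3:])  # leading eigenvalues; eigvecs[:, -k] ~ eigenfunctions
```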
arXiv Detail & Related papers (2022-01-12T23:28:41Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
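The paper uses neural LFI; the underlying likelihood-free idea can be shown with plain rejection ABC against a hypothetical stand-in forward model (the dMRI model itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta):
    """Hypothetical stand-in for the grey-matter forward model: maps
    parameters (e.g., soma density, radius) to a summary signal."""
    density, radius = theta
    return density * np.exp(-radius * np.linspace(0.1, 1.0, 8))

# "Observed" data from known parameters plus noise.
theta_true = np.array([0.7, 2.0])
observed = forward_model(theta_true) + 0.01 * rng.standard_normal(8)

# Rejection ABC: sample from the prior, simulate, keep parameters whose
# simulated signal lands close to the observation. Neural LFI replaces
# this loop with a learned posterior, but targets the same object.
prior = lambda n: np.column_stack([rng.uniform(0, 1, n), rng.uniform(0.5, 4, n)])
thetas = prior(100_000)
sims = np.array([forward_model(t) for t in thetas])
dist = np.linalg.norm(sims - observed, axis=1)
posterior = thetas[dist < np.quantile(dist, 0.001)]
print(posterior.mean(axis=0))  # should be near theta_true
```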
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
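A minimal kernel temporal-difference sketch in this spirit, imposing the Bellman identity in a kernel expansion on a toy Markov reward process (an LSTD-style stand-in, not the paper's exact estimator):

```python
import numpy as np

def rbf(a, b, gamma=2.0):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# One rollout from a toy Markov reward process (hypothetical): mean-reverting
# state, reward = -state^2, discount 0.9.
rng = np.random.default_rng(0)
gamma_disc, n = 0.9, 500
s = np.empty(n + 1)
s[0] = 1.0
for t in range(n):
    s[t + 1] = 0.8 * s[t] + 0.2 * rng.standard_normal()
states, nexts = s[:-1], s[1:]
rewards = -states**2

# Kernel TD estimate: model V as a kernel expansion over the samples and
# impose the Bellman identity V(s_i) = r_i + gamma * V(s'_i), regularized.
K = rbf(states, states)
K_next = rbf(nexts, states)
lam = 1e-3
alpha = np.linalg.solve(K - gamma_disc * K_next + lam * np.eye(n), rewards)
V_hat = lambda q: rbf(np.atleast_1d(q), states) @ alpha

print(V_hat(0.0))  # estimated value at state 0
```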
arXiv Detail & Related papers (2021-09-24T14:48:20Z)