HGAN-SDEs: Learning Neural Stochastic Differential Equations with Hermite-Guided Adversarial Training
- URL: http://arxiv.org/abs/2512.20272v1
- Date: Tue, 23 Dec 2025 11:25:22 GMT
- Title: HGAN-SDEs: Learning Neural Stochastic Differential Equations with Hermite-Guided Adversarial Training
- Authors: Yuanjian Xu, Yuan Shuai, Jianing Hao, Guang Zhang
- Abstract summary: We introduce HGAN-SDEs, a novel GAN-based framework that leverages Neural Hermite functions to construct a structured and efficient discriminator. HGAN-SDEs achieve superior sample quality and learning efficiency compared to existing generative models for SDEs.
- Score: 3.4515388499147654
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Neural Stochastic Differential Equations (Neural SDEs) provide a principled framework for modeling continuous-time stochastic processes and have been widely adopted in fields ranging from physics to finance. Recent advances suggest that Generative Adversarial Networks (GANs) offer a promising solution to learning the complex path distributions induced by SDEs. However, a critical bottleneck lies in designing a discriminator that faithfully captures temporal dependencies while remaining computationally efficient. Prior works have explored Neural Controlled Differential Equations (CDEs) as discriminators due to their ability to model continuous-time dynamics, but such architectures suffer from high computational costs and exacerbate the instability of adversarial training. To address these limitations, we introduce HGAN-SDEs, a novel GAN-based framework that leverages Neural Hermite functions to construct a structured and efficient discriminator. Hermite functions provide an expressive yet lightweight basis for approximating path-level dynamics, enabling both reduced runtime complexity and improved training stability. We establish the universal approximation property of our framework for a broad class of SDE-driven distributions and theoretically characterize its convergence behavior. Extensive empirical evaluations on synthetic and real-world systems demonstrate that HGAN-SDEs achieve superior sample quality and learning efficiency compared to existing generative models for SDEs.
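No code accompanies the abstract; the sketch below is only a rough illustration of the stated idea, with every function name and hyperparameter being our own assumption rather than a detail from the paper. It projects a discretely sampled path onto the first few orthonormal Hermite functions, producing a fixed-size coefficient vector that could then feed a small critic network.
```python
# Hypothetical sketch of a Hermite-function discriminator head (not the authors' code).
# Idea: summarize a sampled path by its coefficients in an orthonormal Hermite basis,
# then score that fixed-size summary with a small critic network.
import math
import numpy as np
from scipy.special import eval_hermite  # physicists' Hermite polynomial H_n

def hermite_function(n: int, t: np.ndarray) -> np.ndarray:
    """Orthonormal Hermite function psi_n(t) = H_n(t) exp(-t^2/2) / sqrt(2^n n! sqrt(pi))."""
    norm = math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return eval_hermite(n, t) * np.exp(-t**2 / 2.0) / norm

def path_to_coeffs(path: np.ndarray, t: np.ndarray, n_basis: int = 8) -> np.ndarray:
    """Project a path sampled at times t onto the first n_basis Hermite functions
    (simple Riemann-sum quadrature for the L2 inner product)."""
    dt = t[1] - t[0]
    return np.array([np.sum(path * hermite_function(n, t)) * dt for n in range(n_basis)])

# Toy usage: coefficients of a Brownian-like path become the critic input.
t = np.linspace(-3.0, 3.0, 256)
path = np.cumsum(np.random.randn(256)) * np.sqrt(6.0 / 256)
coeffs = path_to_coeffs(path, t)                      # shape (8,) -- fixed-size path summary
score = float(np.tanh(coeffs @ np.random.randn(8)))   # stand-in for an MLP critic
```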
Related papers
- Stochastic Deep Learning: A Probabilistic Framework for Modeling Uncertainty in Structured Temporal Data [0.0]
I propose a novel framework that integrates stochastic differential equations (SDEs) with deep generative models to improve uncertainty quantification in machine learning applications involving structured and temporal data. This approach, termed Stochastic Latent Differential Inference (SLDI), embeds an Itô SDE in the latent space of a variational autoencoder. The drift and diffusion terms of the SDE are parameterized by neural networks, enabling data-driven inference and generalizing classical time-series models to handle irregular sampling and complex dynamic structure.
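As a hedged illustration of the latent-SDE ingredient described above (all names, shapes, and network widths here are assumptions, not details from the paper), the sketch below parameterizes drift and diffusion with small networks and rolls the latent state out with Euler-Maruyama.
```python
# Minimal sketch of a neural-network-parameterized latent SDE (illustrative only):
# dz = f(z) dt + g(z) dW, with f and g given by small one-hidden-layer networks.
import numpy as np

rng = np.random.default_rng(0)
D, H = 4, 32                                    # latent dim, hidden width (arbitrary)
W1, b1 = rng.normal(0, 0.1, (H, D)), np.zeros(H)
Wf, Wg = rng.normal(0, 0.1, (D, H)), rng.normal(0, 0.1, (D, H))

def drift(z):      # drift term f(z)
    return Wf @ np.tanh(W1 @ z + b1)

def diffusion(z):  # diffusion term g(z); softplus keeps it positive
    return np.log1p(np.exp(Wg @ np.tanh(W1 @ z + b1)))

def euler_maruyama(z0, dt=0.01, steps=100):
    z, traj = z0, [z0]
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=z.shape)
        z = z + drift(z) * dt + diffusion(z) * dW
        traj.append(z)
    return np.stack(traj)                       # (steps+1, D) latent path

latent_path = euler_maruyama(rng.normal(size=D))
```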
arXiv Detail & Related papers (2026-01-08T18:53:59Z) - Expanding the Chaos: Neural Operator for Stochastic (Partial) Differential Equations [65.80144621950981]
We build on Wiener chaos expansions (WCE) to design neural operator (NO) architectures for SPDEs and SDEs. We show that WCE-based neural operators provide a practical and scalable way to learn SDE/SPDE solution operators.
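For context (standard background, not a formula quoted from this paper), a truncated Wiener chaos expansion writes a square-integrable functional of the driving noise as a series of Hermite polynomials in i.i.d. Gaussian variables:
```latex
% Truncated Wiener chaos expansion (standard form; notation is illustrative).
% \xi_1, \xi_2, \dots are i.i.d. standard Gaussians obtained by projecting the
% driving noise onto an orthonormal basis; H_k is the k-th Hermite polynomial;
% \alpha ranges over multi-indices with |\alpha| \le p.
u(t) \;\approx\; \sum_{|\alpha| \le p} u_\alpha(t)\, \Xi_\alpha,
\qquad
\Xi_\alpha \;=\; \prod_{i \ge 1} H_{\alpha_i}(\xi_i).
```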
arXiv Detail & Related papers (2026-01-03T00:59:25Z) - A joint optimization approach to identifying sparse dynamics using least squares kernel collocation [70.13783231186183]
We develop an all-at-once modeling framework for learning systems of ordinary differential equations (ODEs) from scarce, partial, and noisy observations of the states. The proposed methodology combines sparse recovery of the ODE over a function library with techniques from reproducing kernel Hilbert space (RKHS) theory for estimating the state and discretizing the ODE.
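The sketch below illustrates only the sparse-recovery half of such a pipeline, using SINDy-style sequentially thresholded least squares; the paper's joint RKHS state estimation is omitted, and all names and thresholds are our own illustrative choices.
```python
# Sparse recovery of dynamics over a candidate function library (SINDy-style
# sequentially thresholded least squares; illustrative, not the paper's method).
import numpy as np

def library(x):
    """Candidate library Theta(x) = [1, x, x^2, x^3] for a scalar state."""
    return np.column_stack([np.ones_like(x), x, x**2, x**3])

def stlsq(theta, dxdt, threshold=0.1, iters=10):
    """Least squares followed by repeated small-coefficient thresholding."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)[0]
    return xi

# Toy usage: recover dx/dt = -2x + 0.5x^3 from noisy derivative samples.
x = np.linspace(-2, 2, 200)
dxdt = -2.0 * x + 0.5 * x**3 + 0.01 * np.random.randn(200)
print(stlsq(library(x), dxdt))  # expect approximately [0, -2, 0, 0.5]
```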
arXiv Detail & Related papers (2025-11-23T18:04:15Z) - SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic Differential Equations [2.1779479916071067]
We propose SDE Matching, a new simulation-free method for training Latent SDEs. Our results demonstrate that SDE Matching achieves performance comparable to adjoint sensitivity methods.
arXiv Detail & Related papers (2025-02-04T16:47:49Z) - Neural SDEs as a Unified Approach to Continuous-Domain Sequence Modeling [3.8980564330208662]
We propose a novel and intuitive approach to continuous sequence modeling. Our method interprets time-series data as discrete samples from an underlying continuous dynamical system. We derive a principled maximum-likelihood objective and a simulation-free scheme for efficient training of our Neural SDE model.
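As standard background (not a formula quoted from the paper), discretizing an SDE with an Euler-Maruyama step makes each transition Gaussian, which yields a tractable maximum-likelihood objective over the observed samples:
```latex
% Euler-Maruyama discretization of dx = f_\theta(x,t)\,dt + g_\theta(x,t)\,dW
% turns each transition into a Gaussian, so the path log-likelihood factorizes:
\log p_\theta(x_{1:K}) \;\approx\; \sum_{k=1}^{K-1}
\log \mathcal{N}\!\Big( x_{k+1} \,\Big|\; x_k + f_\theta(x_k, t_k)\,\Delta t_k,\;
g_\theta(x_k, t_k)^2\, \Delta t_k \Big).
```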
arXiv Detail & Related papers (2025-01-31T03:47:22Z) - Advancing Generalization in PINNs through Latent-Space Representations [71.86401914779019]
Physics-informed neural networks (PINNs) have made significant strides in modeling dynamical systems governed by partial differential equations (PDEs). We propose PIDO, a novel physics-informed neural PDE solver designed to generalize effectively across diverse PDE configurations. We validate PIDO on a range of benchmarks, including 1D combined equations and 2D Navier-Stokes equations.
arXiv Detail & Related papers (2024-11-28T13:16:20Z) - Non-adversarial training of Neural SDEs with signature kernel scores [4.721845865189578]
State-of-the-art performance for irregular time series generation has previously been obtained by training Neural SDEs adversarially as GANs.
In this paper, we introduce a novel class of scoring rules on pathspace based on signature kernels.
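For background (a standard kernel scoring rule, not a formula quoted from the paper), given a kernel k on path space, such as a signature kernel, the kernel score of a model P against an observed path y is
```latex
% Kernel scoring rule induced by a (signature) kernel k on path space.
% Minimizing the expected score matches the model P to the data distribution;
% the score is strictly proper when k is characteristic.
S_k(P, y) \;=\; \mathbb{E}_{X, X' \sim P}\big[ k(X, X') \big]
\;-\; 2\, \mathbb{E}_{X \sim P}\big[ k(X, y) \big].
```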
arXiv Detail & Related papers (2023-05-25T17:31:18Z) - Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
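For reference (the standard definition of implicit SGD, not text quoted from the paper), the implicit update evaluates the gradient at the new iterate, yielding a fixed-point equation to be solved at each step:
```latex
% Explicit SGD:  \theta_{k+1} = \theta_k - \eta\, \nabla L(\theta_k).
% Implicit SGD:  the gradient is taken at the *new* iterate, giving an implicit
% equation solved (approximately) each step; this damps update instability
% for stiff or ill-conditioned losses.
\theta_{k+1} \;=\; \theta_k \;-\; \eta\, \nabla L(\theta_{k+1}).
```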
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Learning differentiable solvers for systems with hard constraints [48.54197776363251]
We introduce a practical method to enforce partial differential equation (PDE) constraints for functions defined by neural networks (NNs).
We develop a differentiable PDE-constrained layer that can be incorporated into any NN architecture.
Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error when compared to training on an unconstrained objective.
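The paper's contribution is a differentiable constrained layer; the sketch below shows only a much simpler relative of the idea (the classic hard-constraint ansatz, named plainly as such and not the paper's method), in which boundary conditions hold exactly for any network weights.
```python
# Classic hard-constraint trick (a simpler relative of the paper's differentiable
# constrained layer, not its method): multiply the network output by a factor that
# vanishes on the boundary, so u(0)=a and u(1)=b hold exactly for any weights.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def u_hat(x: torch.Tensor, a: float = 0.0, b: float = 1.0) -> torch.Tensor:
    """Ansatz u(x) = (1-x)a + x b + x(1-x) net(x); boundary values are exact."""
    return (1 - x) * a + x * b + x * (1 - x) * net(x)

x = torch.rand(64, 1, requires_grad=True)  # interior collocation points for training
print(u_hat(torch.zeros(1, 1)).item(), u_hat(torch.ones(1, 1)).item())  # -> 0.0, 1.0
```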
arXiv Detail & Related papers (2022-07-18T15:11:43Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
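To make "closed-form" concrete (an illustrative textbook example, not the CfC expression itself): a leaky-integrator ODE admits an explicit solution, so its state can be evaluated at any time t without a numerical solver in the loop:
```latex
% Textbook example of what a closed-form solution buys (not the CfC formula):
% the leaky integrator dx/dt = -x/\tau + I with constant input I solves to
x(t) \;=\; x(0)\, e^{-t/\tau} \;+\; \tau I \left( 1 - e^{-t/\tau} \right),
% so x(t) is available at arbitrary t in O(1), with no ODE solver required.
```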
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Accurate and Reliable Forecasting using Stochastic Differential Equations [48.21369419647511]
It is critical yet challenging for deep learning models to properly characterize uncertainty that is pervasive in real-world environments.
This paper develops SDE-HNN to characterize the interaction between the predictive mean and variance of heteroscedastic neural networks (HNNs) for accurate and reliable regression.
Experiments on challenging datasets show that our method significantly outperforms the state-of-the-art baselines in terms of both predictive performance and uncertainty quantification.
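For background (the standard heteroscedastic objective, not the paper's SDE-based coupling), an HNN with mean and variance heads is typically trained on the Gaussian negative log-likelihood:
```latex
% Standard heteroscedastic Gaussian NLL for a network with mean head \mu_\theta
% and variance head \sigma^2_\theta. The paper's contribution is to couple the
% two heads through an SDE; that coupling is not shown here.
\mathcal{L}(\theta) \;=\; \frac{1}{N} \sum_{i=1}^{N}
\left[ \frac{\big(y_i - \mu_\theta(x_i)\big)^2}{2\,\sigma^2_\theta(x_i)}
\;+\; \frac{1}{2} \log \sigma^2_\theta(x_i) \right].
```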
arXiv Detail & Related papers (2021-03-28T04:18:11Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvements obtained from data augmentation completely eliminate the empirical gains from stochastic regularization, making the performance gap between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.