A diffusion-based generative model for financial time series via geometric Brownian motion
- URL: http://arxiv.org/abs/2507.19003v1
- Date: Fri, 25 Jul 2025 07:02:09 GMT
- Title: A diffusion-based generative model for financial time series via geometric Brownian motion
- Authors: Gihun Kim, Sun-Yong Choi, Yeoneung Kim
- Abstract summary: We propose a novel diffusion-based generative framework for financial time series. Our method injects noise proportionally to asset prices at each time step, reflecting the heteroskedasticity observed in financial time series. By accurately balancing the drift and diffusion terms, we show that the resulting log-price process reduces to a variance-exploding stochastic differential equation.
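The reduction to a variance-exploding equation follows from a short Itô computation. A sketch, assuming (as the summary suggests) a possibly time-dependent diffusion coefficient $\sigma(t)$ balanced against the drift:

```latex
% Forward noising as GBM with price-proportional noise:
dS_t = \mu_t S_t \, dt + \sigma(t)\, S_t \, dW_t .
% Ito's formula applied to the log-price X_t = \log S_t gives
dX_t = \Bigl(\mu_t - \tfrac{1}{2}\sigma(t)^2\Bigr)\, dt + \sigma(t)\, dW_t .
% Choosing \mu_t = \tfrac{1}{2}\sigma(t)^2 cancels the drift, leaving
dX_t = \sigma(t)\, dW_t , \qquad
\operatorname{Var}(X_t - X_0) = \int_0^t \sigma(s)^2 \, ds ,
% the variance-exploding SDE of score-based generative models
% whenever \sigma(t) grows with t.
```

The particular drift-diffusion balance shown here is one natural reading of "accurately balancing the drift and diffusion terms"; the paper itself should be consulted for the exact choice.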
- Score: 1.3654846342364308
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel diffusion-based generative framework for financial time series that incorporates geometric Brownian motion (GBM), the foundation of the Black--Scholes theory, into the forward noising process. Unlike standard score-based models that treat price trajectories as generic numerical sequences, our method injects noise proportionally to asset prices at each time step, reflecting the heteroskedasticity observed in financial time series. By accurately balancing the drift and diffusion terms, we show that the resulting log-price process reduces to a variance-exploding stochastic differential equation, aligning with the formulation in score-based generative models. The reverse-time generative process is trained via denoising score matching using a Transformer-based architecture adapted from the Conditional Score-based Diffusion Imputation (CSDI) framework. Empirical evaluations on historical stock data demonstrate that our model reproduces key stylized facts (heavy-tailed return distributions, volatility clustering, and the leverage effect) more realistically than conventional diffusion models.
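The price-proportional noising described in the abstract can be illustrated with a small simulation. This is a sketch, not the authors' implementation: the geometric noise schedule `sigma_min`/`sigma_max` and the exact drift choice are assumptions, chosen so the Itô correction cancels and the log-price becomes a driftless Gaussian with growing variance.

```python
import numpy as np

# Sketch of a GBM-style forward noising process,
#   dS_t = mu_t * S_t dt + sigma(t) * S_t dW_t,
# with mu_t = sigma(t)^2 / 2 so that the log-price drift cancels
# and log S_t follows a variance-exploding Gaussian process.

rng = np.random.default_rng(0)

def forward_noise(prices, n_steps=1000, dt=1e-3, sigma_min=0.01, sigma_max=1.0):
    """Euler-Maruyama simulation of the price-proportional noising process.

    sigma(t) interpolates geometrically from sigma_min to sigma_max,
    a common (here, assumed) variance-exploding schedule.
    """
    s = prices.astype(float).copy()
    for i in range(n_steps):
        t = i / n_steps
        sigma = sigma_min * (sigma_max / sigma_min) ** t
        mu = 0.5 * sigma**2                      # balances the Ito correction
        dw = rng.normal(0.0, np.sqrt(dt), size=s.shape)
        s = s + mu * s * dt + sigma * s * dw     # noise scales with the price
    return s

prices = np.full(50_000, 100.0)                  # many copies of one start price
noised = forward_noise(prices)

# The log-return should be centered near 0 (drift cancelled) with
# variance approximately the integral of sigma(t)^2 over [0, 1].
log_ret = np.log(noised / prices)
print(log_ret.mean(), log_ret.var())
```

Because the noise term is multiplied by the current price, larger prices receive proportionally larger perturbations, which is the heteroskedasticity the abstract contrasts with standard score-based models that add noise uniformly to the raw sequence.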
Related papers
- Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling [62.640128548633946]
  We introduce a novel inference-time scaling approach based on particle Gibbs sampling for discrete diffusion models. Our method consistently outperforms prior inference-time strategies on reward-guided text generation tasks.
  arXiv Detail & Related papers (2025-07-11T08:00:47Z)
- Unifying Autoregressive and Diffusion-Based Sequence Generation [2.3923884480793673]
  We present extensions to diffusion-based sequence generation models, blurring the line with autoregressive language models. First, we introduce hyperschedules, which assign distinct noise schedules to individual token positions. Second, we propose two hybrid token-wise noising processes that interpolate between absorbing and uniform processes, enabling the model to fix past mistakes.
  arXiv Detail & Related papers (2025-04-08T20:32:10Z)
- Simple and Critical Iterative Denoising: A Recasting of Discrete Diffusion in Graph Generation [0.0]
  Dependencies between intermediate noisy states lead to error accumulation and propagation during the reverse denoising process. We propose a novel framework called Simple Iterative Denoising, which simplifies discrete diffusion and circumvents the issue. Our empirical evaluations demonstrate that the proposed method significantly outperforms existing discrete diffusion baselines in graph generation tasks.
  arXiv Detail & Related papers (2025-03-27T15:08:58Z)
- Series-to-Series Diffusion Bridge Model [8.590453584544386]
  We present a comprehensive framework that encompasses most existing diffusion-based methods. We propose a novel diffusion-based time series forecasting model, the Series-to-Series Diffusion Bridge Model ($\mathrm{S2DBM}$). Experimental results demonstrate that $\mathrm{S2DBM}$ delivers superior performance in point-to-point forecasting.
  arXiv Detail & Related papers (2024-11-07T07:37:34Z)
- On conditional diffusion models for PDE simulations [53.01911265639582]
  We study score-based diffusion models for forecasting and assimilation of sparse observations. We propose an autoregressive sampling approach that significantly improves performance in forecasting. We also propose a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths.
  arXiv Detail & Related papers (2024-10-21T18:31:04Z)
- Generative Fractional Diffusion Models [53.36835573822926]
  We introduce the first continuous-time score-based generative model that leverages fractional diffusion processes for its underlying dynamics. Our evaluations on real image datasets demonstrate that GFDM achieves greater pixel-wise diversity and enhanced image quality, as indicated by a lower FID.
  arXiv Detail & Related papers (2023-10-26T17:53:24Z)
- Reflected Diffusion Models [93.26107023470979]
  We present Reflected Diffusion Models, which reverse a reflected stochastic differential equation evolving on the support of the data. Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
  arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
  We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain. We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions. We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
  arXiv Detail & Related papers (2022-11-30T05:33:29Z)
- Volatility Based Kernels and Moving Average Means for Accurate Forecasting with Gaussian Processes [36.712632126776285]
  We show how to re-cast a class of volatility models as a hierarchical Gaussian process (GP) model with specialized covariance functions. Within this framework, we take inspiration from well-studied domains to introduce a new class of models, Volt and Magpie, that significantly outperform baselines in stock and wind speed forecasting.
  arXiv Detail & Related papers (2022-07-13T23:02:54Z)
- How Much is Enough? A Study on Diffusion Times in Score-based Generative Models [76.76860707897413]
  Current best practice advocates for a large T to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution. We show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process.
  arXiv Detail & Related papers (2022-06-10T15:09:46Z)
- Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows [40.9137348900942]
  We propose a novel type of flow driven by a differential deformation of the Wiener process. As a result, we obtain a rich time series model whose observable process inherits many of the appealing properties of its base process.
  arXiv Detail & Related papers (2020-02-24T20:13:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.