Learning the conditional law: signatures and conditional GANs in
filtering and prediction of diffusion processes
- URL: http://arxiv.org/abs/2204.00611v1
- Date: Fri, 1 Apr 2022 17:56:54 GMT
- Title: Learning the conditional law: signatures and conditional GANs in
filtering and prediction of diffusion processes
- Authors: Fabian Germ, Marc Sabate-Vidales
- Abstract summary: We consider the filtering and prediction problem for a diffusion process.
The signal and observation are modeled by differential equations driven by Wiener processes.
We provide an approximation algorithm using conditional generative adversarial networks (GANs) and signatures.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the filtering and prediction problem for a diffusion process. The
signal and observation are modeled by stochastic differential equations (SDEs)
driven by Wiener processes. In classical estimation theory, measure-valued
stochastic partial differential equations (SPDEs) are derived for the filtering
and prediction measures. These equations can be hard to solve numerically. We
provide an approximation algorithm using conditional generative adversarial
networks (GANs) and signatures, an object from rough path theory. The signature
of a sufficiently smooth path determines the path completely. In some cases,
GANs based on signatures have been shown to efficiently approximate the law of
a stochastic process. In this paper we extend this method to approximate the
prediction measure conditional on noisy observations. We use controlled
differential equations (CDEs) as universal approximators to propose an
estimator for the conditional and prediction laws. We establish
well-posedness, providing a rigorous mathematical framework. Numerical
results demonstrate the efficiency of our algorithm.
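To make the signature object concrete: the sketch below (an editorial illustration in numpy, not the authors' code) computes the depth-2 truncated signature of a discretely sampled path under piecewise-linear interpolation and checks Chen's identity, an algebraic relation characteristic of signatures that signature-based generators and discriminators exploit.

```python
# Minimal sketch (assumed setup, not the paper's implementation): depth-2
# truncated signature of a d-dimensional path sampled at discrete times,
# for the piecewise-linear interpolation. Level 1 holds the increments
# S^(i) = X^i_T - X^i_0; level 2 holds the iterated integrals
# S^(i,j) = \int_0^T (X^i_t - X^i_0) dX^j_t.
import numpy as np

def signature_depth2(path):
    """path: array of shape (n_steps + 1, d); returns (level1, level2)."""
    dx = np.diff(path, axis=0)           # per-segment increments
    level1 = path[-1] - path[0]
    prefix = np.cumsum(dx, axis=0) - dx  # X_{t_k} - X_0 before each segment
    # Segment k contributes (X_{t_k}-X_0)^i dx_k^j + (1/2) dx_k^i dx_k^j.
    level2 = prefix.T @ dx + 0.5 * dx.T @ dx
    return level1, level2

rng = np.random.default_rng(0)
X = 0.03 * np.cumsum(rng.normal(size=(1000, 2)), axis=0)  # toy 2-d path
s1, s2 = signature_depth2(X)
# Chen/shuffle identity at depth 2: S^(i,j) + S^(j,i) = S^(i) * S^(j).
assert np.allclose(s2 + s2.T, np.outer(s1, s1))
```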
Related papers
- Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
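For orientation, the textbook case in which Gaussian filtering is closed-form is the linear-Gaussian model solved by the Kalman recursion; the sketch below is that classical baseline (an editorial illustration, not the Gaussian PSD model construction of the paper).

```python
# Minimal sketch of closed-form Gaussian filtering in the classical
# linear-Gaussian special case x_{t+1} = A x_t + w_t, y_t = H x_t + v_t
# (illustration only; Gaussian PSD models extend closed-form filtering
# beyond this linear setting).
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    # Predict: push mean and covariance through the linear dynamics.
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update: condition the Gaussian prediction on the observation y.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new
```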
arXiv Detail & Related papers (2024-02-15T08:51:49Z)
- Sampling via Gradient Flows in the Space of Probability Measures [10.892894776497165]
Recent work shows that gradient flows in the space of probability measures open up new avenues for algorithm development.
This paper makes three contributions to this sampling approach by scrutinizing the design components of such gradient flows.
arXiv Detail & Related papers (2023-10-05T15:20:35Z)
- Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
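Both the forward corruption process and the learned reverse denoiser in this setting are continuous-time Markov chains. The sketch below (an editorial illustration with a made-up generator matrix, not the paper's code) simulates such a chain exactly with the Gillespie algorithm.

```python
# Minimal sketch: exact simulation of a continuous-time Markov chain with
# generator Q (Gillespie algorithm). Q[i, j] is the jump rate i -> j for
# i != j, and each row of Q sums to zero. The generator below is a toy
# uniform-corruption process, not taken from the paper.
import numpy as np

def simulate_ctmc(Q, x0, t_end, rng):
    x, t, traj = x0, 0.0, [(0.0, x0)]
    while True:
        rate = -Q[x, x]                   # total exit rate of current state
        if rate <= 0:
            break                         # absorbing state
        t += rng.exponential(1.0 / rate)  # holding time ~ Exp(rate)
        if t >= t_end:
            break
        probs = Q[x].copy()
        probs[x] = 0.0
        x = int(rng.choice(len(probs), p=probs / rate))  # pick next state
        traj.append((t, x))
    return traj

rng = np.random.default_rng(1)
Q = np.full((4, 4), 1 / 3) - (4 / 3) * np.eye(4)  # jump anywhere at rate 1/3
traj = simulate_ctmc(Q, x0=0, t_end=5.0, rng=rng)
```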
arXiv Detail & Related papers (2022-11-30T05:33:29Z)
- An application of the splitting-up method for the computation of a neural network representation for the solution of the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One classical approach to approximating the solution of the filtering equations is a PDE-inspired method called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
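As a rough illustration of the splitting idea (an editorial sketch under strong simplifications: 1-d state, explicit finite differences on a grid, Gaussian observation noise; not the paper's neural-network scheme), each filter step alternates a Fokker-Planck prediction with a Bayes-rule correction.

```python
# Minimal splitting-up sketch on a 1-d grid (illustrative, with assumed
# dynamics and noise model): predict by one explicit finite-difference step
# of the Fokker-Planck equation dp/dt = -(b(x) p)' + (sigma^2/2) p'',
# then correct by reweighting with the observation likelihood.
import numpy as np

def splitting_up_step(p, x, dt, drift, sigma, y_obs, obs_std):
    h = x[1] - x[0]
    # Prediction step (explicit scheme: dt must be small for stability).
    adv = -np.gradient(drift(x) * p, h)
    diff = 0.5 * sigma**2 * np.gradient(np.gradient(p, h), h)
    p = np.clip(p + dt * (adv + diff), 0.0, None)
    # Correction step: multiply by the likelihood of y_obs, renormalise.
    p = p * np.exp(-0.5 * ((y_obs - x) / obs_std) ** 2)
    return p / (p.sum() * h)

x = np.linspace(-3, 3, 200)
p = np.exp(-x**2 / 2)
p /= p.sum() * (x[1] - x[0])  # standard-normal initial density
p = splitting_up_step(p, x, dt=1e-4, drift=lambda z: -z, sigma=1.0,
                      y_obs=0.3, obs_std=0.5)
```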
arXiv Detail & Related papers (2022-01-10T11:01:36Z)
- Mean-Square Analysis with An Application to Optimal Dimension Dependence of Langevin Monte Carlo [60.785586069299356]
This work provides a general framework for the non-asymptotic analysis of sampling error in the 2-Wasserstein distance.
Our theoretical analysis is further validated by numerical experiments.
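The sampler typically analysed in this line of work is the unadjusted Langevin algorithm, the Euler-Maruyama discretisation of the Langevin SDE dX_t = -grad U(X_t) dt + sqrt(2) dW_t; the sketch below (illustrative target and step size) is the scheme whose 2-Wasserstein error such mean-square analyses bound.

```python
# Minimal sketch of the unadjusted Langevin algorithm (ULA); the target
# potential U(x) = |x|^2 / 2 and the step size are illustrative choices.
import numpy as np

def ula(grad_U, x0, step, n_iters, rng):
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        noise = rng.normal(size=x.shape)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples.append(x.copy())
    return np.array(samples)

rng = np.random.default_rng(2)
chain = ula(lambda x: x, np.zeros(3), step=0.01, n_iters=10_000, rng=rng)
```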
arXiv Detail & Related papers (2021-09-08T18:00:05Z)
- Large-Scale Wasserstein Gradient Flows [84.73670288608025]
We introduce a scalable scheme to approximate Wasserstein gradient flows.
Our approach relies on input convex neural networks (ICNNs) to discretize the JKO steps.
As a result, we can sample from the measure at each step of the diffusion and compute its density.
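A minimal sketch of the ICNN ingredient (illustrative shapes and parameter names, not the paper's implementation): the scalar output is convex in the input because hidden-to-hidden weights are kept non-negative and the activation is convex and non-decreasing; the gradient of such a convex potential can then parametrise a transport map.

```python
# Minimal ICNN sketch (assumed architecture for illustration): layers
# z <- relu(|Wz| @ z + Wx @ x + b); taking |Wz| enforces the non-negative
# hidden-to-hidden weights that make the scalar output convex in x.
import numpy as np

def icnn_forward(x, Wx_list, Wz_list, b_list):
    z = np.zeros(Wz_list[0].shape[1])
    for Wx, Wz, b in zip(Wx_list, Wz_list, b_list):
        z = np.maximum(0.0, np.abs(Wz) @ z + Wx @ x + b)
    return z.sum()  # scalar convex potential

rng = np.random.default_rng(4)
x = rng.normal(size=3)
Wx_list = [rng.normal(size=(8, 3)) for _ in range(2)]
Wz_list = [rng.normal(size=(8, 8)) for _ in range(2)]
b_list = [rng.normal(size=8) for _ in range(2)]
value = icnn_forward(x, Wx_list, Wz_list, b_list)
```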
arXiv Detail & Related papers (2021-06-01T19:21:48Z)
- Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
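Pathwise conditioning rests on Matheron's rule: a posterior sample is a prior sample plus a data-dependent correction, f_post(.) = f_prior(.) + k(., X)(K + noise * I)^{-1}(y - f_prior(X) - eps). The numpy sketch below (illustrative RBF kernel, inputs, and noise level; not the paper's code) draws one posterior sample this way.

```python
# Minimal Matheron's-rule sketch (illustrative kernel, data and noise):
# draw a joint prior sample over train and test inputs, then correct it
# with the observed residual instead of factorising the posterior.
import numpy as np

def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(3)
X = np.linspace(0.0, 1.0, 8)          # training inputs
Xs = np.linspace(0.0, 1.0, 200)       # test inputs
noise = 1e-2
y = np.sin(6.0 * X) + np.sqrt(noise) * rng.normal(size=X.size)

Z = np.concatenate([X, Xs])           # one joint prior draw
L = np.linalg.cholesky(rbf(Z, Z) + 1e-10 * np.eye(Z.size))
f_prior = L @ rng.normal(size=Z.size)
fX, fXs = f_prior[:X.size], f_prior[X.size:]

eps = np.sqrt(noise) * rng.normal(size=X.size)
alpha = np.linalg.solve(rbf(X, X) + noise * np.eye(X.size), y - fX - eps)
f_post = fXs + rbf(Xs, X) @ alpha     # one pathwise posterior sample
```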
arXiv Detail & Related papers (2020-11-08T17:09:37Z)