Adaptive deep density approximation for fractional Fokker-Planck
equations
- URL: http://arxiv.org/abs/2210.14402v1
- Date: Wed, 26 Oct 2022 00:58:17 GMT
- Title: Adaptive deep density approximation for fractional Fokker-Planck
equations
- Authors: Li Zeng, Xiaoliang Wan and Tao Zhou
- Abstract summary: We present an explicit PDF model induced by a flow-based deep generative model, KRnet, which constructs a transport map from a simple distribution to the target distribution.
We consider two methods to approximate the fractional Laplacian.
Based on these two approximations of the fractional Laplacian, we propose the models MCNF and GRBFNF to approximate stationary FPEs, and MCTNF to approximate time-dependent FPEs.
- Score: 6.066542157374599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose adaptive deep learning approaches based on
normalizing flows for solving fractional Fokker-Planck equations (FPEs). The
solution of an FPE is a probability density function (PDF). Traditional
mesh-based methods are ineffective because of the unbounded computational
domain, the large number of dimensions, and the nonlocal fractional operator. To this end,
we represent the solution with an explicit PDF model induced by a flow-based
deep generative model, simplified KRnet, which constructs a transport map from
a simple distribution to the target distribution. We consider two methods to
approximate the fractional Laplacian. One method is the Monte Carlo
approximation. The other method is to construct an auxiliary model with
Gaussian radial basis functions (GRBFs) to approximate the solution such that
we may take advantage of the fact that the fractional Laplacian of a Gaussian
is known analytically. Based on these two approximations of the fractional
Laplacian, we propose the models MCNF and GRBFNF to approximate stationary
FPEs, and MCTNF to approximate time-dependent FPEs. To
further improve the accuracy, we refine the training set and the approximate
solution alternately. A variety of numerical examples is presented to
demonstrate the effectiveness of our adaptive deep density approaches.
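Two ingredients behind this construction can be written down explicitly. The flow-based model gives the PDF through the change-of-variables formula, and the fractional Laplacian has a hypersingular integral form that suggests Monte Carlo estimation:

$$
p_\theta(x) = p_Z\big(f_\theta(x)\big)\,\big|\det \nabla_x f_\theta(x)\big|,
\qquad
(-\Delta)^{\alpha/2}u(x) = \frac{C_{d,\alpha}}{2}\int_{\mathbb{R}^d}\frac{2u(x)-u(x+z)-u(x-z)}{|z|^{d+\alpha}}\,dz,
$$

where $C_{d,\alpha} = 2^{\alpha}\,\Gamma\big(\tfrac{d+\alpha}{2}\big)\big/\big(\pi^{d/2}\,|\Gamma(-\tfrac{\alpha}{2})|\big)$ and $\alpha \in (0,2)$.

The sketch below is not the authors' MCNF/GRBFNF code; the split of the integral at $|z| = 1$ and the Gaussian test function are illustrative choices. It estimates the 1D fractional Laplacian by Monte Carlo and checks it against the closed form for a Gaussian, which is exactly the fact that GRBF-based models exploit.

```python
# Illustrative sketch, not the authors' MCNF/GRBFNF code: Monte Carlo
# estimation of the 1D fractional Laplacian, validated against the
# analytic result for a Gaussian that GRBF-type models rely on.
import numpy as np
from scipy.special import gamma, hyp1f1

d, alpha = 1, 1.0  # dimension and fractional order, alpha in (0, 2)
# Constant of the hypersingular integral definition of (-Delta)^{alpha/2}.
C = 2**alpha * gamma((d + alpha) / 2) / (np.pi**(d / 2) * abs(gamma(-alpha / 2)))

def u(x):
    return np.exp(-x**2)  # Gaussian test function

def frac_laplacian_mc(x, n=200_000, seed=0):
    """Estimate (-Delta)^{alpha/2} u(x) from the symmetrized difference
    2u(x) - u(x+z) - u(x-z), splitting the integral at |z| = 1."""
    rng = np.random.default_rng(seed)
    diff = lambda z: 2 * u(x) - u(x + z) - u(x - z)
    # |z| <= 1: uniform proposal with density 1/2; guard tiny |z| against
    # floating-point cancellation (bias is O(1e-6)).
    z_in = rng.uniform(-1.0, 1.0, n)
    z_in = np.where(np.abs(z_in) < 1e-6, 1e-6, z_in)
    inner = np.mean(diff(z_in) / np.abs(z_in) ** (1 + alpha) * 2.0)
    # |z| > 1: Pareto proposal q(z) = (alpha/2) |z|^{-1-alpha}, under which
    # the singular kernel cancels exactly (importance weight 2/alpha).
    r = (1.0 - rng.uniform(0.0, 1.0, n)) ** (-1.0 / alpha)
    z_out = rng.choice([-1.0, 1.0], n) * r
    outer = np.mean(diff(z_out) * 2.0 / alpha)
    return 0.5 * C * (inner + outer)

def frac_laplacian_gaussian(x):
    """Closed form of (-Delta)^{alpha/2} exp(-|x|^2), obtained from its
    Fourier transform; 1F1 is the confluent hypergeometric function."""
    return (2**alpha * gamma((d + alpha) / 2) / gamma(d / 2)
            * hyp1f1((d + alpha) / 2, d / 2, -x**2))

for x in (0.0, 0.5, 1.0):
    print(f"x={x}: MC={frac_laplacian_mc(x):.4f}  "
          f"exact={frac_laplacian_gaussian(x):.4f}")
```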
Related papers
- A convergent scheme for the Bayesian filtering problem based on the Fokker--Planck equation and deep splitting [0.0]
A numerical scheme for approximating the nonlinear filtering density is introduced and its convergence rate is established.
For the prediction step, the scheme approximates the Fokker--Planck equation with a deep splitting scheme, and performs an exact update through Bayes' formula.
This results in a classical prediction-update filtering algorithm that operates online for new observation sequences post-training.
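The update step this summary refers to is just Bayes' formula applied to the predicted density. A toy grid version follows (illustrative only, with an assumed Gaussian observation model; the paper's prediction step uses a deep splitting scheme rather than a grid):

```python
# Toy prediction-update cycle on a 1D grid (illustrative; the paper
# replaces the grid-based prediction with a deep splitting scheme).
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]

# "Prediction": a density produced by propagating the previous posterior
# through the dynamics (here just a placeholder Gaussian).
predicted = np.exp(-0.5 * x**2)
predicted /= predicted.sum() * dx

# Exact Bayes update for an observation y with assumed Gaussian noise.
y, obs_std = 1.2, 0.5
likelihood = np.exp(-0.5 * ((y - x) / obs_std) ** 2)
posterior = predicted * likelihood  # Bayes' formula, pointwise
posterior /= posterior.sum() * dx   # renormalize to a density
```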
arXiv Detail & Related papers (2024-09-22T20:25:45Z) - Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - Flow-based Distributionally Robust Optimization [23.232731771848883]
We present a framework, called FlowDRO, for solving flow-based distributionally robust optimization (DRO) problems with Wasserstein uncertainty sets.
We aim to find the continuous worst-case distribution (also called the Least Favorable Distribution, LFD) and to sample from it.
We demonstrate its usage in adversarial learning, distributionally robust hypothesis testing, and a new mechanism for data-driven distribution perturbation differential privacy.
arXiv Detail & Related papers (2023-10-30T03:53:31Z) - Adaptive importance sampling for Deep Ritz [7.123920027048777]
We introduce an adaptive sampling method for the Deep Ritz method aimed at solving partial differential equations (PDEs).
One network is employed to approximate the solution of PDEs, while the other one is a deep generative model used to generate new collocation points to refine the training set.
Compared to the original Deep Ritz method, the proposed adaptive method improves accuracy, especially for problems characterized by low regularity and high dimensionality.
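Read as pseudocode, the alternating scheme this summary describes is a loop between fitting the solution network and refitting the sampler on a residual signal; a schematic sketch (interface names such as `solution_net.fit` are hypothetical, not from the paper):

```python
# Schematic adaptive-sampling loop (hypothetical interfaces; a sketch
# of the alternating refinement idea, not the paper's implementation).
def adaptive_deep_ritz(solution_net, sampler_net, residual,
                       n_rounds=10, n_points=1000):
    points = sampler_net.sample(n_points)  # initial collocation set
    for _ in range(n_rounds):
        solution_net.fit(points)           # minimize the Ritz energy
        # Re-train the generative sampler toward regions of large residual,
        # so the next collocation set concentrates where the error is.
        sampler_net.fit(points, weights=residual(solution_net, points))
        points = sampler_net.sample(n_points)
    return solution_net
```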
arXiv Detail & Related papers (2023-10-26T06:35:08Z) - Moreau Envelope ADMM for Decentralized Weakly Convex Optimization [55.2289666758254]
This paper proposes a proximal variant of the alternating direction method of multipliers (ADMM) for distributed optimization.
The results of our numerical experiments indicate that our method is faster and more robust than widely-used approaches.
arXiv Detail & Related papers (2023-08-31T14:16:30Z) - DF2: Distribution-Free Decision-Focused Learning [53.2476224456902]
Decision-focused learning (DFL) has recently emerged as a powerful approach for predict-then-optimize problems.
Existing end-to-end DFL methods are hindered by three significant bottlenecks: model error, sample average approximation error, and distribution-based parameterization of the expected objective.
We present DF2 -- the first distribution-free decision-focused learning method explicitly designed to address these three bottlenecks.
arXiv Detail & Related papers (2023-08-11T00:44:46Z) - Sobolev Space Regularised Pre Density Models [51.558848491038916]
We propose a new approach to non-parametric density estimation that is based on regularizing a Sobolev norm of the density.
This method is statistically consistent and makes the inductive bias of the model clear and interpretable.
arXiv Detail & Related papers (2023-07-25T18:47:53Z) - Mean-field Variational Inference via Wasserstein Gradient Flow [8.05603983337769]
Variational inference, such as the mean-field (MF) approximation, requires certain conjugacy structures for efficient computation.
We introduce a general computational framework to implement MF variational inference for Bayesian models, with or without latent variables, using the Wasserstein gradient flow (WGF).
We propose a new constraint-free function approximation method using neural networks to numerically realize our algorithm.
arXiv Detail & Related papers (2022-07-17T04:05:32Z) - Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
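The diffeomorphism gives direct access to region probabilities in the simplest case: for a monotone 1D flow $X = g(Z)$, $F_X(x) = F_Z(g^{-1}(x))$. A minimal illustration (a toy affine flow of my own, not the paper's estimator for general closed regions):

```python
# P(a < X <= b) for a monotone 1D flow X = g(Z) with standard normal base:
# push the endpoints through the inverse map and evaluate the base CDF.
import math

def base_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def g_inv(x):
    """Inverse of the toy affine flow g(z) = 2z + 3."""
    return (x - 3.0) / 2.0

a, b = 1.0, 5.0
prob = base_cdf(g_inv(b)) - base_cdf(g_inv(a))  # = P(-1 < Z <= 1) ~ 0.6827
print(prob)
```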
arXiv Detail & Related papers (2022-02-23T06:11:49Z) - Mean-Field Approximation to Gaussian-Softmax Integral with Application
to Uncertainty Estimation [23.38076756988258]
We propose a new single-model based approach to quantify uncertainty in deep neural networks.
We use a mean-field approximation formula to compute an analytically intractable integral.
Empirically, the proposed approach performs competitively when compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-06-13T07:32:38Z) - Distributed Averaging Methods for Randomized Second Order Optimization [54.51566432934556]
We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a bottleneck.
We develop unbiased parameter averaging methods for randomized second order optimization that employ sampling and sketching of the Hessian.
We also extend the framework of second order averaging methods to introduce an unbiased distributed optimization framework for heterogeneous computing systems.
arXiv Detail & Related papers (2020-02-16T09:01:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.