Adaptive deep density approximation for Fokker-Planck equations
- URL: http://arxiv.org/abs/2103.11181v1
- Date: Sat, 20 Mar 2021 13:49:52 GMT
- Title: Adaptive deep density approximation for Fokker-Planck equations
- Authors: Kejun Tang, Xiaoliang Wan, Qifeng Liao
- Abstract summary: We present a novel adaptive deep density approximation strategy based on KRnet (ADDA-KR) for solving the steady-state Fokker-Planck equation.
We show that KRnet can efficiently estimate general high-dimensional density functions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we present a novel adaptive deep density approximation strategy
based on KRnet (ADDA-KR) for solving the steady-state Fokker-Planck equation.
It is known that this equation typically has high-dimensional spatial variables
posed on unbounded domains, which limit the application of traditional grid
based numerical methods. With the Knothe-Rosenblatt rearrangement, our newly
proposed flow-based generative model, called KRnet, provides a family of
probability density functions to serve as effective solution candidates of the
Fokker-Planck equation, which have weaker dependence on dimensionality than
traditional computational approaches. To obtain effective stochastic
collocation points for training KRnet, we develop an adaptive sampling
procedure in which new samples are generated by the current KRnet at each
iteration. In addition, we give a detailed discussion of KRnet and show that it
can efficiently estimate general high-dimensional density functions. We present
a general mathematical framework of ADDA-KR, validate its accuracy and
demonstrate its efficiency with numerical experiments.
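The adaptive loop sketched in the abstract — propose a density, draw collocation points from it, and minimize the PDE residual — can be illustrated in a toy 1-D setting. The sketch below is not ADDA-KR itself: it replaces KRnet with a Gaussian ansatz and uses an Ornstein-Uhlenbeck drift and a coarse line search, all of which are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0  # noise level of the 1-D OU process dX = -X dt + sigma dW

def fp_residual(x, v):
    """Residual of the stationary Fokker-Planck equation
    d/dx(x * p) + (sigma**2 / 2) * p'' = 0
    evaluated for a Gaussian ansatz p with variance v."""
    p = np.exp(-x**2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    dp = -x / v * p
    d2p = (x**2 / v**2 - 1 / v) * p
    return (p + x * dp) + 0.5 * sigma**2 * d2p

v = 2.0  # initial guess for the variance of the ansatz
for _ in range(5):
    # adaptive collocation: draw points from the *current* density estimate
    x = rng.normal(0.0, np.sqrt(v), size=2000)
    # refine v with a coarse line search over the mean squared residual
    candidates = np.linspace(0.5 * v, 1.5 * v, 101)
    losses = [np.mean(fp_residual(x, c) ** 2) for c in candidates]
    v = candidates[int(np.argmin(losses))]

print(v)  # approaches the exact stationary variance sigma**2 / 2 = 0.5
```

The point of the resampling step is that collocation points concentrate where the current density estimate has mass, which is where the residual matters most for an unbounded domain.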
Related papers
- FlowKac: An Efficient Neural Fokker-Planck solver using Temporal Normalizing Flows and the Feynman-Kac Formula [4.806505912512236]
FlowKac is a novel approach that reformulates the Fokker-Planck equation using the Feynman-Kac formula.
A key innovation of FlowKac lies in its adaptive sampling scheme which significantly reduces the computational complexity.
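The Feynman-Kac formula underlying FlowKac connects expectations over SDE paths to solutions of Kolmogorov-type PDEs. A minimal illustration (not FlowKac itself): a Monte Carlo estimate of E[phi(X_T)] for an Ornstein-Uhlenbeck process, checked against its closed form; the process, terminal function, and step counts are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, T, x0 = 1.0, 1.0, 1.5
n_paths, n_steps = 50_000, 200
dt = T / n_steps

# Euler-Maruyama simulation of the OU process dX = -X dt + sigma dW
X = np.full(n_paths, x0)
for _ in range(n_steps):
    X += -X * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Feynman-Kac: u(x0) = E[phi(X_T) | X_0 = x0] solves the backward
# Kolmogorov equation; phi(x) = x**2 admits a closed-form expectation,
# since X_T | X_0 = x0 is Gaussian with known mean and variance.
mc = (X**2).mean()
mean_T = x0 * np.exp(-T)
var_T = sigma**2 / 2 * (1 - np.exp(-2 * T))
exact = mean_T**2 + var_T

print(mc, exact)  # the two values agree up to Monte Carlo error
```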
arXiv Detail & Related papers (2025-03-14T14:14:20Z)
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density as transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the respective partial differential equations (PDEs) solutions.
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
arXiv Detail & Related papers (2024-07-10T17:39:50Z)
- Adaptive deep density approximation for stochastic dynamical systems [0.5120567378386615]
A new temporal KRnet (tKRnet) is proposed to approximate the evolution of the probability density functions (PDFs) of the state variables.
To efficiently train the tKRnet, an adaptive procedure is developed to generate collocation points for the corresponding residual loss function.
A temporal decomposition technique is also employed to improve the long-time integration.
arXiv Detail & Related papers (2024-05-05T04:29:22Z)
- Bounded KRnet and its applications to density estimation and approximation [7.834363165328673]
In this paper, we develop an invertible mapping, called B-KRnet, on a bounded domain.
We apply it to density estimation/approximation for data or the solutions of PDEs such as the Fokker-Planck equation and the Keller-Segel equation.
arXiv Detail & Related papers (2023-05-15T23:12:15Z)
- Self-reinforced polynomial approximation methods for concentrated probability densities [1.5469452301122175]
Transport map methods offer a powerful statistical learning tool that can couple a target high-dimensional random variable with some reference random variable.
This paper presents new computational techniques for building the Knothe-Rosenblatt (KR) rearrangement based on general separable functions.
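For Gaussian targets the Knothe-Rosenblatt rearrangement has a simple closed form that makes the triangular structure concrete: the map from a standard normal reference is the lower Cholesky factor of the target covariance, so component i of the output depends only on the first i reference coordinates. A minimal sketch (the covariance is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: zero-mean Gaussian with covariance C. The KR (triangular) map
# from z ~ N(0, I) is x = L z with L the lower Cholesky factor of C:
# x1 depends on z1 only, x2 on (z1, z2), and so on.
C = np.array([[1.0, 0.8],
              [0.8, 2.0]])
L = np.linalg.cholesky(C)

z = rng.standard_normal((100_000, 2))  # reference samples
x = z @ L.T                            # push forward through the KR map

print(np.cov(x.T))  # empirical covariance, close to C
```

For non-Gaussian targets the same triangular structure is kept, but each component becomes a nonlinear conditional map; that is the setting the separable-function constructions above address.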
arXiv Detail & Related papers (2023-03-05T02:44:02Z)
- D4FT: A Deep Learning Approach to Kohn-Sham Density Functional Theory [79.50644650795012]
We propose a deep learning approach to solve Kohn-Sham Density Functional Theory (KS-DFT).
We prove that such an approach has the same expressivity as the SCF method, yet reduces the computational complexity.
In addition, we show that our approach enables us to explore more complex neural-based wave functions.
arXiv Detail & Related papers (2023-03-01T10:38:10Z)
- Probabilistic partition of unity networks for high-dimensional regression problems [1.0227479910430863]
We explore the partition of unity network (PPOU-Net) model in the context of high-dimensional regression problems.
We propose a general framework focusing on adaptive dimensionality reduction.
The PPOU-Nets consistently outperform the baseline fully-connected neural networks of comparable sizes in numerical experiments.
arXiv Detail & Related papers (2022-10-06T06:01:36Z)
- Distributed Sketching for Randomized Optimization: Exact Characterization, Concentration and Lower Bounds [54.51566432934556]
We consider distributed optimization methods for problems where forming the Hessian is computationally challenging.
We leverage randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems.
arXiv Detail & Related papers (2022-03-18T05:49:13Z)
- Density-Based Clustering with Kernel Diffusion [59.4179549482505]
A naive density corresponding to the indicator function of a unit $d$-dimensional Euclidean ball is commonly used in density-based clustering algorithms.
We propose a new kernel diffusion density function, which is adaptive to data of varying local distributional characteristics and smoothness.
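The naive indicator-ball density the paper starts from can be sketched directly: count, for each sample, the number of points within a fixed radius. Everything below (the radius, the two-cluster data) is an illustrative assumption, not the paper's kernel diffusion construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: one dense cluster plus sparse background points.
pts = np.concatenate([rng.normal(0.0, 0.2, (50, 2)),
                      rng.uniform(-3.0, 3.0, (10, 2))])

# Naive density from the indicator of a radius-eps Euclidean ball:
# for each point, count neighbours inside the ball (itself included).
eps = 0.5
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
density = (d2 <= eps**2).sum(axis=1)

print(density[:50].mean(), density[50:].mean())  # cluster points are denser
```

The single fixed radius is exactly what an adaptive kernel replaces: it cannot simultaneously suit regions of different local scale.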
arXiv Detail & Related papers (2021-10-11T09:00:33Z)
- Learning High-Dimensional Distributions with Latent Neural Fokker-Planck Kernels [67.81799703916563]
We introduce new techniques to formulate the problem as solving Fokker-Planck equation in a lower-dimensional latent space.
Our proposed model consists of latent-distribution morphing, a generator and a parameterized Fokker-Planck kernel function.
arXiv Detail & Related papers (2021-05-10T17:42:01Z)
- Bayesian Sparse learning with preconditioned stochastic gradient MCMC and its applications [5.660384137948734]
We show that the proposed algorithm converges to the correct distribution with a controllable bias under mild conditions.
arXiv Detail & Related papers (2020-06-29T20:57:20Z)
- Distributed Averaging Methods for Randomized Second Order Optimization [54.51566432934556]
We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a bottleneck.
We develop unbiased parameter averaging methods for randomized second order optimization that employ sampling and sketching of the Hessian.
We also extend the framework of second order averaging methods to introduce an unbiased distributed optimization framework for heterogeneous computing systems.
arXiv Detail & Related papers (2020-02-16T09:01:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.