A Deep Learning Approach for the solution of Probability Density
Evolution of Stochastic Systems
- URL: http://arxiv.org/abs/2207.01907v1
- Date: Tue, 5 Jul 2022 09:37:48 GMT
- Title: A Deep Learning Approach for the solution of Probability Density
Evolution of Stochastic Systems
- Authors: Seid H. Pourtakdoust, Amir H. Khodabakhsh
- Abstract summary: DeepPDEM utilizes the concept of physics-informed networks to solve the evolution of the probability density.
DeepPDEM learns the General Density Evolution Equation (GDEE) of stochastic structures.
It can also serve as an efficient surrogate for the solution at any point within optimization schemes or real-time applications.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Derivation of the probability density evolution provides invaluable insight
into the behavior of many stochastic systems and their performance. However,
for most real-time applications, numerical determination of the probability
density evolution is a formidable task. The latter is due to the required
temporal and spatial discretization schemes that render most computational
solutions prohibitive and impractical. In this respect, the development of an
efficient computational surrogate model is of paramount importance. Recent
studies on the physics-constrained networks show that a suitable surrogate can
be achieved by encoding the physical insight into a deep neural network. To
this aim, the present work introduces DeepPDEM, a deep learning method that
utilizes physics-informed networks to solve the evolution of the probability
density. DeepPDEM learns the General Density Evolution
Equation (GDEE) of stochastic structures. This approach paves the way for a
mesh-free learning method that can solve the density evolution problem without
prior simulation data. Moreover, it can serve as an efficient surrogate for the
solution at any other spatiotemporal point within optimization schemes or
real-time applications. To demonstrate the potential applicability
of the proposed framework, two network architectures with different activation
functions as well as two optimizers are investigated. Numerical implementation
on three different problems verifies the accuracy and efficacy of the proposed
method.
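To make the idea concrete: the generalized density evolution equation the abstract refers to takes the form of a first-order PDE in the response density p(x, t), which a physics-informed network can be trained to satisfy at random collocation points, with no simulation data. The sketch below is not the authors' implementation; it uses a toy MLP in pure NumPy, an assumed constant response velocity `xdot`, a Gaussian initial density, finite-difference derivatives in place of automatic differentiation, and plain gradient descent in place of the optimizers studied in the paper. All names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP surrogate p(x, t) with one tanh hidden layer (sizes arbitrary).
w = {
    "W1": rng.normal(0.0, 0.5, (2, 8)), "b1": np.zeros(8),
    "W2": rng.normal(0.0, 0.5, (8, 1)), "b2": np.zeros(1),
}
KEYS = ["W1", "b1", "W2", "b2"]

def mlp(w, x, t):
    h = np.tanh(np.stack([x, t], -1) @ w["W1"] + w["b1"])
    return (h @ w["W2"] + w["b2"]).squeeze(-1)

def loss(w, x, t, xdot=0.5, eps=1e-4):
    # GDEE-style residual dp/dt + xdot * dp/dx = 0, via central differences.
    dp_dt = (mlp(w, x, t + eps) - mlp(w, x, t - eps)) / (2 * eps)
    dp_dx = (mlp(w, x + eps, t) - mlp(w, x - eps, t)) / (2 * eps)
    pde = dp_dt + xdot * dp_dx
    # Initial condition p(x, 0) = standard normal density (an assumption here).
    ic = mlp(w, x, np.zeros_like(x)) - np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    return np.mean(pde**2) + np.mean(ic**2)

def flat(w):
    return np.concatenate([w[k].ravel() for k in KEYS])

def unflat(v, like):
    out, i = {}, 0
    for k in KEYS:
        n = like[k].size
        out[k] = v[i:i + n].reshape(like[k].shape)
        i += n
    return out

# Mesh-free collocation: random (x, t) samples, no prior simulation data.
x = rng.uniform(-3.0, 3.0, 64)
t = rng.uniform(0.0, 1.0, 64)

def num_grad(v, eps=1e-5):
    # Numerical gradient over the flattened parameter vector (sketch only).
    g = np.zeros_like(v)
    for i in range(v.size):
        d = np.zeros_like(v)
        d[i] = eps
        g[i] = (loss(unflat(v + d, w), x, t)
                - loss(unflat(v - d, w), x, t)) / (2 * eps)
    return g

v = flat(w)
losses = [loss(unflat(v, w), x, t)]
for _ in range(150):  # plain gradient descent stands in for Adam / L-BFGS
    v -= 0.02 * num_grad(v)
    losses.append(loss(unflat(v, w), x, t))
```

A real implementation would use automatic differentiation for both the PDE residual and the parameter gradients; the collocation-point sampling is what makes the approach mesh-free.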
Related papers
- You are out of context! [0.0]
New data can act as forces stretching, compressing, or twisting the geometric relationships learned by a model.
We propose a novel drift detection methodology for machine learning (ML) models based on the concept of "deformation" in the vector space representation of data.
arXiv Detail & Related papers (2024-11-04T10:17:43Z)
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density as transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the respective partial differential equations (PDEs) solutions.
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
arXiv Detail & Related papers (2024-07-10T17:39:50Z)
- Predicting Probabilities of Error to Combine Quantization and Early Exiting: QuEE [68.6018458996143]
We propose QuEE, a more general dynamic network that combines both quantization and early exiting.
Our algorithm can be seen as a form of soft early exiting or input-dependent compression.
The crucial factor of our approach is accurate prediction of the potential accuracy improvement achievable through further computation.
arXiv Detail & Related papers (2024-06-20T15:25:13Z)
- Deep Learning-based surrogate models for parametrized PDEs: handling geometric variability through graph neural networks [0.0]
This work explores the potential usage of graph neural networks (GNNs) for the simulation of time-dependent PDEs.
We propose a systematic strategy to build surrogate models based on a data-driven time-stepping scheme.
We show that GNNs can provide a valid alternative to traditional surrogate models in terms of computational efficiency and generalization to new scenarios.
arXiv Detail & Related papers (2023-08-03T08:14:28Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade-off computation to improve long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- FEM-based Real-Time Simulations of Large Deformations with Probabilistic Deep Learning [1.2617078020344616]
We propose a highly efficient deep-learning surrogate framework that is able to predict the response of hyper-elastic bodies under load.
The framework takes the form of a special convolutional neural network architecture, the so-called U-Net, which is trained with force-displacement data.
arXiv Detail & Related papers (2021-11-02T20:05:22Z)
- Training multi-objective/multi-task collocation physics-informed neural network with student/teachers transfer learnings [0.0]
This paper presents a PINN training framework that employs pre-training steps and a net-to-net knowledge transfer algorithm.
A multi-objective optimization algorithm may improve the performance of a physics-informed neural network with competing constraints.
arXiv Detail & Related papers (2021-07-24T00:43:17Z)
- Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z)
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
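The last two entries above concern learning the evolution of node infection probabilities on a diffusion network. Their neural mean-field and Mori-Zwanzig machinery is beyond a snippet, but the classical mean-field update such methods build on can be sketched as follows; the ring graph, infection rate `beta`, and all variable names are illustrative assumptions, not taken from either paper.

```python
import numpy as np

# Toy diffusion network: adjacency matrix of a 5-node ring (hypothetical).
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

beta = 0.3                            # per-edge transmission rate (assumed)
p = np.array([1.0, 0, 0, 0, 0])       # node 0 seeded with probability 1

history = [p.copy()]
for _ in range(10):
    # Mean-field step: node i escapes infection this round only if every
    # neighbour j independently fails to transmit, i.e. with probability
    # prod_j (1 - beta * A_ij * p_j); otherwise it becomes infected.
    escape = np.prod(1.0 - beta * A * p[None, :], axis=1)
    p = p + (1.0 - p) * (1.0 - escape)
    history.append(p.copy())

influence = p.sum()  # expected number of infected nodes after 10 steps
```

Each iteration is monotone (infection probabilities never decrease), and summing the final probabilities gives a simple influence estimate of the seed set, which is the quantity the influence-maximization paper optimizes over.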
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.