Solving high-dimensional parabolic PDEs using the tensor train format
- URL: http://arxiv.org/abs/2102.11830v1
- Date: Tue, 23 Feb 2021 18:04:00 GMT
- Title: Solving high-dimensional parabolic PDEs using the tensor train format
- Authors: Lorenz Richter, Leon Sallandt, Nikolas Nüsken
- Abstract summary: We argue that tensor trains provide an appealing approximation framework for parabolic PDEs.
We develop novel iterative schemes, involving either explicit and fast or implicit and accurate updates.
- Score: 1.1470070927586016
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: High-dimensional partial differential equations (PDEs) are ubiquitous in
economics, science and engineering. However, their numerical treatment poses
formidable challenges since traditional grid-based methods tend to be
frustrated by the curse of dimensionality. In this paper, we argue that tensor
trains provide an appealing approximation framework for parabolic PDEs: the
combination of reformulations in terms of backward stochastic differential
equations and regression-type methods in the tensor format holds the promise of
leveraging latent low-rank structures enabling both compression and efficient
computation. Following this paradigm, we develop novel iterative schemes,
involving either explicit and fast or implicit and accurate updates. We
demonstrate in a number of examples that our methods achieve a favorable
trade-off between accuracy and computational efficiency in comparison with
state-of-the-art neural network based approaches.
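The low-rank compression at the heart of this paradigm can be sketched outside any PDE context. The following minimal example (illustrative only, not the authors' solver; function names are made up) factors a multi-way array into tensor-train cores via sequential truncated SVDs, so that storage scales linearly in the number of dimensions rather than exponentially:

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Factor a d-way array into tensor-train cores via sequential truncated SVDs (TT-SVD)."""
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(s))
        cores.append(u[:, :r_new].reshape(rank, dims[k], r_new))  # 3-way core
        mat = (np.diag(s[:r_new]) @ vt[:r_new]).reshape(r_new * dims[k + 1], -1)
        rank = r_new
    cores.append(mat.reshape(rank, dims[-1], 1))  # last core closes the train
    return cores

def tt_reconstruct(cores):
    """Contract the chain of cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

For a d-dimensional array with mode sizes n and TT-ranks bounded by r, the cores hold O(d n r^2) numbers instead of n^d, which is what makes regression in the TT format tractable in high dimensions.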
Related papers
- Partial-differential-algebraic equations of nonlinear dynamics by Physics-Informed Neural-Network: (I) Operator splitting and framework assessment [51.3422222472898]
Several forms for constructing novel physics-informed-networks (PINN) for the solution of partial-differential-algebraic equations are proposed.
Among these novel methods are the PDE forms, which evolve from the lower-level form with fewer unknown dependent variables to a higher-level form with more dependent variables.
arXiv Detail & Related papers (2024-07-13T22:48:17Z)
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
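NWoS builds on the classical walk-on-spheres estimator, which can be sketched as follows for Laplace's equation with Dirichlet data on the unit disk (a minimal illustrative setup, not the neural solver from the paper):

```python
import numpy as np

def walk_on_spheres(x0, boundary_fn, n_walks=4000, eps=1e-3, rng=None):
    """Monte Carlo walk-on-spheres estimate of u(x0) for Laplace's equation
    on the unit disk, with Dirichlet data boundary_fn on the circle."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        while True:
            r = 1.0 - np.linalg.norm(x)  # distance from x to the boundary circle
            if r < eps:
                total += boundary_fn(x / np.linalg.norm(x))  # project to boundary
                break
            # jump uniformly to the sphere of radius r around x
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x = x + r * np.array([np.cos(theta), np.sin(theta)])
    return total / n_walks
```

Each walk needs only O(log(1/eps)) steps, and the estimate at a single point requires no grid, which is the property the neural variant exploits.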
arXiv Detail & Related papers (2024-06-05T17:59:22Z)
- From continuous-time formulations to discretization schemes: tensor trains and robust regression for BSDEs and parabolic PDEs [3.785123406103385]
We argue that tensor trains provide an appealing framework for parabolic PDEs.
We develop iterative schemes, which differ in terms of computational efficiency and robustness.
We demonstrate both theoretically and numerically that our methods can achieve a favorable trade-off between accuracy and computational efficiency.
arXiv Detail & Related papers (2023-07-28T11:44:06Z)
- Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST).
IST is a recently proposed and highly effective technique for solving the aforementioned problems.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- Learning Subgrid-scale Models with Neural Ordinary Differential Equations [0.39160947065896795]
We propose a new approach to learning the subgrid-scale model when simulating partial differential equations (PDEs).
In this approach, neural networks are used to learn the coarse- to fine-grid map, which can be viewed as a subgrid-scale parameterization.
Our method inherits the advantages of NODEs and can be used to parameterize subgrid scales, approximate coupling operators, and improve the efficiency of low-order solvers.
arXiv Detail & Related papers (2022-12-20T02:45:09Z)
- Robust SDE-Based Variational Formulations for Solving Linear PDEs via Deep Learning [6.1678491628787455]
The combination of Monte Carlo methods and deep learning has led to efficient algorithms for solving partial differential equations (PDEs) in high dimensions.
Related learning problems are often stated as variational formulations based on associated stochastic differential equations (SDEs).
It is therefore crucial to rely on adequate gradient estimators that exhibit low variance in order to reach convergence accurately and swiftly.
arXiv Detail & Related papers (2022-06-21T17:59:39Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by a Gaussian-smoothed model and show that, via Stein's identity, the second-order derivatives can be calculated efficiently without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
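The derivative trick referenced above can be illustrated with a plain Monte Carlo sketch (an assumed setup, not the paper's implementation): by Stein's identity, gradients and Hessians of a Gaussian-smoothed function are expectations of function values alone, so no back-propagation is needed.

```python
import numpy as np

def stein_derivatives(f, x, sigma=0.2, n_samples=300_000, rng=None):
    """Gradient and Hessian of the Gaussian-smoothed f_sigma(x) = E[f(x + sigma*eps)]
    via Stein's identity. f(x) is subtracted as a control variate; since
    E[eps] = 0 and E[eps eps^T - I] = 0, the estimators remain unbiased."""
    rng = rng or np.random.default_rng(0)
    d = x.shape[0]
    eps = rng.standard_normal((n_samples, d))
    vals = f(x + sigma * eps) - f(x)               # centered evaluations, shape (n,)
    grad = (eps * vals[:, None]).mean(axis=0) / sigma
    outer = eps[:, :, None] * eps[:, None, :] - np.eye(d)
    hess = (outer * vals[:, None, None]).mean(axis=0) / sigma**2
    return grad, hess
```

Note that for a quadratic test function the smoothed derivatives coincide with the exact ones, which makes the estimator easy to check; only zeroth-order evaluations of f are used.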
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations [57.15855198512551]
We propose a novel score-based generative model for graphs with a continuous-time framework.
We show that our method is able to generate molecules that lie close to the training distribution yet do not violate the chemical valency rule.
arXiv Detail & Related papers (2022-02-05T08:21:04Z)
- Approximate Latent Force Model Inference [1.3927943269211591]
Latent force models offer an interpretable alternative to purely data-driven tools for inference in dynamical systems.
We show that a neural operator approach can scale our model to thousands of instances, enabling fast, distributed computation.
arXiv Detail & Related papers (2021-09-24T09:55:00Z)
- Hybrid FEM-NN models: Combining artificial neural networks with the finite element method [0.0]
We present a methodology combining neural networks with physical principle constraints in the form of partial differential equations (PDEs).
The approach allows training neural networks while respecting the PDEs as a strong constraint in the optimisation, as opposed to making them part of the loss function.
We demonstrate the method on a complex cardiac cell model problem using deep neural networks.
arXiv Detail & Related papers (2021-01-04T13:36:06Z)
- Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple literature-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.