Deep neural networks with ReLU, leaky ReLU, and softplus activation provably overcome the curse of dimensionality for space-time solutions of semilinear partial differential equations
- URL: http://arxiv.org/abs/2406.10876v1
- Date: Sun, 16 Jun 2024 09:59:29 GMT
- Title: Deep neural networks with ReLU, leaky ReLU, and softplus activation provably overcome the curse of dimensionality for space-time solutions of semilinear partial differential equations
- Authors: Julia Ackermann, Arnulf Jentzen, Benno Kuckuck, Joshua Lee Padgett
- Abstract summary: It is a challenging topic in applied mathematics to solve high-dimensional nonlinear partial differential equations (PDEs).
Deep learning (DL) based methods for PDEs, in which deep neural networks (DNNs) are used to approximate PDE solutions, are presented.
- Score: 3.3123773366516645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is a challenging topic in applied mathematics to solve high-dimensional nonlinear partial differential equations (PDEs). Standard approximation methods for nonlinear PDEs suffer under the curse of dimensionality (COD) in the sense that the number of computational operations of the approximation method grows at least exponentially in the PDE dimension, and with such methods it is essentially impossible to approximately solve high-dimensional PDEs even when the fastest currently available computers are used. However, in recent years great progress has been made in this area of research through suitable deep learning (DL) based methods for PDEs in which deep neural networks (DNNs) are used to approximate solutions of PDEs. Despite the remarkable success of such DL methods in simulations, it remains a fundamental open problem of research to prove (or disprove) that such methods can overcome the COD in the approximation of PDEs. Nevertheless, there are nowadays several partial error analysis results for DL methods for high-dimensional nonlinear PDEs in the literature which prove that DNNs can overcome the COD in the sense that the number of parameters of the approximating DNN grows at most polynomially in both the reciprocal of the prescribed approximation accuracy $\varepsilon>0$ and the PDE dimension $d\in\mathbb{N}$. In the main result of this article we prove that for all $T,p\in(0,\infty)$ it holds that solutions $u_d\colon[0,T]\times\mathbb{R}^d\to\mathbb{R}$, $d\in\mathbb{N}$, of semilinear heat equations with Lipschitz continuous nonlinearities can be approximated in the $L^p$-sense on space-time regions without the COD by DNNs with the rectified linear unit (ReLU), the leaky ReLU, or the softplus activation function. In previous articles similar results have been established not for space-time regions but for the solutions $u_d(T,\cdot)$, $d\in\mathbb{N}$, at the terminal time $T$.
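For orientation, the networks in question are plain feedforward DNNs mapping a space-time point $(t,x)\in[0,T]\times\mathbb{R}^d$ to an approximation of $u_d(t,x)$, where $u_d$ solves a semilinear heat equation of the form $\frac{\partial u_d}{\partial t}=\Delta_x u_d+f(u_d)$ with Lipschitz continuous $f$. The following Python sketch (illustrative only; the class and hyperparameters are ours, not the explicit construction from the paper's proof) shows such a network with each of the three covered activations; note that its parameter count grows only linearly in $d$:

```python
# Minimal sketch of a space-time DNN u_theta(t, x) ~ u_d(t, x); illustrative
# only, not the explicit construction used in the paper's proof.
import torch
import torch.nn as nn

ACTIVATIONS = {
    "relu": nn.ReLU(),                    # rectified linear unit
    "leaky_relu": nn.LeakyReLU(0.01),     # leaky ReLU
    "softplus": nn.Softplus(),            # smooth ReLU surrogate
}

class SpaceTimeDNN(nn.Module):
    def __init__(self, d, width=128, depth=4, activation="relu"):
        super().__init__()
        act = ACTIVATIONS[activation]
        layers = [nn.Linear(d + 1, width), act]   # input (t, x) in R^(d+1)
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), act]
        layers += [nn.Linear(width, 1)]            # scalar output u(t, x)
        self.net = nn.Sequential(*layers)

    def forward(self, t, x):
        return self.net(torch.cat([t, x], dim=-1))

model = SpaceTimeDNN(d=100, activation="softplus")
t = torch.rand(32, 1)       # times in [0, 1]
x = torch.randn(32, 100)    # spatial points in R^100
print(model(t, x).shape)    # torch.Size([32, 1])
```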
Related papers
- Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver), capable of solving a wide range of PDEs.
Our key finding is that a PDE solution is fundamentally controlled by a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z)
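Since the Unisolver entry above turns on conditioning a solver network on embedded PDE components, here is a loose, hypothetical sketch of one standard conditioning mechanism (FiLM-style modulation); the names are ours and the actual Unisolver architecture differs:

```python
# Hypothetical PDE-conditional block: hidden features are modulated by an
# embedding of PDE components (coefficients, boundary conditions, ...).
import torch
import torch.nn as nn

class ConditionalBlock(nn.Module):
    def __init__(self, dim, cond_dim):
        super().__init__()
        self.body = nn.Linear(dim, dim)
        self.film = nn.Linear(cond_dim, 2 * dim)   # predicts scale and shift

    def forward(self, h, cond):
        scale, shift = self.film(cond).chunk(2, dim=-1)
        return torch.relu(self.body(h)) * (1 + scale) + shift

block = ConditionalBlock(dim=64, cond_dim=16)
h = torch.randn(8, 64)         # hidden states of the solution field
cond = torch.randn(8, 16)      # embedded PDE components
print(block(h, cond).shape)    # torch.Size([8, 64])
```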
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm called region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z)
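The region-optimization idea summarized above, training on neighborhoods of collocation points rather than isolated points, can be caricatured in a few lines; this is a generic sketch under our own naming, not RoPINN's actual algorithm:

```python
# Generic region-optimization sketch: average the squared PDE residual over
# random samples drawn from a small region around each collocation point.
import torch

def region_loss(residual_fn, points, radius=0.05, n_samples=4):
    losses = []
    for _ in range(n_samples):
        # uniform perturbation inside an L-infinity ball of the given radius
        perturbed = points + radius * (2 * torch.rand_like(points) - 1)
        losses.append(residual_fn(perturbed).pow(2).mean())
    return torch.stack(losses).mean()

residual = lambda x: x.sum(dim=-1) * 0.1   # placeholder PDE residual
points = torch.rand(128, 2)                # collocation points in [0, 1]^2
print(region_loss(residual, points))
```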
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
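A deep equilibrium model computes its output as a fixed point of a weight-tied layer, which matches the stationarity of steady-state PDEs; the toy sketch below shows only this fixed-point structure (the real FNO-DEQ uses Fourier layers and implicit differentiation):

```python
# Schematic deep-equilibrium solve: iterate a weight-tied update to a fixed
# point. Stand-in for the equilibrium idea only, not the FNO-DEQ architecture.
import torch
import torch.nn as nn

class WeightTiedLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin_h = nn.Linear(dim, dim)
        self.lin_x = nn.Linear(dim, dim)

    def forward(self, h, x):
        return torch.tanh(self.lin_h(h) + self.lin_x(x))

def fixed_point(layer, x, iters=50, tol=1e-5):
    h = torch.zeros_like(x)
    for _ in range(iters):
        h_next = layer(h, x)
        if (h_next - h).norm() < tol:
            break
        h = h_next
    return h   # approximate steady-state representation

layer = WeightTiedLayer(32)
x = torch.randn(4, 32)             # encoded PDE forcing/coefficients
print(fixed_point(layer, x).shape) # torch.Size([4, 32])
```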
- Deep neural networks with ReLU, leaky ReLU, and softplus activation provably overcome the curse of dimensionality for Kolmogorov partial differential equations with Lipschitz nonlinearities in the $L^p$-sense [3.3123773366516645]
We show that deep neural networks (DNNs) have the expressive power to approximate PDE solutions without the curse of dimensionality (COD).
It is the key contribution of this work to generalize this result by establishing this statement in the $L^p$-sense with $p\in(0,\infty)$.
arXiv Detail & Related papers (2023-09-24T18:58:18Z)
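Results of the kind summarized above are typically stated as follows; this is the generic template of such COD-free bounds, not the paper's verbatim theorem:

```latex
% Schematic template of a COD-free approximation result: for every
% $p \in (0,\infty)$ there exist constants $c, \kappa \in (0,\infty)$ such
% that for all $d \in \mathbb{N}$ and $\varepsilon \in (0,1]$ there is a DNN
% $\mathcal{N}_{d,\varepsilon}$ satisfying
\[
  \#\mathrm{params}\big(\mathcal{N}_{d,\varepsilon}\big)
    \le c\, d^{\kappa}\, \varepsilon^{-\kappa}
  \qquad \text{and} \qquad
  \left\| u_d - \mathcal{N}_{d,\varepsilon} \right\|_{L^p} \le \varepsilon .
\]
```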
- Tackling the Curse of Dimensionality with Physics-Informed Neural Networks [24.86574584293979]
We develop a new method of scaling up physics-informed neural networks (PINNs) to solve arbitrary high-dimensional PDEs.
We demonstrate in various tests that the proposed method can solve many notoriously hard high-dimensional PDEs.
arXiv Detail & Related papers (2023-07-23T12:18:12Z)
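One way to scale PINN residuals to very high dimension, in the spirit of the entry above, is to subsample coordinates when forming the Laplacian; this is a hedged, generic sketch rather than the paper's published algorithm:

```python
# Hedged sketch of dimension subsampling for a high-dimensional Laplacian:
# sum second derivatives over a random subset of coordinates and rescale,
# giving an unbiased Monte Carlo estimate of the full Laplacian.
import torch

def sampled_laplacian(u, x, n_dims):
    d = x.shape[-1]
    idx = torch.randperm(d)[:n_dims]               # random coordinate subset
    x = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    lap = 0.0
    for i in idx:
        lap = lap + torch.autograd.grad(
            grad[:, i].sum(), x, create_graph=True)[0][:, i]
    return lap * (d / n_dims)                      # rescale for unbiasedness

u = lambda x: (x ** 2).sum(dim=-1)                 # toy function, Laplacian 2d
x = torch.randn(16, 1000)
print(sampled_laplacian(u, x, n_dims=10).mean())   # approx 2000
```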
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM), an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art results and yields a relative gain of 11.5% averaged over seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
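The latent-space idea in the LSM entry, solving in a compact learned coefficient space instead of on the full grid, can be caricatured as encode, mix, decode; this toy stand-in uses plain linear maps and is not the LSM architecture:

```python
# Toy encode-mix-decode stand-in for latent-space PDE solving: project the
# discretized field onto a small latent basis, transform there, and decode.
import torch
import torch.nn as nn

class ToyLatentSpectral(nn.Module):
    def __init__(self, n_points, latent):
        super().__init__()
        self.encode = nn.Linear(n_points, latent)   # project onto latent basis
        self.mix = nn.Linear(latent, latent)        # "solve" in latent space
        self.decode = nn.Linear(latent, n_points)   # back to physical space

    def forward(self, u):
        return self.decode(torch.tanh(self.mix(self.encode(u))))

model = ToyLatentSpectral(n_points=4096, latent=64)
u0 = torch.randn(8, 4096)    # discretized input fields
print(model(u0).shape)       # torch.Size([8, 4096])
```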
- Deep learning approximations for non-local nonlinear PDEs with Neumann boundary conditions [2.449909275410288]
We propose two numerical methods based on machine learning and on Picard iterations, respectively, to approximately solve non-local nonlinear PDEs.
We evaluate the performance of the two methods on five different PDEs arising in physics and biology.
arXiv Detail & Related papers (2022-05-07T15:47:17Z)
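The Picard-iteration component named above is, at heart, a fixed-point scheme $u_{n+1} = u_0 + \int f(u_n)$; the one-dimensional ODE toy below shows only this structure (the multilevel Picard methods used for non-local PDEs are substantially more involved):

```python
# Minimal Picard iteration for u' = f(u), u(0) = u0, via trapezoidal
# quadrature of the fixed-point map u_{n+1}(t) = u0 + int_0^t f(u_n(s)) ds.
import numpy as np

def picard(f, u0, T=1.0, n_steps=1000, n_iters=8):
    t = np.linspace(0.0, T, n_steps)
    u = np.full_like(t, u0)
    for _ in range(n_iters):
        integrand = f(u)
        integral = np.concatenate([[0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))])
        u = u0 + integral           # next Picard iterate
    return t, u

# u' = u, u(0) = 1  =>  u(t) = exp(t); the iterates converge to it.
t, u = picard(lambda u: u, u0=1.0)
print(u[-1])   # close to e = 2.718...
```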
- Neural Q-learning for solving PDEs [0.0]
We develop a new numerical method for solving elliptic-type PDEs by adapting the Q-learning algorithm in reinforcement learning.
Our "Q-PDE" algorithm is mesh-free and therefore has the potential to overcome the curse of dimensionality.
The numerical performance of the Q-PDE algorithm is studied for several elliptic PDEs.
arXiv Detail & Related papers (2022-03-31T15:52:44Z)
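The mesh-free property claimed above is what lets such methods sidestep exponential grid growth: every training step draws fresh random points instead of building a grid. A generic residual-minimization loop follows (the placeholder residual and optimizer choices are ours, not the Q-PDE update rule):

```python
# Generic mesh-free training loop: fresh random points are sampled each step,
# so no grid is built and cost does not scale with any mesh resolution.
# The residual below is a placeholder, not the Q-PDE rule.
import torch
import torch.nn as nn

d = 20                                             # PDE dimension
net = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(256, d, requires_grad=True)    # fresh mesh-free batch
    u = net(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    loss = grad_u.pow(2).sum(dim=-1).mean()        # placeholder residual loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())                                 # final placeholder loss
```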
- Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method which can partially alleviate the large data demands of neural PDE solvers by improving their sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
arXiv Detail & Related papers (2022-02-15T18:43:17Z)
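To see what such symmetry-based augmentation looks like, take the 1D heat equation $u_t = u_{xx}$: if $u$ solves it, so do its space translates and parabolic rescalings, so every labelled sample yields extra valid samples for free. The sketch below applies two of its point symmetries (our toy illustration; the paper derives the full symmetry group systematically):

```python
# Two point symmetries of the 1D heat equation u_t = u_xx, used to turn one
# labelled sample (t, x, u(t, x)) into new samples of other exact solutions.
import numpy as np

def translate(t, x, u, eps):
    # (t, x, u) -> (t, x + eps, u): sample of w(t, x) = u(t, x - eps)
    return t, x + eps, u

def parabolic_scale(t, x, u, lam):
    # (t, x, u) -> (lam^2 t, lam x, u): sample of w(t, x) = u(t/lam^2, x/lam)
    return lam**2 * t, lam * x, u

t, x = np.array([0.5]), np.array([0.3])
u = np.exp(-x**2 / (4 * t)) / np.sqrt(4 * np.pi * t)   # heat-kernel sample
print(translate(t, x, u, eps=0.1))
print(parabolic_scale(t, x, u, lam=2.0))
```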
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
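The "dual" in dNNsolve refers to combining two kinds of basis functions, oscillatory and non-oscillatory, in one network; the sketch below pairs a sine branch with a sigmoid branch (a guess at the flavor, not dNNsolve's exact design):

```python
# Rough dual-network sketch: a periodic (sine) branch and a non-periodic
# (sigmoid) branch are concatenated and combined by a linear head.
import torch
import torch.nn as nn

class DualNet(nn.Module):
    def __init__(self, in_dim, width=64):
        super().__init__()
        self.osc = nn.Linear(in_dim, width)    # oscillatory branch
        self.mono = nn.Linear(in_dim, width)   # non-oscillatory branch
        self.head = nn.Linear(2 * width, 1)

    def forward(self, x):
        feats = torch.cat(
            [torch.sin(self.osc(x)), torch.sigmoid(self.mono(x))], dim=-1)
        return self.head(feats)

net = DualNet(in_dim=3)                  # e.g. (t, x, y), 3 spacetime dims
print(net(torch.randn(10, 3)).shape)     # torch.Size([10, 1])
```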
- Space-time deep neural network approximations for high-dimensional partial differential equations [3.6185342807265415]
Deep learning approximations might have the capacity to overcome the curse of dimensionality.
This article proves for every $a\in\mathbb{R}$, $b\in(a,\infty)$ that solutions of certain Kolmogorov PDEs can be approximated by DNNs without the curse of dimensionality.
arXiv Detail & Related papers (2020-06-03T12:14:56Z)
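For reference, Kolmogorov PDEs of the kind meant here typically take the following generic form (orientation only; the article's precise assumptions on $\mu_d$, $\sigma_d$, and $\varphi_d$ are stated there):

```latex
% Generic Kolmogorov PDE on $[0,T] \times \mathbb{R}^d$ with initial
% condition $\varphi_d$, drift $\mu_d$, and diffusion coefficient $\sigma_d$:
\[
  \frac{\partial u_d}{\partial t}(t,x)
  = \tfrac{1}{2}\,\mathrm{Trace}\!\Big( \sigma_d(x)\,[\sigma_d(x)]^{*}\,
      (\mathrm{Hess}_x u_d)(t,x) \Big)
  + \big\langle \mu_d(x),\, (\nabla_x u_d)(t,x) \big\rangle ,
  \qquad u_d(0,x) = \varphi_d(x).
\]
```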