A Deep Learning approach to Reduced Order Modelling of Parameter
Dependent Partial Differential Equations
- URL: http://arxiv.org/abs/2103.06183v1
- Date: Wed, 10 Mar 2021 17:01:42 GMT
- Title: A Deep Learning approach to Reduced Order Modelling of Parameter
Dependent Partial Differential Equations
- Authors: Nicola R. Franco, Andrea Manzoni, Paolo Zunino
- Abstract summary: We develop a constructive approach based on Deep Neural Networks for the efficient approximation of the parameter-to-solution map.
In particular, we consider parametrized advection-diffusion PDEs, and we test the methodology in the presence of strong transport fields.
- Score: 0.2148535041822524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Within the framework of parameter dependent PDEs, we develop a constructive
approach based on Deep Neural Networks for the efficient approximation of the
parameter-to-solution map. The research is motivated by the limitations and
drawbacks of state-of-the-art algorithms, such as the Reduced Basis method,
when addressing problems that show a slow decay in the Kolmogorov n-width. Our
work is based on the use of deep autoencoders, which we employ for encoding and
decoding a high fidelity approximation of the solution manifold. In order to
fully exploit the approximation capabilities of neural networks, we consider a
nonlinear version of the Kolmogorov n-width over which we base the concept of a
minimal latent dimension. We show that this minimal dimension is intimately
related to the topological properties of the solution manifold, and we provide
some theoretical results with particular emphasis on second order elliptic
PDEs. Finally, we report numerical experiments where we compare the proposed
approach with classical POD-Galerkin reduced order models. In particular, we
consider parametrized advection-diffusion PDEs, and we test the methodology in
the presence of strong transport fields, singular terms and stochastic
coefficients.
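The slow Kolmogorov n-width decay that motivates the paper can be illustrated with a minimal numerical sketch (all names and parameter values below are illustrative assumptions, not taken from the paper): a 1D Gaussian pulse translated by a scalar transport parameter has an intrinsically one-dimensional solution manifold, yet a linear reduced basis (POD via SVD) needs many modes to capture it.

```python
import numpy as np

# Snapshot matrix: a Gaussian pulse translated by a scalar parameter mu.
# The solution manifold {u(x; mu)} has intrinsic dimension one, but its
# best linear n-dimensional approximation converges slowly -- the hallmark
# of slow Kolmogorov n-width decay for transport-dominated problems.
x = np.linspace(0.0, 1.0, 400)
mus = np.linspace(0.2, 0.8, 100)  # samples of the transport parameter
S = np.stack([np.exp(-((x - mu) ** 2) / 2e-3) for mu in mus], axis=1)

# POD: the singular values of the snapshot matrix measure how well an
# n-dimensional linear subspace can capture the snapshot set.
sigma = np.linalg.svd(S, compute_uv=False)
energy = np.cumsum(sigma**2) / np.sum(sigma**2)

# Number of POD modes needed to retain 99.99% of the snapshot energy.
n_modes = int(np.searchsorted(energy, 0.9999) + 1)
print(n_modes)  # many modes, despite a single transport parameter
```

A nonlinear decoder, such as the deep autoencoder proposed in the paper, can in principle reach the minimal latent dimension (here, one) that no low-dimensional linear POD basis attains.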
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
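For context, the classical (non-neural) walk-on-spheres estimator that NWoS builds on can be sketched in a few lines. The sketch below solves the Laplace equation on the unit disk with boundary data taken from a known harmonic function so the answer can be checked; function names and constants are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(p):
    # Boundary data: restriction of the harmonic function u(x, y) = x*y,
    # so the exact interior solution is known for checking.
    return p[0] * p[1]

def walk_on_spheres(x0, eps=1e-3, max_steps=1000):
    """One classical walk-on-spheres sample for the Laplace equation on the
    unit disk: repeatedly jump to a uniform point on the largest circle
    centred at the current position, until within eps of the boundary."""
    p = np.array(x0, dtype=float)
    for _ in range(max_steps):
        r = 1.0 - np.linalg.norm(p)  # distance to the unit circle
        if r < eps:
            break
        theta = rng.uniform(0.0, 2.0 * np.pi)
        p = p + r * np.array([np.cos(theta), np.sin(theta)])
    return g(p / np.linalg.norm(p))  # evaluate at the nearest boundary point

# Monte Carlo average of many walks estimates u(x0) = E[g(X_exit)].
x0 = (0.3, 0.4)
estimate = np.mean([walk_on_spheres(x0) for _ in range(20000)])
print(estimate)  # close to the exact value 0.3 * 0.4 = 0.12
```

Each sample is unbiased up to the eps-shell truncation; the neural variant in the paper replaces this per-point Monte Carlo averaging with a trained network evaluated everywhere at once.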
arXiv Detail & Related papers (2024-06-05T17:59:22Z)
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm as region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z)
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues. First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors. We develop an ODE-based IVP solver which prevents the network from getting ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Multilevel CNNs for Parametric PDEs [0.0]
We combine concepts from multilevel solvers for partial differential equations with neural network based deep learning.
An in-depth theoretical analysis shows that the proposed architecture is able to approximate multigrid V-cycles to arbitrary precision.
We find substantial improvements over state-of-the-art deep learning-based solvers.
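The multigrid V-cycles that this architecture approximates can be grounded with a minimal classical sketch: a depth-one V-cycle (two-grid cycle) for the 1D Poisson problem with weighted-Jacobi smoothing. Everything below (function names, grid sizes, smoothing parameters) is an illustrative textbook construction, not the paper's CNN architecture.

```python
import numpy as np

def apply_A(u, h):
    # Matrix-free 3-point Laplacian (-u'') with homogeneous Dirichlet BCs.
    Au = 2.0 * u
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return Au / h**2

def jacobi(u, f, h, sweeps=3, omega=2.0 / 3.0):
    # Weighted-Jacobi smoothing; D^{-1} = h^2 / 2 for this stencil.
    for _ in range(sweeps):
        u = u + omega * (h**2 / 2.0) * (f - apply_A(u, h))
    return u

def restrict(r):
    # Full-weighting restriction to the coarse grid (every other point).
    return 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(e_c):
    # Linear interpolation back to the fine grid (zero boundary values).
    e = np.zeros(2 * len(e_c) + 1)
    e[1::2] = e_c
    e[2:-1:2] = 0.5 * (e_c[:-1] + e_c[1:])
    e[0], e[-1] = 0.5 * e_c[0], 0.5 * e_c[-1]
    return e

def two_grid(u, f, h):
    u = jacobi(u, f, h)                    # pre-smooth
    r_c = restrict(f - apply_A(u, h))      # restricted residual
    h_c, n_c = 2.0 * h, len(r_c)
    # Solve the coarse-grid problem exactly (small dense system).
    A_c = (np.diag(2.0 * np.ones(n_c)) - np.diag(np.ones(n_c - 1), 1)
           - np.diag(np.ones(n_c - 1), -1)) / h_c**2
    u = u + prolong(np.linalg.solve(A_c, r_c))  # coarse-grid correction
    return jacobi(u, f, h)                 # post-smooth

# Demo: -u'' = pi^2 sin(pi x) on (0, 1), exact solution u = sin(pi x).
n, h = 63, 1.0 / 64.0
x = np.arange(1, n + 1) * h
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(8):
    u = two_grid(u, f, h)
```

A full V-cycle applies this correction recursively instead of solving the coarse problem exactly; the cited paper's claim is that a multilevel CNN can represent such cycles to arbitrary precision.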
arXiv Detail & Related papers (2023-04-01T21:11:05Z)
- Physics and Equality Constrained Artificial Neural Networks: Application to Partial Differential Equations [1.370633147306388]
Physics-informed neural networks (PINNs) have been proposed to learn the solution of partial differential equations (PDEs).
Here, we show that this specific way of formulating the objective function is the source of severe limitations in the PINN approach.
We propose a versatile framework that can tackle both inverse and forward problems.
arXiv Detail & Related papers (2021-09-30T05:55:35Z)
- NTopo: Mesh-free Topology Optimization using Implicit Neural Representations [35.07884509198916]
We present a novel machine learning approach to tackle topology optimization problems.
We use multilayer perceptrons (MLPs) to parameterize both density and displacement fields.
As we show through our experiments, a major benefit of our approach is that it enables self-supervised learning of continuous solution spaces.
arXiv Detail & Related papers (2021-02-22T05:25:22Z)
- Solving high-dimensional Hamilton-Jacobi-Bellman PDEs using neural networks: perspectives from the theory of controlled diffusions and measures on path space [3.1219977244201056]
Building on recent machine learning inspired approaches towards high-dimensional PDEs, we investigate the potential of iterative diffusion techniques.
We develop a principled framework based on divergences between path measures, encompassing various existing methods.
The promise of the developed approach is exemplified by a range of high-dimensional and metastable numerical examples.
arXiv Detail & Related papers (2020-05-11T20:14:02Z)
- Model Reduction and Neural Networks for Parametric PDEs [9.405458160620533]
We develop a framework for data-driven approximation of input-output maps between infinite-dimensional spaces.
The proposed approach is motivated by the recent successes of neural networks and deep learning.
For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology.
arXiv Detail & Related papers (2020-05-07T00:09:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.