Learning to Accelerate Partial Differential Equations via Latent Global
Evolution
- URL: http://arxiv.org/abs/2206.07681v1
- Date: Wed, 15 Jun 2022 17:31:24 GMT
- Title: Learning to Accelerate Partial Differential Equations via Latent Global
Evolution
- Authors: Tailin Wu and Takashi Maruyama and Jure Leskovec
- Abstract summary: Latent Evolution of PDEs (LE-PDE) is a simple, fast and scalable method to accelerate the simulation and inverse optimization of PDEs.
We introduce new learning objectives to effectively learn such latent dynamics to ensure long-term stability.
We demonstrate up to 128x reduction in the dimensions to update, and up to 15x improvement in speed, while achieving competitive accuracy.
- Score: 64.72624347511498
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simulating the time evolution of Partial Differential Equations (PDEs) of
large-scale systems is crucial in many scientific and engineering domains such
as fluid dynamics, weather forecasting and their inverse optimization problems.
However, both classical solvers and recent deep learning-based surrogate models
are typically extremely computationally intensive, because of their local
evolution: they need to update the state of each discretized cell at each time
step during inference. Here we develop Latent Evolution of PDEs (LE-PDE), a
simple, fast and scalable method to accelerate the simulation and inverse
optimization of PDEs. LE-PDE learns a compact, global representation of the
system and efficiently evolves it fully in the latent space with learned latent
evolution models. LE-PDE achieves speed-up by having a much smaller latent
dimension to update during long rollout as compared to updating in the input
space. We introduce new learning objectives to effectively learn such latent
dynamics to ensure long-term stability. We further introduce techniques for
speeding up inverse optimization of boundary conditions for PDEs via
backpropagation through time in latent space, and an annealing technique to
address the non-differentiability and sparse interaction of boundary
conditions. We test our method on a 1D benchmark of nonlinear PDEs, on 2D
Navier-Stokes flow into the turbulent phase, and on inverse optimization of
boundary conditions in 2D Navier-Stokes flow. Compared to state-of-the-art deep
learning-based surrogate models and other strong baselines, we demonstrate up
to 128x reduction in the dimensions to update, and up to 15x improvement in
speed, while achieving competitive accuracy.
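As a concrete illustration of this recipe, here is a minimal PyTorch sketch: encode the full discretized state once, advance a small global latent vector step by step, and decode only when outputs are needed. The module shapes, MLP choices, and names below are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of the LE-PDE idea: encode the full state once, evolve a
# small global latent vector, decode on demand. Sizes are illustrative.
import torch
import torch.nn as nn

class LEPDE(nn.Module):
    def __init__(self, n_cells: int, latent_dim: int = 64):
        super().__init__()
        # q: compress the full discretized state into a global latent vector
        self.encoder = nn.Sequential(
            nn.Linear(n_cells, 256), nn.ELU(), nn.Linear(256, latent_dim))
        # g: advance the latent state one time step, z_{t+1} = g(z_t)
        self.latent_step = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ELU(), nn.Linear(128, latent_dim))
        # h: map the latent state back to the full discretized field
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ELU(), nn.Linear(256, n_cells))

    def rollout(self, u0: torch.Tensor, n_steps: int) -> torch.Tensor:
        z = self.encoder(u0)                # encode once
        preds = []
        for _ in range(n_steps):
            z = self.latent_step(z)         # cheap update in latent space
            preds.append(self.decoder(z))   # decode each step if needed
        return torch.stack(preds, dim=1)    # (batch, n_steps, n_cells)

model = LEPDE(n_cells=8192, latent_dim=64)  # 8192 -> 64: a 128x reduction
u0 = torch.randn(4, 8192)                   # batch of flattened initial states
pred = model.rollout(u0, n_steps=10)
```

Training such a model would combine reconstruction with multi-step latent-consistency objectives so that long rollouts stay stable; the speed-up comes from the latent update being far cheaper than updating every discretized cell.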
Related papers
- LE-PDE++: Mamba for accelerating PDEs Simulations [4.7505178698234625]
The Latent Evolution of PDEs method is designed to address the computational intensity of classical and deep learning-based PDE solvers.
Our method doubles the inference speed compared to the LE-PDE while retaining the same level of parameter efficiency.
arXiv Detail & Related papers (2024-11-04T09:04:11Z)
- PDE-Refiner: Achieving Accurate Long Rollouts with Neural PDE Solvers [40.097474800631]
Time-dependent partial differential equations (PDEs) are ubiquitous in science and engineering.
Deep neural network-based surrogates have gained increasing interest.
arXiv Detail & Related papers (2023-08-10T17:53:05Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly captures both the governing PDE and the material model.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
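As a rough illustration of the split NCLaw advocates, the sketch below hard-codes a discretized 1D momentum balance and learns only the strain-to-stress map; the discretization, step sizes, and names are assumptions for illustration, not the paper's setup.

```python
# Hedged sketch: the 1D momentum balance is hard-coded as a known discretized
# PDE; only the constitutive law (strain -> stress) is a learned network.
import torch
import torch.nn as nn

stress_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def step(u, v, dx=1e-2, dt=1e-4, rho=1.0):
    # strain from displacement via finite differences (known kinematics)
    eps = (u[1:] - u[:-1]) / dx                        # shape (n-1,)
    sigma = stress_net(eps.unsqueeze(-1)).squeeze(-1)  # learned material model
    # known governing PDE: dv/dt = (dsigma/dx) / rho
    f = (sigma[1:] - sigma[:-1]) / dx                  # shape (n-2,)
    v = v.clone()
    v[1:-1] = v[1:-1] + dt * f / rho                   # boundary nodes held fixed
    u = u + dt * v
    return u, v

# Gradients flow through the simulator into stress_net, so the material model
# can be fit end-to-end from observed motions.
u = torch.zeros(64); u[32] = 1e-3                      # small initial bump
v = torch.zeros(64)
u, v = step(u, v)
```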
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the supervision time points and is able to interpolate the solutions to any intermediate time.
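For reference, a minimal sketch of the 1D Fourier-layer building block underlying such an architecture; channel counts and the number of retained modes are illustrative assumptions.

```python
# Minimal 1D spectral convolution layer in the style of a Fourier Neural
# Operator: mix channels on the lowest Fourier modes, discard the rest.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels: int = 32, modes: int = 16):
        super().__init__()
        self.modes = modes
        # learned complex weights for the retained low-frequency modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                    # x: (batch, channels, n_points)
        x_ft = torch.fft.rfft(x)             # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space

layer = SpectralConv1d()
x = torch.randn(4, 32, 128)                  # batch of 1D fields
y = layer(x)                                 # same shape, globally mixed
```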
arXiv Detail & Related papers (2022-07-28T19:59:14Z)
- A scalable deep learning approach for solving high-dimensional dynamic optimal transport [18.67654056717166]
We propose a deep learning based method to solve dynamic optimal transport problems in high-dimensional spaces.
Our method contains three main ingredients: a carefully designed representation of the velocity field, a discretization of the PDE constraint along the characteristics, and the computation of high-dimensional integrals by a Monte Carlo method at each time step.
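A hedged sketch of how these ingredients can fit together: push particles along the velocity field (the characteristics) and estimate the kinetic-energy transport cost by Monte Carlo. The velocity network and all sizes are assumptions, not the paper's implementation.

```python
# Monte Carlo estimate of the dynamic-OT kinetic energy along characteristics.
import torch
import torch.nn as nn

dim, n_particles, n_steps = 10, 4096, 20
dt = 1.0 / n_steps
velocity = nn.Sequential(nn.Linear(dim + 1, 64), nn.Tanh(), nn.Linear(64, dim))

x = torch.randn(n_particles, dim)      # samples from the source density rho_0
cost = 0.0
for k in range(n_steps):
    t = torch.full((n_particles, 1), k * dt)
    v = velocity(torch.cat([x, t], dim=-1))
    # Monte Carlo estimate of the integral of rho_t * |v|^2 at this time
    # slice: an average over particles distributed according to rho_t
    cost = cost + dt * (v ** 2).sum(dim=-1).mean()
    x = x + dt * v                     # follow the characteristics dx/dt = v
# `cost` approximates the dynamic OT objective; a terminal density-matching
# penalty would be added before backpropagating into `velocity`.
```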
arXiv Detail & Related papers (2022-05-16T08:56:05Z)
- Long-time integration of parametric evolution equations with physics-informed DeepONets [0.0]
We introduce an effective framework for learning infinite-dimensional operators that map random initial conditions to associated PDE solutions within a short time interval.
Global long-time predictions across a range of initial conditions can then be obtained by iteratively evaluating the trained model.
This introduces a new approach to temporal domain decomposition that is shown to be effective in performing accurate long-time simulations.
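A minimal sketch of this iterative evaluation, with a plain MLP standing in for the trained physics-informed DeepONet; sizes are illustrative assumptions.

```python
# Temporal domain decomposition: a learned short-interval operator applied
# iteratively, each prediction becoming the next initial condition.
import torch
import torch.nn as nn

short_time_operator = nn.Sequential(      # placeholder for the trained model
    nn.Linear(128, 256), nn.Tanh(), nn.Linear(256, 128))

def long_rollout(u0: torch.Tensor, n_intervals: int) -> torch.Tensor:
    states, u = [u0], u0
    for _ in range(n_intervals):
        u = short_time_operator(u)        # advance one short time interval
        states.append(u)                  # feed prediction back in
    return torch.stack(states)            # trajectory over all intervals

u0 = torch.randn(128)                     # a sampled initial condition
traj = long_rollout(u0, n_intervals=50)   # (51, 128)
```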
arXiv Detail & Related papers (2021-06-09T20:46:17Z)
- Speeding up Computational Morphogenesis with Online Neural Synthetic Gradients [51.42959998304931]
A wide range of modern science and engineering applications are formulated as optimization problems with a system of partial differential equations (PDEs) as constraints.
These PDE-constrained optimization problems are typically solved in a standard discretize-then-optimize approach.
We propose a general framework to speed up PDE-constrained optimization using online neural synthetic gradients (ONSG) with a novel two-scale optimization scheme.
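One hedged reading of the two-scale idea in code: occasionally compute the expensive exact gradient to refresh a cheap surrogate online, and take most optimization steps with the surrogate's synthetic gradient. The toy objective and every name below are assumptions, not the ONSG implementation.

```python
# Two-scale optimization with an online-trained synthetic-gradient surrogate.
import torch
import torch.nn as nn

def exact_gradient(theta):                 # placeholder for the PDE solve
    return 2.0 * theta                     # e.g. gradient of |theta|^2

theta = torch.randn(100)
surrogate = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 100))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(200):
    if step % 10 == 0:                     # coarse scale: occasional exact solve
        g = exact_gradient(theta)
        loss = ((surrogate(theta) - g) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()  # refresh surrogate online
    else:                                  # fine scale: cheap synthetic gradient
        with torch.no_grad():
            g = surrogate(theta)
    theta = theta - 1e-2 * g.detach()      # design update
```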
arXiv Detail & Related papers (2021-04-25T22:43:51Z)
- Neural Ordinary Differential Equations for Data-Driven Reduced Order Modeling of Environmental Hydrodynamics [4.547988283172179]
We explore the use of Neural Ordinary Differential Equations for fluid flow simulation.
Test problems we consider include incompressible flow around a cylinder and real-world applications of shallow water hydrodynamics in riverine and estuarine systems.
Our findings indicate that Neural ODEs provide an elegant framework for stable and accurate evolution of latent-space dynamics, with promising potential for extrapolatory predictions.
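A minimal sketch of this reduced-order recipe: project full-order snapshots onto a low-dimensional basis and let a Neural ODE (via the torchdiffeq package) evolve the latent state. The random orthogonal projection stands in for a POD basis; all sizes are assumptions.

```python
# Latent-space dynamics with a Neural ODE for reduced-order modeling.
import torch
import torch.nn as nn
from torchdiffeq import odeint

full_dim, latent_dim = 2048, 16
proj = torch.linalg.qr(torch.randn(full_dim, latent_dim)).Q  # stand-in for POD

class LatentDynamics(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                 nn.Linear(64, latent_dim))
    def forward(self, t, z):               # dz/dt = f(z); t required by odeint
        return self.net(z)

f = LatentDynamics()
u0 = torch.randn(full_dim)                 # a full-order snapshot
z0 = proj.T @ u0                           # encode to latent space
t = torch.linspace(0.0, 1.0, 50)
z_traj = odeint(f, z0, t)                  # (50, latent_dim) latent trajectory
u_traj = z_traj @ proj.T                   # decode back to the full field
```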
arXiv Detail & Related papers (2021-04-22T19:20:47Z)
- DiffPD: Differentiable Projective Dynamics with Contact [65.88720481593118]
We present DiffPD, an efficient differentiable soft-body simulator with implicit time integration.
We evaluate the performance of DiffPD and observe a speedup of 4-19 times compared to the standard Newton's method in various applications.
arXiv Detail & Related papers (2021-01-15T00:13:33Z)
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, the Wasserstein gradient flow is a smoother, near-optimal numerical scheme for approximating real data densities.
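For context, the classical correspondence this builds on (stated from standard theory, not reproduced from the paper): the Fokker-Planck equation is the Wasserstein-2 gradient flow of the relative entropy to the EBM's target density p proportional to exp(-E).

```latex
% Fokker-Planck dynamics as the Wasserstein-2 gradient flow of the relative
% entropy KL(rho_t || p), with EBM target density p proportional to exp(-E).
\[
  \partial_t \rho_t
    = \nabla \cdot \Big( \rho_t \, \nabla \log \frac{\rho_t}{p} \Big)
    = \Delta \rho_t - \nabla \cdot \big( \rho_t \, \nabla \log p \big),
  \qquad p \propto e^{-E}.
\]
```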
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.