Learning Neural PDE Solvers with Parameter-Guided Channel Attention
- URL: http://arxiv.org/abs/2304.14118v2
- Date: Fri, 21 Jul 2023 11:36:40 GMT
- Title: Learning Neural PDE Solvers with Parameter-Guided Channel Attention
- Authors: Makoto Takamoto, Francesco Alesiani, and Mathias Niepert
- Abstract summary: In application domains such as weather forecasting, molecular dynamics, and inverse design, ML-based surrogate models are increasingly used.
We propose a Channel Attention Embeddings (CAPE) component for neural surrogate models and a simple yet effective curriculum learning strategy.
The CAPE module can be combined with neural PDE solvers allowing them to adapt to unseen PDE parameters.
- Score: 17.004380150146268
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scientific Machine Learning (SciML) is concerned with the development of
learned emulators of physical systems governed by partial differential
equations (PDE). In application domains such as weather forecasting, molecular
dynamics, and inverse design, ML-based surrogate models are increasingly used
to augment or replace inefficient and often non-differentiable numerical
simulation algorithms. While a number of ML-based methods for approximating the
solutions of PDEs have been proposed in recent years, they typically do not
adapt to the parameters of the PDEs, making it difficult to generalize to PDE
parameters not seen during training. We propose a Channel Attention mechanism
guided by PDE Parameter Embeddings (CAPE) component for neural surrogate models
and a simple yet effective curriculum learning strategy. The CAPE module can be
combined with neural PDE solvers allowing them to adapt to unseen PDE
parameters. The curriculum learning strategy provides a seamless transition
between teacher-forcing and fully auto-regressive training. We compare CAPE in
conjunction with the curriculum learning strategy using a popular PDE benchmark
and obtain consistent and significant improvements over the baseline models.
The experiments also show several advantages of CAPE, such as its improved
ability to generalize to unseen PDE parameters without a large increase in
inference time or parameter count.
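The abstract describes two mechanisms: channel attention gated by an embedding of the PDE parameters, and a curriculum that moves from teacher-forcing to fully auto-regressive training. A minimal NumPy sketch of both ideas follows; the function names, weight shapes, and the linear annealing schedule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cape_channel_attention(features, pde_params, w_embed, w_gate):
    """Hypothetical CAPE-style gating: rescale feature channels with
    weights derived from the PDE parameters.

    features:   (C, H, W) feature map from a neural PDE solver layer
    pde_params: (P,) vector of PDE coefficients (e.g. diffusion, advection)
    w_embed:    (P, D) projection of the parameters to an embedding
    w_gate:     (D, C) projection of the embedding to per-channel gates
    """
    embedding = np.tanh(pde_params @ w_embed)             # (D,) parameter embedding
    gates = 1.0 / (1.0 + np.exp(-(embedding @ w_gate)))   # (C,) sigmoid gates in (0, 1)
    return features * gates[:, None, None]                # per-channel rescaling

def teacher_forcing_prob(epoch, total_epochs):
    """Illustrative curriculum: linearly anneal the probability of feeding
    ground-truth states (teacher forcing, 1.0) down to fully
    auto-regressive rollouts (0.0) over the course of training."""
    return max(0.0, 1.0 - epoch / total_epochs)
```

At each training step one would draw a Bernoulli sample with probability `teacher_forcing_prob(epoch, total_epochs)` to decide whether the next input is the ground-truth state or the model's own prediction, giving the "seamless transition" the abstract mentions.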
Related papers
- Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver) capable of solving a wide scope of PDEs.
Our key finding is that a PDE solution is fundamentally under the control of a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z)
- Masked Autoencoders are PDE Learners [7.136205674624813]
Masked pretraining can consolidate heterogeneous physics to learn latent representations and perform latent PDE arithmetic.
Neural solvers on learned latent representations can improve time-stepping and super-resolution performance across a variety of coefficients, discretizations, or boundary conditions.
arXiv Detail & Related papers (2024-03-26T14:17:01Z)
- Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models [109.06052781040916]
We introduce a technique to enhance the inference efficiency of parameter-shared language models.
We also propose a simple pre-training technique that leads to fully or partially shared models.
Results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs.
arXiv Detail & Related papers (2023-10-19T15:13:58Z)
- Self-Supervised Learning with Lie Symmetries for Partial Differential Equations [25.584036829191902]
We learn general-purpose representations of PDEs by implementing joint embedding methods for self-supervised learning (SSL)
Our representation outperforms baseline approaches to invariant tasks, such as regressing the coefficients of a PDE, while also improving the time-stepping performance of neural solvers.
We hope that our proposed methodology will prove useful in the eventual development of general-purpose foundation models for PDEs.
arXiv Detail & Related papers (2023-07-11T16:52:22Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- Learning to Accelerate Partial Differential Equations via Latent Global Evolution [64.72624347511498]
Latent Evolution of PDEs (LE-PDE) is a simple, fast and scalable method to accelerate the simulation and inverse optimization of PDEs.
We introduce new learning objectives to effectively learn such latent dynamics to ensure long-term stability.
We demonstrate up to 128x reduction in the dimensions to update, and up to 15x improvement in speed, while achieving competitive accuracy.
arXiv Detail & Related papers (2022-06-15T17:31:24Z)
- Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method that partially alleviates the data requirements of neural PDE solvers by improving their sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
arXiv Detail & Related papers (2022-02-15T18:43:17Z)
- A composable autoencoder-based iterative algorithm for accelerating numerical simulations [0.0]
CoAE-MLSim is an unsupervised, lower-dimensional, local method that is motivated from key ideas used in commercial PDE solvers.
It is tested for a variety of complex engineering cases to demonstrate its computational speed, accuracy, scalability, and generalization across different PDE conditions.
arXiv Detail & Related papers (2021-10-07T20:22:37Z)
- Adversarial Multi-task Learning Enhanced Physics-informed Neural Networks for Solving Partial Differential Equations [9.823102211212582]
We introduce the novel approach of employing multi-task learning techniques, the uncertainty-weighting loss and the gradients surgery, in the context of learning PDE solutions.
In the experiments, our proposed methods are found to be effective and reduce the error on the unseen data points as compared to the previous approaches.
arXiv Detail & Related papers (2021-04-29T13:17:46Z)
- Neural-PDE: A RNN based neural network for solving time dependent PDEs [6.560798708375526]
Partial differential equations (PDEs) play a crucial role in studying a vast number of problems in science and engineering.
We propose a sequence deep learning framework called Neural-PDE, which automatically learns the governing rules of any time-dependent PDE system.
In our experiments, Neural-PDE can efficiently extract the dynamics within 20 epochs of training and produces accurate predictions.
arXiv Detail & Related papers (2020-09-08T15:46:00Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations: physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.