Multi-scale Time-stepping of Partial Differential Equations with
Transformers
- URL: http://arxiv.org/abs/2311.02225v1
- Date: Fri, 3 Nov 2023 20:26:43 GMT
- Title: Multi-scale Time-stepping of Partial Differential Equations with
Transformers
- Authors: AmirPouya Hemmasian, Amir Barati Farimani
- Abstract summary: We develop fast surrogates for Partial Differential Equations (PDEs). Our model achieves similar or better results in predicting the time evolution of the Navier-Stokes equations.
- Score: 8.430481660019451
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developing fast surrogates for Partial Differential Equations (PDEs) will
accelerate design and optimization in almost all scientific and engineering
applications. Neural networks have received ever-increasing attention and have
demonstrated remarkable success in the computational modeling of PDEs; however,
their prediction accuracy is not yet at the level required for full deployment. In this work,
we utilize the transformer architecture, the backbone of numerous
state-of-the-art AI models, to learn the dynamics of physical systems as the
mixing of spatial patterns learned by a convolutional autoencoder. Moreover, we
incorporate the idea of multi-scale hierarchical time-stepping to increase the
prediction speed and decrease accumulated error over time. Our model achieves
similar or better results in predicting the time-evolution of Navier-Stokes
equations compared to the powerful Fourier Neural Operator (FNO) and two
transformer-based neural operators, OFormer and Galerkin Transformer.
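As a rough illustration of the approach, here is a minimal PyTorch sketch (all module sizes, the token layout, and the step-size hierarchy are our own assumptions, not the authors' code): a convolutional autoencoder compresses each snapshot into latent tokens, a transformer advances the tokens in time, and a hierarchical rollout covers a long horizon with the coarsest trained stepper first so that fewer network calls accumulate error.
```python
# Illustrative sketch only: autoencoder + transformer + hierarchical stepping.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, channels=1, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(16, latent_dim, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 16, 4, stride=2, padding=1), nn.GELU(),
            nn.ConvTranspose2d(16, channels, 4, stride=2, padding=1),
        )

class LatentStepper(nn.Module):
    """Transformer that maps latent tokens at time t to tokens at t + dt."""
    def __init__(self, latent_dim=64, heads=4, layers=4):
        super().__init__()
        block = nn.TransformerEncoderLayer(latent_dim, heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(block, layers)

    def forward(self, z):                # z: (batch, tokens, latent_dim)
        return self.transformer(z)

def rollout(ae, steppers, u0, n_steps):
    """Hierarchical time-stepping: cover n_steps with the coarsest stepper
    first, then finish the remainder with progressively finer ones.
    `steppers` maps a step size (in fine steps) to a trained LatentStepper."""
    z = ae.encoder(u0).flatten(2).transpose(1, 2)    # to (B, tokens, dim)
    remaining = n_steps
    for size in sorted(steppers, reverse=True):      # e.g. [16, 4, 1]
        while remaining >= size:
            z = steppers[size](z)
            remaining -= size
    b, t, d = z.shape
    side = int(t ** 0.5)
    return ae.decoder(z.transpose(1, 2).reshape(b, d, side, side))
```
Training one stepper per step size (e.g., dt, 4dt, 16dt) is what lets the rollout take large jumps first and refine only the remainder with fine steps.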
Related papers
- MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation [48.41289705783405]
We propose a PDE-embedded network with multiscale time stepping (MultiPDENet).
In particular, we design a convolutional filter based on the structure of finite-difference stencils, with only a small number of parameters to optimize.
A Physics Block with a 4th-order Runge-Kutta integrator at the fine time scale embeds the structure of the PDE to guide the prediction.
arXiv Detail & Related papers (2025-01-27T12:15:51Z)
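The two ingredients named in this summary can be sketched in a few lines, under simple assumptions of our own: the kernel below is a fixed 5-point Laplacian stencil and the right-hand side is plain diffusion, whereas MultiPDENet learns the filter weights and embeds them in a larger network.
```python
# Hedged sketch: finite-difference stencil as a convolution + an RK4 step.
import torch
import torch.nn.functional as F

# 5-point Laplacian stencil as a fixed convolution kernel
LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def rhs(u, nu=0.01, dx=1.0):
    """Illustrative PDE right-hand side: du/dt = nu * Laplacian(u)."""
    return nu * F.conv2d(F.pad(u, (1, 1, 1, 1), mode='circular'),
                         LAPLACIAN) / dx**2

def rk4_step(u, dt):
    """Classic 4th-order Runge-Kutta update at the fine time scale."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```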
- Koopman Theory-Inspired Method for Learning Time Advancement Operators in Unstable Flame Front Evolution [0.2812395851874055]
This study introduces Koopman-inspired Fourier Neural Operators (kFNO) and Convolutional Neural Networks (kCNN) to learn solution advancement operators for flame front instabilities.
By transforming data into a high-dimensional latent space, these models achieve more accurate multi-step predictions compared to traditional methods.
arXiv Detail & Related papers (2024-12-11T14:47:19Z)
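A minimal sketch of the Koopman-inspired recipe this summary describes, under our own assumptions (a plain MLP encoder and a dense linear operator; the paper uses Fourier and convolutional architectures): lift the state to a high-dimensional latent space, advance it with a learned linear map, and decode each step.
```python
# Illustrative sketch of Koopman-style latent time advancement.
import torch
import torch.nn as nn

class KoopmanAdvancer(nn.Module):
    def __init__(self, state_dim=128, latent_dim=512):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(state_dim, latent_dim), nn.GELU(),
                                    nn.Linear(latent_dim, latent_dim))
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)  # learned linear operator
        self.decode = nn.Linear(latent_dim, state_dim)

    def forward(self, u, n_steps=1):
        """Multi-step prediction by repeated application of K in latent space."""
        z = self.encode(u)
        outs = []
        for _ in range(n_steps):
            z = self.K(z)
            outs.append(self.decode(z))
        return torch.stack(outs, dim=1)   # (batch, n_steps, state_dim)
```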
- Liquid Fourier Latent Dynamics Networks for fast GPU-based numerical simulations in computational cardiology [0.0]
We propose an extension of Latent Dynamics Networks (LDNets) to create parameterized space-time surrogate models for multiscale and multiphysics sets of highly nonlinear differential equations on complex geometries.
LFLDNets employ a neurologically-inspired, sparse liquid neural network for temporal dynamics, relaxing the requirement of a numerical solver for time advancement and leading to superior performance in terms of parameters, accuracy, efficiency and learned trajectories.
arXiv Detail & Related papers (2024-08-19T09:14:25Z)
- Accelerating Phase Field Simulations Through a Hybrid Adaptive Fourier Neural Operator with U-Net Backbone [0.7329200485567827]
We propose U-Shaped Adaptive Fourier Neural Operators (U-AFNO), a machine learning (ML) model inspired by recent advances in neural operator learning.
We use U-AFNOs to learn the dynamics mapping the field at the current time step to a later time step.
Our model reproduces the key microstructure statistics and quantities of interest (QoIs) with a level of accuracy on par with the high-fidelity numerical solver.
arXiv Detail & Related papers (2024-06-24T20:13:23Z)
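The Fourier-mixing idea underlying (A)FNO-style layers, which U-AFNO embeds in a U-Net backbone, can be sketched as follows; the mode count, channel width, and the low-mode-only truncation are illustrative assumptions rather than the paper's exact layer.
```python
# Illustrative sketch of a spectral-mixing layer: FFT, weight the lowest
# modes with learned complex weights, inverse FFT.
import torch
import torch.nn as nn

class SpectralLayer2d(nn.Module):
    def __init__(self, channels=32, modes=12):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes, dtype=torch.cfloat))

    def forward(self, u):                       # u: (batch, channels, H, W)
        u_hat = torch.fft.rfft2(u)              # to Fourier space
        out = torch.zeros_like(u_hat)
        m = self.modes
        # mix only the lowest m x m modes with learned complex weights
        out[:, :, :m, :m] = torch.einsum('bixy,ioxy->boxy',
                                         u_hat[:, :, :m, :m], self.weight)
        return torch.fft.irfft2(out, s=u.shape[-2:])   # back to physical space
```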
- Equivariant Graph Neural Operator for Modeling 3D Dynamics [148.98826858078556]
We propose the Equivariant Graph Neural Operator (EGNO) to directly model dynamics as trajectories instead of just next-step predictions.
EGNO explicitly learns the temporal evolution of 3D dynamics: the dynamics are formulated as a function over time, and neural operators are learned to approximate it.
Comprehensive experiments in multiple domains, including particle simulations, human motion capture, and molecular dynamics, demonstrate the significantly superior performance of EGNO against existing methods.
arXiv Detail & Related papers (2024-01-19T21:50:32Z)
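A minimal sketch of the trajectory-as-output idea described above: query the model with a set of time offsets and decode all of them in one pass, rather than iterating next-step predictions. The time embedding and MLP below are our own assumptions; EGNO additionally enforces equivariance over 3D graphs.
```python
# Illustrative sketch: one forward pass returns states at several time offsets.
import torch
import torch.nn as nn

class TrajectoryOperator(nn.Module):
    def __init__(self, state_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + 1, hidden), nn.GELU(),
                                 nn.Linear(hidden, state_dim))

    def forward(self, u0, times):
        """u0: (batch, state_dim); times: (n_times,) -> (batch, n_times, state_dim)."""
        b, n = u0.shape[0], times.shape[0]
        u = u0.unsqueeze(1).expand(b, n, -1)              # repeat state per query time
        t = times.view(1, n, 1).expand(b, n, 1)           # broadcast time offsets
        return u + self.net(torch.cat([u, t], dim=-1))    # residual trajectory decode
```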
- Neural Operators for Accelerating Scientific Simulations and Design [85.89660065887956]
Neural Operators present a principled AI framework for learning mappings between functions defined on continuous domains.
Neural Operators can augment or even replace existing simulators in many applications, such as computational fluid dynamics, weather forecasting, and material modeling.
arXiv Detail & Related papers (2023-09-27T00:12:07Z)
- Towards Multi-spatiotemporal-scale Generalized PDE Modeling [4.924631198058705]
We compare various FNO- and U-Net-like approaches on fluid mechanics problems in both vorticity-stream and velocity function form.
We show promising results on generalization to different PDE parameters and time-scales with a single surrogate model.
arXiv Detail & Related papers (2022-09-30T17:40:05Z)
- On Fast Simulation of Dynamical System with Neural Vector Enhanced Numerical Solver [59.13397937903832]
We introduce a deep learning-based corrector called Neural Vector (NeurVec).
NeurVec can compensate for integration errors and enable larger time step sizes in simulations.
Our experiments on a variety of complex dynamical system benchmarks demonstrate that NeurVec exhibits remarkable generalization capability.
arXiv Detail & Related papers (2022-08-07T09:02:18Z)
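The corrector idea can be sketched in a few lines, under our own assumptions (a plain MLP corrector and an Euler base integrator; the paper's architecture differs in details): take a cheap explicit step with a large dt and add a learned term that compensates for the resulting integration error.
```python
# Illustrative sketch of a learned integration-error corrector.
import torch
import torch.nn as nn

class NeurVecStep(nn.Module):
    def __init__(self, f, state_dim, hidden=512):
        super().__init__()
        self.f = f                              # known ODE right-hand side du/dt = f(u)
        self.corrector = nn.Sequential(nn.Linear(state_dim, hidden), nn.GELU(),
                                       nn.Linear(hidden, state_dim))

    def forward(self, u, dt):
        # coarse Euler step + learned compensation for the large-step error
        return u + dt * self.f(u) + self.corrector(u)
```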
- Learning to Accelerate Partial Differential Equations via Latent Global Evolution [64.72624347511498]
Latent Evolution of PDEs (LE-PDE) is a simple, fast and scalable method to accelerate the simulation and inverse optimization of PDEs.
We introduce new learning objectives to effectively learn such latent dynamics and ensure long-term stability.
We demonstrate up to a 128x reduction in the dimensions to update and up to a 15x improvement in speed, while achieving competitive accuracy.
arXiv Detail & Related papers (2022-06-15T17:31:24Z)
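A hedged sketch of the kind of multi-step latent objective this summary alludes to, with assumed loss weights and horizon: evolve entirely in latent space and penalize both decoded states and latent states against encodings of the ground truth, which discourages long-horizon drift.
```python
# Illustrative sketch of a multi-step latent-evolution training loss.
import torch
import torch.nn.functional as F

def latent_rollout_loss(encoder, decoder, latent_step, traj, horizon=4, beta=0.5):
    """traj: (batch, horizon+1, ...) ground-truth states u_0 .. u_H."""
    z = encoder(traj[:, 0])
    loss = 0.0
    for k in range(1, horizon + 1):
        z = latent_step(z)                                # evolve in latent space only
        z_true = encoder(traj[:, k])
        loss = loss + F.mse_loss(decoder(z), traj[:, k])  # reconstruction term
        loss = loss + beta * F.mse_loss(z, z_true)        # latent consistency term
    return loss / horizon
```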
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
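A minimal sketch of a liquid time-constant cell consistent with the description above: a linear first-order system dx/dt = -x/tau + g(x, I) * (A - x), whose effective time constant is modulated by a nonlinear gate g. The fused semi-implicit Euler update below is one standard way to keep the state stable and bounded; sizes and initializations are our own assumptions.
```python
# Illustrative sketch of a liquid time-constant (LTC) recurrent cell.
import torch
import torch.nn as nn

class LTCCell(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(input_dim + hidden_dim, hidden_dim),
                                  nn.Sigmoid())
        self.tau = nn.Parameter(torch.ones(hidden_dim))        # base time constants
        self.A = nn.Parameter(0.1 * torch.randn(hidden_dim))   # equilibrium targets

    def forward(self, x, inp, dt=0.1):
        g = self.gate(torch.cat([x, inp], dim=-1))
        # semi-implicit (fused) Euler step keeps the state stable and bounded:
        # x_new = (x + dt * g * A) / (1 + dt * (1/tau + g))
        return (x + dt * g * self.A) / (1.0 + dt * (1.0 / self.tau + g))
```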
This list is automatically generated from the titles and abstracts of the papers on this site.