Fourier-RNNs for Modelling Noisy Physics Data
- URL: http://arxiv.org/abs/2302.06534v1
- Date: Mon, 13 Feb 2023 17:22:07 GMT
- Title: Fourier-RNNs for Modelling Noisy Physics Data
- Authors: Vignesh Gopakumar, Stanislas Pamela, Lorenzo Zanisi
- Abstract summary: We propose a novel sequential model built to handle physics-relevant data by amalgamating the conventional RNN architecture with that of the Fourier Neural Operator (FNO).
While the Fourier-RNN performs identically to the FNO when handling PDE data, it outperforms the FNO and the conventional RNN when deployed in modelling noisy, non-Markovian data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classical sequential models employed in time-series prediction rely on
learning the mappings from past to future instances by way of a hidden
state. These hidden states characterise the historical information and encode the
required temporal dependencies. However, most existing sequential models
operate within finite-dimensional Euclidean spaces, which offer limited
functionality when employed in modelling physics-relevant data. Alternatively,
recent work with neural operator learning within the Fourier space has shown
efficient strategies for parameterising Partial Differential Equations (PDEs).
In this work, we propose a novel sequential model built to handle physics-relevant
data by amalgamating the conventional RNN architecture with
that of the Fourier Neural Operator (FNO). The Fourier-RNN allows for learning
the mappings from the input to the output, as well as to the hidden state, within
the Fourier space associated with the temporal data. While the Fourier-RNN
performs identically to the FNO when handling PDE data, it outperforms the FNO
and the conventional RNN when deployed in modelling noisy, non-Markovian data.
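The core mechanism described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: the cell, weight shapes, and single-channel setup are assumptions; it only shows the idea of a recurrent update whose input-to-hidden and hidden-to-hidden maps are mode-truncated multiplications in Fourier space, in the spirit of an FNO spectral layer.

```python
import numpy as np

class FourierRNNCell:
    """Hypothetical sketch of a Fourier-RNN cell: both the input-to-hidden
    and hidden-to-hidden maps are applied as multiplications on a truncated
    set of Fourier modes of the spatial grid."""

    def __init__(self, n_points, n_modes, seed=0):
        rng = np.random.default_rng(seed)
        self.n_modes = n_modes
        # Learned complex spectral weights for the lowest n_modes frequencies.
        self.w_in = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)
        self.w_h = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)

    def _spectral_mul(self, u, w):
        u_hat = np.fft.rfft(u)
        out_hat = np.zeros_like(u_hat)
        out_hat[:self.n_modes] = u_hat[:self.n_modes] * w  # keep low modes only
        return np.fft.irfft(out_hat, n=u.shape[-1])

    def step(self, x_t, h_prev):
        # Recurrent update: both maps act in Fourier space, then a pointwise
        # nonlinearity is applied on the grid.
        return np.tanh(self._spectral_mul(x_t, self.w_in)
                       + self._spectral_mul(h_prev, self.w_h))

# Roll the cell over a short synthetic time series on a 64-point 1D grid.
cell = FourierRNNCell(n_points=64, n_modes=8)
h = np.zeros(64)
for t in range(10):
    x_t = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False) + 0.1 * t)
    h = cell.step(x_t, h)
print(h.shape)  # (64,)
```

The mode truncation is what gives the update its operator-learning flavour: the weights are tied to frequencies rather than grid points, so the same cell can in principle be evaluated on a finer discretization.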
Related papers
- From Fourier to Neural ODEs: Flow Matching for Modeling Complex Systems [20.006163951844357]
We propose a simulation-free framework for training neural ordinary differential equations (NODEs).
We employ Fourier analysis to estimate temporal and potentially high-order spatial gradients from noisy observational data.
Our approach outperforms state-of-the-art methods in terms of training time, dynamics prediction, and robustness.
arXiv Detail & Related papers (2024-05-19T13:15:23Z) - Discretization Error of Fourier Neural Operators [5.121705282248479]
Operator learning is a variant of machine learning that is designed to approximate maps between function spaces from data.
The Fourier Neural Operator (FNO) is a common model architecture used for operator learning.
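The spectral convolution at the heart of the FNO, and the reason its discretization behaviour is worth studying, can be illustrated with a minimal sketch (assumed single-channel 1D form, not any paper's code): the learned weights live on Fourier modes, so the same layer can be evaluated on grids of different resolution.

```python
import numpy as np

def spectral_conv(u, weights):
    """One FNO-style spectral convolution on a 1D real signal:
    FFT, multiply the lowest len(weights) modes by learned complex
    weights, zero the remaining modes, inverse FFT."""
    n_modes = len(weights)
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights
    return np.fft.irfft(out_hat, n=u.shape[-1])

rng = np.random.default_rng(0)
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# The same operator evaluated on coarse and fine discretizations of sin(x);
# for a band-limited input the outputs agree at shared grid points.
for n in (32, 128):
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    y = spectral_conv(np.sin(x), w)
    print(n, y.shape)
```

Because NumPy's unnormalized forward FFT and 1/n-normalized inverse FFT cancel, the output values at coincident grid points match across the two resolutions; discretization error only enters once the input has energy above the resolved modes.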
arXiv Detail & Related papers (2024-05-03T16:28:05Z) - Equivariant Graph Neural Operator for Modeling 3D Dynamics [148.98826858078556]
We propose the Equivariant Graph Neural Operator (EGNO), which directly models dynamics as trajectories instead of just next-step predictions.
EGNO explicitly learns the temporal evolution of 3D dynamics where we formulate the dynamics as a function over time and learn neural operators to approximate it.
Comprehensive experiments in multiple domains, including particle simulations, human motion capture, and molecular dynamics, demonstrate the significantly superior performance of EGNO against existing methods.
arXiv Detail & Related papers (2024-01-19T21:50:32Z) - Spherical Fourier Neural Operators: Learning Stable Dynamics on the Sphere [53.63505583883769]
We introduce Spherical FNOs (SFNOs) for learning operators on spherical geometries.
SFNOs have important implications for machine learning-based simulation of climate dynamics.
arXiv Detail & Related papers (2023-06-06T16:27:17Z) - Learning Flow Functions from Data with Applications to Nonlinear Oscillators [0.0]
We show that learning the flow function is equivalent to learning the input-to-state map of a discrete-time dynamical system.
This motivates the use of an RNN together with encoder and decoder networks which map the state of the system to the hidden state of the RNN and back.
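The encoder/RNN/decoder pattern described here can be sketched as follows. All weight names and the plain `tanh` cell are illustrative assumptions: the point is only that the system state is lifted into the RNN's hidden space, advanced one step per input sample (the discrete-time input-to-state map), and decoded back into a state trajectory.

```python
import numpy as np

def encode(state, W_enc):
    """Map the system state into the RNN hidden space."""
    return np.tanh(W_enc @ state)

def decode(hidden, W_dec):
    """Map the RNN hidden state back to the system state."""
    return W_dec @ hidden

def rnn_step(hidden, u, W_h, W_u):
    """One step of the discrete-time input-to-state map."""
    return np.tanh(W_h @ hidden + W_u @ u)

def rollout(x0, inputs, W_enc, W_dec, W_h, W_u):
    """Approximate the flow function: encode the initial state, advance
    the hidden state once per input sample, decode each step."""
    h = encode(x0, W_enc)
    traj = []
    for u in inputs:
        h = rnn_step(h, u, W_h, W_u)
        traj.append(decode(h, W_dec))
    return np.stack(traj)

# Untrained random weights, just to exercise the shapes.
rng = np.random.default_rng(0)
d, h_dim, m = 3, 8, 1
W_enc = rng.standard_normal((h_dim, d))
W_dec = 0.1 * rng.standard_normal((d, h_dim))
W_h = 0.1 * rng.standard_normal((h_dim, h_dim))
W_u = rng.standard_normal((h_dim, m))

inputs = [np.array([np.sin(0.3 * t)]) for t in range(50)]
traj = rollout(np.zeros(d), inputs, W_enc, W_dec, W_h, W_u)
print(traj.shape)  # (50, 3)
```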
arXiv Detail & Related papers (2023-03-29T13:04:04Z) - Enhancing Spatiotemporal Prediction Model using Modular Design and Beyond [2.323220706791067]
It is challenging to predict sequences that vary in both time and space.
The mainstream method is to model the spatial and temporal structures at the same time.
A modular design is proposed instead, which decomposes the sequence model into two modules: a spatial encoder-decoder and a predictor.
arXiv Detail & Related papers (2022-10-04T10:09:35Z) - Solving Seismic Wave Equations on Variable Velocity Models with Fourier Neural Operator [3.2307366446033945]
We propose a new framework, the parallel Fourier neural operator (PFNO), for efficiently training the FNO-based solver.
Numerical experiments demonstrate the high accuracy of both FNO and PFNO with complicated velocity models.
PFNO admits higher computational efficiency on large-scale testing datasets, compared with the traditional finite-difference method.
arXiv Detail & Related papers (2022-09-25T22:25:57Z) - Fourier Neural Operator with Learned Deformations for PDEs on General Geometries [75.91055304134258]
We propose a new framework, viz., geo-FNO, to solve PDEs on arbitrary geometries.
Geo-FNO learns to deform the input (physical) domain, which may be irregular, into a latent space with a uniform grid.
We consider a variety of PDEs such as the Elasticity, Plasticity, Euler's, and Navier-Stokes equations, and both forward modeling and inverse design problems.
arXiv Detail & Related papers (2022-07-11T21:55:47Z) - Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
arXiv Detail & Related papers (2021-11-27T03:34:13Z) - Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
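The "networks of linear first-order dynamical systems" idea behind liquid time-constant units can be sketched with a forward-Euler step. This is a hedged reconstruction under assumed notation (gate `f`, time constant `tau`, bias `A`), not the paper's exact formulation: a learned sigmoid gate modulates the effective time constant of each state variable.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ltc_step(x, u, Wx, Wu, b, tau, A, dt=0.05):
    """Forward-Euler update of a liquid time-constant unit (sketch):
    dx/dt = -(1/tau + f) * x + f * A, with gate f = sigmoid(Wx x + Wu u + b).
    The state relaxes with an input-dependent effective time constant."""
    f = sigmoid(Wx @ x + Wu @ u + b)
    return x + dt * (-(1.0 / tau + f) * x + f * A)

# Small untrained network driven by a sinusoidal input.
rng = np.random.default_rng(0)
n, m = 4, 2
Wx = 0.1 * rng.standard_normal((n, n))
Wu = 0.1 * rng.standard_normal((n, m))
b = np.zeros(n)
tau, A = 1.0, np.ones(n)

x = np.zeros(n)
for t in range(200):
    x = ltc_step(x, np.array([np.sin(0.1 * t), 1.0]), Wx, Wu, b, tau, A)
print(x.shape)  # (4,)
```

Because the gate lies in (0, 1), each coordinate decays toward a convex combination bounded by `A`, which is one way to see the stable, bounded behavior the abstract claims.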
This list is automatically generated from the titles and abstracts of the papers on this site.