Koopman Theory-Inspired Method for Learning Time Advancement Operators in Unstable Flame Front Evolution
- URL: http://arxiv.org/abs/2412.08426v1
- Date: Wed, 11 Dec 2024 14:47:19 GMT
- Title: Koopman Theory-Inspired Method for Learning Time Advancement Operators in Unstable Flame Front Evolution
- Authors: Rixin Yu, Marco Herbert, Markus Klein, Erdzan Hodzic
- Abstract summary: This study introduces Koopman-inspired Fourier Neural Operators (kFNO) and Convolutional Neural Networks (kCNN) to learn solution advancement operators for flame front instabilities.
By transforming data into a high-dimensional latent space, these models achieve more accurate multi-step predictions compared to traditional methods.
- Score: 0.2812395851874055
- Abstract: Predicting the evolution of complex systems governed by partial differential equations (PDEs) remains challenging, especially for nonlinear, chaotic behaviors. This study introduces Koopman-inspired Fourier Neural Operators (kFNO) and Convolutional Neural Networks (kCNN) to learn solution advancement operators for flame front instabilities. By transforming data into a high-dimensional latent space, these models achieve more accurate multi-step predictions compared to traditional methods. Benchmarking across one- and two-dimensional flame front scenarios demonstrates the proposed approaches' superior performance in short-term accuracy and long-term statistical reproduction, offering a promising framework for modeling complex dynamical systems.
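The core idea above, lifting the state into a high-dimensional latent space where multi-step advancement is (near-)linear, can be sketched as follows. This is a minimal illustration, not the authors' kFNO/kCNN implementation: randomly initialized linear maps stand in for the learned encoder, latent propagator, and decoder, which in kFNO would be Fourier-layer networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions: physical state (e.g., a discretized flame front) and latent space.
n_state, n_latent = 64, 256  # the latent space is deliberately higher-dimensional

# Stand-ins for learned maps.
encode = rng.normal(size=(n_latent, n_state)) / np.sqrt(n_state)  # lift to latent space
K = 0.99 * np.eye(n_latent)                                       # latent advancement operator (Koopman-like)
decode = np.linalg.pinv(encode)                                   # project back to state space

def advance(u, n_steps):
    """Encode once, apply the latent operator n_steps times, decode once."""
    z = encode @ u
    for _ in range(n_steps):
        z = K @ z  # multi-step prediction is repeated application of one operator
    return decode @ z

u0 = rng.normal(size=n_state)
u10 = advance(u0, 10)  # ten-step prediction without re-encoding at each step
```

The payoff of this structure is that long rollouts reduce to powers of a single latent operator, which is what allows the Koopman-inspired models to stay accurate over many steps.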
Related papers
- Koopman-Equivariant Gaussian Processes [39.34668284375732]
We propose a family of Gaussian processes (GP) for dynamical systems with linear time-invariant responses.
This linearity allows us to tractably quantify forecasting and representational uncertainty.
Experiments demonstrate on-par and often better forecasting performance compared to kernel-based methods for learning dynamical systems.
arXiv Detail & Related papers (2025-02-10T16:35:08Z)
- On the relationship between Koopman operator approximations and neural ordinary differential equations for data-driven time-evolution predictions [0.0]
We show that extended dynamic mode decomposition with dictionary learning (EDMD-DL) is equivalent to a neural network representation of the nonlinear discrete-time flow map on the state space.
We implement several variations of neural ordinary differential equations (ODEs) and EDMD-DL, developed by combining different aspects of their respective model structures and training procedures.
We evaluate these methods using numerical experiments on chaotic dynamics in the Lorenz system and a nine-mode model of turbulent shear flow.
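The EDMD side of the equivalence described here can be sketched with plain EDMD on the Lorenz system (a hedged illustration with a fixed polynomial dictionary; EDMD-DL would learn the dictionary, and this is not the paper's code):

```python
import numpy as np

def lorenz_step(x, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

def dictionary(x):
    """Fixed polynomial observables (the part EDMD-DL would learn)."""
    x1, x2, x3 = x
    return np.array([1.0, x1, x2, x3,
                     x1 * x2, x1 * x3, x2 * x3,
                     x1 * x1, x2 * x2, x3 * x3])

# Collect snapshot pairs (x_k, x_{k+1}) along one trajectory.
x = np.array([1.0, 1.0, 1.0])
X, Y = [], []
for _ in range(2000):
    x_next = lorenz_step(x)
    X.append(dictionary(x))
    Y.append(dictionary(x_next))
    x = x_next
X, Y = np.array(X), np.array(Y)

# Least-squares Koopman approximation: Psi(x_{k+1}) ~= Psi(x_k) @ K
K, *_ = np.linalg.lstsq(X, Y, rcond=None)

# One-step prediction: advance observables linearly, read the state back out
# (dictionary entries 1..3 are the state components themselves).
psi_pred = dictionary(np.array([1.0, 1.0, 1.0])) @ K
state_pred = psi_pred[1:4]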
arXiv Detail & Related papers (2024-11-20T00:18:46Z)
- Learning Flame Evolution Operator under Hybrid Darrieus Landau and Diffusive Thermal Instability [0.0]
This paper explores the application of novel operator learning methodologies to unravel the dynamics of flame instability.
Training datasets encompass a wide range of parameter configurations, enabling the learning of parametric solution advancement operators.
Results demonstrate the efficacy of these methods in accurately predicting short-term and long-term flame evolution.
arXiv Detail & Related papers (2024-05-11T18:31:13Z)
- Equivariant Graph Neural Operator for Modeling 3D Dynamics [148.98826858078556]
We propose the Equivariant Graph Neural Operator (EGNO) to directly model dynamics as trajectories rather than only next-step predictions.
EGNO explicitly learns the temporal evolution of 3D dynamics where we formulate the dynamics as a function over time and learn neural operators to approximate it.
Comprehensive experiments in multiple domains, including particle simulations, human motion capture, and molecular dynamics, demonstrate the significantly superior performance of EGNO against existing methods.
arXiv Detail & Related papers (2023-05-25T16:22:22Z)
- Koopman Kernel Regression [6.116741319526748]
We show that Koopman operator theory offers a beneficial paradigm for characterizing forecasts via linear time-invariant (LTI) ODEs.
We derive a universal Koopman-invariant reproducing kernel Hilbert space (RKHS) that solely spans transformations into LTI dynamical systems.
Our experiments demonstrate superior forecasting performance compared to Koopman operator and sequential data predictors.
arXiv Detail & Related papers (2023-05-25T16:22:22Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
However, PINNs can suffer training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose employing the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
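The stability benefit of an implicit update can be illustrated on a toy quadratic loss (this is a hedged sketch of the general idea, not the paper's PINN setup). The implicit step solves w_new = w - lr * grad(w_new), which for a quadratic is a linear solve and remains contractive even at step sizes where the explicit update diverges:

```python
import numpy as np

# Toy loss L(w) = 0.5 * w^T A w with a stiff Hessian, mimicking the
# ill-conditioning that destabilizes explicit updates on multi-scale targets.
A = np.diag([1.0, 100.0])
grad = lambda w: A @ w

def explicit_step(w, lr):
    """Standard explicit gradient step: w - lr * grad(w)."""
    return w - lr * grad(w)

def implicit_step(w, lr):
    """Implicit (backward-Euler) step: solves w_new = w - lr * grad(w_new).
    For a quadratic loss this is a linear solve; a general loss would need
    an inner solver (e.g., a few Newton iterations)."""
    return np.linalg.solve(np.eye(2) + lr * A, w)

lr = 0.05  # too large for the stiff direction: explicit diverges, implicit contracts
w_exp = np.array([1.0, 1.0])
w_imp = np.array([1.0, 1.0])
for _ in range(20):
    w_exp = explicit_step(w_exp, lr)
    w_imp = implicit_step(w_imp, lr)
```

With lr = 0.05, the explicit update multiplies the stiff component by (1 - 0.05 * 100) = -4 each step and blows up, while the implicit update multiplies it by 1/6 and decays.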
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Learning PDE Solution Operator for Continuous Modeling of Time-Series [1.39661494747879]
This work presents a partial differential equation (PDE) based framework which improves the dynamics modeling capability.
We propose a neural operator that can handle time continuously without requiring iterative operations or specific grids of temporal discretization.
Our framework opens up a new way for a continuous representation of neural networks that can be readily adopted for real-world applications.
arXiv Detail & Related papers (2023-02-02T03:47:52Z)
- Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the supervised time points and can interpolate solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where the time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Forecasting Sequential Data using Consistent Koopman Autoencoders [52.209416711500005]
A new class of physics-based methods related to Koopman theory has been introduced, offering an alternative for processing nonlinear dynamical systems.
We propose a novel Consistent Koopman Autoencoder model which, unlike the majority of existing work, leverages the forward and backward dynamics.
Key to our approach is a new analysis which explores the interplay between consistent dynamics and their associated Koopman operators.
arXiv Detail & Related papers (2020-03-04T18:24:30Z)
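The forward/backward consistency idea from the entry above can be sketched as follows. This is a hedged, simplified illustration (linear stand-ins for a trained encoder/decoder, and a simplified version of the paper's consistency penalty), showing the structure rather than the actual model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_state, n_latent = 8, 4

# Stand-ins for a trained encoder/decoder and the two latent operators.
E = rng.normal(size=(n_latent, n_state)) * 0.3   # encoder (linear stand-in)
D = np.linalg.pinv(E)                            # decoder
C = rng.normal(size=(n_latent, n_latent)) * 0.1  # forward Koopman operator
B = rng.normal(size=(n_latent, n_latent)) * 0.1  # backward Koopman operator

def consistency_loss(C, B):
    """Penalty encouraging the backward operator to invert the forward one
    (and vice versa) -- the key extra term in consistent Koopman autoencoders."""
    I = np.eye(C.shape[0])
    return (np.linalg.norm(B @ C - I, "fro") ** 2
            + np.linalg.norm(C @ B - I, "fro") ** 2)

def forward_pred(x):
    """Predict the next state: x_{t+1} ~= D C E x_t."""
    return D @ (C @ (E @ x))

def backward_pred(x):
    """Predict the previous state: x_{t-1} ~= D B E x_t."""
    return D @ (B @ (E @ x))
```

Training would minimize prediction losses in both time directions plus `consistency_loss`, which couples the two latent operators so that forward and backward dynamics agree.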
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.