Feature Engineering with Regularity Structures
- URL: http://arxiv.org/abs/2108.05879v2
- Date: Tue, 21 Nov 2023 10:04:37 GMT
- Title: Feature Engineering with Regularity Structures
- Authors: Ilya Chevyrev, Andris Gerasimovics, Hendrik Weber
- Abstract summary: We investigate the use of models from the theory of regularity structures as features in machine learning tasks.
We provide a flexible definition of a model feature vector associated to a space-time signal, along with two algorithms which illustrate ways in which these features can be combined with linear regression.
We apply these algorithms in several numerical experiments designed to learn solutions to PDEs with a given forcing and boundary data.
- Score: 4.082216579462797
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the use of models from the theory of regularity structures as
features in machine learning tasks. A model is a polynomial function of a
space-time signal designed to well-approximate solutions to partial
differential equations (PDEs), even in low regularity regimes. Models can be
seen as natural multi-dimensional generalisations of signatures of paths; our
work therefore aims to extend the recent use of signatures in data science
beyond the context of time-ordered data. We provide a flexible definition of a
model feature vector associated to a space-time signal, along with two
algorithms which illustrate ways in which these features can be combined with
linear regression. We apply these algorithms in several numerical experiments
designed to learn solutions to PDEs with a given forcing and boundary data. Our
experiments include semi-linear parabolic and wave equations with forcing, and
Burgers' equation with no forcing. We find an advantage in favour of our
algorithms when compared to several alternative methods. Additionally, in the
experiment with Burgers' equation, we find non-trivial predictive power when
noise is added to the observations.
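Below is a minimal, illustrative sketch of the overall pipeline in NumPy, written under several assumptions (a 1-d periodic spatial grid, an explicit finite-difference heat step as the integration map, a hand-picked degree-two feature family, and a stand-in regression target). It is not the paper's algorithm or feature set, only the general idea of turning a space-time forcing into iterated-integral-style features and combining them with linear regression.
```python
import numpy as np

def heat_convolve(f, nu=0.1, dt=1e-3, dx=0.05, substeps=10):
    """Crude space-time integration feature I[f]: march the heat equation with
    source f on a periodic 1-d grid; f has shape (n_t, n_x), output the same shape."""
    n_t, n_x = f.shape
    out = np.zeros_like(f)
    u = np.zeros(n_x)
    r = nu * dt / dx**2          # explicit scheme, stable for these defaults (r ~ 0.04 < 0.5)
    for i in range(1, n_t):
        for _ in range(substeps):
            lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
            u = u + r * lap + dt * f[i - 1]
        out[i] = u
    return out

def model_features(xi):
    """A small, hand-picked family of 'model-style' features of the forcing xi:
    its heat integral, pointwise products, and one iterated integral, flattened."""
    I1 = heat_convolve(xi)
    feats = [xi, I1, I1 * xi, I1**2, heat_convolve(I1 * xi)]
    return np.concatenate([a.ravel() for a in feats])

# Toy regression setup: features of random forcings against a target that, in the
# paper's experiments, would be a PDE solution value computed by a reference solver.
rng = np.random.default_rng(0)
n_t, n_x, n_samples = 20, 32, 100
forcings = [rng.standard_normal((n_t, n_x)) for _ in range(n_samples)]
X = np.stack([model_features(xi) for xi in forcings])
y = np.array([heat_convolve(xi)[-1, 0] for xi in forcings])     # stand-in target
coef, *_ = np.linalg.lstsq(np.c_[np.ones(n_samples), X], y, rcond=None)
```
In practice, with far more features than samples, a regularised (e.g. ridge) regression would be the natural choice in place of the plain least-squares fit above.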
Related papers
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Physics-Informed Quantum Machine Learning: Solving nonlinear differential equations in latent spaces without costly grid evaluations [21.24186888129542]
We propose a physics-informed quantum algorithm to solve nonlinear and multidimensional differential equations.
By measuring the overlaps between states which are representations of DE terms, we construct a loss that does not require independent sequential function evaluations on grid points.
When the loss is trained variationally, our approach can be related to the differentiable quantum circuit protocol.
arXiv Detail & Related papers (2023-08-03T15:38:31Z)
- A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
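For reference, the classic mean-shift update this entry relates to is simple to state; a minimal NumPy sketch follows (the bandwidth and toy data are assumptions, and this is the standard mode-seeking algorithm, not the paper's sampler).
```python
import numpy as np

def mean_shift_step(x, data, bandwidth=0.5):
    """One Gaussian-kernel mean-shift update: move x toward a local mode of the
    kernel density estimate built from `data` (shape (n, d))."""
    w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2.0 * bandwidth ** 2))
    return (w[:, None] * data).sum(axis=0) / w.sum()

def mean_shift(x0, data, bandwidth=0.5, n_iter=100, tol=1e-6):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x_new = mean_shift_step(x, data, bandwidth)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy usage: samples around two modes; mean-shift converges to the nearby one.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(-2, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
mode = mean_shift(np.array([1.0, 1.5]), data)
```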
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- Constraining Gaussian Processes to Systems of Linear Ordinary Differential Equations [5.33024001730262]
LODE-GPs follow a system of linear homogeneous ODEs with constant coefficients.
We show the effectiveness of LODE-GPs in a number of experiments.
arXiv Detail & Related papers (2022-08-26T09:16:53Z)
- Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the supervised time point and is able to interpolate the solutions to any intermediate time.
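As a rough illustration of the Fourier Neural Operator ingredient mentioned here, a single 1-d spectral-convolution layer can be sketched as follows (the mode count, shapes, and random untrained weights are assumptions; this is not the paper's hyper-network architecture).
```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes=12):
    """Core Fourier-layer idea: FFT the input, scale only the lowest n_modes
    frequencies with learned complex weights, then transform back.
    u: real array of shape (n_x,); weights: complex array of shape (n_modes,)."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]
    return np.fft.irfft(out_hat, n=u.shape[0])

# Toy usage with random (untrained) weights on a sample signal.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(5 * x)
w = rng.standard_normal(12) + 1j * rng.standard_normal(12)
v = spectral_conv_1d(u, w, n_modes=12)
```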
arXiv Detail & Related papers (2022-07-28T19:59:14Z)
- A Probabilistic State Space Model for Joint Inference from Differential Equations and Data [23.449725313605835]
We present a new class of solvers for ordinary differential equations (ODEs) that phrases the solution process directly in terms of Bayesian filtering.
It then becomes possible to perform approximate Bayesian inference on the latent force as well as the ODE solution in a single, linear complexity pass of an extended Kalman filter.
We demonstrate the expressiveness and performance of the algorithm by training a non-parametric SIRD model on data from the COVID-19 outbreak.
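The filtering view of ODE solving described here can be illustrated with a toy, EK0-style filter: the state carries the solution and its derivative under an integrated Wiener-process prior, and the ODE is enforced at each step as a zero-valued pseudo-observation. This is a minimal sketch of the general idea only (the constants, the two-dimensional state, and the zero observation noise are assumptions), not the paper's joint latent-force model.
```python
import numpy as np

def ek0_ode_filter(f, x0, t_span, n_steps=200, q=1.0):
    """Filtering-based ODE solver sketch: state s = [x, x'] with an integrated
    Wiener-process prior; at each step the ODE x' = f(x) is enforced as a
    zero-noise pseudo-observation of s[1] - f(s[0]) (EK0-style linearization)."""
    h = (t_span[1] - t_span[0]) / n_steps
    A = np.array([[1.0, h], [0.0, 1.0]])
    Q = q * np.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]])
    H = np.array([[0.0, 1.0]])
    m = np.array([x0, f(x0)])
    P = np.zeros((2, 2))
    xs = [m[0]]
    for _ in range(n_steps):
        # Predict under the prior dynamics.
        m_pred = A @ m
        P_pred = A @ P @ A.T + Q
        # Update with the pseudo-observation 0 = s[1] - f(s[0]).
        resid = f(m_pred[0]) - m_pred[1]
        S = float(H @ P_pred @ H.T)
        K = (P_pred @ H.T) / S
        m = m_pred + (K * resid).ravel()
        P = P_pred - K @ H @ P_pred
        xs.append(m[0])
    return np.array(xs)

# Toy usage: logistic growth x' = x (1 - x).
sol = ek0_ode_filter(lambda x: x * (1.0 - x), x0=0.1, t_span=(0.0, 10.0))
```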
arXiv Detail & Related papers (2021-03-18T10:36:09Z)
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
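For context, the temporal-difference methods with linear function approximation analysed here can be sketched as the classic TD(0) update on a toy Markov chain (the chain, rewards, and features below are made up for illustration; this is the textbook algorithm, not the paper's estimator).
```python
import numpy as np

def td0_linear(P, r, phi, gamma=0.9, alpha=0.05, n_steps=20000, seed=0):
    """TD(0) with linear value approximation V(s) ~ phi[s] @ w on a finite Markov
    chain with transition matrix P (n x n), reward vector r, features phi (n x d)."""
    rng = np.random.default_rng(seed)
    n, d = phi.shape
    w = np.zeros(d)
    s = 0
    for _ in range(n_steps):
        s_next = rng.choice(n, p=P[s])
        td_error = r[s] + gamma * phi[s_next] @ w - phi[s] @ w
        w += alpha * td_error * phi[s]
        s = s_next
    return w

# Toy usage: a 3-state chain with 2-dimensional features spanning the value subspace.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
r = np.array([1.0, 0.0, -1.0])
phi = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
w = td0_linear(P, r, phi)          # approximate values: V_hat = phi @ w
```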
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Neural Controlled Differential Equations for Irregular Time Series [17.338923885534197]
An ordinary differential equation is determined by its initial condition, and there is no mechanism for adjusting the trajectory based on subsequent observations.
Here we demonstrate how this may be resolved through the well-understood mathematics of controlled differential equations.
We show that our model achieves state-of-the-art performance against similar (ODE or RNN based) models in empirical studies on a range of datasets.
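The controlled-differential-equation mechanism can be sketched with a simple Euler discretisation of dz = f(z) dX, where an observed path X drives the latent state z. In the sketch below a small fixed vector field stands in for the learned network; all names and sizes are assumptions.
```python
import numpy as np

def cde_euler(f, z0, X):
    """Euler scheme for the controlled differential equation dz = f(z) dX:
    z_{k+1} = z_k + f(z_k) @ (X_{k+1} - X_k).
    f(z) returns a (dim_z, dim_x) matrix; X has shape (n_obs, dim_x)."""
    z = np.asarray(z0, dtype=float)
    traj = [z]
    for k in range(len(X) - 1):
        z = z + f(z) @ (X[k + 1] - X[k])
        traj.append(z)
    return np.array(traj)

# Toy usage: a tiny fixed vector field (a trained network would play this role)
# driven by a 2-d observation path X; the trajectory responds to every increment of X.
rng = np.random.default_rng(0)
W = 0.3 * rng.standard_normal((6, 3))
b = 0.1 * rng.standard_normal(6)
f = lambda z: np.tanh(W @ z + b).reshape(3, 2)
X = np.cumsum(rng.standard_normal((50, 2)), axis=0) * 0.1
traj = cde_euler(f, z0=np.zeros(3), X=X)
```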
arXiv Detail & Related papers (2020-05-18T17:52:21Z)
- The data-driven physical-based equations discovery using evolutionary approach [77.34726150561087]
We describe the algorithm for the mathematical equations discovery from the given observations data.
The algorithm combines genetic programming with the sparse regression.
It could be used for governing analytical equation discovery as well as for partial differential equations (PDE) discovery.
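The sparse-regression component can be sketched with sequentially thresholded least squares over a library of candidate terms (a SINDy-style procedure); the library and the toy ODE below are assumptions, and the genetic-programming search is not shown.
```python
import numpy as np

def sparse_regression(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: fit dXdt ~ Theta @ xi and repeatedly
    zero out small coefficients, refitting on the surviving candidate terms."""
    xi, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big], *_ = np.linalg.lstsq(Theta[:, big], dXdt, rcond=None)
    return xi

# Toy usage: recover x' = -2x + 0.5x^3 from noisy samples with the candidate
# library {1, x, x^2, x^3}.
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 500)
dxdt = -2.0 * x + 0.5 * x**3 + 0.01 * rng.standard_normal(500)
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])
xi = sparse_regression(Theta, dxdt)   # expect roughly [0, -2, 0, 0.5]
```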
arXiv Detail & Related papers (2020-04-03T17:21:57Z)
- Enhancement of shock-capturing methods via machine learning [0.0]
We develop an improved finite-volume method for simulating PDEs with discontinuous solutions.
We train a neural network to improve the results of a fifth-order WENO method.
We find that our method outperforms WENO in simulations where the numerical solution becomes overly diffused.
arXiv Detail & Related papers (2020-02-06T21:51:39Z)