Diffusion models as probabilistic neural operators for recovering unobserved states of dynamical systems
- URL: http://arxiv.org/abs/2405.07097v1
- Date: Sat, 11 May 2024 21:23:55 GMT
- Title: Diffusion models as probabilistic neural operators for recovering unobserved states of dynamical systems
- Authors: Katsiaryna Haitsiukevich, Onur Poyraz, Pekka Marttinen, Alexander Ilin
- Abstract summary: We show that diffusion-based generative models exhibit many properties favourable for neural operators, and we propose to train a single model adaptable to multiple tasks by alternating between the tasks during training.
- Score: 49.2319247825857
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper explores the efficacy of diffusion-based generative models as neural operators for partial differential equations (PDEs). Neural operators are neural networks that learn a mapping from the parameter space to the solution space of PDEs from data, and they can also solve the inverse problem of estimating the parameter from the solution. Diffusion models excel in many domains, but their potential as neural operators has not been thoroughly explored. In this work, we show that diffusion-based generative models exhibit many properties favourable for neural operators, and they can effectively generate the solution of a PDE conditionally on the parameter or recover the unobserved parts of the system. We propose to train a single model adaptable to multiple tasks, by alternating between the tasks during training. In our experiments with multiple realistic dynamical systems, diffusion models outperform other neural operators. Furthermore, we demonstrate how the probabilistic diffusion model can elegantly deal with systems which are only partially identifiable, by producing samples corresponding to the different possible solutions.
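As an illustration of the alternating-task idea, here is a minimal PyTorch-style sketch: a single denoiser is conditioned on a mask that marks either the parameter (forward task) or the solution (inverse task) as observed, and the task is switched between minibatches. The layout of `x0` as concatenated parameter/solution channels and the `eps_model(xt, t, mask)` interface are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def make_mask(task, shape):
    """'forward': parameter channels observed, solution generated;
    'inverse': solution channels observed, parameter recovered."""
    mask = torch.zeros(shape)
    half = shape[1] // 2                # assume first half = parameter channels
    if task == "forward":
        mask[:, :half] = 1.0
    else:
        mask[:, half:] = 1.0
    return mask

def diffusion_loss(eps_model, x0, task, alphas_bar):
    # alphas_bar: 1-D tensor of cumulative noise-schedule products (standard DDPM)
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_bar), (b,))
    a = alphas_bar[t].view(b, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * noise   # forward noising
    mask = make_mask(task, x0.shape)
    xt = mask * x0 + (1 - mask) * xt              # observed part stays clean
    pred = eps_model(xt, t, mask)                 # assumed conditioning interface
    return F.mse_loss((1 - mask) * pred, (1 - mask) * noise)

# Training alternates tasks across minibatches:
# for step, x0 in enumerate(loader):
#     loss = diffusion_loss(eps_model, x0,
#                           "forward" if step % 2 == 0 else "inverse", alphas_bar)
```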
Related papers
- Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators.
Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming.
We showcase the efficacy of our approach through applications to different types of partial differential equations.
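A minimal sketch of the linearization step, assuming for illustration an isotropic Gaussian weight posterior N(theta*, sigma^2 I); the paper's actual posterior and covariance structure differ.

```python
import torch
from torch.func import functional_call, jacrev

def linearized_predictive(model, x, weight_var=1e-2):
    """Linearize the trained network around its weights: outputs become Gaussian
    with mean f(theta*, x) and variance diag(J Sigma J^T), Sigma = weight_var * I."""
    params = dict(model.named_parameters())

    def f(p):
        return functional_call(model, p, (x,))

    mean = f(params)                      # f(theta*, x)
    jac = jacrev(f)(params)               # Jacobian w.r.t. every parameter tensor
    var = sum(weight_var * (j.reshape(*mean.shape, -1) ** 2).sum(-1)
              for j in jac.values())
    return mean, var                      # Gaussian predictive over function values

# usage: mean, var = linearized_predictive(net, x_test)
```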
arXiv Detail & Related papers (2024-06-07T16:43:54Z)
- Automatic Differentiation is Essential in Training Neural Networks for Solving Differential Equations [7.890817997914349]
Neural network-based approaches have recently shown significant promise in solving partial differential equations (PDEs) in science and engineering.
One advantage of neural network methods for PDEs lies in their use of automatic differentiation (AD).
In this paper, we quantitatively demonstrate the advantage of AD in training neural networks.
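For concreteness, a short sketch of what AD buys in this setting, using the 1-D Burgers' equation u_t + u u_x = nu u_xx as a stand-in example (the paper's experiments and equations may differ):

```python
import torch

def burgers_residual(u_net, x, t, nu=0.01 / torch.pi):
    """PDE residual computed with exact autograd derivatives; zero when satisfied."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = u_net(torch.stack([x, t], dim=-1)).squeeze(-1)
    grad = lambda out, inp: torch.autograd.grad(
        out, inp, grad_outputs=torch.ones_like(out), create_graph=True)[0]
    u_t, u_x = grad(u, t), grad(u, x)
    u_xx = grad(u_x, x)
    return u_t + u * u_x - nu * u_xx
```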
arXiv Detail & Related papers (2024-05-23T02:01:05Z)
- Latent Neural PDE Solver: a reduced-order modelling framework for partial differential equations [6.173339150997772]
We propose to learn the dynamics of the system in the latent space with much coarser discretizations.
A non-linear autoencoder is first trained to project the full-order representation of the system onto the mesh-reduced space.
We showcase that it has competitive accuracy and efficiency compared to the neural PDE solver that operates on full-order space.
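A compact sketch of the two-stage recipe; the MLPs stand in for the paper's autoencoder and latent propagator, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class LatentPDESolver(nn.Module):
    def __init__(self, full_dim=4096, latent_dim=64):
        super().__init__()
        # stage 1: autoencoder projects full-order states onto a coarse latent space
        self.enc = nn.Sequential(nn.Linear(full_dim, 512), nn.GELU(),
                                 nn.Linear(512, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.GELU(),
                                 nn.Linear(512, full_dim))
        # stage 2: dynamics learned entirely in the latent space
        self.step = nn.Sequential(nn.Linear(latent_dim, 256), nn.GELU(),
                                  nn.Linear(256, latent_dim))

    def rollout(self, u0, n_steps):
        z = self.enc(u0)
        states = []
        for _ in range(n_steps):
            z = z + self.step(z)          # residual latent update
            states.append(self.dec(z))    # decode back to full order
        return torch.stack(states, dim=1)
```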
arXiv Detail & Related papers (2024-02-27T19:36:27Z)
- Investigating Neuron Disturbing in Fusing Heterogeneous Neural Networks [6.389882065284252]
In this paper, we reveal the phenomenon of neuron disturbing, where neurons from heterogeneous local models interfere with one another.
We propose AMS, an experimental method that excludes neuron disturbing and fuses neural networks by adaptively selecting a local model to execute each prediction.
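The summary does not spell out the selection rule, so the sketch below uses lowest predictive entropy as a placeholder criterion for "adaptively selecting a local model"; AMS itself may select differently.

```python
import torch

def ams_predict(local_models, x):
    """Per input, run all local models and keep the most confident one."""
    logits = torch.stack([m(x) for m in local_models])          # (M, N, C)
    probs = logits.softmax(-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)   # (M, N)
    pick = entropy.argmin(0)                                    # model index per input
    return logits[pick, torch.arange(x.shape[0])]               # (N, C)
```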
arXiv Detail & Related papers (2022-10-24T06:47:48Z)
- Neural Implicit Representations for Physical Parameter Inference from a Single Video [49.766574469284485]
We propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) for modelling physical phenomena.
Our proposed model combines several unique advantages: (i) Contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video.
(ii) The use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic images.
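A toy sketch of the combination: physical parameters enter through an ODE (here a pendulum with a learnable g/l, integrated by explicit Euler) and appearance comes from a coordinate MLP. The renderer, dynamics, and losses in the paper are richer than this.

```python
import torch
import torch.nn as nn

class PendulumVideoModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.log_g_over_l = nn.Parameter(torch.zeros(()))    # physical parameter
        self.render = nn.Sequential(nn.Linear(4, 128), nn.GELU(),
                                    nn.Linear(128, 3))        # (x, y, angle, vel) -> RGB

    def ode(self, state):
        theta, omega = state[..., 0], state[..., 1]
        return torch.stack([omega, -self.log_g_over_l.exp() * theta.sin()], -1)

    def forward(self, coords, n_frames, state0, dt=0.05):
        state, frames = state0, []
        for _ in range(n_frames):
            state = state + dt * self.ode(state)              # Euler ODE step
            inp = torch.cat([coords, state.expand(coords.shape[0], 2)], -1)
            frames.append(self.render(inp))                   # per-pixel colour
        return torch.stack(frames)

# Fitting: minimize MSE between rendered frames and the single observed video;
# gradients flow through the integrator into the physical parameter.
```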
arXiv Detail & Related papers (2022-04-29T11:55:35Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
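A minimal sketch of that combination, with a plain SIR model standing in for the mechanistic component; EINN's actual architecture and loss terms are more involved.

```python
import torch

def einn_style_loss(model, t, cases, beta, gamma, lam=1.0):
    """model: t -> (S, I, R) fractions; loss = data fit + SIR consistency."""
    t = t.requires_grad_(True)
    s, i, r = model(t.unsqueeze(-1)).unbind(-1)
    d = lambda y: torch.autograd.grad(y, t, torch.ones_like(y),
                                      create_graph=True)[0]
    ode_res = ((d(s) + beta * s * i) ** 2 +
               (d(i) - beta * s * i + gamma * i) ** 2 +
               (d(r) - gamma * i) ** 2).mean()   # mechanistic residual
    data_fit = ((i - cases) ** 2).mean()          # data-driven term
    return data_fit + lam * ode_res
```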
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDEs) is an indispensable part of many branches of science, as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations: physics-informed neural networks (PINNs).
We discuss the accuracy of the proposed GatedPINN architecture with respect to analytical solutions, as well as state-of-the-art numerical solvers such as spectral solvers.
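A rough sketch of a gated (mixture-of-experts) PINN forward pass in the spirit of GatedPINN; expert count, sizes, and the soft gating are illustrative, and the network would be trained with an AD residual like the one sketched earlier.

```python
import torch
import torch.nn as nn

class GatedPINN(nn.Module):
    def __init__(self, in_dim=2, n_experts=4, hidden=64):
        super().__init__()
        self.gate = nn.Linear(in_dim, n_experts)   # soft domain assignment
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                          nn.Linear(hidden, 1))
            for _ in range(n_experts))

    def forward(self, x):
        w = self.gate(x).softmax(-1)                          # (N, E)
        outs = torch.stack([e(x) for e in self.experts], -1)  # (N, 1, E)
        return (w.unsqueeze(1) * outs).sum(-1)                # (N, 1)
```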
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
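A two-level toy version of the idea: long-range interaction is computed all-to-all only on a small coarse node set, keeping the cost linear in the number of fine nodes. The paper uses multiple levels and learned kernel integrals; this sketch is an assumption-laden simplification.

```python
import torch
import torch.nn as nn

class TwoLevelOperator(nn.Module):
    """One down-up pass: fine nodes -> few coarse nodes -> fine nodes."""
    def __init__(self, dim=32):
        super().__init__()
        self.down = nn.Linear(dim, dim)   # fine -> coarse messages
        self.mix = nn.Linear(dim, dim)    # feature map used on the coarse level
        self.up = nn.Linear(dim, dim)     # coarse -> fine messages

    def forward(self, v, assign):
        # v: (N, dim) fine features; assign: (N,) long tensor of coarse indices
        n_c = int(assign.max()) + 1
        pooled = v.new_zeros(n_c, v.shape[1]).index_add_(0, assign, self.down(v))
        att = (pooled @ pooled.T / v.shape[1] ** 0.5).softmax(-1)  # O(n_c^2) all-to-all
        mixed = att @ self.mix(pooled)            # long-range interaction, coarse level
        return v + torch.tanh(self.up(mixed))[assign]   # scatter back: O(N)
```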
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical gains from stochastic regularization, making the performance difference between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)