Nonlinear dimensionality reduction then and now: AIMs for dissipative
PDEs in the ML era
- URL: http://arxiv.org/abs/2310.15816v1
- Date: Tue, 24 Oct 2023 13:10:43 GMT
- Title: Nonlinear dimensionality reduction then and now: AIMs for dissipative
PDEs in the ML era
- Authors: Eleni D. Koronaki, Nikolaos Evangelou, Cristina P. Martin-Linares,
Edriss S. Titi and Ioannis G. Kevrekidis
- Abstract summary: This study presents a collection of purely data-driven workflows for constructing reduced-order models (ROMs) for distributed dynamical systems.
The particular motivation is the so-called post-processing Galerkin method of Garcia-Archilla, Novo and Titi.
The proposed methodology can express the ROMs in terms of (a) theoretical (Fourier coefficients), (b) linear data-driven (POD modes) and/or (c) nonlinear data-driven (Diffusion Maps) coordinates.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study presents a collection of purely data-driven workflows for
constructing reduced-order models (ROMs) for distributed dynamical systems. The
ROMs we focus on are data-assisted models inspired by, and templated upon, the
theory of Approximate Inertial Manifolds (AIMs); the particular motivation is
the so-called post-processing Galerkin method of Garcia-Archilla, Novo and
Titi. Its applicability can be extended: the need for accurate truncated
Galerkin projections and for deriving closed-form corrections can be
circumvented using machine learning tools. When the right latent variables are
not a priori known, we illustrate how autoencoders as well as Diffusion Maps (a
manifold learning scheme) can be used to discover good sets of latent variables
and test their explainability. The proposed methodology can express the ROMs in
terms of (a) theoretical (Fourier coefficients), (b) linear data-driven (POD
modes) and/or (c) nonlinear data-driven (Diffusion Maps) coordinates. Both
Black-Box and (theoretically-informed and data-corrected) Gray-Box models are
described; the necessity for the latter arises when truncated Galerkin
projections are so inaccurate as to not be amenable to post-processing. We use
the Chafee-Infante reaction-diffusion and the Kuramoto-Sivashinsky dissipative
partial differential equations to illustrate and successfully test the overall
framework.
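As a concrete illustration of coordinate choices (b) and (c) from the abstract, the sketch below computes POD modes via an SVD and a basic Diffusion Maps embedding from a snapshot matrix. This is a minimal, generic sketch on synthetic random data, not the authors' implementation; the array shapes, retained dimension r, and the median-distance bandwidth heuristic are all illustrative assumptions.

```python
import numpy as np

# Snapshot matrix: each column is one solution state u(x, t_k) of the PDE
# sampled on a spatial grid (synthetic data here; shapes are illustrative).
rng = np.random.default_rng(0)
n_space, n_snap = 128, 400
snapshots = rng.standard_normal((n_space, n_snap))

# (b) Linear data-driven coordinates: POD modes via the SVD of the
# mean-centered snapshot matrix; retain the leading r modes.
mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 4
pod_modes = U[:, :r]                           # spatial POD modes
pod_coords = pod_modes.T @ (snapshots - mean)  # r latent trajectories

# (c) Nonlinear data-driven coordinates: a basic Diffusion Maps embedding.
# Pairwise squared distances between snapshots -> Gaussian kernel ->
# row-normalized Markov matrix -> leading nontrivial eigenvectors.
X = snapshots.T                                # one row per snapshot
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
eps = np.median(d2)                            # common bandwidth heuristic
K = np.exp(-d2 / eps)
P = K / K.sum(axis=1, keepdims=True)
evals, evecs = np.linalg.eig(P)
order = np.argsort(-evals.real)
dmap_coords = evecs.real[:, order[1:1 + r]]    # skip trivial constant mode

print(pod_coords.shape, dmap_coords.shape)     # (4, 400) (400, 4)
```

In practice one would inspect the POD singular values and the diffusion-map eigenvalue decay to choose r, and (as the paper discusses) test whether the nonlinear coordinates parametrize the same slow manifold as the linear ones.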
Related papers
- PHDME: Physics-Informed Diffusion Models without Explicit Governing Equations [0.496981595868944]
Diffusion models provide expressive priors for forecasting trajectories of dynamical systems, but are typically unreliable in the sparse-data regime.
We introduce PHDME, a port-Hamiltonian diffusion framework designed for sparse observations and incomplete physics.
Experiments on PDE benchmarks and a real-world spring system show improved accuracy and physical consistency under data scarcity.
arXiv Detail & Related papers (2026-01-29T03:53:48Z) - What's the score? Automated Denoising Score Matching for Nonlinear Diffusions [25.062104976775448]
Reversing a diffusion process by learning its score forms the heart of diffusion-based generative modeling.
We introduce a family of tractable denoising score matching objectives, called local-DSM.
We show how local-DSM melded with Taylor expansions enables automated training and score estimation with nonlinear diffusion processes.
arXiv Detail & Related papers (2024-07-10T19:02:19Z) - Diffusion models for Gaussian distributions: Exact solutions and Wasserstein errors [0.0]
Diffusion or score-based models recently showed high performance in image generation.
We study theoretically the behavior of diffusion models and their numerical implementation when the data distribution is Gaussian.
arXiv Detail & Related papers (2024-05-23T07:28:56Z) - Leveraging viscous Hamilton-Jacobi PDEs for uncertainty quantification in scientific machine learning [1.8175282137722093]
Uncertainty quantification (UQ) in scientific machine learning (SciML) combines the predictive power of SciML with methods for quantifying the reliability of the learned models.
We provide a new interpretation of UQ problems by establishing a theoretical connection between some Bayesian inference problems arising in SciML and viscous Hamilton-Jacobi partial differential equations (HJ PDEs).
We develop a new Riccati-based methodology that provides computational advantages when continuously updating the model predictions.
arXiv Detail & Related papers (2024-04-12T20:54:01Z) - Diffusion models for probabilistic programming [56.47577824219207]
Diffusion Model Variational Inference (DMVI) is a novel method for automated approximate inference in probabilistic programming languages (PPLs)
DMVI is easy to implement, allows hassle-free inference in PPLs without the drawbacks of, e.g., variational inference using normalizing flows, and does not make any constraints on the underlying neural network model.
arXiv Detail & Related papers (2023-11-01T12:17:05Z) - HyperSINDy: Deep Generative Modeling of Nonlinear Stochastic Governing
Equations [5.279268784803583]
We introduce HyperSINDy, a framework for modeling dynamics via a deep generative model of sparse governing equations from data.
Once trained, HyperSINDy generates dynamics via a differential equation whose coefficients are driven by a white noise.
In experiments, HyperSINDy recovers ground truth governing equations, with learned stochasticity scaling to match that of the data.
arXiv Detail & Related papers (2023-10-07T14:41:59Z) - Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z) - The Manifold Hypothesis for Gradient-Based Explanations [55.01671263121624]
Gradient-based explanation algorithms provide perceptually-aligned explanations.
We show that the more a feature attribution is aligned with the tangent space of the data, the more perceptually-aligned it tends to be.
We suggest that explanation algorithms should actively strive to align their explanations with the data manifold.
arXiv Detail & Related papers (2022-06-15T08:49:24Z) - Non-linear manifold ROM with Convolutional Autoencoders and Reduced
Over-Collocation method [0.0]
Non-affine parametric dependencies, nonlinearities and advection-dominated regimes of the model of interest can result in a slow Kolmogorov n-width decay.
We implement the non-linear manifold method introduced by Carlberg et al [37] with hyper-reduction achieved through reduced over-collocation and teacher-student training of a reduced decoder.
We test the methodology on a 2d non-linear conservation law and a 2d shallow-water model, and compare the results with a purely data-driven method in which the dynamics is evolved in time with a long short-term memory network.
arXiv Detail & Related papers (2022-03-01T11:16:50Z) - Discrete Denoising Flows [87.44537620217673]
We introduce a new discrete flow-based model for categorical random variables: Discrete Denoising Flows (DDFs)
In contrast with other discrete flow-based models, our model can be locally trained without introducing gradient bias.
We show that DDFs outperform Discrete Flows on modeling a toy example, binary MNIST and Cityscapes segmentation maps, measured in log-likelihood.
arXiv Detail & Related papers (2021-07-24T14:47:22Z) - Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to infer from inputs containing missing values, without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z) - Gaussian processes meet NeuralODEs: A Bayesian framework for learning
the dynamics of partially observed systems from scarce and noisy data [0.0]
This paper presents a machine learning framework (GP-NODE) for Bayesian systems identification from partial, noisy and irregular observations of nonlinear dynamical systems.
The proposed method takes advantage of recent developments in differentiable programming to propagate gradient information through ordinary differential equation solvers.
A series of numerical studies is presented to demonstrate the effectiveness of the proposed GP-NODE method including predator-prey systems, systems biology, and a 50-dimensional human motion dynamical system.
arXiv Detail & Related papers (2021-03-04T23:42:14Z) - Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.