On the latent dimension of deep autoencoders for reduced order modeling
of PDEs parametrized by random fields
- URL: http://arxiv.org/abs/2310.12095v1
- Date: Wed, 18 Oct 2023 16:38:23 GMT
- Title: On the latent dimension of deep autoencoders for reduced order modeling
of PDEs parametrized by random fields
- Authors: Nicola Rares Franco, Daniel Fraulin, Andrea Manzoni and Paolo Zunino
- Abstract summary: This paper provides theoretical insights into the use of Deep Learning-based ROMs (DL-ROMs) in the presence of random fields.
We derive explicit error bounds that can guide domain practitioners when choosing the latent dimension of deep autoencoders.
We evaluate the practical usefulness of our theory by means of numerical experiments.
- Score: 0.6827423171182154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning is having a remarkable impact on the design of Reduced Order
Models (ROMs) for Partial Differential Equations (PDEs), where it is exploited
as a powerful tool for tackling complex problems for which classical methods
might fail. In this respect, deep autoencoders play a fundamental role, as they
provide an extremely flexible tool for reducing the dimensionality of a given
problem by leveraging the nonlinear capabilities of neural networks. Indeed,
starting from this paradigm, several successful approaches have already been
developed, which are here referred to as Deep Learning-based ROMs (DL-ROMs).
Nevertheless, when it comes to stochastic problems parametrized by random
fields, the current understanding of DL-ROMs is mostly based on empirical
evidence: in fact, their theoretical analysis is currently limited to the case
of PDEs depending on a finite number of (deterministic) parameters. The purpose
of this work is to extend the existing literature by providing some theoretical
insights about the use of DL-ROMs in the presence of stochasticity generated by
random fields. In particular, we derive explicit error bounds that can guide
domain practitioners when choosing the latent dimension of deep autoencoders.
We evaluate the practical usefulness of our theory by means of numerical
experiments, showing how our analysis can significantly impact the performance
of DL-ROMs.
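As a minimal, self-contained illustration of the dimensionality-reduction paradigm described in the abstract (a sketch only, not the paper's architecture or its error bounds), the snippet below trains a small dense autoencoder with latent dimension 1 on synthetic snapshots u(mu) = sin(mu x), a one-parameter solution family embedded in R^20. The network sizes, synthetic data, and training loop are all assumptions made for this example.

```python
import numpy as np

# Synthetic "snapshot" data: u(mu) = sin(mu * x) sampled on a grid, i.e. a
# one-parameter nonlinear solution manifold embedded in R^20. All sizes and
# hyperparameters below are illustrative assumptions.
rng = np.random.default_rng(0)
n, k, h = 20, 1, 16                       # full dim, latent dim, hidden width
x = np.linspace(0.0, np.pi, n)
mus = rng.uniform(1.0, 3.0, size=200)
U = np.sin(np.outer(mus, x))              # snapshot matrix, shape (200, 20)

# Encoder n -> h -> k and decoder k -> h -> n, with tanh hidden layers.
W1 = rng.normal(0.0, 0.3, (n, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.3, (h, k)); b2 = np.zeros(k)
V1 = rng.normal(0.0, 0.3, (k, h)); c1 = np.zeros(h)
V2 = rng.normal(0.0, 0.3, (h, n)); c2 = np.zeros(n)

def forward(U):
    H1 = np.tanh(U @ W1 + b1)
    Z = H1 @ W2 + b2                      # latent codes, shape (m, k)
    H2 = np.tanh(Z @ V1 + c1)
    return H1, Z, H2, H2 @ V2 + c2        # last entry: reconstruction

def mse(U):
    return float(np.mean((forward(U)[-1] - U) ** 2))

lr, m = 0.05, U.shape[0]
loss0 = mse(U)
for _ in range(500):                      # plain full-batch gradient descent
    H1, Z, H2, Uhat = forward(U)
    G = 2.0 * (Uhat - U) / (m * n)        # dLoss/dUhat
    gV2 = H2.T @ G;   gc2 = G.sum(0)
    GH2 = (G @ V2.T) * (1.0 - H2 ** 2)    # back through decoder tanh
    gV1 = Z.T @ GH2;  gc1 = GH2.sum(0)
    GZ = GH2 @ V1.T
    gW2 = H1.T @ GZ;  gb2 = GZ.sum(0)
    GH1 = (GZ @ W2.T) * (1.0 - H1 ** 2)   # back through encoder tanh
    gW1 = U.T @ GH1;  gb1 = GH1.sum(0)
    for P, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2),
                 (V1, gV1), (c1, gc1), (V2, gV2), (c2, gc2)):
        P -= lr * g                       # in-place parameter update

print(f"reconstruction MSE: {loss0:.4f} -> {mse(U):.4f}")
```

Since these snapshots lie on a one-dimensional manifold, a latent dimension of 1 suffices in principle; the error bounds derived in the paper aim to make this kind of choice quantitative when the parametrization comes from a random field.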
Related papers
- Handling geometrical variability in nonlinear reduced order modeling through Continuous Geometry-Aware DL-ROMs [0.6827423171182154]
We propose Continuous Geometry-Aware DL-ROMs (CGA-DL-ROMs) for geometrically parametrized problems.
CGA-DL-ROMs are endowed with a strong inductive bias that makes them aware of geometrical parametrizations.
arXiv Detail & Related papers (2024-11-08T11:32:33Z)
- Physics-informed Discretization-independent Deep Compositional Operator Network [1.2430809884830318]
We introduce a novel physics-informed model architecture which can generalize to various discrete representations of PDE parameters and irregular domain shapes.
Inspired by deep operator neural networks, our model repeatedly applies a discretization-independent learning of parameter embeddings.
Numerical results demonstrate the accuracy and efficiency of the proposed method.
arXiv Detail & Related papers (2024-04-21T12:41:30Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive characterization of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- A practical existence theorem for reduced order models based on convolutional autoencoders [0.4604003661048266]
Deep learning has gained increasing popularity in the fields of Partial Differential Equations (PDEs) and Reduced Order Modeling (ROM).
CNN-based autoencoders have proven extremely effective, outperforming established techniques, such as the reduced basis method, when dealing with complex nonlinear problems.
We provide a new practical existence theorem for CNN-based autoencoders when the parameter-to-solution map is holomorphic.
arXiv Detail & Related papers (2024-02-01T09:01:58Z)
- Higher-order topological kernels via quantum computation [68.8204255655161]
Topological data analysis (TDA) has emerged as a powerful tool for extracting meaningful insights from complex data.
We propose a quantum approach to defining Betti kernels, which is based on constructing Betti curves with increasing order.
arXiv Detail & Related papers (2023-07-14T14:48:52Z)
- Neural network analysis of neutron and X-ray reflectivity data: Incorporating prior knowledge for tackling the phase problem [141.5628276096321]
We present an approach that utilizes prior knowledge to regularize the training process over larger parameter spaces.
We demonstrate the effectiveness of our method in various scenarios, including multilayer structures with box model parameterization.
In contrast to previous methods, our approach scales favorably when increasing the complexity of the inverse problem.
arXiv Detail & Related papers (2023-06-28T11:15:53Z)
- Adaptive Log-Euclidean Metrics for SPD Matrix Learning [73.12655932115881]
We propose Adaptive Log-Euclidean Metrics (ALEMs), which extend the widely used Log-Euclidean Metric (LEM).
The experimental and theoretical results demonstrate the merit of the proposed metrics in improving the performance of SPD neural networks.
arXiv Detail & Related papers (2023-03-26T18:31:52Z)
- Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach leads to a better estimation of pattern latency than the state-of-the-art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- A Deep Learning approach to Reduced Order Modelling of Parameter Dependent Partial Differential Equations [0.2148535041822524]
We develop a constructive approach based on Deep Neural Networks for the efficient approximation of the parameter-to-solution map.
In particular, we consider parametrized advection-diffusion PDEs, and we test the methodology in the presence of strong transport fields.
arXiv Detail & Related papers (2021-03-10T17:01:42Z)
- POD-DL-ROM: enhancing deep learning-based reduced order models for nonlinear parametrized PDEs by proper orthogonal decomposition [0.0]
Deep learning-based reduced order models (DL-ROMs) have been recently proposed to overcome common limitations shared by conventional reduced order models (ROMs).
In this paper we propose a possible way to avoid an expensive training stage of DL-ROMs, by (i) performing a prior dimensionality reduction through POD, and (ii) relying on a multi-fidelity pretraining stage.
The proposed POD-DL-ROM is tested on several (both scalar and vector, linear and nonlinear) time-dependent parametrized PDEs.
arXiv Detail & Related papers (2021-01-28T07:34:15Z)
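Step (i) of the POD-DL-ROM summary above, the prior dimensionality reduction through POD, amounts to a truncated SVD of the snapshot matrix. The sketch below illustrates only this step on a synthetic one-parameter family; the data, tolerance, and energy-based truncation criterion are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

# Snapshot matrix for a synthetic one-parameter family u(x; mu) = exp(-mu*x);
# grid, parameter range, and tolerance are illustrative assumptions.
x = np.linspace(0.0, 1.0, 100)
mus = np.linspace(0.5, 2.0, 50)
S = np.stack([np.exp(-mu * x) for mu in mus], axis=1)   # shape (100, 50)

# Step (i): POD basis via a (truncated) SVD of the snapshots.
Phi, sigma, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
N = int(np.searchsorted(energy, 1.0 - 1e-8)) + 1        # energy criterion
V = Phi[:, :N]                                          # POD basis, (100, N)

coeffs = V.T @ S               # reduced coordinates a DL-ROM would learn
S_rec = V @ coeffs             # POD reconstruction of the snapshots
err = np.linalg.norm(S - S_rec) / np.linalg.norm(S)
print(f"POD dimension N = {N}, relative reconstruction error = {err:.2e}")
```

The reduced coefficients `V.T @ S` are what the subsequent DL-ROM stage would be trained on, which is how the prior POD projection lightens the otherwise expensive training on full-order states.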
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.