Hierarchical deep learning-based adaptive time-stepping scheme for
multiscale simulations
- URL: http://arxiv.org/abs/2311.05961v1
- Date: Fri, 10 Nov 2023 09:47:58 GMT
- Title: Hierarchical deep learning-based adaptive time-stepping scheme for
multiscale simulations
- Authors: Asif Hamid, Danish Rafiq, Shahkar Ahmad Nahvi, Mohammad Abid Bazaz
- Abstract summary: This study proposes a new method for simulating multiscale problems using deep neural networks.
By leveraging the hierarchical learning of neural network time steppers, the method adapts time steps to approximate dynamical system flow maps across timescales.
This approach achieves state-of-the-art performance in less computational time than fixed-step neural network solvers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiscale behavior is a hallmark of complex nonlinear systems. While
simulation with classical numerical methods is restricted by local Taylor-series
constraints, multiscale techniques are often limited by the difficulty of
finding heuristic closures. This study proposes a new method for simulating
multiscale problems using deep neural networks. By leveraging the hierarchical
learning of neural network time steppers, the method adapts time steps to
approximate dynamical system flow maps across timescales. This approach
achieves state-of-the-art performance in less computational time than
fixed-step neural network solvers. The proposed method is demonstrated on
several nonlinear dynamical systems, and source code is provided for
implementation. The method has the potential to benefit multiscale analysis of
complex systems and to encourage further investigation in this area.
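The core recipe is compact enough to sketch. Below is a minimal illustration, assuming a PyTorch setup; the architectures, names, and hyperparameters are assumptions for exposition, not the paper's released code. One residual time-stepper network is trained per step size, and a rollout composes the largest trained steps that fit the remaining horizon.

# Minimal sketch: hierarchical neural-network time-steppers (illustrative).
# One network per step size learns the discrete flow map x(t+dt) ~ F_dt(x(t));
# coarse steppers cover long horizons cheaply, fine steppers resolve fast scales.
import torch
import torch.nn as nn

def make_stepper(dim, hidden=64):
    # Residual form: x_next = x + net(x); one such net is trained per dt.
    return nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                         nn.Linear(hidden, hidden), nn.Tanh(),
                         nn.Linear(hidden, dim))

def train_stepper(net, x_t, x_tdt, epochs=500, lr=1e-3):
    # Fit the residual from snapshot pairs sampled dt apart.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(x_t + net(x_t), x_tdt)
        loss.backward()
        opt.step()
    return net

def hierarchical_rollout(steppers, x0, horizon):
    # steppers: {dt: net}; assumes the smallest dt divides the horizon.
    # Greedily apply the largest trained step that fits the remaining time.
    x, t = x0, 0.0
    while t < horizon - 1e-12:
        for dt in sorted(steppers, reverse=True):
            if t + dt <= horizon + 1e-12:
                x = x + steppers[dt](x)
                t += dt
                break
    return x

Adapting the time step then amounts to choosing which trained stepper to apply at each point along the trajectory, which is where the savings over fixed-step neural network solvers come from.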
Related papers
- Gradient-free training of recurrent neural networks [3.272216546040443]
We introduce a computational approach to construct all weights and biases of a recurrent neural network without using gradient-based methods.
The approach is based on a combination of random feature networks and Koopman operator theory for dynamical systems.
In computational experiments on time series, forecasting for chaotic dynamical systems, and control problems, we observe improvements in both the training time and the forecasting accuracy of the recurrent neural networks we construct.
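The gradient-free ingredient can be illustrated with a toy random-feature sketch (an assumption-laden simplification, not the paper's Koopman-based construction): the recurrent and input weights are drawn at random and frozen, and only the linear readout is fit in closed form.

# Toy sketch of gradient-free training with random features (illustrative).
# Recurrent and input weights are drawn at random and frozen; only the linear
# readout is fit, in closed form, by least squares -- no gradients involved.
import numpy as np

rng = np.random.default_rng(0)
n_hidden, dim = 200, 1

# Frozen random recurrent network (echo-state style).
W_in = rng.normal(scale=0.5, size=(n_hidden, dim))
W_rec = rng.normal(size=(n_hidden, n_hidden))
W_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rec)))  # spectral radius < 1

def hidden_states(u):
    # u: (T, dim) input sequence -> (T, n_hidden) states under tanh dynamics.
    h, states = np.zeros(n_hidden), []
    for u_t in u:
        h = np.tanh(W_rec @ h + W_in @ u_t)
        states.append(h)
    return np.array(states)

# One-step-ahead forecasting: fit readout H @ W_out ~= targets by least squares.
u = np.sin(0.1 * np.arange(1000))[:, None]      # stand-in training signal
H = hidden_states(u[:-1])
W_out = np.linalg.lstsq(H, u[1:], rcond=None)[0]
pred = H @ W_out                                 # one-step predictions
print("train MSE:", np.mean((pred - u[1:]) ** 2))

The paper combines such random features with Koopman operator theory to construct all weights and biases; the sketch shows only the closed-form readout half of that recipe.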
arXiv Detail & Related papers (2024-10-30T21:24:34Z) - Enhancing Computational Efficiency in Multiscale Systems Using Deep Learning of Coordinates and Flow Maps [0.0]
This paper showcases how deep learning techniques can be used to develop a precise time-stepping approach for multiscale systems.
The resulting framework achieves state-of-the-art predictive accuracy while incurring lower computational cost.
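A hedged reading of the title suggests the following shape for the method: an autoencoder learns effective coordinates, and a small network advances the dynamics in those coordinates. Everything below is an illustrative assumption, not the paper's architecture.

# Illustrative sketch: learn coordinates with an autoencoder and a flow map
# in the latent space, so time-stepping happens in the reduced coordinates.
import torch
import torch.nn as nn

class LatentStepper(nn.Module):
    # Autoencoder learns coordinates; a latent-space net plays the flow map.
    def __init__(self, dim, latent=2, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.Tanh(),
                                     nn.Linear(hidden, dim))
        self.flow = nn.Sequential(nn.Linear(latent, hidden), nn.Tanh(),
                                  nn.Linear(hidden, latent))

    def loss(self, x_t, x_tdt):
        z_t = self.encoder(x_t)
        recon = nn.functional.mse_loss(self.decoder(z_t), x_t)     # coordinates
        z_next = z_t + self.flow(z_t)                               # latent step
        dyn = nn.functional.mse_loss(self.decoder(z_next), x_tdt)  # flow map
        return recon + dyn

model = LatentStepper(dim=3)
print(model.loss(torch.randn(16, 3), torch.randn(16, 3)).item())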
arXiv Detail & Related papers (2024-04-28T14:05:13Z) - Self-STORM: Deep Unrolled Self-Supervised Learning for Super-Resolution Microscopy [55.2480439325792]
We introduce deep unrolled self-supervised learning, which alleviates the need for training data by training a sequence-specific, model-based autoencoder.
Our proposed method exceeds the performance of its supervised counterparts.
arXiv Detail & Related papers (2024-03-25T17:40:32Z) - Semi-supervised Learning of Partial Differential Operators and Dynamical
Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the supervision time points and is able to interpolate the solutions to any intermediate time.
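The hyper-network half of the idea can be sketched as follows; a plain MLP stands in for the Fourier Neural Operator, and all names and shapes are assumptions. A small hyper-network maps the query time t to the weights of a solver network u0 -> u(t), so the model can be evaluated at times between the supervised ones.

# Hedged sketch of the hyper-network idea: a "hyper" net maps the query time t
# to the weights of a solver net u0 -> u(t). Because t enters as a continuous
# input, the model can be queried at intermediate, unsupervised times.
import torch
import torch.nn as nn

class HyperSolver(nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.dim, self.hidden = dim, hidden
        n_weights = dim * hidden + hidden + hidden * dim + dim
        self.hyper = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                                   nn.Linear(64, n_weights))

    def forward(self, u0, t):
        # Generate the solver's weights for this particular query time t.
        w = self.hyper(t.view(1, 1)).squeeze(0)
        d, h = self.dim, self.hidden
        W1, w = w[:d * h].view(h, d), w[d * h:]
        b1, w = w[:h], w[h:]
        W2, b2 = w[:h * d].view(d, h), w[h * d:]
        # Apply the generated two-layer solver to the initial condition u0.
        return torch.tanh(u0 @ W1.T + b1) @ W2.T + b2

model = HyperSolver(dim=16)
u0 = torch.randn(4, 16)
u_mid = model(u0, torch.tensor(0.5))  # query at an unsupervised, intermediate time
print(u_mid.shape)                    # torch.Size([4, 16])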
arXiv Detail & Related papers (2022-07-28T19:59:14Z) - Multisymplectic Formulation of Deep Learning Using Mean--Field Type
Control and Nonlinear Stability of Training Algorithm [0.0]
We formulate training of deep neural networks as a hydrodynamics system with a multisymplectic structure.
To do so, the deep neural network is modelled as a differential equation, and mean-field-type control is then used to train it.
The numerical scheme yields an approximate solution which is also an exact solution of a hydrodynamics system with a multisymplectic structure.
arXiv Detail & Related papers (2022-07-07T23:14:12Z) - A Deep Gradient Correction Method for Iteratively Solving Linear Systems [5.744903762364991]
We present a novel approach to approximate the solution of large, sparse, symmetric, positive-definite linear systems of equations.
Our algorithm is capable of reducing the linear system residual to a given tolerance in a small number of iterations.
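The pattern being described, stated loosely, is a classical residual iteration augmented by a learned correction. The sketch below uses an untrained placeholder network and illustrative shapes; the paper trains the network so that the residual drops below tolerance in few iterations.

# Hedged sketch of the iteration pattern: a Richardson-style residual update
# plus a learned correction term. The network is an untrained stand-in.
import torch
import torch.nn as nn

def solve(A, b, corrector, iters=100, alpha=0.1):
    # x <- x + alpha * r + corrector(r), with residual r = b - A x.
    x = torch.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x
        x = x + alpha * r + corrector(r)
    return x

n = 8
A = torch.eye(n) + 0.05 * torch.ones(n, n)   # small SPD test matrix
b = torch.randn(n)
# Bias-free so the correction vanishes as the residual vanishes; untrained here.
corrector = nn.Sequential(nn.Linear(n, 32, bias=False), nn.Tanh(),
                          nn.Linear(32, n, bias=False))
with torch.no_grad():
    x = solve(A, b, corrector)
# An untrained corrector may converge slowly or not at all; the paper trains
# the network so the residual drops below tolerance in a few iterations.
print("residual norm:", torch.norm(b - A @ x).item())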
arXiv Detail & Related papers (2022-05-22T06:40:38Z) - Scalable computation of prediction intervals for neural networks via
matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z) - Accelerated Policy Learning with Parallel Differentiable Simulation [59.665651562534755]
We present a differentiable simulator and a new policy learning algorithm (SHAC).
Our algorithm alleviates problems with local minima through a smooth critic function.
We show substantial improvements in sample efficiency and wall-clock time over state-of-the-art RL and differentiable simulation-based algorithms.
arXiv Detail & Related papers (2022-04-14T17:46:26Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Hierarchical Deep Learning of Multiscale Differential Equation
Time-Steppers [5.6385744392820465]
We develop a hierarchy of deep neural network time-steppers to approximate the flow map of the dynamical system over a disparate range of time-scales.
The resulting model is purely data-driven and leverages features of the multiscale dynamics.
We benchmark our algorithm against state-of-the-art methods, such as LSTM, reservoir computing, and clockwork RNN.
arXiv Detail & Related papers (2020-08-22T07:16:53Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm that our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
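A much-simplified two-level sketch of the multi-level idea (an assumption for exposition, not the paper's multipole architecture): short-range interactions are aggregated on the fine graph, long-range ones through a small set of cluster summaries, keeping the cost linear in the number of nodes.

# Two-level toy: fine-graph neighbor averaging plus coarse cluster mixing.
import numpy as np

def two_level_pass(x, neighbors, cluster):
    # x: (n, d) node features; neighbors[i]: fine-graph neighbors of node i;
    # cluster: (n,) coarse-cluster id of each node.
    # Fine level: average over local neighbors (short-range interaction).
    fine = np.stack([x[nb].mean(axis=0) for nb in neighbors])
    # Coarse level: pool nodes into k cluster summaries, mix the summaries
    # all-to-all (k x k is cheap since k << n), and broadcast back to nodes.
    k = int(cluster.max()) + 1
    pooled = np.stack([x[cluster == c].mean(axis=0) for c in range(k)])
    mix = np.ones((k, k)) / k          # stand-in for a learned cluster coupling
    coarse = (mix @ pooled)[cluster]   # long-range context for every node
    return fine + coarse

# Toy 1D chain: 16 nodes, adjacent-node neighbors, 4 coarse clusters.
n, d = 16, 3
x = np.random.default_rng(1).normal(size=(n, d))
neighbors = [[max(i - 1, 0), min(i + 1, n - 1)] for i in range(n)]
cluster = np.repeat(np.arange(4), 4)
print(two_level_pass(x, neighbors, cluster).shape)  # (16, 3)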
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.