Hierarchical Deep Learning of Multiscale Differential Equation
Time-Steppers
- URL: http://arxiv.org/abs/2008.09768v1
- Date: Sat, 22 Aug 2020 07:16:53 GMT
- Title: Hierarchical Deep Learning of Multiscale Differential Equation
Time-Steppers
- Authors: Yuying Liu, J. Nathan Kutz, Steven L. Brunton
- Abstract summary: We develop a hierarchy of deep neural network time-steppers to approximate the flow map of the dynamical system over a disparate range of time-scales.
The resulting model is purely data-driven and leverages features of the multiscale dynamics.
We benchmark our algorithm against state-of-the-art methods, such as LSTM, reservoir computing, and clockwork RNN.
- Score: 5.6385744392820465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nonlinear differential equations rarely admit closed-form solutions, thus
requiring numerical time-stepping algorithms to approximate solutions. Further,
many systems characterized by multiscale physics exhibit dynamics over a vast
range of timescales, making numerical integration computationally expensive due
to numerical stiffness. In this work, we develop a hierarchy of deep neural
network time-steppers to approximate the flow map of the dynamical system over
a disparate range of time-scales. The resulting model is purely data-driven and
leverages features of the multiscale dynamics, enabling numerical integration
and forecasting that is both accurate and highly efficient. Moreover, similar
ideas can be used to couple neural network-based models with classical
numerical time-steppers. Our multiscale hierarchical time-stepping scheme
provides important advantages over current time-stepping algorithms, including
(i) circumventing numerical stiffness due to disparate time-scales, (ii)
improved accuracy in comparison with leading neural-network architectures,
(iii) efficiency in long-time simulation/forecasting due to explicit training
of slow time-scale dynamics, and (iv) a flexible framework that is
parallelizable and may be integrated with standard numerical time-stepping
algorithms. The method is demonstrated on a wide range of nonlinear dynamical
systems, including the Van der Pol oscillator, the Lorenz system, the
Kuramoto-Sivashinsky equation, and fluid flow past a cylinder; audio and video
signals are also explored. On the sequence generation examples, we benchmark
our algorithm against state-of-the-art methods, such as LSTM, reservoir
computing, and clockwork RNN. Despite the structural simplicity of our method,
it outperforms competing methods on numerical integration.
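The hierarchical scheme is easiest to see in code. The minimal sketch below, written in PyTorch, assumes a family of learned flow maps in which stepper k advances the state by 2^k fine steps; a forecast is then assembled greedily from the coarsest stepper downward, so slow dynamics are covered in a few large steps and only the remainder uses fine steps. The `FlowMapMLP` architecture, the binary step spacing, and `hierarchical_forecast` are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of hierarchical neural time-stepping (illustrative,
# not the paper's code). Each network approximates a flow map
# x(t) -> x(t + 2**k * dt) for a fixed fine step size dt.
import torch
import torch.nn as nn

class FlowMapMLP(nn.Module):
    """Hypothetical residual MLP approximating one flow map."""
    def __init__(self, dim, width=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, dim),
        )

    def forward(self, x):
        # Residual form: x_{n+1} = x_n + NN(x_n), so the network only
        # has to learn the update over one (possibly large) step.
        return x + self.net(x)

def hierarchical_forecast(x0, steppers, n_fine_steps):
    """Advance x0 by n_fine_steps, using the coarsest steppers first.

    steppers[k] is assumed to cover 2**k fine steps, so a horizon of,
    say, 13 steps decomposes as 8 + 4 + 1 network evaluations instead
    of 13 small-step evaluations.
    """
    x, remaining = x0, n_fine_steps
    for k in reversed(range(len(steppers))):
        stride = 2 ** k
        while remaining >= stride:
            x = steppers[k](x)
            remaining -= stride
    return x

# Usage sketch: four (untrained) steppers covering 1, 2, 4, 8 fine steps.
dim = 3  # e.g. the state dimension of the Lorenz system
steppers = [FlowMapMLP(dim) for _ in range(4)]
x0 = torch.randn(1, dim)
with torch.no_grad():
    x_final = hierarchical_forecast(x0, steppers, n_fine_steps=13)
```

In practice each stepper would be trained independently on input-output pairs (x(t), x(t + 2^k dt)) sampled from simulation data, which is also what makes the hierarchy parallelizable across time-scales; the networks above are left untrained and only demonstrate the composition logic.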
Related papers
- Enhancing Computational Efficiency in Multiscale Systems Using Deep Learning of Coordinates and Flow Maps [0.0]
This paper showcases how deep learning techniques can be used to develop a precise time-stepping approach for multiscale systems.
The resulting framework achieves state-of-the-art predictive accuracy at lower computational cost.
arXiv Detail & Related papers (2024-04-28T14:05:13Z)
- Neural Dynamical Operator: Continuous Spatial-Temporal Model with Gradient-Based and Derivative-Free Optimization Methods [0.0]
We present a data-driven modeling framework called neural dynamical operator that is continuous in both space and time.
A key feature of the neural dynamical operator is the resolution-invariance with respect to both spatial and temporal discretizations.
We show that the proposed model can better predict long-term statistics via the hybrid optimization scheme.
arXiv Detail & Related papers (2023-11-20T14:31:18Z)
- Hierarchical deep learning-based adaptive time-stepping scheme for multiscale simulations [0.0]
This study proposes a new method for simulating multiscale problems using deep neural networks.
By leveraging the hierarchical learning of neural network time steppers, the method adapts time steps to approximate dynamical system flow maps across timescales.
This approach achieves state-of-the-art performance in less computational time compared to fixed-step neural network solvers.
arXiv Detail & Related papers (2023-11-10T09:47:58Z)
- On Fast Simulation of Dynamical System with Neural Vector Enhanced Numerical Solver [59.13397937903832]
We introduce a deep learning-based corrector called Neural Vector (NeurVec).
NeurVec can compensate for integration errors and enable larger time step sizes in simulations.
Our experiments on a variety of complex dynamical system benchmarks demonstrate that NeurVec exhibits remarkable generalization capability.
arXiv Detail & Related papers (2022-08-07T09:02:18Z)
- Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the supervision time points and can interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z)
- Learning effective dynamics from data-driven stochastic systems [2.4578723416255754]
This work is devoted to investigating the effective dynamics for slow-fast dynamical systems.
We propose a novel algorithm, built around a neural network called Auto-SDE, to learn an invariant slow manifold.
arXiv Detail & Related papers (2022-05-09T09:56:58Z)
- DiffPD: Differentiable Projective Dynamics with Contact [65.88720481593118]
We present DiffPD, an efficient differentiable soft-body simulator with implicit time integration.
We evaluate the performance of DiffPD and observe a speedup of 4-19 times compared to the standard Newton's method in various applications.
arXiv Detail & Related papers (2021-01-15T00:13:33Z)
- Fast and differentiable simulation of driven quantum systems [58.720142291102135]
We introduce a semi-analytic method based on the Dyson expansion that allows us to time-evolve driven quantum systems much faster than standard numerical methods.
We show results of the optimization of a two-qubit gate using transmon qubits in the circuit QED architecture.
arXiv Detail & Related papers (2020-12-16T21:43:38Z)
- Continuous-in-Depth Neural Networks [107.47887213490134]
We first show that ResNets fail to be meaningful dynamical integrators in this richer sense.
We then demonstrate that neural network models can learn to represent continuous dynamical systems.
We introduce ContinuousNet as a continuous-in-depth generalization of ResNet architectures.
arXiv Detail & Related papers (2020-08-05T22:54:09Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.