LQResNet: A Deep Neural Network Architecture for Learning Dynamic
Processes
- URL: http://arxiv.org/abs/2103.02249v1
- Date: Wed, 3 Mar 2021 08:19:43 GMT
- Title: LQResNet: A Deep Neural Network Architecture for Learning Dynamic
Processes
- Authors: Pawan Goyal and Peter Benner
- Abstract summary: A data-driven approach, namely the operator inference framework, models a dynamic process.
We suggest combining the operator inference with certain deep neural network approaches to infer the unknown nonlinear dynamics of the system.
- Score: 9.36739413306697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mathematical modeling is an essential step, for example, to analyze the
transient behavior of a dynamical process and to perform engineering studies
such as optimization and control. With the help of first principles and expert
knowledge, a dynamic model can be built, but for complex dynamic processes
arising, e.g., in biology, chemical plants, neuroscience, and financial
markets, this often remains an onerous task. Hence, data-driven modeling of the
dynamic process becomes an attractive choice and is supported by rapid
advancements in sensor and measurement technology. A data-driven approach,
namely the operator inference framework, models a dynamic process under the
assumption that the nonlinear term has a particular structure. In this work, we
suggest combining operator inference with deep neural network approaches to
infer the unknown nonlinear dynamics of the system. The approach leverages
recent advancements in deep learning and, where available, prior knowledge of
the process. We also briefly discuss several extensions and advantages of the
proposed methodology. We demonstrate that the proposed methodology accomplishes
the desired tasks for dynamic processes encountered in neural dynamics and the
glycolytic oscillator.
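The abstract gives the modeling idea but no implementation details; below is a minimal sketch, in PyTorch, of the general recipe of augmenting an operator-inference-style linear-quadratic model with a neural-network correction, i.e., approximating dx/dt ≈ A x + H (x ⊗ x) + f_θ(x). The class name, network sizes, and training loop are illustrative assumptions and are not taken from the paper.

```python
# Hedged sketch (not the authors' code): a linear-quadratic model with a
# neural-network correction term, combining an operator-inference-style
# structure with deep learning: dx/dt ≈ A x + H (x ⊗ x) + f_theta(x).
import torch
import torch.nn as nn


class LinearQuadraticResidualModel(nn.Module):
    """Hypothetical model: learnable linear/quadratic operators plus an MLP residual."""

    def __init__(self, state_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.A = nn.Parameter(0.01 * torch.randn(state_dim, state_dim))      # linear operator
        self.H = nn.Parameter(0.01 * torch.randn(state_dim, state_dim ** 2))  # quadratic operator
        self.residual = nn.Sequential(                                        # NN correction term
            nn.Linear(state_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, state_dim); x ⊗ x as a batched outer product, flattened
        xx = torch.einsum("bi,bj->bij", x, x).flatten(start_dim=1)
        return x @ self.A.T + xx @ self.H.T + self.residual(x)


def train(model, x_data, dxdt_data, epochs=2000, lr=1e-3):
    """Fit the right-hand side to (state, time-derivative) pairs, e.g. obtained
    from trajectory snapshots by finite differences."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.mean((model(x_data) - dxdt_data) ** 2)
        loss.backward()
        opt.step()
    return model
```

In the classical operator inference framework, the operators A and H would typically be obtained by a least-squares fit to snapshot and time-derivative data; in this sketch all components are simply trained jointly by gradient descent for brevity.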
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z) - Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems [2.170477444239546]
We develop an approach that balances these two objectives: the Gaussian Process Switching Linear Dynamical System (gpSLDS).
Our method builds on previous work modeling the latent state evolution via a differential equation whose nonlinear dynamics are described by a Gaussian process (GP-SDEs).
Our approach resolves key limitations of the rSLDS such as artifactual oscillations in dynamics near discrete state boundaries, while also providing posterior uncertainty estimates of the dynamics.
arXiv Detail & Related papers (2024-07-19T15:32:15Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Deep Learning-based Analysis of Basins of Attraction [49.812879456944984]
This research addresses the challenge of characterizing the complexity and unpredictability of basins within various dynamical systems.
The main focus is on demonstrating the efficiency of convolutional neural networks (CNNs) in this field.
arXiv Detail & Related papers (2023-09-27T15:41:12Z) - Learning low-dimensional dynamics from whole-brain data improves task
capture [2.82277518679026]
We introduce a novel approach to learning low-dimensional approximations of neural dynamics by using a sequential variational autoencoder (SVAE).
Our method finds smooth dynamics that can predict cognitive processes with accuracy higher than classical methods.
We evaluate our approach on various task-fMRI datasets, including motor, working memory, and relational processing tasks.
arXiv Detail & Related papers (2023-05-18T18:43:13Z) - ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - Decomposed Linear Dynamical Systems (dLDS) for learning the latent
components of neural dynamics [6.829711787905569]
We propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time series data.
Our model is trained through a dictionary learning procedure, where we leverage recent results in tracking sparse vectors over time.
In both continuous-time and discrete-time instructional examples, we demonstrate that our model can approximate the original system well.
arXiv Detail & Related papers (2022-06-07T02:25:38Z) - Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot robot and a radio-controlled (RC) car; a generic sketch of this learned-dynamics trajectory-optimization recipe appears after this list.
arXiv Detail & Related papers (2022-04-09T22:07:34Z) - Constructing Neural Network-Based Models for Simulating Dynamical
Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z) - Planning from Images with Deep Latent Gaussian Process Dynamics [2.924868086534434]
Planning is a powerful approach to control problems with known environment dynamics.
In unknown environments the agent needs to learn a model of the system dynamics to make planning applicable.
We propose to learn a deep latent Gaussian process dynamics (DLGPD) model that learns low-dimensional system dynamics from environment interactions with visual observations.
arXiv Detail & Related papers (2020-05-07T21:29:45Z)
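The entry on gradient-based trajectory optimization with learned dynamics describes a common two-step recipe: fit a differentiable dynamics model to transition data, then optimize a control sequence by backpropagating a task cost through model rollouts. The sketch below is a generic, hypothetical illustration of that recipe, not code from the cited paper; the network architecture, cost function, and dimensions are assumptions.

```python
# Generic sketch (not from the cited paper): optimize a control sequence by
# gradient descent through a learned, differentiable dynamics model.
import torch
import torch.nn as nn

state_dim, ctrl_dim, horizon = 4, 2, 30

# Placeholder model; in practice it would first be fit to (x_t, u_t, x_{t+1}) data.
dynamics = nn.Sequential(
    nn.Linear(state_dim + ctrl_dim, 128), nn.ReLU(),
    nn.Linear(128, state_dim),
)

def rollout_cost(x0, controls, goal):
    """Roll the learned model forward and accumulate a quadratic tracking cost."""
    x, cost = x0, 0.0
    for u in controls:
        x = x + dynamics(torch.cat([x, u]))                    # residual next-state prediction
        cost = cost + torch.sum((x - goal) ** 2) + 1e-2 * torch.sum(u ** 2)
    return cost

x0 = torch.zeros(state_dim)
goal = torch.ones(state_dim)
controls = torch.zeros(horizon, ctrl_dim, requires_grad=True)  # decision variables
opt = torch.optim.Adam([controls], lr=0.05)                    # only the controls are optimized

for _ in range(200):                                           # gradient-based trajectory optimization
    opt.zero_grad()
    loss = rollout_cost(x0, controls, goal)
    loss.backward()
    opt.step()
```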