Machine Learning Enhanced Hankel Dynamic-Mode Decomposition
- URL: http://arxiv.org/abs/2303.06289v3
- Date: Tue, 18 Jul 2023 17:39:49 GMT
- Title: Machine Learning Enhanced Hankel Dynamic-Mode Decomposition
- Authors: Christopher W. Curtis, D. Jay Alford-Lago, Erik Bollt, Andrew Tuma
- Abstract summary: We develop a deep-learning-based DMD method to build an adaptive learning scheme that better approximates higher-dimensional and chaotic dynamics.
This appears to be a key feature in enhancing the DMD overall, and it should help provide further insight for developing other deep learning methods for time series analysis and model generation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While the acquisition of time series has become more straightforward,
developing dynamical models from time series is still a challenging and
evolving problem domain. Within the last several years, to address this
problem, there has been a merging of machine learning tools with what is called
the dynamic mode decomposition (DMD). This general approach has been shown to
be an especially promising avenue for accurate model development. Building on
this prior body of work, we develop a deep-learning-based DMD method which
makes use of the fundamental insight of Takens' Embedding Theorem to build an
adaptive learning scheme that better approximates higher dimensional and
chaotic dynamics. We call this method the Deep Learning Hankel DMD (DLHDMD). We
likewise explore how our method learns mappings which tend, after successful
training, to significantly change the mutual information between dimensions in
the dynamics. This appears to be a key feature in enhancing the DMD overall,
and it should help provide further insight for developing other deep learning
methods for time series analysis and model generation.
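The abstract's two core ingredients, a Takens-style delay (Hankel) embedding followed by a standard DMD fit, can be sketched as follows. This is a generic illustration, not the authors' DLHDMD (the learned deep mapping is omitted), and all function names are our own:

```python
import numpy as np

def hankel_embed(x, delays):
    """Takens-style delay embedding: stack `delays` shifted copies of a 1-D series."""
    n = len(x) - delays + 1
    return np.stack([x[i:i + n] for i in range(delays)])

def dmd(X, Y, rank):
    """Exact DMD: best-fit linear operator with Y ~ A X, via rank-truncated SVD."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s   # projected operator, rank x rank
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ Vh.conj().T / s) @ W            # exact DMD modes
    return eigvals, modes

# Toy example: a damped oscillation. The scalar series is lifted by the Hankel
# embedding into a space where a linear (DMD) model captures it exactly.
t = np.linspace(0.0, 10.0, 500)
x = np.exp(-0.1 * t) * np.cos(2.0 * np.pi * t)
H = hankel_embed(x, delays=20)
eigvals, modes = dmd(H[:, :-1], H[:, 1:], rank=2)
omega = np.log(eigvals) / (t[1] - t[0])          # continuous-time eigenvalues
```

For this noise-free signal the recovered continuous-time eigenvalues have real part near the decay rate -0.1 and imaginary parts near the angular frequency ±2π, which is exactly the kind of structure a delay embedding exposes to DMD.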
Related papers
- LETS Forecast: Learning Embedology for Time Series Forecasting [8.05466205230466]
We introduce DeepEDM, a framework that integrates nonlinear dynamical systems modeling with deep neural networks.
Inspired by empirical dynamic modeling (EDM) and rooted in Takens' theorem, DeepEDM presents a novel deep model that learns a latent space from time-delayed embeddings.
Our results show that DeepEDM is robust to input noise and outperforms state-of-the-art methods in forecasting accuracy.
arXiv Detail & Related papers (2025-06-06T18:24:12Z)
- Grassmannian Geometry Meets Dynamic Mode Decomposition in DMD-GEN: A New Metric for Mode Collapse in Time Series Generative Models [0.0]
Generative models like Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) often fail to capture the full diversity of their training data, leading to mode collapse.
We introduce a new definition of mode collapse specific to time series and propose a novel metric, DMD-GEN, to quantify its severity.
arXiv Detail & Related papers (2024-12-15T19:53:17Z)
- Learning System Dynamics without Forgetting [60.08612207170659]
Predicting trajectories of systems with unknown dynamics is crucial in various research fields, including physics and biology.
We present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics.
We construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z)
- Entropic Regression DMD (ERDMD) Discovers Informative Sparse and Nonuniformly Time Delayed Models [0.0]
We present a method which determines optimal multi-step dynamic mode decomposition models via entropic regression.
We develop a method that produces high-fidelity time-delay DMD models that allow for nonuniform time spacing.
These models are shown to be highly efficient and robust.
arXiv Detail & Related papers (2024-06-17T20:02:43Z)
- A New View on Planning in Online Reinforcement Learning [19.35031543927374]
This paper investigates a new approach to model-based reinforcement learning using background planning.
We show that our GSP algorithm can propagate value from an abstract space in a manner that helps a variety of base learners learn significantly faster in different domains.
arXiv Detail & Related papers (2024-06-03T17:45:19Z)
- DynaMMo: Dynamic Model Merging for Efficient Class Incremental Learning for Medical Images [0.8213829427624407]
Continual learning, the ability to acquire knowledge from new data while retaining previously learned information, is a fundamental challenge in machine learning.
We propose Dynamic Model Merging, DynaMMo, a method that merges multiple networks at different stages of model training to achieve better computational efficiency.
We evaluate DynaMMo on three publicly available datasets, demonstrating its effectiveness compared to existing approaches.
arXiv Detail & Related papers (2024-04-22T11:37:35Z)
- Predictive Experience Replay for Continual Visual Control and Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks.
arXiv Detail & Related papers (2023-03-12T05:08:03Z)
- Learning Differential Operators for Interpretable Time Series Modeling [34.32259687441212]
We propose a learning framework that can automatically obtain interpretable PDE models from sequential data.
Our model can provide valuable interpretability and achieve comparable performance to state-of-the-art models.
arXiv Detail & Related papers (2022-09-03T20:14:31Z)
- Which priors matter? Benchmarking models for learning latent dynamics [70.88999063639146]
Several methods have been proposed to integrate priors from classical mechanics into machine learning models.
We take a sober look at the current capabilities of these models.
We find that the use of continuous and time-reversible dynamics benefits models of all classes.
arXiv Detail & Related papers (2021-11-09T23:48:21Z)
- Dynamic Mode Decomposition in Adaptive Mesh Refinement and Coarsening Simulations [58.720142291102135]
Dynamic Mode Decomposition (DMD) is a powerful data-driven method used to extract coherent structures.
This paper proposes a strategy that enables DMD to extract coherent structures from observations with different mesh topologies and dimensions.
arXiv Detail & Related papers (2021-04-28T22:14:25Z)
- GEM: Group Enhanced Model for Learning Dynamical Control Systems [78.56159072162103]
We build effective dynamical models that are amenable to sample-based learning.
We show that learning the dynamics on a Lie algebra vector space is more effective than learning a direct state transition model.
This work sheds light on a connection between learning of dynamics and Lie group properties, which opens doors for new research directions.
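The idea in this entry, learning dynamics on a Lie algebra vector space and mapping the result back to the group, can be illustrated for SO(3) with the hat map and Rodrigues' exponential. This is a generic sketch of the exponential-map machinery, not the GEM model itself, and the function names are our own:

```python
import numpy as np

def hat(w):
    """so(3) hat map: 3-vector -> skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rodrigues' formula: exponential map from so(3) to SO(3)."""
    w = np.asarray(w, dtype=float)
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# A learned model would predict the increment w in the flat Lie algebra;
# the group element is then recovered exactly with the exponential map.
R = exp_so3([0.0, 0.0, np.pi / 2])   # quarter turn about the z-axis
```

Predicting in the algebra and exponentiating guarantees the output stays on the group (here, a valid rotation matrix), which is the structural benefit the entry refers to.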
arXiv Detail & Related papers (2021-04-07T01:08:18Z)
- Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning [137.39196753245105]
We present a new model-based reinforcement learning algorithm that learns a multi-headed dynamics model for dynamics generalization.
We incorporate context learning, which encodes dynamics-specific information from past experiences into the context latent vector.
Our method exhibits superior zero-shot generalization performance across a variety of control tasks, compared to state-of-the-art RL methods.
arXiv Detail & Related papers (2020-10-26T03:20:42Z)
- Accelerating Training in Artificial Neural Networks with Dynamic Mode Decomposition [0.0]
We propose a method to decouple the evaluation of the update rule at each weight.
By fine-tuning the number of backpropagation steps used for each DMD model estimation, a significant reduction in the number of operations required to train the neural networks can be achieved.
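The underlying idea, recording weight snapshots over a few backpropagation steps, fitting a best-fit linear operator (the DMD least-squares step), and iterating it to skip ahead without further backpropagation, can be sketched generically. This is not the paper's exact procedure, and the toy "training" trajectory below is fabricated purely for illustration:

```python
import numpy as np

# Toy stand-in for recorded weight snapshots: a trajectory that decays
# geometrically toward zero, mimicking converging training iterates.
w0 = np.array([1.0, -2.0, 0.5, 3.0])
snapshots = np.stack([(0.9 ** k) * w0 for k in range(8)], axis=1)  # shape (4, 8)

# Fit a linear operator A with w_{k+1} ~ A w_k in the least-squares sense,
# then iterate it to extrapolate several steps without backpropagation.
X, Y = snapshots[:, :-1], snapshots[:, 1:]
A = Y @ np.linalg.pinv(X)
w_future = np.linalg.matrix_power(A, 10) @ snapshots[:, -1]  # extrapolate w_17
```

Each iteration of `A` replaces an optimizer step, which is where the claimed reduction in training operations would come from.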
arXiv Detail & Related papers (2020-06-18T22:59:55Z)
- A Comprehensive Study on Temporal Modeling for Online Action Detection [50.558313106389335]
Online action detection (OAD) is a practical yet challenging task, which has attracted increasing attention in recent years.
This paper aims to provide a comprehensive study on temporal modeling for OAD including four meta types of temporal modeling methods.
We present several hybrid temporal modeling methods, which outperform the recent state-of-the-art methods with sizable margins on THUMOS-14 and TVSeries.
arXiv Detail & Related papers (2020-01-21T13:12:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.