LINOCS: Lookahead Inference of Networked Operators for Continuous Stability
- URL: http://arxiv.org/abs/2404.18267v1
- Date: Sun, 28 Apr 2024 18:16:58 GMT
- Title: LINOCS: Lookahead Inference of Networked Operators for Continuous Stability
- Authors: Noga Mudrik, Eva Yezerets, Yenho Chen, Christopher Rozell, Adam Charles
- Abstract summary: We introduce Lookahead-driven Inference of Networked Operators for Continuous Stability (LINOCS).
LINOCS is a robust learning procedure for identifying hidden dynamical interactions in noisy time-series data.
We demonstrate LINOCS' ability to recover the ground truth dynamical operators underlying synthetic time-series data.
- Score: 4.508868068781057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Identifying latent interactions within complex systems is key to unlocking deeper insights into their operational dynamics, including how their elements affect each other and contribute to the overall system behavior. For instance, in neuroscience, discovering neuron-to-neuron interactions is essential for understanding brain function; in ecology, recognizing the interactions among populations is key for understanding complex ecosystems. Such systems, often modeled as dynamical systems, typically exhibit noisy, high-dimensional, and non-stationary temporal behavior that renders their identification challenging. Existing dynamical system identification methods often yield operators that accurately capture short-term behavior but fail to predict long-term trends, suggesting an incomplete capture of the underlying process. Methods that consider extended forecasts (e.g., recurrent neural networks) lack explicit representations of element-wise interactions and require substantial training data, thereby failing to capture interpretable network operators. Here we introduce Lookahead-driven Inference of Networked Operators for Continuous Stability (LINOCS), a robust learning procedure for identifying hidden dynamical interactions in noisy time-series data. LINOCS integrates several multi-step predictions with adaptive weights during training to recover dynamical operators that yield accurate long-term predictions. We demonstrate LINOCS' ability to recover the ground-truth dynamical operators underlying synthetic time-series data for multiple dynamical system models (linear, piecewise-linear, decompositions of time-varying linear systems, and regularized time-varying linear systems), as well as its capability to produce meaningful operators with robust reconstructions in various real-world examples.
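The core idea is easy to state for the plain linear case x_{t+1} = A x_t: instead of fitting A to one-step residuals only, fit it so that its k-step iterates A^k also match the data over several horizons, with the horizons weighted adaptively during training. The sketch below illustrates that lookahead objective; the specific horizons, the error-proportional weights, and the manual-backprop gradient descent are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def lookahead_fit(X, horizons=(1, 2, 4), n_iters=2000, lr=1e-2):
    """Fit a linear operator A so that A^k x_t ~= x_{t+k} for several
    lookahead horizons k, not just k = 1.

    Hypothetical sketch: horizons, error-proportional weights, and plain
    gradient descent are illustrative choices only.
    X is a (T, d) array of observations.
    """
    T, d = X.shape
    A = np.eye(d)  # start at the identity so early iterates stay bounded
    for _ in range(n_iters):
        # current k-step mean-squared errors, used for the adaptive weights
        losses = []
        for k in horizons:
            preds = X[:-k]
            for _ in range(k):
                preds = preds @ A.T  # one application of A per step
            losses.append(((preds - X[k:]) ** 2).mean())
        w = np.array(losses)
        w = w / (w.sum() + 1e-12)  # emphasize horizons that currently fit worst

        grad = np.zeros_like(A)
        for k, wk in zip(horizons, w):
            # forward pass, keeping intermediate states for the chain rule
            states = [X[:-k]]
            for _ in range(k):
                states.append(states[-1] @ A.T)
            # backprop the k-step squared error through the k products with A
            delta = 2.0 * (states[-1] - X[k:]) / states[-1].size
            for step in range(k, 0, -1):
                grad += wk * delta.T @ states[step - 1]
                delta = delta @ A
        A -= lr * grad
    return A

# Example: recover a stable operator from a noisy linear trajectory.
rng = np.random.default_rng(0)
A_true = 0.99 * np.linalg.qr(rng.normal(size=(3, 3)))[0]  # spectral radius 0.99
X = np.zeros((200, 3))
X[0] = rng.normal(size=3)
for t in range(199):
    X[t + 1] = A_true @ X[t] + 0.01 * rng.normal(size=3)
A_hat = lookahead_fit(X)
print(np.abs(A_hat - A_true).max())
```

The multi-step terms penalize operators whose errors compound under iteration, which is why an operator fit this way tends to remain accurate over long rollouts rather than only on one-step predictions.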
Related papers
- Inferring the time-varying coupling of dynamical systems with temporal convolutional autoencoders [0.0]
We introduce Temporal Autoencoders for Causal Inference (TACI).
TACI combines a new surrogate data metric for assessing causal interactions with a novel two-headed machine learning architecture.
We demonstrate TACI's ability to accurately quantify dynamic causal interactions across a variety of systems.
arXiv Detail & Related papers (2024-06-05T12:51:20Z)
- Learning Locally Interacting Discrete Dynamical Systems: Towards Data-Efficient and Scalable Prediction [16.972017028598597]
Locally interacting dynamical systems exhibit complex global dynamics that arise from local, relatively simple, and often stochastic interactions between dynamic elements.
We present Attentive Recurrent Neural Cellular Automata (AR-NCA) to effectively discover unknown local state transition rules.
AR-NCA exhibits superior generalizability across various system configurations.
arXiv Detail & Related papers (2024-04-09T17:00:43Z)
- TimeGraphs: Graph-based Temporal Reasoning [64.18083371645956]
TimeGraphs is a novel approach that characterizes dynamic interactions as a hierarchical temporal graph.
Our approach models the interactions using a compact graph-based representation, enabling adaptive reasoning across diverse time scales.
We evaluate TimeGraphs on multiple datasets with complex, dynamic agent interactions, including a football simulator, the Resistance game, and the MOMA human activity dataset.
arXiv Detail & Related papers (2024-01-06T06:26:49Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Leveraging Neural Koopman Operators to Learn Continuous Representations of Dynamical Systems from Scarce Data [0.0]
We propose a new deep Koopman framework that represents dynamics in an intrinsically continuous way.
This framework leads to better performance on limited training data.
arXiv Detail & Related papers (2023-03-13T10:16:19Z)
- Critical Learning Periods for Multisensory Integration in Deep Networks [112.40005682521638]
We show that the ability of a neural network to integrate information from diverse sources hinges critically on being exposed to properly correlated signals during the early phases of training.
We show that critical periods arise from the complex and unstable early transient dynamics, which are decisive for the final performance of the trained system and its learned representations.
arXiv Detail & Related papers (2022-10-06T23:50:38Z)
- Decomposed Linear Dynamical Systems (dLDS) for learning the latent components of neural dynamics [6.829711787905569]
We propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time series data.
Our model is trained through a dictionary learning procedure, where we leverage recent results in tracking sparse vectors over time.
In both continuous-time and discrete-time instructional examples, we demonstrate that our model approximates the original system well.
arXiv Detail & Related papers (2022-06-07T02:25:38Z)
- Capturing Actionable Dynamics with Structured Latent Ordinary Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Divide and Rule: Recurrent Partitioned Network for Dynamic Processes [25.855428321990328]
Many dynamic processes involve interacting variables, from physical systems to sociological analysis.
Our goal is to represent a system with a part-whole hierarchy and discover the implied dependencies among intra-system variables.
The proposed architecture consists of (i) a perceptive module that extracts a hierarchical and temporally consistent representation of the observation at multiple levels, (ii) a deductive module for determining the relational connection between neurons at each level, and (iii) a statistical module that can predict the future by conditioning on the temporal distributional estimation.
arXiv Detail & Related papers (2021-06-01T06:45:56Z)
- Continuous-in-Depth Neural Networks [107.47887213490134]
We first show that ResNets fail to be meaningful dynamical integrators in this richer sense.
We then demonstrate that neural network models can learn to represent continuous dynamical systems.
We introduce ContinuousNet as a continuous-in-depth generalization of ResNet architectures.
arXiv Detail & Related papers (2020-08-05T22:54:09Z)
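To make the ContinuousNet entry above concrete: a residual block x_{n+1} = x_n + f(x_n, θ_n) is exactly one forward-Euler step of dx/dt = f(x, θ(t)), and reading the network this way lets the same learned vector field be integrated with a different scheme or step count. A minimal numpy sketch of that reading follows; the single-tanh-layer vector field is a placeholder, not ContinuousNet's actual block.

```python
import numpy as np

def f(x, W):
    """Placeholder vector field (one tanh layer); W plays the role of the
    depth-dependent parameters theta(t). Not ContinuousNet's actual block."""
    return np.tanh(x @ W)

def euler_forward(x, Ws, h=1.0):
    """A residual stack read literally: x <- x + h * f(x, W) per block,
    i.e. forward Euler on dx/dt = f(x, theta(t))."""
    for W in Ws:
        x = x + h * f(x, W)
    return x

def rk4_forward(x, Ws, h=1.0):
    """The same parameters integrated with classical RK4: the
    continuous-in-depth reading makes the integrator swappable."""
    for W in Ws:
        k1 = f(x, W)
        k2 = f(x + 0.5 * h * k1, W)
        k3 = f(x + 0.5 * h * k2, W)
        k4 = f(x + h * k3, W)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

rng = np.random.default_rng(0)
Ws = [0.1 * rng.normal(size=(4, 4)) for _ in range(8)]  # eight "blocks"
x0 = rng.normal(size=(1, 4))
print(euler_forward(x0, Ws))  # the ResNet-style pass
print(rk4_forward(x0, Ws))    # higher-order pass, same weights
```

Treating depth as continuous time is what allows a network trained at one step count to be evaluated with a finer or higher-order integrator, the manipulation the paper's continuous-in-depth framing enables.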