Uncovering the Origins of Instability in Dynamical Systems: How
Attention Mechanism Can Help?
- URL: http://arxiv.org/abs/2212.09641v1
- Date: Mon, 19 Dec 2022 17:16:41 GMT
- Title: Uncovering the Origins of Instability in Dynamical Systems: How
Attention Mechanism Can Help?
- Authors: Nooshin Bahador, Milad Lankarany
- Abstract summary: We show that attention should be directed toward the collective behaviour of imbalanced structures and polarity-driven structural instabilities within the network.
Our study provides a proof of concept to understand why perturbing some nodes of a network may cause dramatic changes in the network dynamics.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The behavior of a network and its stability are governed both by the
dynamics of individual nodes and by their topological interconnections. The
attention mechanism, an integral part of many neural network models, was
initially designed for natural language processing (NLP) and has since shown
excellent performance in combining the dynamics of individual nodes with the
coupling strengths between them within a network. Despite the undoubted impact
of the attention mechanism, it is not yet clear why some nodes of a network
receive higher attention weights. To arrive at more explainable solutions, we
approached the problem from a stability perspective. According to stability
theory, negative connections in a network can create feedback loops or other
complex structures by allowing information to flow in the opposite direction.
These structures play a critical role in the dynamics of a complex system and
can contribute to abnormal synchronization, amplification, or suppression. We
hypothesized that nodes involved in organizing such structures can push the
entire network into instability modes and therefore warrant higher attention
during analysis. To test this hypothesis, the attention mechanism, together
with spectral and topological stability analyses, was applied to a real-world
numerical problem: a linear Multi-Input Multi-Output (MIMO) state-space model
of a piezoelectric tube actuator. Our findings suggest that attention should
be directed toward the collective behaviour of imbalanced structures and
polarity-driven structural instabilities within the network. The results
demonstrate that the nodes receiving more attention cause more instability in
the system. Our study provides a proof of concept for understanding why
perturbing some nodes of a network may cause dramatic changes in the network
dynamics.
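To make the abstract's stability criterion concrete, here is a minimal sketch in Python. It is illustrative only: the 4-node coupling matrix, the two-cycle polarity scan, and the node-removal score are our own assumptions, not the paper's piezoelectric-actuator model or its attention mechanism. The sketch checks spectral stability of a linear system dx/dt = Ax and scores each node by how much removing it lowers the leading eigenvalue.

```python
import numpy as np

# Illustrative 4-node linear system dx/dt = A x. The matrix values are
# made up for demonstration; they are NOT taken from the paper's
# piezoelectric-actuator model. Off-diagonal signs encode polarity.
A = np.array([
    [-1.0,  0.8,  0.0,  0.0],
    [-0.9, -1.0,  0.5,  0.0],
    [ 0.0,  0.4, -1.0,  1.2],
    [ 0.0,  0.0,  1.1, -0.2],
])
n = A.shape[0]

# Spectral stability: asymptotically stable iff every eigenvalue of A
# has a strictly negative real part.
eigvals = np.linalg.eigvals(A)
print("eigenvalues:", np.round(eigvals, 3))
print("stable:", bool(np.all(eigvals.real < 0)))

# Polarity of feedback structures: the sign of A_ij * A_ji classifies
# each two-node loop as negative or positive feedback.
for i in range(n):
    for j in range(i + 1, n):
        gain = A[i, j] * A[j, i]
        if gain != 0:
            kind = "negative" if gain < 0 else "positive"
            print(f"{kind} feedback loop {i}<->{j}, gain {gain:+.2f}")

# A crude attention-like node score: how much the leading eigenvalue
# drops when a node is removed (higher = node is more destabilizing).
lead = eigvals.real.max()
for k in range(n):
    keep = [i for i in range(n) if i != k]
    sub_lead = np.linalg.eigvals(A[np.ix_(keep, keep)]).real.max()
    print(f"node {k} score: {lead - sub_lead:+.3f}")
```

In this toy matrix, the strong positive loop between nodes 2 and 3 is what pushes an eigenvalue into the right half-plane, so those two nodes receive the highest removal scores, while the polarity scan separately flags the negative feedback loop between nodes 0 and 1.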
Related papers
- Neural Networks Decoded: Targeted and Robust Analysis of Neural Network Decisions via Causal Explanations and Reasoning [9.947555560412397]
We introduce TRACER, a novel method grounded in causal inference theory to estimate the causal dynamics underpinning DNN decisions.
Our approach systematically intervenes on input features to observe how specific changes propagate through the network, affecting internal activations and final outputs.
TRACER further enhances explainability by generating counterfactuals that reveal possible model biases and offer contrastive explanations for misclassifications.
arXiv Detail & Related papers (2024-10-07T20:44:53Z)
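As a rough illustration of the intervention idea behind TRACER (a generic sketch under our own assumptions, not the authors' implementation), one can replace a single input feature with a baseline value and measure how far the model's output moves:

```python
import numpy as np

# Generic feature-intervention probe: set one input feature to a
# baseline value (do(x_i = baseline)) and measure the output shift.
# This sketches the general idea only, not the TRACER method itself.
def intervention_effects(model, x, baseline=0.0):
    base_out = model(x)
    effects = []
    for i in range(x.shape[0]):
        x_int = x.copy()
        x_int[i] = baseline
        effects.append(np.abs(model(x_int) - base_out).sum())
    return np.array(effects)

# Toy stand-in 'model': a fixed linear map with a tanh nonlinearity.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
model = lambda x: np.tanh(W @ x)

x = rng.normal(size=5)
print("per-feature effects:", np.round(intervention_effects(model, x), 3))
```

Features with large effects are the ones whose intervention propagates most strongly through the network to the output.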
- Predicting Instability in Complex Oscillator Networks: Limitations and Potentials of Network Measures and Machine Learning [0.0]
We collect 46 relevant network measures and find that no small subset can reliably predict stability.
The performance of GNNs can only be matched by combining all network measures and nodewise machine learning.
This suggests that correlations between network measures and function may be misleading, and that GNNs capture the causal relationship between structure and stability substantially better.
arXiv Detail & Related papers (2024-02-27T13:34:08Z)
- Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of Conjugate Variables in System Attacks [54.565579874913816]
Neural networks demonstrate inherent vulnerability to small, non-random perturbations, which manifest as adversarial attacks.
A mathematical congruence emerges between this mechanism and the uncertainty principle of quantum physics, revealing a previously unanticipated interdisciplinary connection.
arXiv Detail & Related papers (2024-02-16T02:11:27Z)
- Inferring the Graph of Networked Dynamical Systems under Partial Observability and Spatially Colored Noise [2.362288417229025]
In a Networked Dynamical System (NDS), each node is a system whose dynamics are coupled with the dynamics of neighboring nodes.
The underlying network is unknown in many applications and should be inferred from observed data.
arXiv Detail & Related papers (2023-12-18T16:19:07Z)
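A minimal sketch of the inference setting, under the simplifying assumptions of full observability and white noise (both of which the paper relaxes):

```python
import numpy as np

# Toy network inference: recover an unknown coupling matrix from
# observed trajectories by least squares. The paper's setting with
# partial observability and spatially colored noise is much harder.
rng = np.random.default_rng(1)
n, T = 5, 2000

# Random coupling matrix rescaled to spectral radius 0.9 (stable).
M = rng.normal(size=(n, n))
A_true = 0.9 * M / np.abs(np.linalg.eigvals(M)).max()

# Simulate the networked dynamics x_{t+1} = A x_t + noise.
X = np.zeros((T, n))
X[0] = rng.normal(size=n)
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.05 * rng.normal(size=n)

# Least squares: X[1:] is approximately X[:-1] @ A.T, so solve for A.T.
A_hat_T, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
print("max entry error:", float(np.abs(A_hat_T.T - A_true).max()))
```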
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z)
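The parameter saving behind low-rank recurrent connectivity can be sketched generically (this shows the factorization idea only, not the paper's CfC architecture):

```python
import numpy as np

# Low-rank parameterization of a recurrent weight matrix: W = U @ V.T
# with rank r << n cuts the parameter count from n*n to 2*n*r.
n, r = 128, 4
rng = np.random.default_rng(2)
U = rng.normal(size=(n, r)) / np.sqrt(n)
V = rng.normal(size=(n, r)) / np.sqrt(n)

print("full-rank params:", n * n)      # 16384
print("low-rank params :", 2 * n * r)  # 1024

# One recurrent step with the factored weights: h' = tanh(W h + x),
# computed as U @ (V.T @ h) without ever materializing W explicitly.
h = rng.normal(size=n)
x = rng.normal(size=n)
h_next = np.tanh(U @ (V.T @ h) + x)
print("state norm:", round(float(np.linalg.norm(h_next)), 3))
```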
- Input correlations impede suppression of chaos and learning in balanced rate networks [58.720142291102135]
Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity.
We show that in firing-rate networks in the balanced state, external control of recurrent dynamics strongly depends on correlations in the input.
arXiv Detail & Related papers (2022-01-24T19:20:49Z)
- Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons [0.7340017786387767]
We introduce Latent Equilibrium, a new framework for inference and learning in networks of slow components.
We derive disentangled neuron and synapse dynamics from a prospective energy function.
We show how our principle can be applied to detailed models of cortical microcircuitry.
arXiv Detail & Related papers (2021-10-27T16:15:55Z)
- Stability of Neural Networks on Manifolds to Relative Perturbations [118.84154142918214]
Graph Neural Networks (GNNs) show impressive performance in many practical scenarios.
GNNs can scale well to large graphs, yet existing stability bounds grow with the number of nodes, in tension with this scalability.
arXiv Detail & Related papers (2021-10-10T04:37:19Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
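The notion of joint perturbation can be illustrated on a toy linear model (our own sketch; the paper's formal definition of non-singular robustness is more involved):

```python
import numpy as np

# Joint perturbation: nudge the data input and the model weights at
# the same time and measure the resulting output shift. A toy sketch
# of the notion only, not the paper's formalization.
rng = np.random.default_rng(3)
W = rng.normal(size=(2, 4))   # toy model weights
x = rng.normal(size=4)        # toy input

eps_x, eps_w = 0.01, 0.01     # perturbation budgets
dx = eps_x * rng.normal(size=x.shape)
dW = eps_w * rng.normal(size=W.shape)

out = W @ x
out_joint = (W + dW) @ (x + dx)
print("joint output shift:", float(np.linalg.norm(out_joint - out)))
```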
- Global minimization via classical tunneling assisted by collective force field formation [3.0938904602244346]
We describe a phenomenon where the increase of dimensions self-consistently generates a force field due to dynamical instabilities.
We dub this collective and nonperturbative effect a "Lyapunov force" which steers the system towards the global minimum of the potential function.
The mechanism is appealing for its physical relevance in nanoscale physics, and to possible applications in optimization, novel Monte Carlo schemes and machine learning.
arXiv Detail & Related papers (2021-02-05T19:09:20Z)
- Problems of representation of electrocardiograms in convolutional neural networks [58.720142291102135]
We show that these problems are systemic in nature.
They stem from how convolutional networks handle composite objects whose parts are not rigidly fixed but can move significantly.
arXiv Detail & Related papers (2020-12-01T14:02:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.