Predicting Instability in Complex Oscillator Networks: Limitations and
Potentials of Network Measures and Machine Learning
- URL: http://arxiv.org/abs/2402.17500v1
- Date: Tue, 27 Feb 2024 13:34:08 GMT
- Title: Predicting Instability in Complex Oscillator Networks: Limitations and
Potentials of Network Measures and Machine Learning
- Authors: Christian Nauck, Michael Lindner, Nora Molkenthin, Jürgen Kurths,
Eckehard Schöll, Jörg Raisch and Frank Hellmann
- Abstract summary: We collect 46 relevant network measures and find that no small subset can reliably predict stability.
The performance of GNNs can only be matched by combining all network measures and nodewise machine learning.
This suggests that correlations of network measures and function may be misleading, and that GNNs capture the causal relationship between structure and stability substantially better.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A central question of network science is how functional properties of systems
arise from their structure. For networked dynamical systems, structure is
typically quantified with network measures. A functional property that is of
theoretical and practical interest for oscillatory systems is the stability of
synchrony to localized perturbations. Recently, Graph Neural Networks (GNNs)
have been shown to predict this stability successfully; at the same time,
network measures have struggled to paint a clear picture. Here we collect 46
relevant network measures and find that no small subset can reliably predict
stability. The performance of GNNs can only be matched by combining all network
measures and nodewise machine learning. However, unlike GNNs, this approach
fails to extrapolate from network ensembles to several real power grid
topologies. This suggests that correlations of network measures and function
may be misleading, and that GNNs capture the causal relationship between
structure and stability substantially better.
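To make the nodewise-measures baseline concrete, here is a minimal sketch (not the paper's code; the toy graph, the two-measure subset, and all function names are illustrative) of computing two of the kinds of per-node network measures the paper aggregates, degree and local clustering coefficient:

```python
# Illustrative sketch only: per-node network measures on a toy graph.
# The paper combines 46 such measures; this shows just two of them.
from itertools import combinations

def degree(adj, n):
    """Number of neighbors of node n."""
    return len(adj[n])

def clustering(adj, n):
    """Local clustering coefficient: fraction of neighbor pairs that are linked."""
    nbrs = adj[n]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
    return 2.0 * links / (k * (k - 1))

# Toy 5-node graph: a triangle (0, 1, 2) with a path 2-3-4 attached.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
adj = {n: set() for n in range(5)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Per-node feature vectors, as a nodewise ML model would consume them.
features = {n: (degree(adj, n), clustering(adj, n)) for n in adj}
print(features[0])  # node 0: degree 2, clustering 1.0 (neighbors 1 and 2 are linked)
```

In the paper's setup, feature vectors like these (over all 46 measures) are fed to a nodewise learner, which matches GNN performance on network ensembles but fails to extrapolate to real power grid topologies.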
Related papers
- TDNetGen: Empowering Complex Network Resilience Prediction with Generative Augmentation of Topology and Dynamics [14.25304439234864]
We introduce a novel resilience prediction framework for complex networks, designed to tackle this issue through generative data augmentation of network topology and dynamics.
Experiment results on three network datasets demonstrate that our proposed framework TDNetGen can achieve high prediction accuracy up to 85%-95%.
arXiv Detail & Related papers (2024-08-19T09:20:31Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of Conjugate Variables in System Attacks [54.565579874913816]
Neural networks are inherently vulnerable to small, non-random perturbations that emerge as adversarial attacks.
A mathematical congruence between this mechanism and the uncertainty principle of quantum physics casts light on a hitherto unanticipated interdisciplinary connection.
arXiv Detail & Related papers (2024-02-16T02:11:27Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Certified Invertibility in Neural Networks via Mixed-Integer Programming [16.64960701212292]
Neural networks are known to be vulnerable to adversarial attacks.
There may exist large, meaningful perturbations that do not affect the network's decision.
We discuss how our findings can be useful for invertibility certification in transformations between neural networks.
arXiv Detail & Related papers (2023-01-27T15:40:38Z)
- Uncovering the Origins of Instability in Dynamical Systems: How Attention Mechanism Can Help? [0.0]
We show that attention should be directed toward the collective behaviour of imbalanced structures and polarity-driven structural instabilities within the network.
Our study provides a proof of concept to understand why perturbing some nodes of a network may cause dramatic changes in the network dynamics.
arXiv Detail & Related papers (2022-12-19T17:16:41Z)
- Vanilla Feedforward Neural Networks as a Discretization of Dynamical Systems [9.382423715831687]
In this paper, we return to the classical network structure and prove that vanilla feedforward networks can also be viewed as a numerical discretization of dynamical systems.
Our results could provide a new perspective for understanding the approximation properties of feedforward neural networks.
arXiv Detail & Related papers (2022-09-22T10:32:08Z)
- A Learning Convolutional Neural Network Approach for Network Robustness Prediction [13.742495880357493]
Network robustness is critical for protecting various societal and industrial networks against malicious attacks.
In this paper, an improved method for network robustness prediction is developed based on learning feature representations with a convolutional neural network (LFR-CNN).
In this scheme, higher-dimensional network data are compressed to lower-dimensional representations, and then passed to a CNN to perform robustness prediction.
arXiv Detail & Related papers (2022-03-20T13:45:55Z)
- Stability of Neural Networks on Manifolds to Relative Perturbations [118.84154142918214]
Graph Neural Networks (GNNs) show impressive performance in many practical scenarios.
GNNs can scale well to large graphs, but this is contradicted by the fact that existing stability bounds grow with the number of nodes.
arXiv Detail & Related papers (2021-10-10T04:37:19Z)
- Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN) with forward and backward computations independent of the network topology.
arXiv Detail & Related papers (2021-06-15T09:03:28Z)
- Decentralized Control with Graph Neural Networks [147.84766857793247]
We propose a novel framework using graph neural networks (GNNs) to learn decentralized controllers.
GNNs are well-suited for the task since they are naturally distributed architectures and exhibit good scalability and transferability properties.
The problems of flocking and multi-agent path planning are explored to illustrate the potential of GNNs in learning decentralized controllers.
arXiv Detail & Related papers (2020-12-29T18:59:14Z)
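The recurring mechanism behind the GNN results above is local message passing: each node updates its state from its neighbors only, which is why GNNs are naturally distributed architectures. As a toy illustration (pure Python; not code from any of the listed papers, and the 0.5/0.5 mixing weight is an arbitrary assumption), a single mean-aggregation message-passing round can be sketched as:

```python
# Hypothetical sketch: one round of mean-aggregation message passing.
def message_pass(adj, h):
    """adj: node -> list of neighbor nodes; h: node -> feature tuple.

    Each node averages its neighbors' features and mixes the result
    with its own state (fixed 0.5/0.5 mix, for illustration only).
    """
    out = {}
    for n, nbrs in adj.items():
        if nbrs:
            agg = tuple(sum(h[m][i] for m in nbrs) / len(nbrs)
                        for i in range(len(h[n])))
        else:
            agg = h[n]  # isolated node keeps its own state
        out[n] = tuple(0.5 * own + 0.5 * a for own, a in zip(h[n], agg))
    return out

# Star graph: node 0 connected to 1 and 2; only node 0 starts "hot".
adj = {0: [1, 2], 1: [0], 2: [0]}
h = {0: (1.0,), 1: (0.0,), 2: (0.0,)}
h1 = message_pass(adj, h)
print(h1[1])  # node 1 has absorbed half of node 0's state: (0.5,)
```

Because each update touches only a node's neighborhood, the same computation runs unchanged on graphs of any size or topology, which is the scalability and transferability property the decentralized-control work exploits.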
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.