Recovering the Graph Underlying Networked Dynamical Systems under
Partial Observability: A Deep Learning Approach
- URL: http://arxiv.org/abs/2208.04405v3
- Date: Wed, 12 Apr 2023 22:07:10 GMT
- Title: Recovering the Graph Underlying Networked Dynamical Systems under
Partial Observability: A Deep Learning Approach
- Authors: Sérgio Machado, Anirudh Sridhar, Paulo Gil, Jorge Henriques, José
M. F. Moura, Augusto Santos
- Abstract summary: We study the problem of graph structure identification, i.e., of recovering the graph of dependencies among time series.
We devise a new feature vector computed from the observed time series and prove that these features are linearly separable.
We use these features to train Convolutional Neural Networks (CNNs).
- Score: 7.209528581296429
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of graph structure identification, i.e., of recovering
the graph of dependencies among time series. We model these time series data as
components of the state of linear stochastic networked dynamical systems. We
assume partial observability, where the state evolution of only a subset of
nodes comprising the network is observed. We devise a new feature vector
computed from the observed time series and prove that these features are
linearly separable, i.e., there exists a hyperplane that separates the cluster
of features associated with connected pairs of nodes from those associated with
disconnected pairs. This renders the features amenable to training a variety of
classifiers to perform causal inference. In particular, we use these features
to train Convolutional Neural Networks (CNNs). The resulting causal inference
mechanism outperforms state-of-the-art counterparts with respect to sample complexity.
The trained CNNs generalize well over structurally distinct networks (dense or
sparse) and noise-level profiles. Remarkably, they also generalize well to
real-world networks while trained over a synthetic network (realization of a
random graph). Finally, the proposed method consistently reconstructs the graph
in a pairwise manner, that is, by deciding whether an edge (or arrow) is present
or absent between each pair of nodes, from the corresponding pair of time series.
This fits the framework of large-scale systems, where observation or processing
of all nodes in the network is prohibitive.
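The pairwise pipeline in the abstract can be illustrated with a minimal numerical sketch. The paper's exact feature vector is not reproduced here; as a stand-in, the example below uses lagged cross-moments between each observed pair of time series, and a least-squares linear classifier in place of the trained CNN. The network model, observation subset, and feature choice are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stochastic NDS: x(t+1) = W x(t) + noise.
# W, the observed subset, and the lag features are illustrative choices,
# not the construction used in the paper.
n, T = 20, 5000
A = (rng.random((n, n)) < 0.2).astype(float)   # directed Erdos-Renyi support
np.fill_diagonal(A, 0.0)
B = A + np.eye(n)
W = 0.9 * B / B.sum(axis=1, keepdims=True)     # row sums are 0.9 -> stable

x = np.zeros((T, n))
for t in range(T - 1):
    x[t + 1] = W @ x[t] + rng.standard_normal(n)

obs = np.arange(12)        # partial observability: only 12 of 20 nodes seen
xo = x[:, obs]

def pair_feature(i, j, lags=4):
    """Lagged cross-moments of observed pair (i, j) -- a stand-in feature."""
    return np.array([np.mean(xo[:-k, i] * xo[k:, j])
                     for k in range(1, lags + 1)])

pairs = [(i, j) for i in range(len(obs)) for j in range(len(obs)) if i != j]
X = np.stack([pair_feature(i, j) for i, j in pairs])
y = np.array([A[obs[j], obs[i]] > 0 for i, j in pairs])  # edge i -> j present?

# Least-squares linear classifier as a stand-in for the paper's CNN:
# if the features are (near-)linearly separable, a hyperplane suffices.
Xb = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Xb, 2.0 * y - 1.0, rcond=None)
acc = ((Xb @ w > 0) == y).mean()
print(f"pairwise edge-detection accuracy: {acc:.2f}")
```

Note that each decision uses only the two time series of the pair, which is what makes the scheme applicable when observing or processing all nodes at once is infeasible.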
Related papers
- Formal Verification of Graph Convolutional Networks with Uncertain Node Features and Uncertain Graph Structure [7.133681867718039]
Graph neural networks are becoming increasingly popular in the field of machine learning.
They have been applied in safety-critical environments where perturbations inherently occur.
This research closes that verification gap by preserving the dependencies of all elements in the underlying computations.
arXiv Detail & Related papers (2024-04-23T14:12:48Z)
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
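The substitution described in this summary can be sketched in simplified form. The actual GNN-LoFI architecture differs; the toy graph, bin count, and reference histograms below are hypothetical stand-ins used only to show the idea of comparing local feature histograms via histogram intersection:

```python
import numpy as np

def hist_intersection(h1, h2):
    # Histogram intersection kernel: sum of elementwise minima
    return np.minimum(h1, h2).sum()

# Toy graph: adjacency list and scalar node features (illustrative only)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
feats = np.array([0.1, 0.4, 0.35, 0.9])
bins = np.linspace(0.0, 1.0, 6)                # 5 bins on [0, 1]

def local_hist(v):
    # Normalized histogram of the features in v's closed neighborhood
    vals = feats[[v] + adj[v]]
    h, _ = np.histogram(vals, bins=bins)
    return h / h.sum()

# "Learned" reference histograms (random here, stand-ins for trained params)
rng = np.random.default_rng(1)
refs = rng.dirichlet(np.ones(5), size=3)

# New node representation: intersection scores against each reference,
# replacing a classical message-passing aggregation step
emb = np.array([[hist_intersection(local_hist(v), r) for r in refs]
                for v in sorted(adj)])
```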
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- Inferring the Graph of Networked Dynamical Systems under Partial Observability and Spatially Colored Noise [2.362288417229025]
In a Networked Dynamical System (NDS), each node is a system whose dynamics are coupled with the dynamics of neighboring nodes.
The underlying network is unknown in many applications and should be inferred from observed data.
arXiv Detail & Related papers (2023-12-18T16:19:07Z)
- MTS2Graph: Interpretable Multivariate Time Series Classification with Temporal Evolving Graphs [1.1756822700775666]
We introduce a new framework for interpreting time series data by extracting and clustering the input representative patterns.
We run experiments on eight datasets of the UCR/UEA archive, along with HAR and PAM datasets.
arXiv Detail & Related papers (2023-06-06T16:24:27Z)
- Temporal Aggregation and Propagation Graph Neural Networks for Dynamic Representation [67.26422477327179]
Temporal graphs exhibit dynamic interactions between nodes over continuous time.
We propose a novel method of temporal graph convolution with the whole neighborhood.
Our proposed TAP-GNN outperforms existing temporal graph methods by a large margin in terms of both predictive performance and online inference latency.
arXiv Detail & Related papers (2023-04-15T08:17:18Z)
- GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first it models the distribution of features associated with the nodes of the given graph, in the second it complements the edge features conditionally on the node features.
arXiv Detail & Related papers (2022-12-01T11:49:07Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network, that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Learning the Evolutionary and Multi-scale Graph Structure for Multivariate Time Series Forecasting [50.901984244738806]
We show how to model the evolutionary and multi-scale interactions of time series.
In particular, we first provide a hierarchical graph structure cooperated with the dilated convolution to capture the scale-specific correlations.
A unified neural network is provided to integrate the components above to get the final prediction.
arXiv Detail & Related papers (2022-06-28T08:11:12Z)
- Slope and generalization properties of neural networks [0.0]
We show that the distribution of the slope of a well-trained neural network classifier is generally independent of the width of the layers in a fully connected network.
The slope is of similar size throughout the relevant volume, and varies smoothly. It also behaves as predicted in rescaling examples.
We discuss possible applications of the slope concept, such as using it as a part of the loss function or stopping criterion during network training, or ranking data sets in terms of their complexity.
arXiv Detail & Related papers (2021-07-03T17:54:27Z)
- Graph Neural Networks with Composite Kernels [60.81504431653264]
We re-interpret node aggregation from the perspective of kernel weighting.
We present a framework to consider feature similarity in an aggregation scheme.
We propose feature aggregation as the composition of the original neighbor-based kernel and a learnable kernel to encode feature similarities in a feature space.
arXiv Detail & Related papers (2020-05-16T04:44:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.