Learning the Causal Structure of Networked Dynamical Systems under Latent Nodes and Structured Noise
- URL: http://arxiv.org/abs/2312.05974v3
- Date: Mon, 12 Feb 2024 14:17:39 GMT
- Title: Learning the Causal Structure of Networked Dynamical Systems under Latent Nodes and Structured Noise
- Authors: Augusto Santos, Diogo Rente, Rui Seabra and José M. F. Moura
- Abstract summary: This paper considers learning the hidden causal network of a linear networked dynamical system (NDS) from the time series data at some of its nodes.
The dynamics of the NDS are driven by colored noise that generates spurious associations across pairs of nodes, rendering the problem much harder.
To address the challenge of noise correlation and partial observability, we assign to each pair of nodes a feature vector computed from the time series data of observed nodes.
- Score: 2.362288417229025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers learning the hidden causal network of a linear networked dynamical system (NDS) from the time series data at some of its nodes -- partial observability. The dynamics of the NDS are driven by colored noise that generates spurious associations across pairs of nodes, rendering the problem much harder. To address the challenge of noise correlation and partial observability, we assign to each pair of nodes a feature vector computed from the time series data of observed nodes. The feature embedding is engineered to yield structural consistency: there exists an affine hyperplane that consistently partitions the set of features, separating the feature vectors corresponding to connected pairs of nodes from those corresponding to disconnected pairs. The causal inference problem is thus addressed via clustering the designed features. We demonstrate with simple baseline supervised methods the competitive performance of the proposed causal inference mechanism under broad connectivity regimes and noise correlation levels, including a real-world network. Further, we devise novel technical guarantees of structural consistency for linear NDS under the considered regime.
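The pipeline described in the abstract can be illustrated with a minimal sketch. All specifics below are assumptions for illustration, not the authors' exact method: the feature embedding is stood in for by simple lagged cross-correlations, and the affine-hyperplane partition is approximated by two-cluster k-means. The sketch simulates a linear NDS driven by temporally colored (AR(1)) noise, observes a subset of nodes, assigns each ordered pair of observed nodes a feature vector, and clusters the features into putative "connected" vs. "disconnected" pairs.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# --- Simulate a linear NDS: x(t+1) = A x(t) + w(t) ---
n = 10                                    # total number of nodes
adj = (rng.random((n, n)) < 0.2).astype(float)
np.fill_diagonal(adj, 0.0)                # no self-loops in the graph
A = 0.5 * adj
np.fill_diagonal(A, 0.3)                  # self-dynamics on the diagonal
A *= 0.9 / max(1.0, np.abs(np.linalg.eigvals(A)).max())  # enforce stability

T = 3000
x = np.zeros((T, n))
w = np.zeros(n)
for t in range(1, T):
    w = 0.5 * w + rng.normal(size=n)      # AR(1) colored noise (assumption)
    x[t] = A @ x[t - 1] + w

# --- Partial observability: keep only a subset of the nodes ---
obs = list(range(7))                      # observed nodes (assumption)
y = x[:, obs]

# --- Feature vector per ordered pair: lagged cross-correlations ---
# (illustrative stand-in for the paper's engineered feature embedding)
def pair_features(y, i, j, lags=(1, 2, 3)):
    feats = []
    for k in lags:
        c = np.corrcoef(y[:-k, i], y[k:, j])[0, 1]  # corr(y_i(t), y_j(t+k))
        feats.append(c)
    return np.array(feats)

pairs = [(i, j) for i in range(len(obs)) for j in range(len(obs)) if i != j]
F = np.vstack([pair_features(y, i, j) for i, j in pairs])

# --- Cluster the features into two groups: putative edges vs. non-edges ---
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(F)
```

In the paper's regime, structural consistency means the connected-pair features and disconnected-pair features fall on opposite sides of some affine hyperplane, so an unsupervised two-way partition of the features (or a simple supervised linear baseline) suffices to recover the edge set among observed nodes.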
Related papers
- AdaRC: Mitigating Graph Structure Shifts during Test-Time [66.40525136929398]
Test-time adaptation (TTA) has attracted attention due to its ability to adapt a pre-trained model to a target domain without re-accessing the source domain.
We propose AdaRC, an innovative framework designed for effective and efficient adaptation to structure shifts in graphs.
arXiv Detail & Related papers (2024-10-09T15:15:40Z)
- Inferring the Graph of Networked Dynamical Systems under Partial Observability and Spatially Colored Noise [2.362288417229025]
In a Networked Dynamical System (NDS), each node is a system whose dynamics are coupled with the dynamics of neighboring nodes.
The underlying network is unknown in many applications and should be inferred from observed data.
arXiv Detail & Related papers (2023-12-18T16:19:07Z)
- Enhancing the Performance of Neural Networks Through Causal Discovery and Integration of Domain Knowledge [30.666463571510242]
We develop a methodology to encode hierarchical causality structure among observed variables into a neural network in order to improve its predictive performance.
The proposed methodology, called causality-informed neural network (CINN), leverages three coherent steps to map the structural causal knowledge into the layer-to-layer design of neural network.
arXiv Detail & Related papers (2023-11-29T01:25:00Z)
- DANI: Fast Diffusion Aware Network Inference with Preserving Topological Structure Property [2.8948274245812327]
We propose a novel method called DANI to infer the underlying network while preserving its structural properties.
DANI has higher accuracy and lower run time while maintaining structural properties, including modular structure, degree distribution, connected components, density, and clustering coefficients.
arXiv Detail & Related papers (2023-10-02T23:23:00Z)
- Temporal Aggregation and Propagation Graph Neural Networks for Dynamic Representation [67.26422477327179]
Temporal graphs exhibit dynamic interactions between nodes over continuous time.
We propose a novel method of temporal graph convolution with the whole neighborhood.
Our proposed TAP-GNN outperforms existing temporal graph methods by a large margin in terms of both predictive performance and online inference latency.
arXiv Detail & Related papers (2023-04-15T08:17:18Z)
- Recovering the Graph Underlying Networked Dynamical Systems under Partial Observability: A Deep Learning Approach [7.209528581296429]
We study the problem of graph structure identification, i.e., of recovering the graph of dependencies among time series.
We devise a new feature vector computed from the observed time series and prove that these features are linearly separable.
We use these features to train Convolutional Neural Networks (CNNs).
arXiv Detail & Related papers (2022-08-08T20:32:28Z)
- Robust Knowledge Adaptation for Dynamic Graph Neural Networks [61.8505228728726]
We propose Ada-DyGNN: a robust knowledge Adaptation framework via reinforcement learning for Dynamic Graph Neural Networks.
Our approach constitutes the first attempt to explore robust knowledge adaptation via reinforcement learning.
Experiments on three benchmark datasets demonstrate that Ada-DyGNN achieves the state-of-the-art performance.
arXiv Detail & Related papers (2022-07-22T02:06:53Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Convolutional Dynamic Alignment Networks for Interpretable Classifications [108.83345790813445]
We introduce a new family of neural network models called Convolutional Dynamic Alignment Networks (CoDA-Nets).
Their core building blocks are Dynamic Alignment Units (DAUs), which linearly transform their input with weight vectors that dynamically align with task-relevant patterns.
CoDA-Nets model the classification prediction through a series of input-dependent linear transformations, allowing for linear decomposition of the output into individual input contributions.
arXiv Detail & Related papers (2021-03-31T18:03:53Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
- Feature-Attention Graph Convolutional Networks for Noise Resilient Learning [20.059242373860013]
We propose FA-GCN, a feature-attention graph convolution learning framework, to handle networks with noisy and sparse node content.
Experiments and validations, w.r.t. different noise levels, demonstrate that FA-GCN achieves better performance than state-of-the-art methods on both noise-free and noisy networks.
arXiv Detail & Related papers (2019-12-26T02:51:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.