Causal Temporal Graph Convolutional Neural Networks (CTGCN)
- URL: http://arxiv.org/abs/2303.09634v1
- Date: Thu, 16 Mar 2023 20:28:36 GMT
- Title: Causal Temporal Graph Convolutional Neural Networks (CTGCN)
- Authors: Abigail Langbridge, Fearghal O'Donncha, Amadou Ba, Fabio Lorenzi,
Christopher Lohse, Joern Ploennigs
- Abstract summary: We propose a Causal Temporal Graph Convolutional Neural Network (CTGCN).
Our architecture is based on a causal discovery mechanism and is capable of discovering the underlying causal processes.
We show that integrating causality into the TGCN architecture improves prediction performance by up to 40% over a typical TGCN approach.
- Score: 0.44040106718326594
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Many large-scale applications can be elegantly represented using graph
structures. Their scalability, however, is often limited by the domain
knowledge required to apply them. To address this problem, we propose a novel
Causal Temporal Graph Convolutional Neural Network (CTGCN). Our CTGCN
architecture is based on a causal discovery mechanism and is capable of
discovering the underlying causal processes. The major advantages of our
approach stem from its ability to overcome computational scalability problems
with a divide-and-conquer technique, and from the greater explainability of
predictions made using a causal model. We evaluate the scalability of our CTGCN
on two datasets to demonstrate that our method is applicable to large-scale
problems, and show that integrating causality into the TGCN architecture
improves prediction performance by up to 40% over a typical TGCN approach. Our
results are obtained without requiring additional domain knowledge, making our
approach adaptable to various domains, especially when little contextual
knowledge is available.
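To make the idea concrete: a discovered causal graph can act as a hard mask on the spatial aggregation of a temporal GCN, so that each node aggregates only from its inferred causes before a recurrent update. The following is a minimal, hypothetical PyTorch sketch of such a cell; the mask, layer sizes, and update rule are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a TGCN-style cell whose graph convolution is
# restricted by a (hypothetical) causal adjacency mask, as one might obtain
# from a causal discovery step. Not the CTGCN authors' implementation.
import torch
import torch.nn as nn

class CausalTGCNCell(nn.Module):
    """One recurrent step: causal-masked graph convolution + GRU update."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, hid_dim)   # graph-convolution weights
        self.gru = nn.GRUCell(hid_dim, hid_dim)   # per-node temporal update

    def forward(self, x, h, causal_adj):
        # x:          (num_nodes, in_dim)    features at time t
        # h:          (num_nodes, hid_dim)   hidden state from time t-1
        # causal_adj: (num_nodes, num_nodes) 0/1 mask from causal discovery
        # Row-normalise so each node averages over its discovered causes.
        deg = causal_adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        msg = (causal_adj / deg) @ torch.relu(self.theta(x))
        return self.gru(msg, h)

# Toy usage with a random "discovered" causal graph (for shapes only).
n, f, d = 5, 3, 8
cell = CausalTGCNCell(f, d)
adj = (torch.rand(n, n) > 0.6).float()
h = torch.zeros(n, d)
for t in range(10):
    h = cell(torch.randn(n, f), h, adj)
print(h.shape)  # torch.Size([5, 8])
```

In a full model, the mask would come from the causal discovery stage, which can be run per subgraph; that is where the divide-and-conquer scalability argument enters.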
Related papers
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
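Since DEGREE's explanation rests on decomposing the aggregation mechanism, a useful reference point is that for a purely linear GCN step the decomposition is exact: the output at each target node is a sum of per-source-node contributions. A small sketch of that fact follows; the layer and graph here are hypothetical, not DEGREE itself.

```python
# Sketch: for a linear GCN step H' = A_hat @ X @ W, the output at node i
# decomposes exactly into per-source contributions A_hat[i, j] * (X[j] @ W).
import numpy as np

rng = np.random.default_rng(0)
n, f, d = 4, 3, 2
A_hat = rng.random((n, n))   # normalized adjacency (hypothetical)
X = rng.random((n, f))       # node features
W = rng.random((f, d))       # layer weights

out = A_hat @ X @ W          # full layer output
# Contribution of source node j to target node i:
contrib = A_hat[:, :, None] * (X @ W)[None, :, :]   # (target, source, dim)
assert np.allclose(contrib.sum(axis=1), out)        # contributions sum to output
```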
- CUTS+: High-dimensional Causal Discovery from Irregular Time-series [13.84185941100574]
We propose CUTS+, which is built on the Granger-causality-based causal discovery method CUTS.
We show that CUTS+ largely improves the causal discovery performance on high-dimensional data with different types of irregular sampling.
arXiv Detail & Related papers (2023-05-10T04:20:36Z)
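For orientation, the Granger-causality notion that CUTS and CUTS+ build on can be illustrated with the classical pairwise test. The sketch below runs the textbook test via statsmodels on synthetic data; it is a baseline illustration, not the CUTS+ algorithm.

```python
# Classical pairwise Granger-causality test: does series x help predict y?
# (CUTS+ itself targets high-dimensional, irregularly sampled data with a
# neural approach; this only illustrates the underlying notion.)
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
T = 500
x = rng.standard_normal(T)
y = np.roll(x, 2) + 0.1 * rng.standard_normal(T)   # y lags x by 2 steps

# Column order is [effect, cause]: tests whether column 2 Granger-causes column 1.
data = np.column_stack([y, x])
res = grangercausalitytests(data, maxlag=3)
p = res[2][0]["ssr_ftest"][1]                      # p-value at lag 2
print(f"p-value (lag 2): {p:.2g}")                 # small p => x Granger-causes y
```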
- Learning to Solve Combinatorial Graph Partitioning Problems via Efficient Exploration [72.15369769265398]
Experimentally, ECORD achieves a new SOTA for RL algorithms on the Maximum Cut problem.
Compared to the nearest competitor, ECORD reduces the optimality gap by up to 73%.
arXiv Detail & Related papers (2022-05-27T17:13:10Z)
- Generative Adversarial Method Based on Neural Tangent Kernels [13.664682865991255]
We propose a new generative algorithm called generative adversarial NTK (GA-NTK).
We conduct extensive experiments on real-world datasets, and the results show that GA-NTK can generate images comparable to those produced by GANs while being much easier to train under various conditions.
arXiv Detail & Related papers (2022-04-08T14:17:46Z)
- On Recoverability of Graph Neural Network Representations [9.02766568914452]
We propose the notion of recoverability, which is tightly related to information aggregation in GNNs.
We demonstrate, through experimental results on various datasets and different GNN architectures, that estimated recoverability correlates with aggregation method expressivity and graph sparsification quality.
We believe that the proposed method could provide an essential tool for understanding the roots of the aforementioned problems, and potentially lead to a GNN design that overcomes them.
arXiv Detail & Related papers (2022-01-30T15:22:29Z)
- Multi-scale Graph Convolutional Networks with Self-Attention [2.66512000865131]
Graph convolutional networks (GCNs) have achieved remarkable learning ability for dealing with various graph structural data.
The over-smoothing phenomenon remains a crucial open issue for GCNs.
We propose two novel multi-scale GCN frameworks that incorporate a self-attention mechanism and multi-scale information into the design of GCNs.
arXiv Detail & Related papers (2021-12-04T04:41:24Z)
- Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
- AM-GCN: Adaptive Multi-channel Graph Convolutional Networks [85.0332394224503]
We study whether Graph Convolutional Networks (GCNs) can optimally integrate node features and topological structures in a complex graph with rich information.
We propose adaptive multi-channel graph convolutional networks for semi-supervised classification (AM-GCN).
Our experiments show that AM-GCN effectively extracts the most correlated information from both node features and topological structures.
arXiv Detail & Related papers (2020-07-05T08:16:03Z)
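One plausible reading of the multi-channel design is an attention-weighted fusion of node embeddings produced by separate GCN channels (for example, one over the topology and one over a feature-similarity graph). The sketch below is a hypothetical fusion step under that reading, not the paper's code.

```python
# Hypothetical attention fusion over two GCN "channels" (topology vs. features).
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # shared scorer over channel embeddings

    def forward(self, z_topo: torch.Tensor, z_feat: torch.Tensor) -> torch.Tensor:
        # z_topo, z_feat: (num_nodes, dim) embeddings from the two channels
        scores = torch.cat([self.score(z_topo), self.score(z_feat)], dim=1)
        alpha = torch.softmax(scores, dim=1)          # per-node channel weights
        return alpha[:, :1] * z_topo + alpha[:, 1:] * z_feat

fuse = ChannelAttentionFusion(dim=16)
z = fuse(torch.randn(7, 16), torch.randn(7, 16))
print(z.shape)  # torch.Size([7, 16])
```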
- Simple and Deep Graph Convolutional Networks [63.76221532439285]
Graph convolutional networks (GCNs) are a powerful deep learning approach for graph-structured data.
Despite their success, most current GCN models are shallow due to the over-smoothing problem.
We propose GCNII, an extension of the vanilla GCN model with two simple yet effective techniques.
arXiv Detail & Related papers (2020-07-04T16:18:06Z)
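For reference, GCNII's two techniques are initial residual (mixing the layer-0 representation back in) and identity mapping (shrinking the layer's transformation toward the identity). A minimal sketch of one such layer follows; the propagation matrix and the fixed beta are illustrative simplifications.

```python
# Sketch of a GCNII-style layer: initial residual + identity mapping.
import torch
import torch.nn as nn

class GCNIILayer(nn.Module):
    def __init__(self, dim: int, alpha: float = 0.1, beta: float = 0.5):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)
        self.alpha, self.beta = alpha, beta

    def forward(self, h, h0, p):
        # h:  (n, dim) current representation; h0: (n, dim) initial representation
        # p:  (n, n) normalized propagation matrix, e.g. D^-1/2 (A+I) D^-1/2
        support = (1 - self.alpha) * (p @ h) + self.alpha * h0   # initial residual
        # Identity mapping: support @ ((1-beta) I + beta W)
        return torch.relu((1 - self.beta) * support + self.beta * self.w(support))

layer = GCNIILayer(dim=8)
n = 5
p = torch.eye(n)              # trivial propagation matrix, demo only
h0 = torch.randn(n, 8)
h = h0
for _ in range(16):           # repeated application of the layer
    h = layer(h, h0, p)
print(h.shape)  # torch.Size([5, 8])
```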
- Towards an Efficient and General Framework of Robust Training for Graph Neural Networks [96.93500886136532]
Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks.
Despite GNNs' impressive performance, it has been observed that carefully crafted perturbations on graph structures lead them to make wrong predictions.
We propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs.
arXiv Detail & Related papers (2020-02-25T15:17:58Z)
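As background on the zeroth-order component, such methods estimate gradients from function evaluations alone, which is useful when perturbing discrete graph structure makes exact gradients awkward. Below is the generic two-point estimator on a toy objective; it illustrates the textbook construction, not the paper's framework.

```python
# Generic two-point zeroth-order gradient estimate of f at x:
# g ~ (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u, averaged over random directions u.
import numpy as np

def zeroth_order_grad(f, x, mu=1e-3, num_dirs=64, rng=None):
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_dirs

f = lambda v: float(np.sum(v ** 2))    # toy objective with known gradient 2v
x = np.array([1.0, -2.0, 0.5])
print(zeroth_order_grad(f, x))         # approx. [2.0, -4.0, 1.0]
```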
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.