TNPAR: Topological Neural Poisson Auto-Regressive Model for Learning
Granger Causal Structure from Event Sequences
- URL: http://arxiv.org/abs/2306.14114v2
- Date: Tue, 12 Mar 2024 12:39:03 GMT
- Title: TNPAR: Topological Neural Poisson Auto-Regressive Model for Learning
Granger Causal Structure from Event Sequences
- Authors: Yuequn Liu, Ruichu Cai, Wei Chen, Jie Qiao, Yuguang Yan, Zijian Li,
Keli Zhang, Zhifeng Hao
- Abstract summary: Learning Granger causality from event sequences is a challenging but essential task across various applications.
We devise a unified topological neural Poisson auto-regressive model with two processes.
Experiments on simulated and real-world data demonstrate the effectiveness of our approach.
- Score: 27.289511320823895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning Granger causality from event sequences is a challenging but
essential task across various applications. Most existing methods rely on the
assumption that event sequences are independent and identically distributed
(i.i.d.). However, this i.i.d. assumption is often violated due to the inherent
dependencies among the event sequences. Fortunately, in practice, we find these
dependencies can be modeled by a topological network, suggesting a potential
solution to the non-i.i.d. problem by introducing the prior topological network
into Granger causal discovery. This observation prompts us to tackle two
ensuing challenges: 1) how to model the event sequences while incorporating
both the prior topological network and the latent Granger causal structure, and
2) how to learn the Granger causal structure. To this end, we devise a unified
topological neural Poisson auto-regressive model with two processes. In the
generation process, we employ a variant of the neural Poisson process to model
the event sequences, considering influences from both the topological network
and the Granger causal structure. In the inference process, we formulate an
amortized inference algorithm to infer the latent Granger causal structure. We
encapsulate these two processes within a unified likelihood function, providing
an end-to-end framework for this task. Experiments on simulated and real-world
data demonstrate the effectiveness of our approach.
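To make the two ingredients concrete, here is a minimal illustrative sketch (not the paper's actual model or code) of a discrete-time Poisson auto-regressive simulator in which past event counts are first aggregated over a prior topological network and then routed through a latent Granger causal adjacency. All variable names, dimensions, and parameter values below are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's setup):
# N nodes in the prior topological network, K event types, T discrete time bins.
N, K, T = 4, 3, 50

A_topo = (rng.random((N, N)) < 0.4).astype(float)  # prior topological network
np.fill_diagonal(A_topo, 1.0)                      # each node influences itself
G = (rng.random((K, K)) < 0.5).astype(float)       # latent Granger structure:
                                                   # G[i, j] = 1 means type j causes type i
mu = 0.1 * np.ones((N, K))   # base (exogenous) rates
alpha = 0.05                 # excitation weight, kept small so counts stay stable

counts = np.zeros((N, K, T))
for t in range(T):
    if t == 0:
        lam = mu
    else:
        # Auto-regressive intensity: last bin's counts are first aggregated over
        # the topological neighbourhood, then routed through the Granger adjacency
        # so that only Granger parents contribute excitation.
        neigh = A_topo @ counts[:, :, t - 1]       # (N, K) neighbourhood counts
        lam = mu + alpha * (neigh @ G.T)           # lam[n, i] uses counts of i's parents
    counts[:, :, t] = rng.poisson(lam)

print(counts.sum())   # total number of simulated events
```

The point of the sketch is the factorization in the intensity: the topological network couples the sequences (breaking the i.i.d. assumption), while the Granger adjacency controls which event types excite which.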
Related papers
- Learning Granger Causality from Instance-wise Self-attentive Hawkes
Processes [24.956802640469554]
Instance-wise Self-Attentive Hawkes Processes (ISAHP) is a novel deep learning framework that can directly infer the Granger causality at the instance level.
ISAHP is capable of discovering complex instance-level causal structures that cannot be handled by classical models.
arXiv Detail & Related papers (2024-02-06T05:46:51Z)
- DynGFN: Towards Bayesian Inference of Gene Regulatory Networks with

GFlowNets [81.75973217676986]
Gene regulatory networks (GRN) describe interactions between genes and their products that control gene expression and cellular function.
Existing methods either focus on identifying cyclic structure from dynamics or on learning complex Bayesian posteriors over DAGs, but not both.
In this paper we leverage the fact that it is possible to estimate the "velocity" of gene expression with RNA velocity techniques to develop an approach that addresses both challenges.
arXiv Detail & Related papers (2023-02-08T16:36:40Z) - Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z) - Deep Recurrent Modelling of Granger Causality with Latent Confounding [0.0]
We propose a deep learning-based approach to model non-linear Granger causality by directly accounting for latent confounders.
We demonstrate the model performance on non-linear time series for which the latent confounder influences the cause and effect with different time lags.
arXiv Detail & Related papers (2022-02-23T03:26:22Z)
- The Causal Neural Connection: Expressiveness, Learnability, and
Inference [125.57815987218756]
A structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal
Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
- THP: Topological Hawkes Processes for Learning Granger Causality on
Event Sequences [31.895008425796792]
We propose a Granger causality learning method on Topological Hawkes processes (THP) in a likelihood framework.
The proposed method features a graph-convolution-based likelihood function for THP and a sparse optimization scheme based on Expectation-Maximization of that likelihood.
arXiv Detail & Related papers (2021-05-23T08:33:46Z)
- Hawkes Processes on Graphons [85.6759041284472]
We study Hawkes processes and their variants that are associated with Granger causality graphs.
We can generate the corresponding Hawkes processes and simulate event sequences.
We learn the proposed model by minimizing the hierarchical optimal transport distance between the generated event sequences and the observed ones.
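For readers unfamiliar with these models, the following self-contained sketch simulates a small multivariate Hawkes process with exponential kernels via Ogata's thinning algorithm. The parameter values and the adjacency encoding "type j excites type i" are illustrative assumptions, not values from this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: K event types, exponential excitation kernels.
K = 3
mu = np.array([0.2, 0.1, 0.3])           # base intensities
A = np.array([[0.0, 0.5, 0.0],           # A[i, j] > 0 means type j excites type i,
              [0.4, 0.0, 0.0],           # i.e. j is a Granger cause of i
              [0.0, 0.3, 0.2]])
beta = 1.0                               # kernel decay rate
horizon = 20.0

def intensity(t, events):
    """lambda_i(t) = mu_i + sum over past events (s, j) of A[i, j] * exp(-beta (t - s))."""
    lam = mu.copy()
    for s, j in events:
        lam += A[:, j] * np.exp(-beta * (t - s))
    return lam

# Ogata's thinning algorithm: because the kernels only decay between events,
# the total intensity at the current time upper-bounds it until the next event.
events, t = [], 0.0
while t < horizon:
    lam_bar = intensity(t, events).sum()
    t += rng.exponential(1.0 / lam_bar)
    if t >= horizon:
        break
    lam = intensity(t, events)
    if rng.random() < lam.sum() / lam_bar:       # accept with prob lambda(t) / lam_bar
        k = rng.choice(K, p=lam / lam.sum())     # attribute the event to a type
        events.append((t, k))

print(len(events))
```

In the graphon setting of the paper above, the adjacency `A` would itself be sampled from a graphon rather than fixed by hand; the simulation step is otherwise analogous.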
arXiv Detail & Related papers (2021-02-04T17:09:50Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- CAUSE: Learning Granger Causality from Event Sequences using Attribution
Methods [25.04848774593105]
We study the problem of learning Granger causality between event types from asynchronous, interdependent, multi-type event sequences.
We propose CAUSE (Causality from AttribUtions on Sequence of Events), a novel framework for the studied task.
We demonstrate that CAUSE achieves superior performance on correctly inferring the inter-type Granger causality over a range of state-of-the-art methods.
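As a rough illustration of the attribution idea (a deliberate simplification, not CAUSE's actual method, which uses attribution on a learned neural point process), one can score "type j Granger-causes type i" by ablating type j's events from the history and measuring how much the predicted intensity of type i drops. All parameters below are toy assumptions.

```python
import numpy as np

# Toy event history: (time, type) pairs; parameters are illustrative only.
events = [(0.5, 0), (1.0, 1), (1.5, 0), (2.0, 1)]
K = 2
mu = np.array([0.1, 0.1])
A = np.array([[0.0, 0.8],
              [0.0, 0.0]])     # ground truth here: type 1 excites type 0 only
beta = 1.0

def intensity(t, history):
    """Exponential-kernel intensity for each type, given past events."""
    lam = mu.copy()
    for s, j in history:
        if s < t:
            lam += A[:, j] * np.exp(-beta * (t - s))
    return lam

# Ablation-style attribution: how much does removing type j's history change
# the intensity of type i?  Large scores[i, j] suggest "j Granger-causes i".
t_eval = 2.5
scores = np.zeros((K, K))
for j in range(K):
    ablated = [(s, k) for s, k in events if k != j]
    scores[:, j] = intensity(t_eval, events) - intensity(t_eval, ablated)

print(scores)
```

With this toy ground truth, `scores[0, 1]` is large while `scores[1, 0]` is zero, recovering the one-directional causal edge; CAUSE replaces the hand-coded kernel with a learned model and uses principled attribution methods instead of crude ablation.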
arXiv Detail & Related papers (2020-02-18T22:21:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.