Relational Causal Models with Cycles: Representation and Reasoning
- URL: http://arxiv.org/abs/2202.10706v1
- Date: Tue, 22 Feb 2022 07:37:17 GMT
- Title: Relational Causal Models with Cycles: Representation and Reasoning
- Authors: Ragib Ahsan, David Arbour, Elena Zheleva
- Abstract summary: We introduce relational $\sigma$-separation, a new criterion for understanding relational systems with feedback loops.
We show the necessary and sufficient conditions for the completeness of $\sigma$-AGG and that relational $\sigma$-separation is sound and complete in the presence of one or more cycles of arbitrary length.
- Score: 16.10327013845982
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causal reasoning in relational domains is fundamental to studying real-world
social phenomena in which individual units can influence each other's traits
and behavior. Dynamics between interconnected units can be represented as an
instantiation of a relational causal model; however, causal reasoning over such
an instantiation requires additional templating assumptions that capture feedback
loops of influence. Previous research has developed lifted representations to
address the relational nature of such dynamics but has strictly required that
the representation has no cycles. To facilitate cycles in relational
representation and learning, we introduce relational $\sigma$-separation, a new
criterion for understanding relational systems with feedback loops. We also
introduce a new lifted representation, the $\sigma$-abstract ground graph
($\sigma$-AGG), which helps abstract statistical independence relations in all possible
instantiations of the cyclic relational model. We show the necessary and
sufficient conditions for the completeness of $\sigma$-AGG and that relational
$\sigma$-separation is sound and complete in the presence of one or more cycles
of arbitrary length. To the best of our knowledge, this is the first work on
representation of and reasoning with cyclic relational causal models.
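For orientation, the following is a minimal sketch of the standard (non-relational) $\sigma$-separation criterion from the cyclic-causality literature, which relational $\sigma$-separation lifts to relational models; the notation is ours and may differ in detail from the paper's formulation.

```latex
% Sketch of (non-relational) sigma-separation. Here sc(v) is the
% strongly connected component of v and An(Z) the ancestors of Z.
A walk $\pi$ between $x$ and $y$ is \emph{$\sigma$-blocked} by $Z$ iff
\begin{itemize}
  \item $\pi$ contains a collider $w$ with $w \notin \mathrm{An}(Z)$; or
  \item $\pi$ contains a non-collider $v \in Z$ that $\pi$ leaves through
        an edge $v \to u$ with $u \notin \mathrm{sc}(v)$.
\end{itemize}
$X$ and $Y$ are $\sigma$-separated by $Z$ iff every walk between them is
$\sigma$-blocked by $Z$.
```

In an acyclic graph every $\mathrm{sc}(v) = \{v\}$, so the second clause fires for any non-collider in $Z$ and $\sigma$-separation collapses to ordinary d-separation.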
Related papers
- Cyclic quantum causal modelling with a graph separation theorem [0.0]
We introduce a robust probability rule and a novel graph-separation property, p-separation, which we prove to be sound and complete for all such models.
Our approach maps cyclic causal models to acyclic ones with post-selection, leveraging the post-selected quantum teleportation protocol.
arXiv Detail & Related papers (2025-02-06T15:51:15Z)
- Systematic Abductive Reasoning via Diverse Relation Representations in Vector-symbolic Architecture [10.27696004820717]
We propose a Systematic Abductive Reasoning model with diverse relation representations (Rel-SAR) in Vector-symbolic Architecture (VSA).
To derive representations with symbolic reasoning potential, we introduce not only various types of atomic vectors that represent numeric, periodic, and logical semantics, but also a structured high-dimensional representation (S).
For systematic reasoning, we propose novel numerical and logical functions and perform rule abduction and generalization execution in a unified framework that integrates these relation representations.
arXiv Detail & Related papers (2025-01-21T05:17:08Z)
- Temporal Causal Reasoning with (Non-Recursive) Structural Equation Models [9.112107794815671]
We propose a new interpretation of Structural Equation Models (SEMs) when reasoning about Actual Causality.
This allows us to combine counterfactual causal reasoning with existing temporal logic formalisms.
We show that the standard restriction to so-called recursive models is not necessary in our approach.
arXiv Detail & Related papers (2025-01-17T13:37:58Z)
- Sequential Representation Learning via Static-Dynamic Conditional Disentanglement [58.19137637859017]
This paper explores self-supervised disentangled representation learning within sequential data, focusing on separating time-independent and time-varying factors in videos.
We propose a new model that breaks the usual independence assumption between those factors by explicitly accounting for the causal relationship between the static/dynamic variables.
Experiments show that the proposed approach outperforms previous complex state-of-the-art techniques in scenarios where the dynamics of a scene are influenced by its content.
arXiv Detail & Related papers (2024-08-10T17:04:39Z)
- Neural Persistence Dynamics [8.197801260302642]
We consider the problem of learning the dynamics in the topology of time-evolving point clouds.
Our proposed model, Neural Persistence Dynamics, substantially outperforms the state-of-the-art across a diverse set of parameter regression tasks.
arXiv Detail & Related papers (2024-05-24T17:20:18Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Learning Relational Causal Models with Cycles through Relational Acyclification [16.10327013845982]
We introduce relational acyclification, an operation specifically designed for relational models; a minimal non-relational sketch of the underlying acyclification idea appears after this list.
We show that under the assumptions of relational acyclification and $\sigma$-faithfulness, the relational causal discovery algorithm RCD is sound and complete for cyclic models.
arXiv Detail & Related papers (2022-08-25T17:00:42Z)
- Sparse Relational Reasoning with Object-Centric Representations [78.83747601814669]
We investigate the composability of soft-rules learned by relational neural architectures when operating over object-centric representations.
We find that increasing sparsity, especially on features, improves the performance of some models and leads to simpler relations.
arXiv Detail & Related papers (2022-07-15T14:57:33Z)
- On Neural Architecture Inductive Biases for Relational Tasks [76.18938462270503]
We introduce a simple architecture based on similarity-distribution scores, which we name Compositional Relational Network (CoRelNet).
We find that simple architectural choices can outperform existing models in out-of-distribution generalizations.
arXiv Detail & Related papers (2022-06-09T16:24:01Z)
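Both the paper above and the relational-acyclification entry in this list reason about cycles through strongly connected components. Below is a brief, hedged sketch of the plain (non-relational) acyclification construction, under which $\sigma$-separation in a cyclic digraph coincides with d-/m-separation in the resulting acyclic directed mixed graph; function and variable names are illustrative, not from either paper.

```python
# Minimal sketch of (non-relational) acyclification: each node inherits,
# as directed parents, the out-of-component parents of its strongly
# connected component (SCC), and distinct nodes sharing an SCC are joined
# by bidirected edges. Relational acyclification generalizes this idea.
import networkx as nx

def acyclify(g: nx.DiGraph):
    """Return (directed_edges, bidirected_pairs) of the acyclification."""
    scc_of = {}
    for comp in nx.strongly_connected_components(g):
        comp = frozenset(comp)
        for node in comp:
            scc_of[node] = comp

    directed, bidirected = set(), set()
    for v in g.nodes:
        comp = scc_of[v]
        # Parents of the whole SCC, taken from outside it, become
        # direct parents of every member node.
        for w in comp:
            for u in g.predecessors(w):
                if u not in comp:
                    directed.add((u, v))
        # Distinct nodes within one SCC get pairwise bidirected edges.
        for w in comp:
            if w != v:
                bidirected.add(frozenset((v, w)))
    return directed, bidirected

# Toy feedback loop: x -> y, with a cycle y -> z -> y.
g = nx.DiGraph([("x", "y"), ("y", "z"), ("z", "y")])
directed, bidirected = acyclify(g)
print(sorted(directed))   # [('x', 'y'), ('x', 'z')]
print(bidirected)         # {frozenset({'y', 'z'})}
```

On this toy graph, x becomes a direct parent of both members of the cycle, and y and z are joined by a bidirected edge, yielding an acyclic graph on which standard separation criteria apply.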