Relational Causal Models with Cycles: Representation and Reasoning
- URL: http://arxiv.org/abs/2202.10706v1
- Date: Tue, 22 Feb 2022 07:37:17 GMT
- Title: Relational Causal Models with Cycles: Representation and Reasoning
- Authors: Ragib Ahsan, David Arbour, Elena Zheleva
- Abstract summary: We introduce relational $\sigma$-separation, a new criterion for understanding relational systems with feedback loops.
We show the necessary and sufficient conditions for the completeness of the $\sigma$-abstract ground graph ($\sigma$-AGG) and that relational $\sigma$-separation is sound and complete in the presence of one or more cycles with arbitrary length.
- Score: 16.10327013845982
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causal reasoning in relational domains is fundamental to studying real-world
social phenomena in which individual units can influence each other's traits
and behavior. Dynamics between interconnected units can be represented as an
instantiation of a relational causal model; however, causal reasoning over such
instantiation requires additional templating assumptions that capture feedback
loops of influence. Previous research has developed lifted representations to
address the relational nature of such dynamics but has strictly required that
the representation has no cycles. To facilitate cycles in relational
representation and learning, we introduce relational $\sigma$-separation, a new
criterion for understanding relational systems with feedback loops. We also
introduce a new lifted representation, the $\sigma$-abstract ground graph ($\sigma$-AGG), which
abstracts statistical independence relations in all possible
instantiations of the cyclic relational model. We show the necessary and
sufficient conditions for the completeness of $\sigma$-AGG and that relational
$\sigma$-separation is sound and complete in the presence of one or more cycles
with arbitrary length. To the best of our knowledge, this is the first work on
representation of and reasoning with cyclic relational causal models.
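The abstract's central idea, $\sigma$-separation, generalizes d-separation by treating strongly connected components (feedback loops) as units: conditioning on a variable inside a loop blocks a path only where the path actually exits the loop. The sketch below is a minimal path-based checker for ordinary (non-relational) finite directed graphs, assuming the standard $\sigma$-separation criterion; it is not the paper's relational $\sigma$-AGG construction, and all function names are ours.

```python
# Minimal path-based sigma-separation checker for finite directed graphs.
# Assumption: a non-collider w in the conditioning set blocks a path only
# where the path exits w's strongly connected component (SCC).

def _reach(edges, s):
    """Nodes reachable from s along directed edges (including s itself)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    seen, stack = {s}, [s]
    while stack:
        for v in adj.get(stack.pop(), []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def _paths(edges, x, y):
    """Simple paths between x and y in the skeleton; dirs[i] is True when
    the i-th edge is traversed in its arrow direction."""
    found = []
    def dfs(node, path, dirs):
        if node == y:
            found.append((path, dirs))
            return
        for u, v in edges:
            if u == node and v not in path:
                dfs(v, path + [v], dirs + [True])
            elif v == node and u not in path:
                dfs(u, path + [u], dirs + [False])
    dfs(x, [x], [])
    return found

def separated(nodes, edges, x, y, given, sigma=True):
    """sigma-separation of x and y given `given`; with sigma=False every
    SCC collapses to a singleton and the test reduces to d-separation."""
    an_z = {w for w in nodes if _reach(edges, w) & given}  # ancestors of Z
    scc = {w: ({v for v in _reach(edges, w) if w in _reach(edges, v)}
               if sigma else {w}) for w in nodes}
    for path, dirs in _paths(edges, x, y):
        blocked = False
        for i in range(1, len(path) - 1):
            w = path[i]
            in_left, in_right = dirs[i - 1], not dirs[i]  # arrows into w?
            if in_left and in_right:                       # collider
                if w not in an_z:
                    blocked = True
                    break
            elif w in given:                               # non-collider in Z
                # neighbors reached by edges pointing OUT of w on the path
                outs = ([path[i - 1]] if not in_left else []) + \
                       ([path[i + 1]] if not in_right else [])
                if any(n not in scc[w] for n in outs):
                    blocked = True
                    break
        if not blocked:
            return False  # found a sigma-open path
    return True
```

For example, with edges A→B, B⇄C, D→C, plain d-separation (`sigma=False`) declares A and D separated given {B, C}, while σ-separation does not, reflecting that conditioning inside a feedback loop does not break the loop's simultaneous dependence. (The fork case of the blocking rule and the restriction to simple paths are simplifications that suffice for small examples like this.)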
Related papers
- Sequential Representation Learning via Static-Dynamic Conditional Disentanglement [58.19137637859017]
This paper explores self-supervised disentangled representation learning within sequential data, focusing on separating time-independent and time-varying factors in videos.
We propose a new model that breaks the usual independence assumption between those factors by explicitly accounting for the causal relationship between the static/dynamic variables.
Experiments show that the proposed approach outperforms previous complex state-of-the-art techniques in scenarios where the dynamics of a scene are influenced by its content.
arXiv Detail & Related papers (2024-08-10T17:04:39Z)
- Neural Persistence Dynamics [8.197801260302642]
We consider the problem of learning the dynamics in the topology of time-evolving point clouds.
Our proposed model, $\textit{Neural Persistence Dynamics}$, substantially outperforms the state-of-the-art across a diverse set of parameter regression tasks.
arXiv Detail & Related papers (2024-05-24T17:20:18Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Learning Relational Causal Models with Cycles through Relational Acyclification [16.10327013845982]
We introduce $\textit{relational acyclification}$, an operation specifically designed for relational models.
We show that under the assumptions of relational acyclification and $\sigma$-faithfulness, the relational causal discovery algorithm RCD is sound and complete for cyclic models.
arXiv Detail & Related papers (2022-08-25T17:00:42Z)
- Sparse Relational Reasoning with Object-Centric Representations [78.83747601814669]
We investigate the composability of soft-rules learned by relational neural architectures when operating over object-centric representations.
We find that increasing sparsity, especially on features, improves the performance of some models and leads to simpler relations.
arXiv Detail & Related papers (2022-07-15T14:57:33Z)
- On Neural Architecture Inductive Biases for Relational Tasks [76.18938462270503]
We introduce a simple architecture based on similarity-distribution scores, which we name Compositional Relational Network (CoRelNet).
We find that simple architectural choices can outperform existing models in out-of-distribution generalization.
arXiv Detail & Related papers (2022-06-09T16:24:01Z)
- A general framework for cyclic and fine-tuned causal models and their compatibility with space-time [2.0305676256390934]
Causal modelling is a tool for generating causal explanations of observed correlations.
Existing frameworks for quantum causality tend to focus on acyclic causal structures that are not fine-tuned.
Cyclic causal models can be used to model physical processes involving feedback.
Cyclic causal models may also be relevant in exotic solutions of general relativity.
arXiv Detail & Related papers (2021-09-24T18:00:08Z)
- Why Adversarial Interaction Creates Non-Homogeneous Patterns: A Pseudo-Reaction-Diffusion Model for Turing Instability [10.933825676518195]
We observe Turing-like patterns in a system of neurons with adversarial interaction.
We present a pseudo-reaction-diffusion model to explain the mechanism that may underlie these phenomena.
arXiv Detail & Related papers (2020-10-01T16:09:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.