Relational Causal Models with Cycles: Representation and Reasoning
- URL: http://arxiv.org/abs/2202.10706v1
- Date: Tue, 22 Feb 2022 07:37:17 GMT
- Title: Relational Causal Models with Cycles: Representation and Reasoning
- Authors: Ragib Ahsan, David Arbour, Elena Zheleva
- Abstract summary: We introduce relational $\sigma$-separation, a new criterion for understanding relational systems with feedback loops.
We show the necessary and sufficient conditions for the completeness of the $\sigma$-AGG and that relational $\sigma$-separation is sound and complete in the presence of one or more cycles of arbitrary length.
- Score: 16.10327013845982
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causal reasoning in relational domains is fundamental to studying real-world
social phenomena in which individual units can influence each other's traits
and behavior. Dynamics between interconnected units can be represented as an
instantiation of a relational causal model; however, causal reasoning over such
an instantiation requires additional templating assumptions that capture feedback
loops of influence. Previous research has developed lifted representations to
address the relational nature of such dynamics but has strictly required that
the representation has no cycles. To facilitate cycles in relational
representation and learning, we introduce relational $\sigma$-separation, a new
criterion for understanding relational systems with feedback loops. We also
introduce a new lifted representation, the $\sigma$-abstract ground graph
($\sigma$-AGG), which abstracts statistical independence relations in all
possible instantiations of the cyclic relational model. We show the necessary
and sufficient conditions for the completeness of the $\sigma$-AGG and that
relational $\sigma$-separation is sound and complete in the presence of one or
more cycles of arbitrary length. To the best of our knowledge, this is the first work on
representation of and reasoning with cyclic relational causal models.
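As a rough intuition for how $\sigma$-separation differs from ordinary d-separation, the sketch below checks the criterion on a plain (non-relational) directed graph by brute-force path enumeration. This is an illustrative simplification, not the paper's relational $\sigma$-AGG machinery: the toy graph, the variable names, and the assumption that the graph has no 2-cycles are all ours.

```python
def strongly_connected(nodes, edges):
    """Map each node to the frozenset of its strongly connected component,
    via a simple reachability fixpoint (fine for tiny graphs)."""
    reach = {v: {v} for v in nodes}
    changed = True
    while changed:
        changed = False
        for a, b in edges:          # edge a -> b: a reaches everything b reaches
            new = reach[b] - reach[a]
            if new:
                reach[a] |= new
                changed = True
    return {v: frozenset(w for w in nodes if w in reach[v] and v in reach[w])
            for v in nodes}

def ancestors_of(Z, edges):
    """Z together with all its ancestors in the directed graph."""
    anc = set(Z)
    changed = True
    while changed:
        changed = False
        for a, b in edges:
            if b in anc and a not in anc:
                anc.add(a)
                changed = True
    return anc

def simple_paths(x, y, nodes, edges):
    """All simple paths from x to y in the undirected skeleton."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    stack = [[x]]
    while stack:
        path = stack.pop()
        if path[-1] == y:
            yield path
            continue
        for n in adj[path[-1]]:
            if n not in path:
                stack.append(path + [n])

def sigma_separated(x, y, Z, nodes, edges):
    """True iff every skeleton path between x and y is sigma-blocked by Z.
    Assumes a directed graph with no 2-cycles; exponential, demo only."""
    E = set(edges)
    scc = strongly_connected(nodes, edges)
    an_Z = ancestors_of(Z, edges)

    def blocks(a, v, b):
        left_in, right_in = (a, v) in E, (b, v) in E
        if left_in and right_in:        # collider a -> v <- b
            return v not in an_Z
        if v not in Z:
            return False
        # Non-collider in Z: sigma-blocks only if some path edge leaving v
        # exits v's strongly connected component (d-separation always blocks).
        return any(not into and nbr not in scc[v]
                   for into, nbr in ((left_in, a), (right_in, b)))

    return all(any(blocks(p[i - 1], p[i], p[i + 1]) for i in range(1, len(p) - 1))
               for p in simple_paths(x, y, nodes, edges))

# Toy cyclic model: X -> A, with a feedback loop A -> B -> C -> A.
nodes = {"X", "A", "B", "C"}
edges = [("X", "A"), ("A", "B"), ("B", "C"), ("C", "A")]
print(sigma_separated("X", "C", {"A"}, nodes, edges))   # prints False
```

In an acyclic graph every strongly connected component is a singleton, so the inner-SCC exception never fires and the check reduces to d-separation. In the feedback loop above, conditioning on A d-blocks the chain X, A, B, C, but does not $\sigma$-block it, since the edge from A to B stays inside A's strongly connected component; this is the kind of cyclic case where d-separation becomes unsound and the $\sigma$-criterion is needed.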
Related papers
- Neural Persistence Dynamics [8.197801260302642]
We consider the problem of learning the dynamics in the topology of time-evolving point clouds.
Our proposed model -- Neural Persistence Dynamics -- substantially outperforms the state-of-the-art across a diverse set of parameter regression tasks.
arXiv Detail & Related papers (2024-05-24T17:20:18Z) - Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse
Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z) - Identifiable Latent Polynomial Causal Models Through the Lens of Change [85.67870425656368]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z) - Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z) - Learning Relational Causal Models with Cycles through Relational
Acyclification [16.10327013845982]
We introduce relational acyclification, an operation specifically designed for relational models.
We show that under the assumptions of relational acyclification and $\sigma$-faithfulness, the relational causal discovery algorithm RCD is sound and complete for cyclic models.
arXiv Detail & Related papers (2022-08-25T17:00:42Z) - Sparse Relational Reasoning with Object-Centric Representations [78.83747601814669]
We investigate the composability of soft-rules learned by relational neural architectures when operating over object-centric representations.
We find that increasing sparsity, especially on features, improves the performance of some models and leads to simpler relations.
arXiv Detail & Related papers (2022-07-15T14:57:33Z) - On Neural Architecture Inductive Biases for Relational Tasks [76.18938462270503]
We introduce a simple architecture based on similarity-distribution scores which we name Compositional Relational Network (CoRelNet).
We find that simple architectural choices can outperform existing models in out-of-distribution generalizations.
arXiv Detail & Related papers (2022-06-09T16:24:01Z) - Towards Robust and Adaptive Motion Forecasting: A Causal Representation
Perspective [72.55093886515824]
We introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables.
We devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph.
Experiment results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations.
arXiv Detail & Related papers (2021-11-29T18:59:09Z) - A general framework for cyclic and fine-tuned causal models and their
compatibility with space-time [2.0305676256390934]
Causal modelling is a tool for generating causal explanations of observed correlations.
Existing frameworks for quantum causality tend to focus on acyclic causal structures that are not fine-tuned.
Cyclic causal models can be used to model physical processes involving feedback.
Cyclic causal models may also be relevant in exotic solutions of general relativity.
arXiv Detail & Related papers (2021-09-24T18:00:08Z) - Why Adversarial Interaction Creates Non-Homogeneous Patterns: A
Pseudo-Reaction-Diffusion Model for Turing Instability [10.933825676518195]
We observe Turing-like patterns in a system of neurons with adversarial interaction.
We present a pseudo-reaction-diffusion model to explain the mechanism that may underlie these phenomena.
arXiv Detail & Related papers (2020-10-01T16:09:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.