BISCUIT: Causal Representation Learning from Binary Interactions
- URL: http://arxiv.org/abs/2306.09643v1
- Date: Fri, 16 Jun 2023 06:10:55 GMT
- Authors: Phillip Lippe, Sara Magliacane, Sindy Löwe, Yuki M. Asano, Taco Cohen, Efstratios Gavves
- Abstract summary: BISCUIT is a method for simultaneously learning causal variables and their corresponding binary interaction variables.
On three robotic-inspired datasets, BISCUIT accurately identifies causal variables and can even be scaled to complex, realistic environments for embodied AI.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Identifying the causal variables of an environment and how to intervene on
them is of core value in applications such as robotics and embodied AI. While
an agent can commonly interact with the environment and may implicitly perturb
the behavior of some of these causal variables, often the targets it affects
remain unknown. In this paper, we show that causal variables can still be
identified for many common setups, e.g., additive Gaussian noise models, if the
agent's interactions with a causal variable can be described by an unknown
binary variable. This happens when each causal variable has two different
mechanisms, e.g., an observational and an interventional one. Using this
identifiability result, we propose BISCUIT, a method for simultaneously
learning causal variables and their corresponding binary interaction variables.
On three robotic-inspired datasets, BISCUIT accurately identifies causal
variables and can even be scaled to complex, realistic environments for
embodied AI.
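The setting behind the identifiability result can be illustrated with a toy simulation (a hypothetical sketch, not the paper's code; all constants and the reset-style intervention are illustrative assumptions): each causal variable evolves under additive Gaussian noise, and an unknown binary interaction variable switches it between an observational and an interventional mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(c_prev, interact):
    """One transition of causal variables with additive Gaussian noise.

    The binary interaction variable selects between two mechanisms per
    variable: an observational one (the variable follows its own
    dynamics) and an interventional one (here, the agent resets it
    toward a fixed target).  Constants are illustrative only.
    """
    noise = rng.normal(0.0, 0.1, size=c_prev.shape)
    observational = 0.9 * c_prev                 # autonomous dynamics
    interventional = np.full_like(c_prev, 2.0)   # agent pushes toward 2.0
    return np.where(interact == 1, interventional, observational) + noise

c = np.zeros(3)                     # three causal variables
interactions = np.array([0, 1, 0])  # agent interacts with variable 1 only
c_next = step(c, interactions)      # variable 1 jumps near 2.0, others stay near 0
```

BISCUIT's learning problem is the inverse of this simulation: given only observations, recover both the causal variables and the binary interaction pattern that selects their mechanisms.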
Related papers
- Predicting perturbation targets with causal differential networks [23.568795598997376]
We use an amortized causal discovery model to infer causal graphs from the observational and interventional datasets.
We learn to map these paired graphs to the sets of variables that were intervened upon, in a supervised learning framework.
This approach consistently outperforms baselines for perturbation modeling on seven single-cell transcriptomics datasets.
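The interface of that supervised mapping can be sketched with a rule-based stand-in (a hypothetical illustration, not the paper's learned model): given the adjacency matrices of an observational and an interventional causal graph, flag the variables whose incoming edges changed.

```python
import numpy as np

def predicted_targets(adj_obs, adj_int):
    """Toy stand-in for mapping a pair of causal graphs to the set of
    intervened variables.  The paper learns this mapping with a
    supervised model; this rule only illustrates the input/output
    interface: column i holds the incoming edges of variable i, and a
    variable is flagged if its parent set changed between the graphs."""
    changed = (adj_obs != adj_int).any(axis=0)
    return np.flatnonzero(changed)

adj_obs = np.array([[0, 1, 0],
                    [0, 0, 1],
                    [0, 0, 0]])
adj_int = np.array([[0, 0, 0],   # edge 0 -> 1 removed by intervening on 1
                    [0, 0, 1],
                    [0, 0, 0]])
targets = predicted_targets(adj_obs, adj_int)  # -> [1]
```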
arXiv Detail & Related papers (2024-10-04T12:48:21Z)
- Targeted Cause Discovery with Data-Driven Learning [66.86881771339145]
We propose a novel machine learning approach for inferring causal variables of a target variable from observations.
We employ a neural network trained to identify causality through supervised learning on simulated data.
Empirical results demonstrate the effectiveness of our method in identifying causal relationships within large-scale gene regulatory networks.
arXiv Detail & Related papers (2024-08-29T02:21:11Z)
- Causal Inference with Latent Variables: Recent Advances and Future Prospectives [43.04559575298597]
Causal inference (CI) aims to infer intrinsic causal relations among variables of interest.
The lack of observation of important variables severely compromises the reliability of CI methods.
Careless handling of these latent variables can lead to a range of erroneous conclusions.
arXiv Detail & Related papers (2024-06-20T03:15:53Z)
- Deep Learning-based Group Causal Inference in Multivariate Time-series [8.073449277052495]
Causal inference in a nonlinear system of multivariate time series is instrumental in disentangling the intricate web of relationships among variables.
In this work, we test model invariance by group-level interventions on the trained deep networks to infer causal direction in groups of variables.
arXiv Detail & Related papers (2024-01-16T14:19:28Z)
- iSCAN: Identifying Causal Mechanism Shifts among Nonlinear Additive Noise Models [48.33685559041322]
This paper focuses on identifying the causal mechanism shifts in two or more related datasets over the same set of variables.
Code implementing the proposed method is open-source and publicly available at https://github.com/kevinsbello/iSCAN.
arXiv Detail & Related papers (2023-06-30T01:48:11Z)
- A Causal Framework for Decomposing Spurious Variations [68.12191782657437]
We develop tools for decomposing spurious variations in Markovian and Semi-Markovian models.
We prove the first results that allow a non-parametric decomposition of spurious effects.
The described approach has several applications, ranging from explainable and fair AI to questions in epidemiology and medicine.
arXiv Detail & Related papers (2023-06-08T09:40:28Z)
- Differentiable Invariant Causal Discovery [106.87950048845308]
Learning causal structure from observational data is a fundamental challenge in machine learning.
This paper proposes Differentiable Invariant Causal Discovery (DICD) to avoid learning spurious edges and wrong causal directions.
Extensive experiments on synthetic and real-world datasets verify that DICD outperforms state-of-the-art causal discovery methods up to 36% in SHD.
arXiv Detail & Related papers (2022-05-31T09:29:07Z)
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)
- Learning Latent Causal Structures with a Redundant Input Neural Network [9.044150926401574]
We assume that inputs cause outputs and that these causal relationships are encoded by a causal network among a set of latent variables.
We develop a deep learning model, which we call a redundant input neural network (RINN), with a modified architecture and a regularized objective function.
A series of simulation experiments provide support that the RINN method can successfully recover latent causal structure between input and output variables.
arXiv Detail & Related papers (2020-03-29T20:52:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.