Causal Triplet: An Open Challenge for Intervention-centric Causal
Representation Learning
- URL: http://arxiv.org/abs/2301.05169v2
- Date: Mon, 3 Apr 2023 17:19:51 GMT
- Title: Causal Triplet: An Open Challenge for Intervention-centric Causal
Representation Learning
- Authors: Yuejiang Liu, Alexandre Alahi, Chris Russell, Max Horn, Dominik
Zietlow, Bernhard Schölkopf, Francesco Locatello
- Abstract summary: Causal Triplet is a causal representation learning benchmark featuring visually more complex scenes.
We show that models built with the knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts.
- Score: 98.78136504619539
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have seen a surge of interest in learning high-level causal
representations from low-level image pairs under interventions. Yet, existing
efforts are largely limited to simple synthetic settings that are far away from
real-world problems. In this paper, we present Causal Triplet, a causal
representation learning benchmark featuring not only visually more complex
scenes, but also two crucial desiderata commonly overlooked in previous works:
(i) an actionable counterfactual setting, where only certain object-level
variables allow for counterfactual observations whereas others do not; (ii) an
interventional downstream task with an emphasis on out-of-distribution
robustness from the independent causal mechanisms principle. Through extensive
experiments, we find that models built with the knowledge of disentangled or
object-centric representations significantly outperform their distributed
counterparts. However, recent causal representation learning methods still
struggle to identify such latent structures, indicating substantial challenges
and opportunities for future work. Our code and datasets will be available at
https://sites.google.com/view/causaltriplet.
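As a concrete illustration of the intervention-centric setting the abstract describes, and of why disentangled representations help, here is a minimal toy sketch (not the authors' code): observation pairs are reduced to low-dimensional latent vectors, a hypothetical intervention shifts one coordinate, and a model must identify which variable changed. The variable count, intervention strength, and the random-mixing "distributed" view are all illustrative assumptions.

```python
import random

random.seed(0)
N_VARS = 4     # hypothetical object-level variables (e.g. position, colour, state, pose)
TRIALS = 1000

def sample_pair():
    """Return (before, after, target): one latent coordinate is intervened on."""
    before = [random.gauss(0, 1) for _ in range(N_VARS)]
    target = random.randrange(N_VARS)
    after = list(before)
    after[target] += 3.0 + random.gauss(0, 1)  # the intervention shifts exactly one variable
    return before, after, target

def argmax_abs_diff(b, a):
    """Index of the coordinate that changed the most between the two observations."""
    diffs = [abs(x - y) for x, y in zip(a, b)]
    return diffs.index(max(diffs))

# Disentangled representation: each coordinate is one causal variable, so
# attributing the intervention to the largest-changing coordinate is trivial.
correct = 0
for _ in range(TRIALS):
    b, a, t = sample_pair()
    correct += int(argmax_abs_diff(b, a) == t)

# Distributed (entangled) representation: the same latents observed through a
# fixed random linear mixing, so no single coordinate tracks one variable.
W = [[random.gauss(0, 1) for _ in range(N_VARS)] for _ in range(N_VARS)]

def mix(z):
    """Observe the latents through the mixing matrix W (matrix-vector product)."""
    return [sum(w * x for w, x in zip(row, z)) for row in W]

correct_mixed = 0
for _ in range(TRIALS):
    b, a, t = sample_pair()
    correct_mixed += int(argmax_abs_diff(mix(b), mix(a)) == t)

# correct_mixed is typically far below correct: after mixing, the per-coordinate
# change no longer identifies the intervened variable.
print(correct / TRIALS, correct_mixed / TRIALS)
```

With a disentangled representation the largest-changing coordinate identifies the intervention target; after a random linear mixing of the same latents the heuristic degrades, mirroring the gap between disentangled/object-centric and distributed models reported in the abstract.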
Related papers
- Towards the Reusability and Compositionality of Causal Representations [25.697274665903898]
We introduce DECAF, a framework that detects which causal factors can be reused and which need to be adapted from previously learned causal representations.
Our approach is based on the availability of intervention targets, which indicate which variables are perturbed at each time step.
Experiments show that integrating our framework with four state-of-the-art CRL approaches leads to accurate representations in a new environment with only a few samples.
arXiv Detail & Related papers (2024-03-14T19:36:07Z)
- Sim-to-Real Causal Transfer: A Metric Learning Approach to Causally-Aware Interaction Representations [62.48505112245388]
We take an in-depth look at the causal awareness of modern representations of agent interactions.
We show that recent representations are already partially resilient to perturbations of non-causal agents.
We propose a metric learning approach that regularizes latent representations with causal annotations.
arXiv Detail & Related papers (2023-12-07T18:57:03Z)
- Causal Representation Learning Made Identifiable by Grouping of Observational Variables [8.157856010838382]
Causal Representation Learning aims to learn a causal model for hidden features in a data-driven manner.
Here, we show identifiability based on novel, weak constraints.
We also propose a novel self-supervised estimation framework consistent with the model.
arXiv Detail & Related papers (2023-10-24T10:38:02Z)
- Towards Causal Foundation Model: on Duality between Causal Inference and Attention [18.046388712804042]
We take a first step towards building causally-aware foundation models for treatment effect estimations.
We propose a novel, theoretically justified method called Causal Inference with Attention (CInA).
arXiv Detail & Related papers (2023-10-01T22:28:34Z)
- Endogenous Macrodynamics in Algorithmic Recourse [52.87956177581998]
Existing work on Counterfactual Explanations (CE) and Algorithmic Recourse (AR) has largely focused on single individuals in a static environment.
We show that many of the existing methodologies can be collectively described by a generalized framework.
We then argue that the existing framework does not account for a hidden external cost of recourse, one that reveals itself only when studying the endogenous dynamics of recourse at the group level.
arXiv Detail & Related papers (2023-08-16T07:36:58Z)
- Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective [72.55093886515824]
We introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables.
We devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph.
Experiment results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations.
arXiv Detail & Related papers (2021-11-29T18:59:09Z)
- ACRE: Abstract Causal REasoning Beyond Covariation [90.99059920286484]
We introduce the Abstract Causal REasoning dataset for systematic evaluation of current vision systems in causal induction.
Motivated by the stream of research on causal discovery in Blicket experiments, we query a visual reasoning system with four types of questions in either an independent or an interventional scenario.
We notice that pure neural models, performing only at chance level, tend towards an associative strategy, whereas neuro-symbolic combinations struggle with backward-blocking reasoning.
arXiv Detail & Related papers (2021-03-26T02:42:38Z)
- Exploring the Limits of Few-Shot Link Prediction in Knowledge Graphs [49.6661602019124]
We study a spectrum of models derived by generalizing the current state of the art for few-shot link prediction.
We find that a simple zero-shot baseline, which ignores any relation-specific information, achieves surprisingly strong performance.
Experiments on carefully crafted synthetic datasets show that having only a few examples of a relation fundamentally limits models from using fine-grained structural information.
arXiv Detail & Related papers (2021-02-05T21:04:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.