ACRE: Abstract Causal REasoning Beyond Covariation
- URL: http://arxiv.org/abs/2103.14232v1
- Date: Fri, 26 Mar 2021 02:42:38 GMT
- Title: ACRE: Abstract Causal REasoning Beyond Covariation
- Authors: Chi Zhang, Baoxiong Jia, Mark Edmonds, Song-Chun Zhu, Yixin Zhu
- Abstract summary: We introduce the Abstract Causal REasoning dataset for systematic evaluation of current vision systems in causal induction.
Motivated by the stream of research on causal discovery in Blicket experiments, we query a visual reasoning system with four types of questions (direct, indirect, screening-off, and backward-blocking) in either an independent scenario or an interventional scenario.
We find that pure neural models perform at chance level, falling back on an associative strategy, whereas neuro-symbolic combinations struggle with backward-blocking reasoning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal induction, i.e., identifying unobservable mechanisms that lead to the
observable relations among variables, has played a pivotal role in modern
scientific discovery, especially in scenarios with only sparse and limited
data. Humans, even young toddlers, can induce causal relationships surprisingly
well in various settings despite its notorious difficulty. In contrast to this
commonplace trait of human cognition, however, modern Artificial Intelligence
(AI) systems still lack a diagnostic benchmark to measure causal induction.
Therefore, in this work, we introduce the Abstract Causal REasoning
(ACRE) dataset for systematic evaluation of current vision systems in causal
induction. Motivated by the stream of research on causal discovery in Blicket
experiments, we query a visual reasoning system with the following four types
of questions in either an independent scenario or an interventional scenario:
direct, indirect, screening-off, and backward-blocking, intentionally going
beyond the simple strategy of inducing causal relationships by covariation. By
analyzing visual reasoning architectures on this testbed, we find that pure
neural models perform at chance level, falling back on an associative strategy,
whereas neuro-symbolic combinations struggle with backward-blocking reasoning.
These deficiencies call for future research in models with a more
comprehensive capability of causal induction.
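The four query types can be made concrete with a toy Blicket-detector simulation. The sketch below is a hypothetical illustration, not the ACRE generator or its API: it assumes the standard disjunctive Blicket rule (the detector lights up iff at least one blicket is placed on it), enumerates all blicket assignments consistent with a handful of context trials, and labels each object by whether its status is determined in every consistent world. Objects that only ever co-occur with a confirmed blicket come out "undetermined", which is exactly the backward-blocking case.

```python
from itertools import product

def consistent_assignments(objects, trials):
    """Enumerate blicket-label assignments consistent with all trials.

    Assumes the disjunctive Blicket rule: the detector fires iff at
    least one blicket is on it. Each trial is (objects_placed, lit).
    """
    worlds = []
    for labels in product([False, True], repeat=len(objects)):
        truth = dict(zip(objects, labels))
        if all(any(truth[o] for o in objs) == lit for objs, lit in trials):
            worlds.append(truth)
    return worlds

def causal_status(obj, objects, trials):
    """Classify obj from the evidence: determined in every consistent
    world, or ambiguous (e.g. backward-blocked)."""
    votes = {w[obj] for w in consistent_assignments(objects, trials)}
    if votes == {True}:
        return "blicket"
    if votes == {False}:
        return "non-blicket"
    return "undetermined"

objects = ["A", "B", "C"]
trials = [
    (["A"], True),        # A alone lights the detector: direct evidence
    (["A", "B"], True),   # B appears only with the known blicket A,
                          # so B's status is backward-blocked
    (["C"], False),       # C alone fails: C is a non-blicket
]
for o in objects:
    print(o, causal_status(o, objects, trials))
# A is classified "blicket", B "undetermined", C "non-blicket"
```

Screening-off and interventional queries can be probed the same way: ask whether a new combination of objects lights the detector in every world consistent with the context trials.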
Related papers
- Missed Causes and Ambiguous Effects: Counterfactuals Pose Challenges for Interpreting Neural Networks (arXiv, 2024-07-05)
  Interpretability research takes counterfactual theories of causality for granted. These theories have problems that bias findings in specific and predictable ways. The paper discusses the implications of these challenges for interpretability researchers.
- Emergence and Causality in Complex Systems: A Survey on Causal Emergence and Related Quantitative Studies (arXiv, 2023-12-28)
  Causal emergence theory employs measures of causality to quantify emergence. Two key problems are addressed: quantifying causal emergence and identifying it in data. The survey highlights that the architectures used for identifying causal emergence are shared by causal representation learning, causal model abstraction, and world-model-based reinforcement learning.
- Identifiable Latent Polynomial Causal Models Through the Lens of Change (arXiv, 2023-10-24)
  Causal representation learning aims to unveil latent high-level causal representations from observed low-level data. One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
- Nonlinearity, Feedback and Uniform Consistency in Causal Structural Learning (arXiv, 2023-08-15)
  Causal discovery aims to find automated search methods for learning causal structures from observational data. This thesis focuses on two questions in causal discovery: (i) providing an alternative definition of k-Triangle Faithfulness that is weaker than Strong Faithfulness when applied to the Gaussian family of distributions, and (ii) establishing uniform consistency under the assumption that this modified version of Strong Faithfulness holds.
- Causal reasoning in typical computer vision tasks (arXiv, 2023-07-26)
  Causal theory models the intrinsic causal structure unaffected by data bias and is effective in avoiding spurious correlations. The paper comprehensively reviews existing causal methods in typical vision and vision-language tasks such as semantic segmentation, object detection, and image captioning. Future roadmaps are also proposed, including facilitating the development of causal theory and its application in other complex scenes and systems.
- Causal Triplet: An Open Challenge for Intervention-centric Causal Representation Learning (arXiv, 2023-01-12)
  Causal Triplet is a causal representation learning benchmark featuring visually more complex scenes. Models built with knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts.
- Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA (arXiv, 2021-07-21)
  Independent component analysis (ICA) refers to an ensemble of methods that formalize the goal of recovering independent latent variables and provide estimation procedures for practical application. The paper shows that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning (arXiv, 2021-07-02)
  A central goal for AI and causality is the joint discovery of abstract representations and causal structure. Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs. This work aims to facilitate research in learning representations of high-level variables as well as causal structures among them.
- The Causal Neural Connection: Expressiveness, Learnability, and Inference (arXiv, 2021-07-02)
  A structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation. The paper shows that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models, introduces a special type of SCM called a neural causal model (NCM), and formalizes a new type of inductive bias to encode structural constraints necessary for performing causal inference.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information on this page and is not responsible for any consequences of its use.