On the Complexity of Counterfactual Reasoning
- URL: http://arxiv.org/abs/2211.13447v2
- Date: Fri, 19 May 2023 05:19:11 GMT
- Title: On the Complexity of Counterfactual Reasoning
- Authors: Yunqiu Han, Yizuo Chen, Adnan Darwiche
- Abstract summary: We show that counterfactual reasoning is no harder than associational or interventional reasoning on fully specified SCMs.
We extend our results to general counterfactual reasoning that requires contemplating more than two worlds.
- Score: 9.614694312155795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the computational complexity of counterfactual reasoning in relation
to the complexity of associational and interventional reasoning on structural
causal models (SCMs). We show that counterfactual reasoning is no harder than
associational or interventional reasoning on fully specified SCMs in the
context of two computational frameworks. The first framework is based on the
notion of treewidth and includes the classical variable elimination and
jointree algorithms. The second framework is based on the more recent and
refined notion of causal treewidth which is directed towards models with
functional dependencies such as SCMs. Our results are constructive and based on
bounding the (causal) treewidth of twin networks -- used in standard
counterfactual reasoning that contemplates two worlds, real and imaginary -- in
terms of the (causal) treewidth of the underlying SCM structure. In particular,
we show that the former (causal) treewidth is no more than twice the latter
plus one.
Hence, if associational or interventional reasoning is tractable on a fully
specified SCM then counterfactual reasoning is tractable too. We extend our
results to general counterfactual reasoning that requires contemplating more
than two worlds and discuss applications of our results to counterfactual
reasoning with a partially specified SCM that is coupled with data. We finally
present empirical results that measure the gap between the complexities of
counterfactual reasoning and associational/interventional reasoning on random
SCMs.
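To make the twin-network construction and the treewidth bound more concrete, here is a minimal Python sketch (not from the paper; the chain-structured SCM, the node names, and the helper functions are illustrative assumptions) that builds the twin network of a toy SCM structure and reports heuristic treewidth upper bounds computed with networkx.

```python
# A minimal sketch, assuming a toy chain-structured SCM: it builds the twin
# network and compares heuristic treewidth upper bounds via networkx.
# This is not the authors' construction or code.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_fill_in

def chain_scm(n):
    """X1 -> X2 -> ... -> Xn, each Xi with an exogenous noise term Ui -> Xi."""
    g = nx.DiGraph()
    for i in range(1, n + 1):
        g.add_edge(f"U{i}", f"X{i}")
        if i > 1:
            g.add_edge(f"X{i-1}", f"X{i}")
    return g

def twin_network(g):
    """Duplicate every endogenous node (the 'imaginary' world); both copies
    share the same exogenous parents (here, nodes whose names start with 'U')."""
    twin = nx.DiGraph()
    for u, v in g.edges():
        twin.add_edge(u, v)                          # real world
        u_img = u if u.startswith("U") else u + "*"  # exogenous nodes are shared
        twin.add_edge(u_img, v + "*")                # imaginary world
    return twin

def moralize(g):
    """Moral graph: marry the parents of every node, then drop directions."""
    m = nx.Graph(g.to_undirected())
    for v in g:
        parents = list(g.predecessors(v))
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                m.add_edge(parents[i], parents[j])
    return m

if __name__ == "__main__":
    base = chain_scm(6)
    twin = twin_network(base)
    tw_base, _ = treewidth_min_fill_in(moralize(base))
    tw_twin, _ = treewidth_min_fill_in(moralize(twin))
    # The paper's bound (for exact treewidths): tw(twin) <= 2 * tw(base) + 1.
    # The values below are heuristic upper bounds, so this only illustrates
    # the relationship rather than verifying the theorem.
    print(f"SCM ~{tw_base}, twin network ~{tw_twin}, 2*w+1 = {2 * tw_base + 1}")
```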
Related papers
- CreDes: Causal Reasoning Enhancement and Dual-End Searching for Solving Long-Range Reasoning Problems using LLMs [13.977459316171013]
Large language models (LLMs) have demonstrated limitations in handling optimization problems involving long-range reasoning.
This paper introduces the Causal Relationship Enhancement (CRE) mechanism, which combines cause-effect interventions and the Individual Treatment Effect (ITE) to guarantee causal correctness.
Experiments demonstrate that CreDes significantly outperforms existing State-Of-The-Art (SOTA) solutions in terms of both accuracy and time efficiency.
arXiv Detail & Related papers (2024-10-02T16:05:01Z)
- Cause and Effect: Can Large Language Models Truly Understand Causality? [1.2334534968968969]
This research proposes a novel framework called Context-Aware Reasoning Enhancement with Counterfactual Analysis (CARE-CA).
The proposed framework incorporates an explicit causal detection module with ConceptNet and counterfactual statements, as well as implicit causal detection through Large Language Models.
The knowledge from ConceptNet enhances performance on multiple causal reasoning tasks such as causal discovery, causal identification, and counterfactual reasoning.
arXiv Detail & Related papers (2024-02-28T08:02:14Z)
- Answering Causal Queries at Layer 3 with DiscoSCMs-Embracing Heterogeneity [0.0]
This paper advocates for the Distribution-consistency Structural Causal Models (DiscoSCM) framework as a pioneering approach to counterfactual inference.
arXiv Detail & Related papers (2023-09-17T17:01:05Z)
- When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose the complexity-impacted reasoning score (CIRS) to measure the correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity.
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
arXiv Detail & Related papers (2023-08-29T17:22:39Z)
- A Semantic Approach to Decidability in Epistemic Planning (Extended Version) [72.77805489645604]
We use a novel semantic approach to achieve decidability.
Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity.
We prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest.
arXiv Detail & Related papers (2023-07-28T11:26:26Z)
- Comparing Causal Frameworks: Potential Outcomes, Structural Models, Graphs, and Abstractions [10.889531739861562]
The aim of this paper is to make clear and precise the relationship between the Rubin causal model (RCM) and the structural causal model (SCM).
A key result then shows that every RCM -- including those that violate algebraic principles implied by the SCM framework -- emerges as an abstraction of some representable RCM.
arXiv Detail & Related papers (2023-06-25T21:57:20Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) that handles context at both the discourse level and the word level as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- Counterfactuals Modulo Temporal Logics [1.90365714903665]
Lewis' theory of counterfactuals is the foundation of many contemporary notions of causality.
We extend this theory in the temporal direction to enable symbolic counterfactual reasoning on infinite sequences.
arXiv Detail & Related papers (2023-06-15T07:40:36Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough for this task.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
- From Checking to Inference: Actual Causality Computations as Optimization Problems [79.87179017975235]
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions can be efficiently automated. Using models with more than $8000$ variables, checking completes in a matter of seconds, with MaxSAT outperforming ILP in many cases.
arXiv Detail & Related papers (2020-06-05T10:56:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.