From Checking to Inference: Actual Causality Computations as
Optimization Problems
- URL: http://arxiv.org/abs/2006.03363v2
- Date: Mon, 6 Jul 2020 11:28:01 GMT
- Title: From Checking to Inference: Actual Causality Computations as
Optimization Problems
- Authors: Amjad Ibrahim and Alexander Pretschner
- Abstract summary: We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions are efficiently automated. Using models with more than $8000$ variables, checking is computed in a matter of seconds, with MaxSAT outperforming ILP in many cases.
- Score: 79.87179017975235
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Actual causality is increasingly well understood. Recent formal approaches,
proposed by Halpern and Pearl, have made this concept mature enough to be
amenable to automated reasoning. Actual causality is especially vital for
building accountable, explainable systems. Among other reasons, causality
reasoning is computationally hard due to the requirements of counterfactuality
and the minimality of causes. Previous approaches offered either inefficient
or restricted, domain-specific solutions to the problem of automating
causality reasoning. In this paper, we present a novel approach to formulate
different notions of causal reasoning, over binary acyclic models, as
optimization problems, based on quantifiable notions within counterfactual
computations. We contribute and compare two compact, non-trivial, and sound
integer linear programming (ILP) and Maximum Satisfiability (MaxSAT) encodings
to check causality. Given a candidate cause, both approaches identify a
minimal cause. Also, we present an ILP encoding to infer causality without
requiring a candidate cause. We show that both notions are efficiently
automated. Using models with more than $8000$ variables, checking is computed
in a matter of seconds, with MaxSAT outperforming ILP in many cases. In
contrast, inference is computed in a matter of minutes.
Related papers
- O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning [98.3430004984531]
We propose Length-Harmonizing Fine-Tuning (O1-Pruner) to minimize reasoning overhead while maintaining accuracy.
Our code is coming soon at https://github.com/StarDewXXX/O1-Pruner.
arXiv Detail & Related papers (2025-01-22T01:35:11Z)
- Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs [76.43407125275202]
o1-like models can emulate human-like long-time thinking during inference.
This paper presents the first comprehensive study on the prevalent issue of overthinking in these models.
We propose strategies to mitigate overthinking, streamlining reasoning processes without compromising accuracy.
arXiv Detail & Related papers (2024-12-30T18:55:12Z)
- Simple and Provable Scaling Laws for the Test-Time Compute of Large Language Models [70.07661254213181]
We propose two principled algorithms for the test-time compute of large language models.
We prove theoretically that the failure probability of one algorithm decays to zero exponentially as its test-time compute grows.
arXiv Detail & Related papers (2024-11-29T05:29:47Z)
- CreDes: Causal Reasoning Enhancement and Dual-End Searching for Solving Long-Range Reasoning Problems using LLMs [13.977459316171013]
Large language models (LLMs) have demonstrated limitations in handling optimization problems involving long-range reasoning.
This paper introduces the Causal Relationship Enhancement (CRE) mechanism, which combines cause-effect interventions with the Individual Treatment Effect (ITE) to guarantee sound causal reasoning.
Experiments demonstrate that CreDes significantly outperforms existing State-Of-The-Art (SOTA) solutions in terms of both accuracy and time efficiency.
arXiv Detail & Related papers (2024-10-02T16:05:01Z)
- Causal Discovery and Prediction: Methods and Algorithms [0.0]
In this thesis, we introduce a generic a priori assessment of each possible intervention.
We propose an active learning algorithm that identifies the causal relations in any given causal model.
arXiv Detail & Related papers (2023-09-18T01:19:37Z)
- A Framework for Data-Driven Explainability in Mathematical Optimization [0.0]
Provably optimal solutions may not be accepted due to the perception of optimization software as a black box.
We advocate for introducing the explainability of a solution as another evaluation criterion, next to its objective value.
It turns out the cost of enforcing explainability can be very small.
arXiv Detail & Related papers (2023-08-16T12:12:24Z)
- Chaining Simultaneous Thoughts for Numerical Reasoning [92.2007997126144]
Numerical reasoning over text should be an essential skill of AI systems.
Previous work focused on modeling the structures of equations, and has proposed various structured decoders.
We propose CANTOR, a numerical reasoner that models reasoning steps using a directed acyclic graph.
arXiv Detail & Related papers (2022-11-29T18:52:06Z)
- Efficient Knowledge Compilation Beyond Weighted Model Counting [7.828647825246474]
We introduce Second Level Algebraic Model Counting (2AMC) as a generic framework for these kinds of problems.
First level techniques based on Knowledge Compilation (KC) have been adapted for specific 2AMC instances by imposing variable order constraints.
We show that we can exploit the logical structure of a 2AMC problem to omit parts of these constraints, thus limiting the negative effect.
arXiv Detail & Related papers (2022-05-16T08:10:40Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.