Searching for actual causes: Approximate algorithms with adjustable precision
- URL: http://arxiv.org/abs/2507.07857v1
- Date: Thu, 10 Jul 2025 15:39:36 GMT
- Title: Searching for actual causes: Approximate algorithms with adjustable precision
- Authors: Samuel Reyd, Ada Diaconescu, Jean-Louis Dessalles
- Abstract summary: Causality has gained popularity in recent years. It has helped improve the performance, reliability, and interpretability of machine learning models. Recent literature on explainable artificial intelligence (XAI) has faced criticism. We propose a set of algorithms to identify actual causes with a polynomial complexity and an adjustable level of precision and exhaustiveness.
- Score: 1.0323063834827417
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causality has gained popularity in recent years. It has helped improve the performance, reliability, and interpretability of machine learning models. However, recent literature on explainable artificial intelligence (XAI) has faced criticism. The classical XAI and causality literature focuses on understanding which factors contribute to which consequences. While such knowledge is valuable for researchers and engineers, it is not what non-expert users expect as explanations. Instead, these users often expect facts that cause the target consequences, i.e., actual causes. Formalizing this notion is still an open problem. Additionally, identifying actual causes is reportedly an NP-complete problem, and there are too few practical solutions to approximate formal definitions. We propose a set of algorithms to identify actual causes with a polynomial complexity and an adjustable level of precision and exhaustiveness. Our experiments indicate that the algorithms (1) identify causes for different categories of systems that are not handled by existing approaches (i.e., non-boolean, black-box, and stochastic systems), and (2) can be adjusted to gain more precision and exhaustiveness with more computation time.
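The paper's own algorithms are not reproduced here, but the core idea of a budgeted, adjustable-precision search for actual causes can be sketched in a few lines. Everything below is a hypothetical illustration: the `system` callable, the `baseline` counterfactual values, the majority-flip decision rule, and the `max_size`/`n_samples` budget knobs are assumptions, not the authors' design.

```python
import itertools
import random

def find_actual_causes(system, inputs, baseline, max_size=2, n_samples=50):
    """Hypothetical sketch of a budgeted actual-cause search: test whether
    resetting a small subset of variables to baseline values flips the
    observed outcome, averaging over repeated runs for stochastic systems.

    system    -- callable from a dict of variable values to an outcome
    baseline  -- counterfactual value to substitute for each variable
    max_size  -- cap on candidate-cause size (exhaustiveness knob)
    n_samples -- repetitions per candidate (precision knob)
    """
    observed = system(dict(inputs))
    causes = []
    for size in range(1, max_size + 1):
        for subset in itertools.combinations(inputs, size):
            # Keep reported causes minimal: skip supersets of known causes.
            if any(set(cause) <= set(subset) for cause in causes):
                continue
            flips = 0
            for _ in range(n_samples):
                counterfactual = dict(inputs)
                for var in subset:
                    counterfactual[var] = baseline[var]
                flips += system(counterfactual) != observed
            # Arbitrary decision rule: a majority of flips counts as causal.
            if flips > n_samples / 2:
                causes.append(subset)
    return causes

# Invented example: a noisy threshold system; 'a' alone suffices to flip it.
noisy_sum = lambda v: (v["a"] + v["b"] + 0.1 * random.random()) > 1.0
print(find_actual_causes(noisy_sum, {"a": 1, "b": 0.2}, {"a": 0, "b": 0}))
```

With `max_size` fixed, the enumeration is polynomial in the number of variables, and raising `max_size` or `n_samples` buys exhaustiveness and precision at the cost of computation time, mirroring the adjustability the abstract claims; the repeated sampling is what lets a test of this shape tolerate non-boolean, black-box, and stochastic systems.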
Related papers
- When can you trust feature selection? -- I: A condition-based analysis of LASSO and generalised hardness of approximation [49.1574468325115]
We show that no (randomised) algorithm can determine the correct support sets (with probability $> 1/2$) of minimisers of LASSO when reading approximate input.
For ill-posed inputs, the algorithm runs forever, and hence never produces a wrong answer.
For any algorithm defined on an open set containing a point with infinite condition number, there is an input for which the algorithm will either run forever or produce a wrong answer.
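A loose empirical companion to this hardness result (not the paper's construction): near ill-conditioned inputs, the support set returned by a numerical LASSO solver can change under perturbations at the level of input precision. The design matrix, noise scale, and regularization strength below are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
X[:, 1] = X[:, 0] + 1e-6 * rng.normal(size=50)  # two nearly collinear columns
y = X[:, 0] + 0.1 * rng.normal(size=50)

def support(X, y, alpha=0.05):
    """Support set of the LASSO minimiser, as reported by a numerical solver."""
    coef = Lasso(alpha=alpha, max_iter=100_000).fit(X, y).coef_
    return frozenset(np.flatnonzero(np.abs(coef) > 1e-8))

# Refit after perturbing X at roughly single-precision level; on
# ill-conditioned instances the reported support set may not be stable.
supports = {support(X + 1e-7 * rng.normal(size=X.shape), y) for _ in range(20)}
print(f"{len(supports)} distinct support set(s) under tiny input perturbations")
```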
arXiv Detail & Related papers (2023-12-18T18:29:01Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough for this task.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability [0.0]
We study the tradeoff between model performance and explainability of machine learning algorithms.
We find that the tradeoff is much less gradual in the end user's perception.
Results of our second experiment show that while explainable artificial intelligence augmentations can be used to increase explainability, the type of explanation plays an essential role in end user perception.
arXiv Detail & Related papers (2022-06-20T08:32:38Z)
- Refining neural network predictions using background knowledge [68.35246878394702]
We show that logical background knowledge can be used in a learning system to compensate for a lack of labeled training data.
We introduce differentiable refinement functions that find a corrected prediction close to the original prediction.
This algorithm finds optimal refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot.
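As a rough, hypothetical illustration of such refinement functions (the paper's operators are not reproduced here), the sketch below uses plain gradient steps to pull a prediction toward satisfying one invented implication constraint ("rain implies wet ground") while staying close to the network's original output; the loss weight, learning rate, and step count are arbitrary.

```python
import torch

def refine(pred, constraint_loss, steps=100, lr=0.05, weight=10.0):
    """Gradient-based refinement: find a prediction that satisfies the
    background-knowledge constraint while staying near the original."""
    z = torch.logit(pred).clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        q = torch.sigmoid(z)
        # Stay close to the network's prediction, but penalize
        # violations of the logical constraint.
        loss = torch.sum((q - pred) ** 2) + weight * constraint_loss(q)
        loss.backward()
        opt.step()
    return torch.sigmoid(z).detach()

# Invented constraint: "rain implies wet ground", i.e. q[0] <= q[1],
# which the raw prediction below violates.
implication = lambda q: torch.relu(q[0] - q[1])
print(refine(torch.tensor([0.9, 0.2]), implication))
```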
arXiv Detail & Related papers (2022-06-10T10:17:59Z)
- GRACE-C: Generalized Rate Agnostic Causal Estimation via Constraints [3.2374399328078285]
Graphical structures estimated by causal learning algorithms from time series data can provide misleading causal information if the causal timescale of the generating process fails to match the measurement timescale of the data.
Existing algorithms provide limited resources to respond to this challenge, and so researchers must either use models that they know are likely misleading, or else forego causal learning entirely.
Existing methods face up to four distinct shortfalls, as they might: 1) require that the difference between the causal and measurement timescales is known; 2) handle only a very small number of random variables when the timescale difference is unknown; 3) apply only to pairs of variables; or 4) be unable to …
arXiv Detail & Related papers (2022-05-18T22:38:57Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
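For reference, the classical Nearest Neighbor rule that the entry's algorithm augments with predictions is just a greedy walk. The sketch below shows the plain offline skeleton on a small complete weighted graph (an invented example), leaving out both the online edge-revelation model and the prediction integration that are the paper's actual subject.

```python
def nearest_neighbor_tour(graph, start):
    """Classical NN heuristic: from the current vertex, always move to
    the cheapest unvisited neighbor; graph given as {u: {v: cost}}."""
    tour, visited = [start], {start}
    current = start
    while len(visited) < len(graph):
        # Pick the cheapest edge leading to an unexplored vertex.
        nxt = min((v for v in graph[current] if v not in visited),
                  key=lambda v: graph[current][v])
        tour.append(nxt)
        visited.add(nxt)
        current = nxt
    return tour

g = {"a": {"b": 1, "c": 4}, "b": {"a": 1, "c": 2}, "c": {"a": 4, "b": 2}}
print(nearest_neighbor_tour(g, "a"))  # ['a', 'b', 'c']
```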
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- Learning Generalized Causal Structure in Time-series [0.0]
We develop a machine learning pipeline based on a recently proposed 'neurochaos' feature learning technique (ChaosFEX feature extractor).
arXiv Detail & Related papers (2021-12-06T14:48:13Z)
- Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications [0.20999222360659603]
Some researchers argued that for a machine to achieve a certain degree of human-level explainability, it needs to provide causally understandable explanations.
Counterfactuals are a specific class of algorithms with the potential to provide causability.
This paper presents an in-depth systematic review of the diverse existing body of literature on counterfactuals and causability for explainable artificial intelligence.
arXiv Detail & Related papers (2021-03-07T03:11:39Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer-valued signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
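For context rather than as the HATS method itself, the integer least-squares objective the entry refers to can be made concrete with a brute-force search; the dimensions and the {-1, +1} alphabet below are invented, and the exponential enumeration is exactly what tree-search methods (sphere decoding, or the paper's learned A* variant) aim to avoid.

```python
import itertools
import numpy as np

def integer_least_squares(A, y, alphabet=(-1, 1)):
    """Brute-force ILS: find the integer vector x minimizing ||y - A x||^2.
    Exponential in len(x); shown only to make the objective concrete."""
    best, best_cost = None, np.inf
    for x in itertools.product(alphabet, repeat=A.shape[1]):
        cost = np.sum((y - A @ np.array(x)) ** 2)
        if cost < best_cost:
            best, best_cost = np.array(x), cost
    return best

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 4))
x_true = rng.choice([-1, 1], size=4)
y = A @ x_true + 0.05 * rng.normal(size=6)  # noisy linear observations
print(integer_least_squares(A, y), x_true)
```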
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Demystifying Deep Neural Networks Through Interpretation: A Survey [3.566184392528658]
Modern deep learning algorithms tend to optimize a single objective metric, such as minimizing cross-entropy loss on a training dataset, in order to learn.
The problem is that a single metric is an incomplete description of the real-world task.
A body of work tackles the problem of interpretability, providing insight into a neural network's behavior and decision process.
arXiv Detail & Related papers (2020-12-13T17:56:41Z)
- From Checking to Inference: Actual Causality Computations as Optimization Problems [79.87179017975235]
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions can be efficiently automated. Using models with more than $8000$ variables, checking is computed in a matter of seconds, with MaxSAT outperforming ILP in many cases.
arXiv Detail & Related papers (2020-06-05T10:56:52Z)
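To make the checking problem in this last entry (and in the headline paper) concrete, here is a brute-force "but-for" dependence test over a tiny binary acyclic model. This is only one ingredient of formal actual-cause definitions such as Halpern-Pearl, and the MaxSAT/ILP encodings the entry benchmarks are far more scalable; the forest-fire model below is the standard textbook toy example.

```python
import itertools

def forward(equations, exogenous, interventions=None):
    """Evaluate a binary acyclic model: 'equations' is a list of
    (variable, function) pairs in topological order."""
    interventions = interventions or {}
    values = dict(exogenous)
    for var, eq in equations:
        values[var] = interventions[var] if var in interventions else eq(values)
    return values

def but_for_causes(equations, exogenous, target, max_size=2):
    """Brute force: subsets of endogenous variables whose joint flip
    changes the target (a plain 'but-for' dependence test)."""
    actual = forward(equations, exogenous)
    candidates = [v for v, _ in equations if v != target]
    causes = []
    for size in range(1, max_size + 1):
        for subset in itertools.combinations(candidates, size):
            flipped = {v: 1 - actual[v] for v in subset}
            if forward(equations, exogenous, flipped)[target] != actual[target]:
                causes.append(subset)
    return causes

# Textbook toy: a forest fire overdetermined by lightning AND a dropped match.
eqs = [
    ("lightning", lambda v: v["u_lightning"]),
    ("match",     lambda v: v["u_match"]),
    ("fire",      lambda v: int(v["lightning"] or v["match"])),
]
# Neither single flip changes 'fire'; only the pair does -- the classic
# overdetermination case that motivates richer actual-cause definitions.
print(but_for_causes(eqs, {"u_lightning": 1, "u_match": 1}, "fire"))
```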
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.