On the Hardness of Computing Counterfactual and Semifactual Explanations in XAI
- URL: http://arxiv.org/abs/2601.09455v1
- Date: Wed, 14 Jan 2026 13:02:24 GMT
- Title: On the Hardness of Computing Counterfactual and Semifactual Explanations in XAI
- Authors: André Artelt, Martin Olsen, Kevin Tierney
- Abstract summary: We show that in many cases, generating explanations is computationally hard. We discuss the implications for the XAI community and for policymakers seeking to regulate explanations in AI.
- Score: 5.172213041663734
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Providing clear explanations for the decisions of machine learning models is essential if these models are to be deployed in critical applications. Counterfactual and semifactual explanations have emerged as two mechanisms for giving users insight into the outputs of their models. We provide an overview of the computational complexity results in the literature for generating these explanations, finding that in many cases generation is computationally hard. We strengthen this argument considerably by contributing our own inapproximability results, showing that explanations are not only often hard to generate but, under certain assumptions, also hard to approximate. We discuss the implications of these complexity results for the XAI community and for policymakers seeking to regulate explanations in AI.
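To make the two objects of study concrete, here is a minimal, purely illustrative sketch of both explanation types for a black-box classifier over binary features; the toy model and both brute-force routines are assumptions for exposition, not the paper's method. A counterfactual is a closest input that flips the prediction ("had these features differed, the decision would have changed"), while a semifactual is a farthest input that preserves it ("even if these features had differed, the decision would not have changed"). The exhaustive sweep over up to 2^d perturbations is exactly the combinatorial blow-up that the hardness and inapproximability results formalize.

```python
import itertools
import numpy as np

def model(x):
    # Hypothetical black-box classifier over d binary features.
    return int(x.sum() >= 3)

def closest_counterfactual(x):
    """Nearest input (in Hamming distance) whose prediction differs from x's.
    Searching radius by radius, the first hit is distance-optimal."""
    original = model(x)
    for k in range(1, len(x) + 1):
        for idx in itertools.combinations(range(len(x)), k):
            cand = x.copy()
            cand[list(idx)] ^= 1  # flip the chosen binary features
            if model(cand) != original:
                return cand, k
    return None, None  # the model is constant

def farthest_semifactual(x):
    """Farthest input whose prediction still matches x's."""
    original = model(x)
    for k in range(len(x), 0, -1):
        for idx in itertools.combinations(range(len(x)), k):
            cand = x.copy()
            cand[list(idx)] ^= 1
            if model(cand) == original:
                return cand, k
    return x, 0

x = np.array([1, 1, 1, 0, 0])
print(closest_counterfactual(x))  # e.g. (array([0, 1, 1, 0, 0]), 1)
print(farthest_semifactual(x))    # a distance-4 input with the same label
```

Both routines visit O(2^d) candidates in the worst case; the complexity results surveyed in the paper indicate that, for many model classes, this cannot be fundamentally improved under standard assumptions.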
Related papers
- Explaining Decisions in ML Models: a Parameterized Complexity Analysis (Part I) [31.014684803229756]
This paper presents a theoretical investigation into the parameterized complexity of explanation problems in various machine learning (ML) models. Contrary to the prevalent black-box perception, our study focuses on models with transparent internal mechanisms.
arXiv Detail & Related papers (2025-11-05T15:25:07Z)
- Natural Language Counterfactual Explanations for Graphs Using Large Language Models [7.560731917128082]
We exploit the power of open-source Large Language Models to generate natural language explanations. We show that our approach effectively produces accurate natural language representations of counterfactual instances.
arXiv Detail & Related papers (2024-10-11T23:06:07Z)
- Hard to Explain: On the Computational Hardness of In-Distribution Model Interpretation [0.9558392439655016]
The ability to interpret Machine Learning (ML) models is becoming increasingly important.
Recent work has demonstrated that it is possible to formally assess interpretability by studying the computational complexity of explaining the decisions of various models.
arXiv Detail & Related papers (2024-08-07T17:20:52Z)
- Explaining Decisions in ML Models: a Parameterized Complexity Analysis [26.444020729887782]
This paper presents a theoretical investigation into the parameterized complexity of explanation problems in various machine learning (ML) models.
Contrary to the prevalent black-box perception, our study focuses on models with transparent internal mechanisms.
arXiv Detail & Related papers (2024-07-22T16:37:48Z)
- Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate such limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z)
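For intuition about the second-order explanations mentioned above, here is a minimal sketch for the special case of a linear embedding f(v) = Wv, where the dot-product similarity decomposes exactly onto feature pairs; BiLRP itself propagates LRP relevances through deep networks, so the function below is an illustrative assumption rather than that method.

```python
import numpy as np

def pairwise_contributions(W, x, y):
    """For f(v) = W @ v, the similarity s = f(x) . f(y) splits exactly into
    R[i, j] = sum_k W[k, i] * x[i] * W[k, j] * y[j], i.e. a matrix of
    feature-pair contributions satisfying R.sum() == s."""
    fx_parts = W * x              # (k, d): how each x_i feeds each embedding dim
    fy_parts = W * y
    return fx_parts.T @ fy_parts  # (d, d) second-order attribution map

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))
x, y = rng.normal(size=5), rng.normal(size=5)
R = pairwise_contributions(W, x, y)
assert np.isclose(R.sum(), (W @ x) @ (W @ y))  # conservation check
```

Large entries of R point to pairs of input features (e.g. token pairs) that jointly drive the similarity score.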
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on the tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
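As a toy illustration of lifting explanations from the instance level to the dataset level, the sketch below aggregates per-sample concept attributions into a class-wide relevance profile and flags training samples dominated by a concept a human has judged irrelevant; the data layout and function names are assumptions for illustration, not the SOXAI implementation.

```python
import numpy as np

def class_concept_profile(attr, labels, cls):
    """Mean concept attribution over all samples of one class: many
    instance-level explanations (rows of attr, n_samples x n_concepts)
    summarized into one dataset-level profile."""
    return attr[labels == cls].mean(axis=0)

def samples_dominated_by(attr, concept, threshold=0.5):
    """Indices of samples whose attribution mass concentrates on a concept
    judged irrelevant (candidates for removal before retraining)."""
    share = np.abs(attr[:, concept]) / (np.abs(attr).sum(axis=1) + 1e-12)
    return np.flatnonzero(share > threshold)
```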
- Learning with Explanation Constraints [91.23736536228485]
We provide a learning-theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach across a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z)
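One simple way to realize such a constraint, sketched below under the assumption that an annotator has marked some features as irrelevant, is to penalize the weight a linear model places on those features; this is one illustrative instance, not the paper's general framework.

```python
import numpy as np

def fit_with_explanation_constraint(X, y, irrelevant, lam=1.0, lr=0.1, steps=500):
    """Logistic regression whose loss adds lam/2 * ||w[irrelevant]||^2,
    encoding the explanation constraint "do not rely on these features"."""
    n, d = X.shape
    w = np.zeros(d)
    mask = np.zeros(d)
    mask[irrelevant] = 1.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))         # predicted probabilities
        grad = X.T @ (p - y) / n + lam * mask * w  # data term + constraint term
        w -= lr * grad
    return w
```

With lam = 0 this reduces to plain logistic regression; raising lam trades training fit for conformity with the annotator's explanation.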
- Discrete Reasoning Templates for Natural Language Understanding [79.07883990966077]
We present an approach that reasons about complex questions by decomposing them into simpler subquestions.
We derive the final answer according to instructions in a predefined reasoning template.
We show that our approach is competitive with the state of the art while being interpretable and requiring little supervision.
arXiv Detail & Related papers (2021-04-05T18:56:56Z)
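The flavor of such a template, with every question and name below purely hypothetical, might look like the following: the list of subquestions asked on the way to the final answer doubles as the interpretable reasoning trace.

```python
# Hypothetical comparison template: decompose "Which is taller, a or b?"
# into two lookup subquestions plus an explicit aggregation step.
TEMPLATE = {
    "subquestions": ["What is the height of {a}?", "What is the height of {b}?"],
    "aggregate": lambda ha, hb, a, b: a if ha > hb else b,
}

def answer_with_template(a, b, lookup):
    """Run the template; the returned subquestion list is a human-readable
    trace of how the answer was derived."""
    qa = TEMPLATE["subquestions"][0].format(a=a, b=b)
    qb = TEMPLATE["subquestions"][1].format(a=a, b=b)
    ha, hb = lookup(qa), lookup(qb)
    return TEMPLATE["aggregate"](ha, hb, a, b), [qa, qb]

heights = {"What is the height of the Eiffel Tower?": 330,
           "What is the height of Big Ben?": 96}
print(answer_with_template("the Eiffel Tower", "Big Ben", heights.get))
```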
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and come closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.