Explaining Decisions in ML Models: a Parameterized Complexity Analysis (Part I)
- URL: http://arxiv.org/abs/2511.03545v1
- Date: Wed, 05 Nov 2025 15:25:07 GMT
- Title: Explaining Decisions in ML Models: a Parameterized Complexity Analysis (Part I)
- Authors: Sebastian Ordyniak, Giacomo Paesani, Mateusz Rychlicki, Stefan Szeider
- Abstract summary: This paper presents a theoretical investigation into the parameterized complexity of explanation problems in various machine learning (ML) models. Contrary to the prevalent black-box perception, our study focuses on models with transparent internal mechanisms.
- Score: 31.014684803229756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a comprehensive theoretical investigation into the parameterized complexity of explanation problems in various machine learning (ML) models. Contrary to the prevalent black-box perception, our study focuses on models with transparent internal mechanisms. We address two principal types of explanation problems: abductive and contrastive, both in their local and global variants. Our analysis encompasses diverse ML models, including Decision Trees, Decision Sets, Decision Lists, Boolean Circuits, and ensembles thereof, each offering unique explanatory challenges. This research fills a significant gap in explainable AI (XAI) by providing a foundational understanding of the complexities of generating explanations for these models. This work provides insights vital for further research in the domain of XAI, contributing to the broader discourse on the necessity of transparency and accountability in AI systems.
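To make the two explanation types concrete: a local abductive explanation is a subset-minimal set of features that, once fixed to their values in a given instance, already forces the model's prediction, while a local contrastive explanation is a minimal set of features whose change can flip it. The sketch below is illustrative only, not code from the paper; the tree encoding and function names are invented for the example. It computes a subset-minimal abductive explanation for a binary-feature decision tree by greedy deletion, a model class for which the required entailment check is polynomial-time:

```python
# Minimal sketch (not from the paper): a subset-minimal local abductive
# explanation for a decision tree over binary features, found by greedy
# deletion. A tree is either a class label (leaf) or a tuple
# (feature, left, right), taking the right branch when the feature is 1.

def forces_class(tree, instance, fixed, target):
    """True iff every completion of `instance` restricted to `fixed`
    reaches a leaf labelled `target`."""
    if not isinstance(tree, tuple):            # leaf
        return tree == target
    feat, left, right = tree
    if feat in fixed:                          # feature value is pinned
        branch = right if instance[feat] else left
        return forces_class(branch, instance, fixed, target)
    # free feature: both branches must still guarantee the target class
    return (forces_class(left, instance, fixed, target) and
            forces_class(right, instance, fixed, target))

def abductive_explanation(tree, instance, target):
    """Greedily drop features; whatever must stay fixed is a
    subset-minimal abductive explanation."""
    fixed = set(instance)                      # start with all features fixed
    for feat in sorted(instance):
        fixed.discard(feat)
        if not forces_class(tree, instance, fixed, target):
            fixed.add(feat)                    # feature is needed: restore it
    return fixed

# Toy tree predicting 1 iff (x0 AND x1); x2 never influences the output.
tree = ("x0", 0, ("x1", 0, 1))
print(abductive_explanation(tree, {"x0": 1, "x1": 1, "x2": 0}, 1))
# -> {'x0', 'x1'}
```

The same greedy scheme breaks down for models where the entailment check is itself intractable (checking whether a partial assignment forces a general Boolean Circuit's output is coNP-hard), which is exactly the kind of gap between model classes that the paper's parameterized analysis charts.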
Related papers
- On the Hardness of Computing Counterfactual and Semifactual Explanations in XAI [5.172213041663734]
We show that in many cases, generating explanations is computationally hard. We discuss the implications for the XAI community and for policymakers seeking to regulate explanations in AI.
arXiv Detail & Related papers (2026-01-14T13:02:24Z) - LLMs for Explainable AI: A Comprehensive Survey [0.7373617024876725]
Large Language Models (LLMs) offer a promising approach to enhancing Explainable AI (XAI). LLMs transform complex machine learning outputs into easy-to-understand narratives. LLMs can bridge the gap between sophisticated model behavior and human interpretability.
arXiv Detail & Related papers (2025-03-31T18:19:41Z) - Explainable artificial intelligence (XAI): from inherent explainability to large language models [0.0]
Explainable AI (XAI) techniques facilitate the explainability or interpretability of machine learning models. This paper details the advancements of explainable AI methods, from inherently interpretable models to modern approaches. We review explainable AI techniques that leverage vision-language model (VLM) frameworks to automate or improve the explainability of other machine learning models.
arXiv Detail & Related papers (2025-01-17T06:16:57Z) - Large Multi-modal Models Can Interpret Features in Large Multi-modal Models [51.485491249693155]
We first apply a Sparse Autoencoder to disentangle the representations into human-understandable features. We then present an automatic interpretation framework in which the open-semantic features learned by the SAE are interpreted by the LMMs themselves (a minimal SAE sketch appears after this list).
arXiv Detail & Related papers (2024-11-22T14:41:36Z) - Explaining Decisions in ML Models: a Parameterized Complexity Analysis [26.444020729887782]
This paper presents a theoretical investigation into the parameterized complexity of explanation problems in various machine learning (ML) models.
Contrary to the prevalent black-box perception, our study focuses on models with transparent internal mechanisms.
arXiv Detail & Related papers (2024-07-22T16:37:48Z) - A Critical Assessment of Interpretable and Explainable Machine Learning for Intrusion Detection [0.0]
We study the use of overly complex and opaque ML models, unaccounted data imbalances and correlated features, inconsistent influential features across different explanation methods, and the implausible utility of explanations.
Specifically, we advise avoiding complex opaque models such as Deep Neural Networks and instead using interpretable ML models such as Decision Trees.
We find that feature-based model explanations are most often inconsistent across different settings.
arXiv Detail & Related papers (2024-07-04T15:35:42Z) - Understanding the Language Model to Solve the Symbolic Multi-Step Reasoning Problem from the Perspective of Buffer Mechanism [68.05754701230039]
We construct a symbolic multi-step reasoning task to investigate the information propagation mechanisms in Transformer models. We propose a random matrix-based algorithm to enhance the model's reasoning ability.
arXiv Detail & Related papers (2024-05-24T07:41:26Z) - Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models (a toy pairwise decomposition appears after this list).
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z) - Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z) - Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z) - Multilingual Multi-Aspect Explainability Analyses on Machine Reading Comprehension Models [76.48370548802464]
This paper focuses on conducting a series of analytical experiments to examine the relations between the multi-head self-attention and the final MRC system performance.
We discover that the passage-to-question and passage-understanding attentions are the most important in the question-answering process.
Through comprehensive visualizations and case studies, we also observe several general findings on the attention maps, which can be helpful to understand how these models solve the questions.
arXiv Detail & Related papers (2021-08-26T04:23:57Z) - Explanatory Pluralism in Explainable AI [0.0]
I chart a taxonomy of types of explanation and the associated XAI methods that can address them.
When we look to expose the inner mechanisms of AI models, we produce Diagnostic-explanations.
When we wish to form stable generalizations of our models, we produce Expectation-explanations.
Finally, when we want to justify the usage of a model, we produce Role-explanations.
arXiv Detail & Related papers (2021-06-26T09:02:06Z)
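Two of the techniques mentioned in the list above can be illustrated with small sketches. First, as referenced in the Sparse Autoencoder entry, a minimal SAE of the kind used for feature disentanglement; the dimensions, sparsity weight, and names here are illustrative assumptions, not the configuration from that paper:

```python
import torch
from torch import nn

class SparseAutoencoder(nn.Module):
    """Reconstructs activations through an overcomplete ReLU bottleneck;
    an L1 penalty pushes the hidden features to fire sparsely."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x):
        h = torch.relu(self.encoder(x))        # sparse feature activations
        return self.decoder(h), h

sae = SparseAutoencoder(d_model=768, d_features=8192)     # sizes are arbitrary
x = torch.randn(32, 768)                                  # a batch of activations
recon, h = sae(x)
loss = ((recon - x) ** 2).mean() + 1e-3 * h.abs().mean()  # MSE + L1 sparsity
```

Second, the toy pairwise decomposition referenced in the BiLRP entry. For deep networks BiLRP propagates relevance through the layers, but for a single linear embedding f(x) = Wx the second-order decomposition of the dot-product similarity is exact, which conveys the idea (matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8))        # linear embedding: 8 features -> 16 dims
x, y = rng.standard_normal(8), rng.standard_normal(8)

sim = (W @ x) @ (W @ y)                 # bilinear similarity x^T W^T W y

# R[i, j] attributes the similarity to the feature pair (x_i, y_j);
# the pairwise relevances sum exactly to the similarity score.
R = x[:, None] * (W.T @ W) * y[None, :]
assert np.isclose(R.sum(), sim)
```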