Explainable Evidential Clustering
- URL: http://arxiv.org/abs/2507.12192v2
- Date: Wed, 06 Aug 2025 22:24:18 GMT
- Title: Explainable Evidential Clustering
- Authors: Victor F. Lopes de Souza, Karima Bakhti, Sofiane Ramdani, Denis Mottet, Abdelhak Imoussaten
- Abstract summary: Real-world data often contain imperfections, characterized by uncertainty and imprecision, which are not well handled by traditional methods. Evidential clustering, based on Dempster-Shafer theory, addresses these challenges. We propose the Iterative Evidential Mistake Minimization (IEMM) algorithm, which provides interpretable and cautious decision tree explanations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised classification is a fundamental machine learning problem. Real-world data often contain imperfections, characterized by uncertainty and imprecision, which are not well handled by traditional methods. Evidential clustering, based on Dempster-Shafer theory, addresses these challenges. This paper explores the underexplored problem of explaining evidential clustering results, which is crucial for high-stakes domains such as healthcare. Our analysis shows that, in the general case, representativity is a necessary and sufficient condition for decision trees to serve as abductive explainers. Building on the concept of representativity, we generalize this idea to accommodate partial labeling through utility functions. These functions enable the representation of "tolerable" mistakes, leading to the definition of evidential mistakeness as explanation cost and the construction of explainers tailored to evidential classifiers. Finally, we propose the Iterative Evidential Mistake Minimization (IEMM) algorithm, which provides interpretable and cautious decision tree explanations for evidential clustering functions. We validate the proposed algorithm on synthetic and real-world data. Taking into account the decision-maker's preferences, we were able to provide an explanation that was satisfactory up to 93% of the time.
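The abstract names the key ingredients (Dempster-Shafer mass functions over subsets of clusters, utility functions that make some mistakes "tolerable", and evidential mistakeness as an explanation cost) but not the full IEMM procedure. The snippet below is therefore only a minimal Python sketch under assumed definitions: the toy utility values, the helper names `utility` and `leaf_cost`, and the mass assignments are illustrative, not the paper's.

```python
# Minimal sketch, not the paper's IEMM implementation: cluster memberships as
# Dempster-Shafer mass functions over subsets of clusters, a toy utility that
# tolerates cautious (imprecise) assignments, and a mistakeness-style cost used
# to pick the label set for a single decision-tree leaf.
from itertools import combinations

CLUSTERS = ("c1", "c2")

def powerset(elements):
    """All non-empty subsets of the frame of discernment."""
    return [frozenset(s) for r in range(1, len(elements) + 1)
            for s in combinations(elements, r)]

FOCAL_SETS = powerset(CLUSTERS)

def utility(assigned, focal):
    """1.0 for an exact match, partial credit when the (possibly imprecise)
    assigned set still covers the focal set, 0.0 otherwise."""
    if assigned == focal:
        return 1.0
    if focal <= assigned:
        return 0.5  # a cautious but imprecise assignment is 'tolerable'
    return 0.0

def leaf_cost(masses, assigned):
    """Expected disutility of labelling every sample in a leaf with `assigned`."""
    return sum(w * (1.0 - utility(assigned, focal))
               for m in masses for focal, w in m.items())

# Two samples in the leaf: one confident in c1, one hesitating between c1 and c2.
leaf_masses = [
    {frozenset({"c1"}): 0.9, frozenset({"c1", "c2"}): 0.1},
    {frozenset({"c1"}): 0.4, frozenset({"c1", "c2"}): 0.6},
]

best = min(FOCAL_SETS, key=lambda a: leaf_cost(leaf_masses, a))
print(sorted(best), round(leaf_cost(leaf_masses, best), 2))  # ['c1', 'c2'] 0.65
```

On this toy leaf the imprecise set {c1, c2} wins over the precise labels, which is the kind of cautious assignment the abstract describes; a greedy tree builder in the spirit of IEMM would choose splits that reduce such leaf costs, but that loop is omitted here.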
Related papers
- I Predict Therefore I Am: Is Next Token Prediction Enough to Learn Human-Interpretable Concepts from Data? [76.15163242945813]
Large language models (LLMs) have led many to conclude that they exhibit a form of intelligence.
We introduce a novel generative model that generates tokens on the basis of human-interpretable concepts represented as latent discrete variables.
arXiv Detail & Related papers (2025-03-12T01:21:17Z) - On the Complexity of Global Necessary Reasons to Explain Classification [10.007636884318801]
Explainable AI has garnered considerable attention in recent years, as understanding the reasons behind decisions or predictions made by AI systems is crucial for their successful adoption.
In this paper, we focus on global explanations, and explain classification in terms of "minimal" necessary conditions for the classifier to assign a specific class to a generic instance.
arXiv Detail & Related papers (2025-01-12T10:25:14Z) - When Can You Trust Your Explanations? A Robustness Analysis on Feature Importances [42.36530107262305]
The robustness of explanations plays a central role in ensuring trust in both the system and the provided explanation.
We propose a novel approach to analyse the robustness of neural network explanations to non-adversarial perturbations.
We additionally present an ensemble method to aggregate various explanations, showing how merging explanations can be beneficial for both understanding the model's decision and evaluating the robustness.
arXiv Detail & Related papers (2024-06-20T14:17:57Z) - GLANCE: Global Actions in a Nutshell for Counterfactual Explainability [10.25011737760687]
- GLANCE: Global Actions in a Nutshell for Counterfactual Explainability [10.25011737760687]
We introduce GLANCE, a versatile and adaptive framework, comprising two algorithms.
C-GLANCE employs a clustering approach that considers both the feature space and the space of counterfactual actions.
T-GLANCE provides additional features to enhance flexibility.
arXiv Detail & Related papers (2024-05-29T09:24:25Z) - Identifiable Latent Neural Causal Models [82.14087963690561]
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z) - Even-if Explanations: Formal Foundations, Priorities and Complexity [18.126159829450028]
We show that both linear and tree-based models are strictly more interpretable than neural networks.
We introduce a preference-based framework that enables users to personalize explanations based on their preferences.
arXiv Detail & Related papers (2024-01-17T11:38:58Z) - Understanding and Mitigating Classification Errors Through Interpretable
Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
arXiv Detail & Related papers (2023-11-18T00:24:26Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - From Robustness to Explainability and Back Again [3.7950144463212134]
This paper addresses the poor scalability of formal explainability and proposes novel efficient algorithms for computing formal explanations.
The proposed algorithm computes explanations by answering a number of robustness queries, such that the number of queries is at most linear in the number of features.
arXiv Detail & Related papers (2023-06-05T17:21:05Z) - Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough and encode the structural constraints necessary for performing counterfactual reasoning.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z) - Nested Counterfactual Identification from Arbitrary Surrogate
Experiments [95.48089725859298]
We study the identification of nested counterfactuals from an arbitrary combination of observations and experiments.
Specifically, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
arXiv Detail & Related papers (2021-07-07T12:51:04Z) - Interpretable Representations in Explainable AI: From Theory to Practice [7.031336702345381]
Interpretable representations are the backbone of many explainers that target black-box predictive systems.
We study properties of interpretable representations that encode presence and absence of human-comprehensible concepts.
arXiv Detail & Related papers (2020-08-16T21:44:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.