An Interpretable Rule Creation Method for Black-Box Models based on Surrogate Trees -- SRules
- URL: http://arxiv.org/abs/2407.20070v1
- Date: Mon, 29 Jul 2024 14:56:56 GMT
- Title: An Interpretable Rule Creation Method for Black-Box Models based on Surrogate Trees -- SRules
- Authors: Mario Parrón Verdasco, Esteban García-Cuesta
- Abstract summary: We present a new ruleset creation method based on surrogate decision trees (SRules).
SRules balances the accuracy, coverage, and interpretability of machine learning models.
Our approach not only provides interpretable rules, but also quantifies the confidence and coverage of these rules.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As artificial intelligence (AI) systems become increasingly integrated into critical decision-making processes, the need for transparent and interpretable models has become paramount. In this article we present a new ruleset creation method based on surrogate decision trees (SRules), designed to improve the interpretability of black-box machine learning models. SRules balances the accuracy, coverage, and interpretability of machine learning models by recursively creating surrogate interpretable decision tree models that approximate the decision boundaries of a complex model. We propose a systematic framework for generating concise and meaningful rules from these surrogate models, allowing stakeholders to understand and trust the AI system's decision-making process. Our approach not only provides interpretable rules, but also quantifies the confidence and coverage of these rules. The proposed model's parameters can be adjusted to trade interpretability against precision and coverage, allowing a near-perfect fit and high interpretability for specific parts of the model. The results show that SRules improves on other state-of-the-art techniques and introduces the possibility of creating highly interpretable, specific rules for specific sub-parts of the model.
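To make the surrogate-tree idea concrete, the following is a minimal sketch, not the authors' SRules implementation and without the recursive refinement the abstract describes: it trains a shallow decision tree to mimic a black-box classifier's predictions and reads each root-to-leaf path off as a rule, reporting the rule's confidence (agreement with the black box on the samples the rule covers) and coverage (fraction of all samples it covers). The dataset, models, and names are illustrative assumptions.

```python
# Illustrative sketch only (not the authors' SRules code): approximate a
# black-box model with a single surrogate decision tree and extract rules
# together with their confidence and coverage.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The "black box" we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
y_bb = black_box.predict(X)  # labels assigned by the black box

# Shallow, interpretable surrogate trained to mimic the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)
tree = surrogate.tree_

def extract_rules(node=0, conditions=()):
    """Yield (rule, predicted class, confidence, coverage) for each leaf."""
    if tree.children_left[node] == -1:  # leaf node
        mask = np.ones(len(X), dtype=bool)
        for feat, op, thr in conditions:
            mask &= (X[:, feat] <= thr) if op == "<=" else (X[:, feat] > thr)
        covered = int(mask.sum())
        pred = int(np.argmax(tree.value[node]))
        confidence = float((y_bb[mask] == pred).mean()) if covered else 0.0
        coverage = covered / len(X)
        rule = " AND ".join(f"x[{f}] {o} {t:.3f}" for f, o, t in conditions) or "TRUE"
        yield rule, pred, confidence, coverage
    else:
        feat, thr = tree.feature[node], tree.threshold[node]
        yield from extract_rules(tree.children_left[node], conditions + ((feat, "<=", thr),))
        yield from extract_rules(tree.children_right[node], conditions + ((feat, ">", thr),))

for rule, pred, conf, cov in extract_rules():
    print(f"IF {rule} THEN class={pred} (confidence={conf:.2f}, coverage={cov:.2f})")
```

Per the abstract, SRules goes beyond a single surrogate by applying this construction recursively, so that specific sub-parts of the model can be covered by near-perfect, highly interpretable rules; the sketch above stops at one surrogate tree.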
Related papers
- Subgroup Analysis via Model-based Rule Forest [0.0]
Model-based Deep Rule Forests (mobDRF) is an interpretable representation learning algorithm designed to extract transparent models from data.
We apply mobDRF to identify key risk factors for cognitive decline in an elderly population, demonstrating its effectiveness in subgroup analysis and local model optimization.
arXiv Detail & Related papers (2024-08-27T13:40:15Z)
- Towards consistency of rule-based explainer and black box model -- fusion of rule induction and XAI-based feature importance [0.0]
Rule-based models offer a human-understandable representation, i.e. they are interpretable.
The generation of such explanations involves the approximation of a black box model by a rule-based model.
It has not been investigated whether the rule-based model makes decisions in the same way as the black box model it approximates.
arXiv Detail & Related papers (2024-07-16T07:56:29Z)
- SynthTree: Co-supervised Local Model Synthesis for Explainable Prediction [15.832975722301011]
We propose a novel method to enhance explainability with minimal accuracy loss.
We have developed novel methods for estimating nodes by leveraging AI techniques.
Our findings highlight the critical role that statistical methodologies can play in advancing explainable AI.
arXiv Detail & Related papers (2024-06-16T14:43:01Z)
- Selecting Interpretability Techniques for Healthcare Machine Learning models [69.65384453064829]
In healthcare, there is a push to employ interpretable algorithms that assist healthcare professionals in several decision scenarios.
We review a selection of eight algorithms, both post-hoc and model-based, that can be used for such purposes.
arXiv Detail & Related papers (2024-06-14T17:49:04Z)
- Modeling Boundedly Rational Agents with Latent Inference Budgets [56.24971011281947]
We introduce a latent inference budget model (L-IBM) that models agents' computational constraints explicitly.
L-IBMs make it possible to learn agent models using data from diverse populations of suboptimal actors.
We show that L-IBMs match or outperform Boltzmann models of decision-making under uncertainty.
arXiv Detail & Related papers (2023-12-07T03:55:51Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Deep Grey-Box Modeling With Adaptive Data-Driven Models Toward Trustworthy Estimation of Theory-Driven Models [88.63781315038824]
We present a framework that enables us to analyze a regularizer's behavior empirically with a slight change in the neural net's architecture and the training objective.
arXiv Detail & Related papers (2022-10-24T10:42:26Z)
- GAM(e) changer or not? An evaluation of interpretable machine learning models based on additive model constraints [5.783415024516947]
This paper investigates a series of intrinsically interpretable machine learning models.
We evaluate the prediction qualities of five GAMs as compared to six traditional ML models.
arXiv Detail & Related papers (2022-04-19T20:37:31Z)
- Recurrence-Aware Long-Term Cognitive Network for Explainable Pattern Classification [0.0]
We propose an LTCN-based model for interpretable pattern classification of structured data.
Our method brings its own mechanism for providing explanations by quantifying the relevance of each feature in the decision process.
Our interpretable model obtains competitive performance when compared to the state-of-the-art white and black boxes.
arXiv Detail & Related papers (2021-07-07T18:14:50Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix's applicability is confirmed via different examples, showing how it can be used in practice to promote RF model interpretability (a toy sketch of this rules-by-features matrix appears after this list).
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
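As a rough illustration of the ExMatrix entry above, the sketch below (an assumption-laden toy, not the ExMatrix tool itself) extracts the leaf rules of a tiny random forest and arranges them as a table with one row per rule, one column per feature, and the rule's predicates on that feature in each cell; the dataset, scikit-learn/pandas usage, and all variable names are assumptions.

```python
# Toy illustration of a rules-by-features matrix in the spirit of ExMatrix
# (not the ExMatrix tool itself). Rows are rules, columns are features,
# and each cell holds the rule's predicates on that feature.
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
X, y, names = data.data, data.target, list(data.feature_names)
rf = RandomForestClassifier(n_estimators=3, max_depth=2, random_state=0).fit(X, y)

rows = []
for t_idx, est in enumerate(rf.estimators_):
    tree = est.tree_

    def walk(node, preds):
        if tree.children_left[node] == -1:  # leaf: one rule per root-to-leaf path
            cells = {name: [] for name in names}
            for feat, op, thr in preds:
                cells[names[feat]].append(f"{op} {thr:.2f}")
            rows.append({"rule": f"tree{t_idx}/leaf{node}",
                         **{k: " and ".join(v) for k, v in cells.items()},
                         "class": int(np.argmax(tree.value[node]))})
        else:
            f_, thr = tree.feature[node], tree.threshold[node]
            walk(tree.children_left[node], preds + [(f_, "<=", thr)])
            walk(tree.children_right[node], preds + [(f_, ">", thr)])

    walk(0, [])

exmatrix = pd.DataFrame(rows).set_index("rule")
print(exmatrix.to_string())
```

The actual ExMatrix work renders such a matrix graphically and supports both global and local (per-instance) views, which a plain table like this does not attempt.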
This list is automatically generated from the titles and abstracts of the papers on this site.