Metafeatures-based Rule-Extraction for Classifiers on Behavioral and Textual Data
- URL: http://arxiv.org/abs/2003.04792v3
- Date: Thu, 4 Mar 2021 12:15:48 GMT
- Title: Metafeatures-based Rule-Extraction for Classifiers on Behavioral and Textual Data
- Authors: Yanou Ramon, David Martens, Theodoros Evgeniou, Stiene Praet
- Abstract summary: Rule-extraction techniques have been proposed to combine the desired predictive accuracy of complex "black-box" models with global explainability.
We develop and test a rule-extraction methodology based on higher-level, less-sparse metafeatures.
A key finding of our analysis is that metafeatures-based explanations are better at mimicking the behavior of the black-box prediction model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models on behavioral and textual data can result in highly
accurate prediction models, but are often very difficult to interpret.
Rule-extraction techniques have been proposed to combine the desired predictive
accuracy of complex "black-box" models with global explainability. However,
rule-extraction in the context of high-dimensional, sparse data, where many
features are relevant to the predictions, can be challenging, as replacing the
black-box model by many rules leaves the user again with an incomprehensible
explanation. To address this problem, we develop and test a rule-extraction
methodology based on higher-level, less-sparse metafeatures. A key finding of
our analysis is that metafeatures-based explanations are better at mimicking
the behavior of the black-box prediction model, as measured by the fidelity of
explanations.
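The methodology can be illustrated with a minimal sketch. The specific components below are illustrative assumptions rather than the authors' exact pipeline: NMF-derived topics stand in for the metafeatures, a random forest for the black box, and a shallow decision tree for the rule learner. A black-box model is trained on the sparse, high-dimensional features; metafeatures group those features into a small number of dense dimensions; a rule learner is then fit on the metafeatures to reproduce the black-box predictions; and fidelity is measured as the agreement between the two.

```python
# Minimal sketch of metafeatures-based rule extraction (illustrative only).
# Assumptions: metafeatures come from NMF, the black box is a random forest,
# and the rule learner is a shallow decision tree; the paper's actual
# components may differ.
from sklearn.datasets import fetch_20newsgroups_vectorized
from sklearn.decomposition import NMF
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Sparse, high-dimensional textual data.
data = fetch_20newsgroups_vectorized(subset="train")
X, y = data.data, data.target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1. Black-box model on the original sparse features.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# 2. Metafeatures: group thousands of sparse features into a few dense topics.
nmf = NMF(n_components=20, random_state=0).fit(X_tr)
M_tr, M_te = nmf.transform(X_tr), nmf.transform(X_te)

# 3. Rule extraction: fit a small tree on metafeatures to mimic the black box.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(M_tr, black_box.predict(X_tr))

# 4. Fidelity: how often the extracted rules reproduce the black-box predictions.
fidelity = accuracy_score(black_box.predict(X_te), surrogate.predict(M_te))
print(f"fidelity on held-out data: {fidelity:.3f}")
print(export_text(surrogate))
```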
Related papers
- Interpretability in Symbolic Regression: a benchmark of Explanatory Methods using the Feynman data set [0.0]
Interpretability of machine learning models plays a role as important as model accuracy.
This paper proposes a benchmark scheme to evaluate explanatory methods for regression models.
Results have shown that Symbolic Regression models can be an interesting alternative to white-box and black-box models.
arXiv Detail & Related papers (2024-04-08T23:46:59Z)
- Supervised Feature Compression based on Counterfactual Analysis [3.2458225810390284]
This work aims to leverage Counterfactual Explanations to detect the important decision boundaries of a pre-trained black-box model.
Using the discretized dataset, an optimal Decision Tree can be trained that resembles the black-box model, but that is interpretable and compact.
arXiv Detail & Related papers (2022-11-17T21:16:14Z)
- Deep Explainable Learning with Graph Based Data Assessing and Rule Reasoning [4.369058206183195]
We propose an end-to-end deep explainable learning approach that combines the noise-handling advantage of deep models with the interpretability of expert rules.
The proposed method is tested in an industry production system, showing comparable prediction accuracy, much higher generalization stability and better interpretability.
arXiv Detail & Related papers (2022-11-09T05:58:56Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is increasingly important to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
arXiv Detail & Related papers (2021-01-21T01:46:36Z)
- Deducing neighborhoods of classes from a fitted model [68.8204255655161]
In this article a new kind of interpretable machine learning method is presented.
It can help to understand the partitioning of the feature space into predicted classes in a classification model using quantile shifts.
Basically, real data points (or specific points of interest) are used, and the changes in the prediction after slightly raising or lowering specific features are observed (see the sketch after this list).
arXiv Detail & Related papers (2020-09-11T16:35:53Z)
- Model extraction from counterfactual explanations [68.8204255655161]
We show how an adversary can leverage the information provided by counterfactual explanations to build high-fidelity and high-accuracy model extraction attacks.
Our attack enables the adversary to build a faithful copy of a target model by accessing its counterfactual explanations.
arXiv Detail & Related papers (2020-09-03T19:02:55Z)
- Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates (a toy construction of this layout appears after this list).
ExMatrix's applicability is confirmed via different examples, showing how it can be used in practice to promote the interpretability of RF models.
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
- An interpretable neural network model through piecewise linear approximation [7.196650216279683]
We propose a hybrid interpretable model that combines a piecewise linear component and a nonlinear component.
The first component describes the explicit feature contributions by piecewise linear approximation to increase the expressiveness of the model.
The other component uses a multi-layer perceptron to capture feature interactions and implicit nonlinearity, and to increase prediction performance.
arXiv Detail & Related papers (2020-01-20T14:32:11Z)
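For "Deducing neighborhoods of classes from a fitted model" above, the quantile-shift probing it describes can be sketched as follows. This is a minimal illustration under assumed details: the 10-percentile step, the random forest on the Iris data, and the helper name quantile_shift_probe are placeholders, not the paper's setup.

```python
# Minimal sketch of quantile-shift probing around a fitted classifier
# (illustrative assumptions: a 10-percentile step and a random forest;
# the paper's exact procedure may differ).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def quantile_shift_probe(model, X, x, feature, step=0.10):
    """Shift one feature of point x up/down by `step` quantiles and
    report the predicted class before and after each shift."""
    values = np.sort(X[:, feature])
    base_q = np.searchsorted(values, x[feature]) / len(values)
    probes = {}
    for direction in (-1, +1):
        q = np.clip(base_q + direction * step, 0.0, 1.0)
        x_shifted = x.copy()
        x_shifted[feature] = np.quantile(values, q)
        probes[direction] = model.predict(x_shifted.reshape(1, -1))[0]
    return model.predict(x.reshape(1, -1))[0], probes

# Probe a real data point along feature 2 (petal length).
original, shifted = quantile_shift_probe(model, X, X[10], feature=2)
print("original class:", original, "| classes after -/+ 10% quantile shift:", shifted)
```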
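For the Explainable Matrix (ExMatrix) entry above, the rules-by-features layout can be mimicked with a toy construction. The sketch below only builds the underlying data structure from one tree of a random forest using a generic path enumeration; it does not reproduce the actual ExMatrix visualization.

```python
# Toy construction of a rules-by-features matrix in the spirit of ExMatrix
# (only the underlying data structure, not the actual visualization tool;
# the traversal below is a generic decision-tree path enumeration).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
tree = rf.estimators_[0].tree_

def extract_rules(tree, node=0, path=None):
    """Enumerate root-to-leaf paths as lists of (feature, op, threshold)."""
    path = path or []
    if tree.children_left[node] == -1:  # leaf node
        return [path]
    f, t = tree.feature[node], tree.threshold[node]
    left = extract_rules(tree, tree.children_left[node], path + [(f, "<=", t)])
    right = extract_rules(tree, tree.children_right[node], path + [(f, ">", t)])
    return left + right

rules = extract_rules(tree)

# Matrix-like layout: rows are rules, columns are features, cells are predicates.
n_features = X.shape[1]
for i, rule in enumerate(rules):
    cells = ["" for _ in range(n_features)]
    for f, op, t in rule:
        cells[f] = (cells[f] + " & " if cells[f] else "") + f"{op}{t:.2f}"
    print(f"rule {i:2d} | " + " | ".join(c or "." for c in cells))
```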
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.