Pedagogical Rule Extraction for Learning Interpretable Models
- URL: http://arxiv.org/abs/2112.13285v1
- Date: Sat, 25 Dec 2021 20:54:53 GMT
- Title: Pedagogical Rule Extraction for Learning Interpretable Models
- Authors: Vadim Arzamasov, Benjamin Jochum, Klemens Böhm
- Abstract summary: We propose a framework dubbed PRELIM to learn better rules from small data.
It augments the data using statistical models and employs the augmented data to learn a rule-based model.
In our experiments, we identified PRELIM configurations that outperform the state of the art.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Machine-learning models are ubiquitous. In some domains, for instance, in
medicine, the models' predictions must be interpretable. Decision trees,
classification rules, and subgroup discovery are three broad categories of
supervised machine-learning models presenting knowledge in the form of
interpretable rules. The accuracy of these models learned from small datasets
is usually low. Obtaining larger datasets is often hard or impossible. We
propose a framework dubbed PRELIM to learn better rules from small data. It
augments the data using statistical models and employs the augmented data to
learn a rule-based model. In our extensive experiments, we identified PRELIM
configurations that outperform the state of the art.
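To make the pipeline concrete, here is a minimal sketch of the pedagogical idea, assuming scikit-learn, a kernel density estimate as the statistical model, a random forest as the labelling teacher, and a shallow decision tree as the rule-based learner; the paper's actual PRELIM configurations may differ.

```python
# Minimal sketch of pedagogical rule extraction on augmented data.
# Assumptions (not from the paper): a kernel density estimate as the
# statistical model, a random forest as the labelling "teacher", and a
# shallow decision tree as the interpretable rule-based learner.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KernelDensity
from sklearn.tree import DecisionTreeClassifier, export_text

def prelim_sketch(X_small, y_small, n_synthetic=1000, seed=0):
    # 1. Fit a statistical model to the small dataset and sample from it.
    kde = KernelDensity(bandwidth=0.5).fit(X_small)
    X_aug = kde.sample(n_synthetic, random_state=seed)
    # 2. Label the synthetic points with a teacher trained on the real
    #    data (the "pedagogical" step: the rule learner sees teacher labels).
    teacher = RandomForestClassifier(n_estimators=100, random_state=seed)
    teacher.fit(X_small, y_small)
    y_aug = teacher.predict(X_aug)
    # 3. Learn an interpretable rule-based model on real + synthetic data.
    student = DecisionTreeClassifier(max_depth=3, random_state=seed)
    student.fit(np.vstack([X_small, X_aug]),
                np.concatenate([y_small, y_aug]))
    return student

# rules = prelim_sketch(X, y); print(export_text(rules))
```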
Related papers
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
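As a concrete illustration of the reweighing strategy mentioned in this entry, here is a minimal sketch that upweights under-represented (group, label) pairs; the function and variable names are hypothetical, and this is not the paper's implementation.

```python
# Reweighing sketch: upweight under-represented (group, label) pairs so
# the fitted model conforms better to minority populations.
# Illustration of the general strategy, not the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweigh(groups, labels):
    # Weight each sample inversely to the frequency of its (group, label) pair.
    pairs = list(zip(groups, labels))
    counts = {p: pairs.count(p) for p in set(pairs)}
    return np.array([len(pairs) / (len(counts) * counts[p]) for p in pairs])

# Works with any estimator that accepts sample_weight:
# w = reweigh(sensitive_attr, y)
# model = LogisticRegression().fit(X, y, sample_weight=w)
```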
- Variation of Gender Biases in Visual Recognition Models Before and After Finetuning [29.55318393877906]
We introduce a framework to measure how biases change before and after fine-tuning a large-scale visual recognition model for a downstream task.
We find that supervised models trained on datasets such as ImageNet-21k are more likely to retain their pretraining biases.
We also find that models finetuned on larger-scale datasets are more likely to introduce new biased associations.
arXiv Detail & Related papers (2023-03-14T03:42:47Z)
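A toy version of such a before/after bias measurement, assuming bias is summarized as the gap in positive prediction rates between a group and its complement; the paper's framework is richer than this.

```python
# Toy bias probe: compare a model's positive-prediction-rate gap between
# a group and its complement before and after finetuning. A stand-in for
# the paper's richer association measurements.
import numpy as np

def rate_gap(predictions, group_mask):
    # predictions: 0/1 numpy array; group_mask: boolean numpy array.
    return predictions[group_mask].mean() - predictions[~group_mask].mean()

# gap_before = rate_gap(pretrained_preds, is_group_a)
# gap_after = rate_gap(finetuned_preds, is_group_a)
# print("bias shift:", gap_after - gap_before)
```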
- Deep Explainable Learning with Graph Based Data Assessing and Rule Reasoning [4.369058206183195]
We propose an end-to-end deep explainable learning approach that combines the noise-handling strength of deep models with the interpretability of expert rule-based reasoning.
The proposed method is tested in an industry production system, showing comparable prediction accuracy, much higher generalization stability and better interpretability.
arXiv Detail & Related papers (2022-11-09T05:58:56Z)
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
Instead, we are given access to a set of expert models and their predictions, alongside some limited information about the datasets used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
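One way to realize instance-wise weighting, assuming each expert publishes a Gaussian summary (mean and covariance) of its training inputs and returns numeric predictions; this interface is hypothetical, not the paper's.

```python
# Instance-wise unsupervised ensemble sketch: weight each expert's
# prediction by how plausible the test point is under a Gaussian summary
# of that expert's training inputs.
import numpy as np
from scipy.stats import multivariate_normal

def combine(x, experts):
    # experts: list of (predict_fn, train_mean, train_cov) triples,
    # where predict_fn returns a numeric prediction per row.
    likes = np.array([multivariate_normal.pdf(x, mean=m, cov=c)
                      for _, m, c in experts])
    weights = likes / likes.sum()
    preds = np.array([f(x.reshape(1, -1))[0] for f, _, _ in experts])
    return weights @ preds
```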
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
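A minimal sketch of metric estimation with a Bayesian surrogate, assuming the surrogate exposes a scikit-learn-style predict_proba and classes are indexed 0..K-1; the paper's BNN and active-testing loop are not reproduced here.

```python
# Metric-estimation sketch: approximate the accuracy of a model-under-test
# on an unlabelled pool via a Bayesian surrogate's predictive probabilities.
# Any classifier exposing predict_proba stands in for the BNN here.
import numpy as np

def estimate_accuracy(model_under_test, surrogate, X_pool):
    probs = surrogate.predict_proba(X_pool)   # surrogate's p(y | x)
    preds = model_under_test.predict(X_pool)  # integer classes 0..K-1
    # Expected accuracy: probability mass the surrogate assigns to the
    # label the model-under-test predicts, averaged over the pool.
    return probs[np.arange(len(preds)), preds].mean()
```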
- Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
arXiv Detail & Related papers (2021-01-21T01:46:36Z)
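A toy rendering of the distillation idea: fit a one-feature piecewise-linear curve (with evenly spaced knots standing in for the paper's algorithm) and emit it as plain, human-readable Python source.

```python
# Toy distillation: fit a one-feature piecewise-linear curve and emit it
# as human-readable Python. Evenly spaced knots stand in for the paper's
# curve-fitting algorithm.
import numpy as np

def distill_pwl(x, y, n_knots=4):
    order = np.argsort(x)
    knots = np.linspace(x.min(), x.max(), n_knots)
    values = np.interp(knots, x[order], y[order])
    lines = ["def predict(x):"]
    for k0, v0, k1, v1 in zip(knots, values, knots[1:], values[1:]):
        slope = (v1 - v0) / (k1 - k0)
        lines.append(f"    if x <= {k1:.4g}:"
                     f" return {v0:.4g} + {slope:.4g} * (x - {k0:.4g})")
    lines.append(f"    return {values[-1]:.4g}")
    return "\n".join(lines)

# print(distill_pwl(x_train, y_train))  # paste the output anywhere Python runs
```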
- Field-wise Learning for Multi-field Categorical Data [27.100048708707593]
We propose a new method for learning with multi-field categorical data.
This lets a model be fitted to each category and thus better capture the underlying differences in the data.
Experimental results on two large-scale datasets show the superior performance of our model.
arXiv Detail & Related papers (2020-12-01T01:10:14Z)
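The gist of field-wise learning can be sketched as one model per category of a chosen field; the paper's shared structure and regularization across fields are omitted here, and the helper names are illustrative.

```python
# Field-wise learning gist: one model per category of a chosen field, so
# each category gets its own parameters. The paper's shared structure and
# regularization across fields are omitted.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_per_category(X, y, field):
    # field: array with one category id per row of X.
    return {c: LogisticRegression().fit(X[field == c], y[field == c])
            for c in np.unique(field)}

def predict_per_category(models, X, field):
    return np.array([models[c].predict(row.reshape(1, -1))[0]
                     for row, c in zip(X, field)])
```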
- Deducing neighborhoods of classes from a fitted model [68.8204255655161]
In this article, a new kind of interpretable machine-learning method is presented.
It can help to understand the partitioning of the feature space into predicted classes in a classification model using quantile shifts.
Real data points (or specific points of interest) are used, and the changes in the prediction after slightly raising or lowering specific features are observed.
arXiv Detail & Related papers (2020-09-11T16:35:53Z)
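A minimal reading of the quantile-shift probe described in this entry: move one feature of a real data point up or down by a small quantile step and record whether the predicted class changes. Parameter names are illustrative.

```python
# Quantile-shift probe: nudge one feature of a real data point up or down
# by a small quantile step and observe whether the predicted class flips.
import numpy as np

def quantile_shift_probe(model, X_ref, x, feature, step=0.05):
    q = (X_ref[:, feature] < x[feature]).mean()  # empirical quantile of x
    outcome = {}
    for sign in (-1, +1):
        x_new = x.copy()
        x_new[feature] = np.quantile(X_ref[:, feature],
                                     np.clip(q + sign * step, 0.0, 1.0))
        outcome[sign] = model.predict(x_new.reshape(1, -1))[0]
    return outcome  # predicted class after down-shift (-1) and up-shift (+1)
```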
- Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
The applicability of ExMatrix is confirmed via different examples, showing how it can be used in practice to promote the interpretability of RF models.
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
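A bare-bones take on the matrix metaphor: extract root-to-leaf rules from one tree of a random forest and tabulate rows as rules, columns as features, and cells as predicates; ExMatrix itself adds ordering, scaling, and visual encodings not shown here.

```python
# Bare-bones rules matrix: extract root-to-leaf rules from one tree of a
# random forest; rows are rules, columns are features, cells hold predicates.
from sklearn.ensemble import RandomForestClassifier

def rules_matrix(tree, n_features):
    t, rows = tree.tree_, []
    def walk(node, preds):
        if t.children_left[node] == -1:  # leaf: each root-to-leaf path is a rule
            row = [""] * n_features
            for f, op, thr in preds:
                row[f] = (row[f] + " & " if row[f] else "") + f"{op}{thr:.3g}"
            rows.append(row)
            return
        f, thr = t.feature[node], t.threshold[node]
        walk(t.children_left[node], preds + [(f, "<=", thr)])
        walk(t.children_right[node], preds + [(f, ">", thr)])
    walk(0, [])
    return rows  # render e.g. as a heatmap for the visual metaphor

# rf = RandomForestClassifier(n_estimators=10).fit(X, y)
# matrix = rules_matrix(rf.estimators_[0], X.shape[1])
```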