GAM(e) changer or not? An evaluation of interpretable machine learning
models based on additive model constraints
- URL: http://arxiv.org/abs/2204.09123v1
- Date: Tue, 19 Apr 2022 20:37:31 GMT
- Title: GAM(e) changer or not? An evaluation of interpretable machine learning
models based on additive model constraints
- Authors: Patrick Zschech, Sven Weinzierl, Nico Hambauer, Sandra Zilker, Mathias
Kraus
- Abstract summary: This paper investigates a series of intrinsically interpretable machine learning models.
We evaluate the prediction qualities of five GAMs as compared to six traditional ML models.
- Score: 5.783415024516947
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The number of information systems (IS) studies dealing with explainable
artificial intelligence (XAI) is currently exploding as the field demands more
transparency about the internal decision logic of machine learning (ML) models.
However, most techniques subsumed under XAI provide post-hoc analytical
explanations, which have to be considered with caution as they only use
approximations of the underlying ML model. Therefore, our paper investigates a
series of intrinsically interpretable ML models and discusses their suitability
for the IS community. More specifically, our focus is on advanced extensions of
generalized additive models (GAM) in which predictors are modeled independently
in a non-linear way to generate shape functions that can capture arbitrary
patterns but remain fully interpretable. In our study, we evaluate the
prediction qualities of five GAMs as compared to six traditional ML models and
assess their visual outputs for model interpretability. On this basis, we
investigate their merits and limitations and derive design implications for
further improvements.
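To make the additive structure concrete: a GAM models the link-transformed prediction as a sum of per-feature shape functions, g(E[y]) = b0 + f1(x1) + ... + fp(xp), and each shape function can be inspected on its own. The snippet below is a minimal sketch of this idea using the open-source pygam package on a standard scikit-learn dataset; the library, dataset, and term choices are illustrative assumptions, not the specific GAM variants evaluated in the paper.

```python
# Minimal sketch (not the paper's code): fit a GAM with one smooth term per
# predictor and plot the resulting shape functions f_i(x_i).
import matplotlib.pyplot as plt
from pygam import LogisticGAM, s
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
X, y = data.data[:, :4], data.target  # first four predictors, for illustration

# One spline term per predictor -> additive, non-linear, intrinsically interpretable.
gam = LogisticGAM(s(0) + s(1) + s(2) + s(3)).fit(X, y)

# Each shape function can be visualized on its own axis.
fig, axes = plt.subplots(1, 4, figsize=(16, 3))
for i, ax in enumerate(axes):
    XX = gam.generate_X_grid(term=i)
    ax.plot(XX[:, i], gam.partial_dependence(term=i, X=XX))
    ax.set_title(data.feature_names[i])
plt.tight_layout()
plt.show()
```

Plots of the fitted shape functions against their predictors are the kind of visual output whose interpretability the study assesses.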
Related papers
- Challenging the Performance-Interpretability Trade-off: An Evaluation of Interpretable Machine Learning Models [3.3595341706248876]
Generalized additive models (GAMs) offer promising properties for capturing complex, non-linear patterns while remaining fully interpretable.
This study examines the predictive performance of seven different GAMs in comparison to seven commonly used machine learning models based on a collection of twenty benchmark datasets.
arXiv Detail & Related papers (2024-09-22T12:58:52Z)
- Leveraging Model-based Trees as Interpretable Surrogate Models for Model Distillation [3.5437916561263694]
Surrogate models play a crucial role in retrospectively interpreting complex and powerful black box machine learning models.
This paper focuses on using model-based trees as surrogate models, which partition the feature space into interpretable regions via decision rules (a minimal surrogate-fitting sketch appears after this list).
Four model-based tree algorithms, namely SLIM, GUIDE, MOB, and CTree, are compared regarding their ability to generate such surrogate models.
arXiv Detail & Related papers (2023-10-04T19:06:52Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Scaling Vision-Language Models with Sparse Mixture of Experts [128.0882767889029]
We show that mixture-of-experts (MoE) techniques can achieve state-of-the-art performance on a range of benchmarks over dense models of equivalent computational cost.
Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-offs between compute and performance when scaling vision-language models.
arXiv Detail & Related papers (2023-03-13T16:00:31Z)
- AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models [1.8752655643513647]
XAI tools can increase a model's vulnerability to extraction attacks, which is a concern when model owners prefer to expose only black-box access.
We propose a novel retraining (learning) based model extraction attack framework against interpretable models under black-box settings.
We show that AUTOLYCUS is highly effective, requiring significantly fewer queries compared to state-of-the-art attacks.
arXiv Detail & Related papers (2023-02-04T13:23:39Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview over techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix applicability is confirmed via different examples, showing how it can be used in practice to promote the interpretability of RF models (a rough sketch of the rule-matrix idea appears after this list).
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
- Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
arXiv Detail & Related papers (2020-05-06T01:51:30Z)
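Regarding the surrogate-tree entry above ("Leveraging Model-based Trees as Interpretable Surrogate Models for Model Distillation"): the core idea is to distil a black-box model into a shallow tree whose regions carry small, readable models. The sketch below loosely emulates this with scikit-learn, pairing a depth-limited tree with one linear model per leaf; it is an assumption-laden stand-in, not the SLIM, GUIDE, MOB, or CTree implementations compared in that paper.

```python
# Rough sketch (assumptions, not the paper's code): distil a black box into a
# shallow tree, then fit a small linear model inside each leaf region.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=2000, n_features=5, noise=10.0, random_state=0)

# 1) The black box we want to interpret retrospectively.
black_box = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
y_bb = black_box.predict(X)  # the surrogate is trained on these outputs

# 2) A shallow tree partitions the feature space into interpretable regions.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y_bb)

# 3) Model-based flavour: one linear model per leaf instead of a constant.
leaf_ids = surrogate.apply(X)
leaf_models = {leaf: LinearRegression().fit(X[leaf_ids == leaf], y_bb[leaf_ids == leaf])
               for leaf in np.unique(leaf_ids)}

def surrogate_predict(X_new):
    """Route each sample to its leaf and apply that leaf's linear model."""
    leaves = surrogate.apply(X_new)
    preds = np.empty(len(X_new))
    for leaf, model in leaf_models.items():
        mask = leaves == leaf
        if mask.any():
            preds[mask] = model.predict(X_new[mask])
    return preds

# Fidelity: how closely does the interpretable surrogate track the black box?
fidelity = np.corrcoef(surrogate_predict(X), y_bb)[0, 1]
print(f"correlation with black-box predictions: {fidelity:.3f}")
```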
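Similarly, for the ExMatrix entry: the rows-are-rules, columns-are-features, cells-are-predicates metaphor can be approximated by enumerating root-to-leaf rules of a small random forest and tabulating their split conditions. The dataset, forest size, and tabular rendering below are illustrative assumptions; the actual ExMatrix tool provides an interactive matrix visualization rather than a printed table.

```python
# Rough sketch of the ExMatrix metaphor: rows = rules, columns = features,
# cells = predicates, loosely reimplemented with scikit-learn and pandas.
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
rf = RandomForestClassifier(n_estimators=3, max_depth=2, random_state=0)
rf.fit(data.data, data.target)

rows = []
for est in rf.estimators_:
    tree = est.tree_

    def walk(node, predicates):
        if tree.children_left[node] == -1:  # leaf: emit one rule
            label = data.target_names[np.argmax(tree.value[node])]
            rows.append({**predicates, "=> class": label})
            return
        feat = data.feature_names[tree.feature[node]]
        thr = tree.threshold[node]
        # (If a feature repeats on one path, this simple sketch keeps only the
        # deepest predicate for it.)
        walk(tree.children_left[node], {**predicates, feat: f"<= {thr:.2f}"})
        walk(tree.children_right[node], {**predicates, feat: f"> {thr:.2f}"})

    walk(0, {})

# One row per decision rule; empty cells mean the feature is unused by that rule.
print(pd.DataFrame(rows).fillna(""))
```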