Pyreal: A Framework for Interpretable ML Explanations
- URL: http://arxiv.org/abs/2312.13084v1
- Date: Wed, 20 Dec 2023 15:04:52 GMT
- Title: Pyreal: A Framework for Interpretable ML Explanations
- Authors: Alexandra Zytek, Wei-En Wang, Dongyu Liu, Laure Berti-Equille, Kalyan
Veeramachaneni
- Abstract summary: Pyreal is a system for generating a variety of interpretable machine learning explanations.
Pyreal converts data and explanations between the feature spaces expected by the model, relevant explanation algorithms, and human users.
Our studies demonstrate that Pyreal generates more useful explanations than existing systems.
- Score: 51.14710806705126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Users in many domains use machine learning (ML) predictions to help them make
decisions. Effective ML-based decision-making often requires explanations of ML
models and their predictions. While there are many algorithms that explain
models, generating explanations in a format that is comprehensible and useful
to decision-makers is a nontrivial task that can require extensive development
overhead. We developed Pyreal, a highly extensible system with a corresponding
Python implementation for generating a variety of interpretable ML
explanations. Pyreal converts data and explanations between the feature spaces
expected by the model, relevant explanation algorithms, and human users,
allowing users to generate interpretable explanations in a low-code manner. Our
studies demonstrate that Pyreal generates more useful explanations than
existing systems while remaining both easy-to-use and efficient.
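The abstract's central mechanism, translating explanations from the feature space the model consumes back into the space users understand, can be sketched minimally. The function and data below are hypothetical illustrations of the idea, not Pyreal's actual API: contributions computed over one-hot-encoded model features are aggregated back onto the original user-facing features.

```python
from collections import defaultdict

def aggregate_contributions(contributions, onehot_map):
    """Map per-model-feature contributions back to user-level features.

    contributions: {model_feature: contribution}, as an explainer might emit.
    onehot_map: {model_feature: original_feature}, describing the encoding.
    """
    user_level = defaultdict(float)
    for feature, value in contributions.items():
        # Columns produced by one-hot encoding collapse onto their source
        # feature; untransformed columns map to themselves.
        user_level[onehot_map.get(feature, feature)] += value
    return dict(user_level)

# Hypothetical SHAP-style output over an encoded feature space.
raw = {"city=NYC": 0.5, "city=LA": -0.25, "age": 0.125}
mapping = {"city=NYC": "city", "city=LA": "city"}
print(aggregate_contributions(raw, mapping))  # {'city': 0.25, 'age': 0.125}
```

A real pipeline would also invert value transformations (e.g. undo standardization) so that displayed feature values, not just contributions, are in the user's terms.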
Related papers
- Distance-Restricted Explanations: Theoretical Underpinnings & Efficient Implementation [19.22391463965126]
Some uses of machine learning (ML) involve high-stakes and safety-critical applications.
This paper investigates novel algorithms for scaling up the performance of logic-based explainers.
arXiv Detail & Related papers (2024-05-14T03:42:33Z)
- LLMs for XAI: Future Directions for Explaining Explanations [50.87311607612179]
We focus on refining explanations computed using existing XAI algorithms.
Initial experiments and user study suggest that LLMs offer a promising way to enhance the interpretability and usability of XAI.
arXiv Detail & Related papers (2024-05-09T19:17:47Z)
- Evaluating and Explaining Large Language Models for Code Using Syntactic Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
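The token-to-AST alignment that ASTxplainer automates can be approximated, in much simplified form, with Python's standard ast module: given a token's source position, find the smallest syntax node whose span covers it. This is an illustrative assumption-laden stand-in, not ASTxplainer's actual method.

```python
import ast

def smallest_enclosing_node(source, line, col):
    """Return the smallest AST node whose source span covers (line, col),
    a simplified stand-in for aligning a predicted token's position with
    the syntax construct it belongs to."""
    pos = (line, col)
    best = best_start = best_end = None
    for node in ast.walk(ast.parse(source)):
        if getattr(node, "lineno", None) is None:
            continue  # Module, contexts, etc. carry no position info.
        start = (node.lineno, node.col_offset)
        end = (node.end_lineno, node.end_col_offset)
        if start <= pos < end:
            # ast.walk visits parents before children, so any later
            # covering node is nested inside the current best.
            if best is None or (start >= best_start and end <= best_end):
                best, best_start, best_end = node, start, end
    return best

node = smallest_enclosing_node("x = foo(1 + 2)", 1, 8)
print(type(node).__name__)  # Constant: column 8 is the literal 1
```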
arXiv Detail & Related papers (2023-08-07T18:50:57Z)
- Interpretability at Scale: Identifying Causal Mechanisms in Alpaca [62.65877150123775]
We use Boundless DAS to efficiently search for interpretable causal structure in large language models while they follow instructions.
Our findings mark a first step toward faithfully understanding the inner workings of ever-larger, widely deployed language models.
arXiv Detail & Related papers (2023-05-15T17:15:40Z)
- Logic-Based Explainability in Machine Learning [0.0]
The operation of the most successful Machine Learning models is incomprehensible to human decision makers.
In recent years, there have been efforts on devising approaches for explaining ML models.
This paper overviews the ongoing research efforts on computing rigorous model-based explanations of ML models.
arXiv Detail & Related papers (2022-10-24T13:43:07Z)
- Local Interpretable Model Agnostic Shap Explanations for machine learning models [0.0]
We propose a methodology that we define as Local Interpretable Model Agnostic Shap Explanations (LIMASE)
This technique uses Shapley values under the LIME paradigm to achieve the following: (a) explain the prediction of any model by fitting a locally faithful, interpretable decision tree model, on which the Tree Explainer is used to calculate the Shapley values and give visually interpretable explanations.
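LIMASE as summarized relies on shap's Tree Explainer over a local surrogate tree. As a self-contained sketch of the Shapley computation such methods build on, the following enumerates all feature coalitions exactly, which is tractable only for the handful of features a local surrogate uses; the surrogate function and baseline here are hypothetical.

```python
from itertools import combinations
from math import factorial

def exact_shapley(predict, instance, baseline):
    """Exact Shapley values of `predict` at `instance` by enumerating
    all feature coalitions; features outside a coalition take their
    `baseline` value. Exponential in feature count, so only suitable
    for the low-dimensional regime of a local surrogate."""
    n = len(instance)

    def value(subset):
        # Evaluate the model with only `subset` features from the instance.
        x = [instance[j] if j in subset else baseline[j] for j in range(n)]
        return predict(x)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for combo in combinations(others, size):
                phi += weight * (value(set(combo) | {i}) - value(set(combo)))
        phis.append(phi)
    return phis

# Hypothetical local surrogate: a shallow tree flattened to a rule.
surrogate = lambda x: 2.0 * x[0] + (1.0 if x[1] > 0 else 0.0)
print(exact_shapley(surrogate, [1.0, 1.0], [0.0, 0.0]))  # [2.0, 1.0]
```

Note the efficiency axiom holds: the contributions sum to f(instance) - f(baseline) = 3.0.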
arXiv Detail & Related papers (2022-10-10T10:07:27Z)
- OmniXAI: A Library for Explainable AI [98.07381528393245]
We introduce OmniXAI, an open-source Python library for eXplainable AI (XAI).
It offers omni-way explainable AI capabilities and various interpretable machine learning techniques.
For practitioners, the library provides an easy-to-use unified interface to generate the explanations for their applications.
arXiv Detail & Related papers (2022-06-01T11:35:37Z)
- Foundations of Symbolic Languages for Model Interpretability [2.3361634876233817]
We study the computational complexity of FOIL queries over two classes of ML models often deemed to be easily interpretable.
We present a prototype implementation of FOIL wrapped in a high-level declarative language.
arXiv Detail & Related papers (2021-10-05T21:56:52Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction.
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
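The conditional-mutual-information criterion in the last entry can be computed directly from an empirical joint distribution. A minimal sketch, assuming discrete variables E (explanation), Y (prediction), and U (user knowledge); the variable names and toy data are illustrative, not the paper's:

```python
from collections import Counter
from math import log2

def conditional_mutual_information(samples):
    """I(E; Y | U) in bits from joint samples of (e, y, u) triples:
    I(E; Y | U) = sum over (e, y, u) of
        p(e, y, u) * log2( p(u) * p(e, y, u) / (p(e, u) * p(y, u)) ).
    """
    n = len(samples)
    p_eyu = Counter(samples)
    p_eu = Counter((e, u) for e, _, u in samples)
    p_yu = Counter((y, u) for _, y, u in samples)
    p_u = Counter(u for _, _, u in samples)
    total = 0.0
    for (e, y, u), count in p_eyu.items():
        joint = count / n
        total += joint * log2(p_u[u] / n * joint /
                              (p_eu[(e, u)] / n * p_yu[(y, u)] / n))
    return total

# If E deterministically mirrors a fair binary prediction Y and U is
# constant, the explanation carries H(Y) = 1 bit about the prediction.
samples = [(0, 0, 0), (1, 1, 0), (0, 0, 0), (1, 1, 0)]
print(conditional_mutual_information(samples))  # 1.0
```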
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.