OmniXAI: A Library for Explainable AI
- URL: http://arxiv.org/abs/2206.01612v1
- Date: Wed, 1 Jun 2022 11:35:37 GMT
- Title: OmniXAI: A Library for Explainable AI
- Authors: Wenzhuo Yang and Hung Le and Silvio Savarese and Steven C.H. Hoi
- Abstract summary: We introduce OmniXAI, an open-source Python library for eXplainable AI (XAI).
It offers omni-way explainable AI capabilities and a variety of interpretable machine learning techniques.
For practitioners, the library provides an easy-to-use unified interface to generate explanations for their applications.
- Score: 98.07381528393245
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce OmniXAI, an open-source Python library for eXplainable AI (XAI)
that offers omni-way explainable AI capabilities and a variety of interpretable
machine learning techniques to address the pain points of understanding and
interpreting the decisions made by machine learning (ML) models in practice. OmniXAI
aims to be a one-stop comprehensive library that makes explainable AI easy for
data scientists, ML researchers, and practitioners who need explanations for
various types of data, models, and explanation methods at different stages of the ML
process (data exploration, feature engineering, model development, evaluation,
decision-making, etc.). In particular, our library includes a rich family of
explanation methods integrated into a unified interface, which supports multiple
data types (tabular data, images, text, time series), multiple types of ML
models (traditional ML in Scikit-learn and deep learning models in
PyTorch/TensorFlow), and a range of diverse explanation methods, both
"model-specific" and "model-agnostic" (such as feature-attribution
explanation, counterfactual explanation, and gradient-based explanation). For
practitioners, the library provides an easy-to-use unified interface for
generating explanations for their applications with only a few lines of
code, as well as a GUI dashboard for visualizing different explanations to gain
more insight into model decisions. In this technical report, we present OmniXAI's
design principles, system architecture, and major functionalities, and also
demonstrate several example use cases across different types of data, tasks,
and models.
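The abstract highlights model-agnostic feature-attribution explanations as one of the supported method families. As a minimal illustration of that idea only, here is a plain-Python leave-one-out (occlusion) attribution sketch; it is not OmniXAI's actual interface, and the toy linear `predict` function is invented for this example:

```python
def predict(x):
    # Toy stand-in for a black-box model: a fixed linear scorer.
    # In practice this could be any model (Scikit-learn, PyTorch, TensorFlow).
    weights = [0.8, -0.5, 0.1]
    return sum(w * v for w, v in zip(weights, x))

def leave_one_out_attribution(predict_fn, x, baseline=None):
    """Model-agnostic feature attribution: replace each feature with a
    baseline value and record how much the model's output changes."""
    if baseline is None:
        baseline = [0.0] * len(x)
    full = predict_fn(x)
    attributions = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline[i]
        attributions.append(full - predict_fn(occluded))
    return attributions

scores = leave_one_out_attribution(predict, [1.0, 2.0, 3.0])
print(scores)  # for this linear model, each entry is close to weight_i * x_i
```

A unified library interface wraps methods like this behind a common call signature so the same workflow applies across data types and models; the sketch above shows only the underlying attribution logic.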
Related papers
- Deep Fast Machine Learning Utils: A Python Library for Streamlined Machine Learning Prototyping [0.0]
The Deep Fast Machine Learning Utils (DFMLU) library provides tools designed to automate and enhance aspects of machine learning processes.
DFMLU offers functionalities that support model development and data handling.
This manuscript presents an overview of DFMLU's functionalities, providing Python examples for each tool.
arXiv Detail & Related papers (2024-09-14T21:39:17Z)
- LLMs for XAI: Future Directions for Explaining Explanations [50.87311607612179]
We focus on refining explanations computed using existing XAI algorithms.
Initial experiments and a user study suggest that LLMs offer a promising way to enhance the interpretability and usability of XAI.
arXiv Detail & Related papers (2024-05-09T19:17:47Z)
- Pyreal: A Framework for Interpretable ML Explanations [51.14710806705126]
Pyreal is a system for generating a variety of interpretable machine learning explanations.
Pyreal converts data and explanations between the feature spaces expected by the model, relevant explanation algorithms, and human users.
Our studies demonstrate that Pyreal generates more useful explanations than existing systems.
arXiv Detail & Related papers (2023-12-20T15:04:52Z)
- FIND: A Function Description Benchmark for Evaluating Interpretability Methods [86.80718559904854]
This paper introduces FIND (Function INterpretation and Description), a benchmark suite for evaluating automated interpretability methods.
FIND contains functions that resemble components of trained neural networks, and accompanying descriptions of the kind we seek to generate.
We evaluate methods that use pretrained language models to produce descriptions of function behavior in natural language and code.
arXiv Detail & Related papers (2023-09-07T17:47:26Z)
- Declarative Reasoning on Explanations Using Constraint Logic Programming [12.039469573641217]
REASONX is an explanation method based on Constraint Logic Programming (CLP).
We present here the architecture of REASONX, which consists of a Python layer, closer to the user, and a CLP layer.
REASONX's core execution engine is a Prolog meta-program with declarative semantics in terms of logic theories.
arXiv Detail & Related papers (2023-09-01T12:31:39Z)
- Xplique: A Deep Learning Explainability Toolbox [5.067377019157635]
We have developed Xplique: a software library for explainability.
It includes representative explainability methods as well as associated evaluation metrics.
The code is licensed under the MIT license and is freely available.
arXiv Detail & Related papers (2022-06-09T10:16:07Z)
- Panoramic Learning with A Standardized Machine Learning Formalism [116.34627789412102]
This paper presents a standardized equation of the learning objective that offers a unifying understanding of diverse ML algorithms.
It also provides guidance for mechanic design of new ML solutions, and serves as a promising vehicle towards panoramic learning with all experiences.
arXiv Detail & Related papers (2021-08-17T17:44:38Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and the prediction.
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
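The conditional-mutual-information measure summarized in the last entry can be computed directly for a toy joint distribution. The probabilities below are invented purely for illustration and are not taken from the paper:

```python
import math

# Toy joint distribution p(u, e, y) over binary "user knowledge" u,
# "explanation" e, and "prediction" y. Invented for illustration only.
joint = {
    (0, 0, 0): 0.20, (0, 0, 1): 0.05,
    (0, 1, 0): 0.05, (0, 1, 1): 0.20,
    (1, 0, 0): 0.05, (1, 0, 1): 0.20,
    (1, 1, 0): 0.20, (1, 1, 1): 0.05,
}

def conditional_mutual_information(p):
    """I(E; Y | U) = sum_{u,e,y} p(u,e,y) log2[ p(e,y|u) / (p(e|u) p(y|u)) ],
    using the identity p(e,y|u)/(p(e|u)p(y|u)) = p(u,e,y)p(u)/(p(u,e)p(u,y))."""
    def marginal(indices):
        # Sum the joint over the coordinates not in `indices`.
        out = {}
        for key, prob in p.items():
            sub = tuple(key[i] for i in indices)
            out[sub] = out.get(sub, 0.0) + prob
        return out
    p_u = marginal((0,))
    p_ue = marginal((0, 1))
    p_uy = marginal((0, 2))
    total = 0.0
    for (u, e, y), prob in p.items():
        if prob > 0:
            total += prob * math.log2(
                prob * p_u[(u,)] / (p_ue[(u, e)] * p_uy[(u, y)]))
    return total

print(conditional_mutual_information(joint))  # about 0.278 bits
```

A larger value means the explanation carries more information about the prediction once the user's background knowledge is accounted for, which is the intuition behind using this quantity to score explanations.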