What and How of Machine Learning Transparency: Building Bespoke
Explainability Tools with Interoperable Algorithmic Components
- URL: http://arxiv.org/abs/2209.03813v1
- Date: Thu, 8 Sep 2022 13:33:25 GMT
- Title: What and How of Machine Learning Transparency: Building Bespoke
Explainability Tools with Interoperable Algorithmic Components
- Authors: Kacper Sokol and Alexander Hepburn and Raul Santos-Rodriguez and Peter
Flach
- Abstract summary: This paper introduces a collection of hands-on training materials for explaining data-driven predictive models.
These resources cover the three core building blocks of this technique: interpretable representation composition, data sampling and explanation generation.
- Score: 77.87794937143511
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainability techniques for data-driven predictive models based on
artificial intelligence and machine learning algorithms allow us to better
understand the operation of such systems and help to hold them accountable. New
transparency approaches are developed at breakneck speed, enabling us to peek
inside these black boxes and interpret their decisions. Many of these
techniques are introduced as monolithic tools, giving the impression of
one-size-fits-all and end-to-end algorithms with limited customisability.
Nevertheless, such approaches are often composed of multiple interchangeable
modules that need to be tuned to the problem at hand to produce meaningful
explanations. This paper introduces a collection of hands-on training materials
-- slides, video recordings and Jupyter Notebooks -- that provide guidance
through the process of building and evaluating bespoke modular surrogate
explainers for tabular data. These resources cover the three core building
blocks of this technique: interpretable representation composition, data
sampling and explanation generation.
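As a concrete illustration of how these three blocks compose, the following sketch assembles a LIME-style surrogate explainer for tabular data from interchangeable parts. It is a minimal sketch written for this summary, not the code from the accompanying notebooks; the component names (quartile_representation, gaussian_sampler, linear_explanation) are illustrative assumptions.

```python
# Minimal sketch of a modular surrogate explainer: each of the three building
# blocks (interpretable representation, data sampling, explanation generation)
# is a swappable function. Not the authors' implementation.
import numpy as np
from sklearn.linear_model import Ridge

def quartile_representation(X_train):
    """Interpretable representation: discretise each feature into quartile bins."""
    bins = np.percentile(X_train, [25, 50, 75], axis=0)  # shape (3, n_features)
    return lambda X: np.stack(
        [np.digitize(X[:, j], bins[:, j]) for j in range(X.shape[1])], axis=1)

def gaussian_sampler(x, scale, n_samples, rng):
    """Data sampling: perturb the explained instance with Gaussian noise."""
    return x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))

def linear_explanation(Z, y, weights):
    """Explanation generation: locally weighted linear surrogate model."""
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
    return surrogate.coef_  # feature importance in the interpretable space

def explain(black_box_predict, x, X_train, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    to_interpretable = quartile_representation(X_train)
    X_sampled = gaussian_sampler(x, X_train.std(axis=0), n_samples, rng)
    Z = to_interpretable(X_sampled)              # interpretable representation
    y = black_box_predict(X_sampled)             # black-box outputs (e.g. probabilities)
    d = np.linalg.norm(X_sampled - x, axis=1)
    weights = np.exp(-(d ** 2) / (2 * d.std() ** 2))  # locality kernel
    return linear_explanation(Z, y, weights)
```

Swapping any one function, say a different sampler or a sparse surrogate, yields a different bespoke explainer, which is the modularity the materials emphasise.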
Related papers
- Topological Methods in Machine Learning: A Tutorial for Practitioners [4.297070083645049]
Topological Machine Learning (TML) is an emerging field that leverages techniques from algebraic topology to analyze complex data structures.
This tutorial provides a comprehensive introduction to two key TML techniques, persistent homology and the Mapper algorithm.
To enhance accessibility, we adopt a data-centric approach, enabling readers to gain hands-on experience applying these techniques to relevant tasks.
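For a flavour of persistent homology, the sketch below computes 0-dimensional persistence (connected components in a Vietoris-Rips filtration) with a union-find; it is an illustrative toy, and a real analysis would use a library such as GUDHI or Ripser.

```python
# Minimal sketch of 0-dimensional persistent homology: components are born at
# scale 0 and die when the growing Vietoris-Rips filtration merges them.
import itertools
import numpy as np

def h0_persistence(points):
    """Return (birth, death) pairs for H0 of a Euclidean point cloud."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # Filtration edges, ordered by pairwise distance.
    edges = sorted(
        (float(np.linalg.norm(points[i] - points[j])), i, j)
        for i, j in itertools.combinations(range(n), 2))

    pairs = []
    for dist, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                       # two components merge: one dies here
            parent[ri] = rj
            pairs.append((0.0, dist))
    pairs.append((0.0, float("inf")))      # the final component never dies
    return pairs
```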
arXiv Detail & Related papers (2024-09-04T17:44:52Z)
- Adversarial Attacks on the Interpretation of Neuron Activation Maximization [70.5472799454224]
Activation-maximization approaches are used to interpret and analyze trained deep-learning models.
In this work, we consider the concept of an adversary manipulating a model for the purpose of deceiving the interpretation.
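Activation maximization itself is gradient ascent on the input; a minimal PyTorch sketch is given below. The model interface and unit index are placeholder assumptions, and the paper's contribution concerns attacking, not computing, such visualisations.

```python
# Minimal sketch of activation maximisation: optimise an input so that one
# chosen unit of a trained model fires as strongly as possible.
import torch

def maximise_activation(model, input_shape, unit, steps=200, lr=0.1):
    x = torch.randn(1, *input_shape, requires_grad=True)
    optimiser = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimiser.zero_grad()
        activation = model(x)[0, unit]   # activation of the unit being visualised
        (-activation).backward()         # gradient ascent via negated loss
        optimiser.step()
    return x.detach()                    # the "preferred stimulus" of the unit
```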
arXiv Detail & Related papers (2023-06-12T19:54:33Z)
- Learning with Explanation Constraints [91.23736536228485]
We provide a learning-theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
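One plausible instantiation of an explanation constraint, sketched under our own assumptions rather than the paper's exact framework, is a penalty that drives input gradients to zero on features annotators marked irrelevant:

```python
# Minimal sketch of training with an explanation constraint ("right for the
# right reasons" style): penalise gradients on features flagged as irrelevant.
# This is an assumed instantiation, not the paper's formal framework.
import torch

def constrained_loss(model, x, y, irrelevant_mask, lam=1.0):
    x = x.clone().requires_grad_(True)
    prediction_loss = torch.nn.functional.cross_entropy(model(x), y)
    grads, = torch.autograd.grad(prediction_loss, x, create_graph=True)
    constraint = (grads * irrelevant_mask).pow(2).sum()  # want zero gradient here
    return prediction_loss + lam * constraint
```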
arXiv Detail & Related papers (2023-03-25T15:06:47Z)
- Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations [11.667611038005552]
We take a step back from sophisticated predictive algorithms and look into the explainability of simple decision-making models.
We aim to assess how people perceive the comprehensibility of their different representations.
This allows us to capture how diverse stakeholders judge intelligibility of fundamental concepts that more elaborate artificial intelligence explanations are built from.
arXiv Detail & Related papers (2023-03-02T03:15:35Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
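A counterfactual explanation of a decision pipeline can be pictured as the smallest change to the context that alters the prescribed decision; the sketch below uses naive random search and is our illustration, not the paper's tailored methodology.

```python
# Minimal sketch of a counterfactual for a decision pipeline: find a nearby
# context for which `decide` (context vector -> discrete decision) prescribes
# a different action. All names are illustrative.
import numpy as np

def counterfactual(decide, context, step=0.1, max_radius=5.0, seed=0):
    rng = np.random.default_rng(seed)
    original = decide(context)
    radius = step
    while radius <= max_radius:              # grow the search radius outwards
        for _ in range(200):                 # random search on the sphere
            direction = rng.normal(size=context.shape)
            candidate = context + radius * direction / np.linalg.norm(direction)
            if decide(candidate) != original:
                return candidate             # closest decision flip found
        radius += step
    return None                              # no counterfactual within range
```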
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Explainability of Text Processing and Retrieval Methods: A Critical Survey [1.5320737596132752]
This article provides a broad overview of research on the explainability and interpretability of natural language processing and information retrieval methods.
More specifically, we survey approaches that have been applied to explain word embeddings, sequence modeling, attention modules, transformers, BERT, and document ranking.
arXiv Detail & Related papers (2022-12-14T09:25:49Z)
- FACT: Learning Governing Abstractions Behind Integer Sequences [7.895232155155041]
We introduce a novel view on the learning of concepts admitting complete finitary descriptions.
We lay down a set of benchmarking tasks aimed at conceptual understanding by machine learning models.
To further aid research in knowledge representation and reasoning, we present FACT, the Finitary Abstraction Toolkit.
arXiv Detail & Related papers (2022-09-20T08:20:03Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
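As one hypothetical example of explanation-driven improvement, a per-unit relevance score can guide pruning; the sketch below assumes such a score is already available and is not taken from the paper.

```python
# Minimal sketch of XAI-guided pruning: zero out the output units of a linear
# layer whose explanation-derived relevance is lowest. The relevance vector is
# an assumed input (e.g. mean attribution mass per unit).
import torch

def prune_by_relevance(weight, relevance, keep_fraction=0.8):
    """weight: (out, in) matrix; relevance: (out,) score per output unit."""
    n_keep = int(keep_fraction * relevance.numel())
    keep = torch.topk(relevance, n_keep).indices
    mask = torch.zeros_like(relevance, dtype=torch.bool)
    mask[keep] = True
    return weight * mask.unsqueeze(1)    # masked rows = pruned output units
```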
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
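Such agreement can be quantified in several ways; a simple sketch (our choice, not necessarily the paper's exact metric) is the intersection-over-union between the technique's top-k salient tokens and the human-annotated ones:

```python
# Minimal sketch of one diagnostic: overlap between a saliency method's top-k
# tokens and a binary human rationale mask, as intersection-over-union.
import numpy as np

def saliency_human_agreement(saliency, human_mask, k):
    top_k = set(np.argsort(saliency)[-k:].tolist())   # method's top-k positions
    human = set(np.flatnonzero(human_mask).tolist())  # human-marked positions
    if not top_k and not human:
        return 1.0
    return len(top_k & human) / len(top_k | human)
```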
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
- One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency [21.58324172085553]
We discuss the promises of Interactive Machine Learning for improved transparency of black-box systems.
We show how to personalise counterfactual explanations by interactively adjusting their conditional statements.
We argue that adjusting the explanation itself and its content is more important.
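A minimal sketch of the interactive idea, with names invented for illustration: the user freezes features that must not change, and the counterfactual search runs over the rest.

```python
# Minimal sketch of personalised counterfactuals: single-feature edits are
# tried only on features the user has not frozen, so the resulting "if feature
# j were v" conditional statements respect the user's constraints.
def personalised_counterfactual(predict, x, candidate_values, frozen):
    """x: feature array; candidate_values: {feature index: iterable of values}."""
    original = predict(x)
    for j, values in candidate_values.items():
        if j in frozen:
            continue                     # respect the user's interactive choice
        for v in values:
            candidate = x.copy()
            candidate[j] = v
            if predict(candidate) != original:
                yield j, v, candidate    # a counterfactual the user will accept
```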
arXiv Detail & Related papers (2020-01-27T13:10:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.