Explainability in Machine Learning: a Pedagogical Perspective
- URL: http://arxiv.org/abs/2202.10335v1
- Date: Mon, 21 Feb 2022 16:15:57 GMT
- Title: Explainability in Machine Learning: a Pedagogical Perspective
- Authors: Andreas Bueff, Ioannis Papantonis, Auste Simkute, Vaishak Belle
- Abstract summary: We provide a pedagogical perspective on how to structure the learning process to better impart knowledge to students and researchers in machine learning.
We discuss the advantages and disadvantages of various opaque and transparent machine learning models.
We will also discuss ways to structure potential assignments to best help students learn to use explainability as a tool alongside any given machine learning application.
- Score: 9.393988089692947
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given the importance of integrating explainability into machine
learning, there is at present a lack of pedagogical resources exploring this
topic. Specifically, we have found a need for resources explaining how one can
teach the advantages of explainability in machine learning. Pedagogical
approaches in machine learning often focus on preparing students to apply
various models in real-world settings, but much less attention is given to
teaching students the techniques one could employ to explain a model's
decision-making process. Furthermore, explainability can benefit from a
narrative structure that helps one understand which techniques are governed by
which questions about the data.
We provide a pedagogical perspective on how to structure the learning process
so that students and researchers in machine learning learn when and how to
implement various explainability techniques, as well as how to interpret the
results. We describe a system for teaching explainability in machine learning
by exploring the advantages and disadvantages of various opaque and transparent
machine learning models, when to utilize specific explainability techniques,
and the frameworks used to structure explainability tools. In addition to
discussing concrete assignments, we also discuss ways to structure potential
assignments to best help students learn to use explainability as a tool
alongside any given machine learning application.
Data science professionals completing the course will have a bird's-eye view
of a rapidly developing area and will be confident in deploying machine
learning more widely. A preliminary analysis of the effectiveness of a recently
delivered course following the structure presented here is included as evidence
supporting our pedagogical approach.
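To make the kind of exercise described above concrete, the sketch below
contrasts a transparent model (a shallow decision tree whose rules can be read
directly) with a post-hoc explanation (permutation importance) of an opaque
model on the same data. It is a minimal illustration assuming scikit-learn and
synthetic data; the dataset, feature names, and the choice of permutation
importance are our own assumptions, not prescribed by the paper.

```python
# Illustrative sketch (not from the paper): transparent vs. opaque model,
# with a post-hoc explanation for the opaque one.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic binary-classification data with hypothetical feature names.
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
feature_names = ["f0", "f1", "f2", "f3"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transparent model: a shallow decision tree whose rules are printed verbatim.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=feature_names))

# Opaque model: a random forest explained post hoc via permutation importance,
# i.e. how much test accuracy drops when each feature is shuffled.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
for name, mean_drop in zip(feature_names, result.importances_mean):
    print(f"{name}: mean accuracy drop {mean_drop:.3f}")
```

In an assignment, students could be asked to compare the tree's printed rules
with the forest's importance scores and to discuss when a post-hoc explanation
is worth the loss of direct transparency.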
Related papers
- Beyond Model Interpretability: Socio-Structural Explanations in Machine Learning [5.159407277301709]
We argue that interpreting machine learning outputs in certain normatively salient domains could require appealing to a third type of explanation.
The relevance of this explanation type is motivated by the fact that machine learning models are not isolated entities but are embedded within and shaped by social structures.
arXiv Detail & Related papers (2024-09-05T15:47:04Z)
- An effect analysis of the balancing techniques on the counterfactual explanations of student success prediction models [0.0]
One of the dominant research directions in learning analytics is predictive modeling of learners' success using various machine learning methods.
Several counterfactual generation methods hold much promise, but the features must be actionable and causal to be effective.
This paper analyzes the effectiveness of commonly used counterfactual generation methods, such as WhatIf Counterfactual Explanations, Multi-Objective Counterfactual Explanations, and Nearest Instance Counterfactual Explanations (a minimal sketch of the nearest-instance idea is given after this list).
arXiv Detail & Related papers (2024-08-01T16:19:08Z)
- What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components [77.87794937143511]
This paper introduces a collection of hands-on training materials for explaining data-driven predictive models.
These resources cover the three core building blocks of this technique: interpretable representation composition, data sampling and explanation generation.
arXiv Detail & Related papers (2022-09-08T13:33:25Z)
- Learning Knowledge Representation with Meta Knowledge Distillation for Single Image Super-Resolution [82.89021683451432]
We propose a model-agnostic meta knowledge distillation method under the teacher-student architecture for the single image super-resolution task.
Experiments conducted on various single image super-resolution datasets demonstrate that the proposed method outperforms existing knowledge-representation-based distillation methods.
arXiv Detail & Related papers (2022-07-18T02:41:04Z)
- Explainable Predictive Process Monitoring: A User Evaluation [62.41400549499849]
Explainability is motivated by the lack of transparency of black-box Machine Learning approaches.
We carry out a user evaluation of explanation approaches for Predictive Process Monitoring.
arXiv Detail & Related papers (2022-02-15T22:24:21Z)
- Explainable Machine Learning with Prior Knowledge: An Overview [1.1045760002858451]
The complexity of machine learning models has elicited research to make them more explainable.
We propose to harness prior knowledge to improve upon the explanation capabilities of machine learning models.
arXiv Detail & Related papers (2021-05-21T07:33:22Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Counterfactual Explanations for Machine Learning: A Review [5.908471365011942]
We review and categorize research on counterfactual explanations in machine learning.
Modern approaches to counterfactual explainability in machine learning draw connections to the established legal doctrine in many countries.
arXiv Detail & Related papers (2020-10-20T20:08:42Z)
- Machine Learning Explainability for External Stakeholders [27.677158604772238]
There have been growing calls to open the black box and to make machine learning algorithms more explainable.
We conducted a day-long workshop with academics, industry experts, legal scholars, and policymakers to develop a shared language around explainability.
We provide a short summary of various case studies of explainable machine learning, lessons from those studies, and discuss open challenges.
arXiv Detail & Related papers (2020-07-10T14:27:06Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching (supporting trust calibration and enabling rich forms of teaching feedback) as well as potential drawbacks (an anchoring effect with the model's judgment, and added cognitive workload).
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
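Several of the entries above concern counterfactual explanations, i.e. finding
a minimally changed input that would flip a model's prediction. The sketch
below illustrates the nearest-instance flavour of that idea on synthetic data
with scikit-learn; the data, model, and helper function are illustrative
assumptions rather than the method of any listed paper.

```python
# Illustrative sketch (not from any listed paper): a nearest-instance style
# counterfactual -- return the closest known instance that the model assigns
# to the desired (different) class.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for, e.g., hypothetical student-success features.
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def nearest_instance_counterfactual(model, query, X_candidates, desired_class):
    """Return the candidate closest to `query` (Euclidean distance) that the
    model predicts as `desired_class`; None if no such candidate exists."""
    mask = model.predict(X_candidates) == desired_class
    if not mask.any():
        return None
    candidates = X_candidates[mask]
    distances = np.linalg.norm(candidates - query, axis=1)
    return candidates[np.argmin(distances)]

query = X[0]
current = model.predict(query.reshape(1, -1))[0]
cf = nearest_instance_counterfactual(model, query, X, desired_class=1 - current)
print("predicted class:", current)
print("feature changes needed:", cf - query)
```

As the student-success paper above stresses, a useful counterfactual should
also differ only in actionable and causal features, a constraint this sketch
deliberately ignores.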
This list is automatically generated from the titles and abstracts of the papers on this site.