Machine Learning Explainability for External Stakeholders
- URL: http://arxiv.org/abs/2007.05408v1
- Date: Fri, 10 Jul 2020 14:27:06 GMT
- Title: Machine Learning Explainability for External Stakeholders
- Authors: Umang Bhatt, McKane Andrus, Adrian Weller, Alice Xiang
- Abstract summary: There have been growing calls to open the black box and to make machine learning algorithms more explainable.
We conducted a day-long workshop with academics, industry experts, legal scholars, and policymakers to develop a shared language around explainability.
We provide a short summary of various case studies of explainable machine learning, lessons from those studies, and discuss open challenges.
- Score: 27.677158604772238
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning is increasingly deployed in high-stakes contexts
affecting people's livelihoods, there have been growing calls to open the black
box and to make machine learning algorithms more explainable. Providing useful
explanations requires careful consideration of the needs of stakeholders,
including end-users, regulators, and domain experts. Despite this need, little
work has been done to facilitate inter-stakeholder conversation around
explainable machine learning. To help address this gap, we conducted a
closed-door, day-long workshop between academics, industry experts, legal
scholars, and policymakers to develop a shared language around explainability
and to understand the current shortcomings of and potential solutions for
deploying explainable machine learning in service of transparency goals. We
also asked participants to share case studies in deploying explainable machine
learning at scale. In this paper, we provide a short summary of various case
studies of explainable machine learning, lessons from those studies, and
discuss open challenges.
Related papers
- Machine Unlearning: A Survey [56.79152190680552]
A special need has arisen where, due to privacy, usability, and/or the right to be forgotten, information about specific samples must be removed from a trained model, a process called machine unlearning.
This emerging technology has drawn significant interest from both academia and industry due to its innovation and practicality.
No prior study has analyzed this complex topic or compared the feasibility of existing unlearning solutions across different kinds of scenarios.
The survey concludes by highlighting some of the outstanding issues with unlearning techniques, along with some feasible directions for new research opportunities.
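As a point of reference for what unlearning must achieve, the trivial but expensive baseline is exact retraining without the forgotten samples. The sketch below illustrates that baseline with a placeholder scikit-learn model and synthetic data; the survey itself is model-agnostic and none of these names come from the paper.

```python
# Minimal sketch of exact unlearning by retraining from scratch, the
# baseline that approximate unlearning methods try to avoid.
# Model, data, and forget-set indices are placeholders, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.rand(500, 8)                 # placeholder feature matrix
y = np.random.randint(0, 2, 500)           # placeholder labels
forget = np.array([3, 17, 42])             # samples to be "forgotten"

keep = np.setdiff1d(np.arange(len(X)), forget)
unlearned_model = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
```

Approximate unlearning methods aim to reach, or provably approach, the same model state without paying the full retraining cost.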
arXiv Detail & Related papers (2023-06-06T10:18:36Z)
- Explainability in Machine Learning: a Pedagogical Perspective [9.393988089692947]
We provide a pedagogical perspective on how to structure the learning process to better impart knowledge to students and researchers in machine learning.
We discuss the advantages and disadvantages of various opaque and transparent machine learning models.
We also discuss ways to structure potential assignments to best help students learn to use explainability as a tool alongside any given machine learning application.
arXiv Detail & Related papers (2022-02-21T16:15:57Z)
- Explainable Predictive Process Monitoring: A User Evaluation [62.41400549499849]
Explainability is motivated by the lack of transparency of black-box Machine Learning approaches.
We carry out a user evaluation of explanation approaches for Predictive Process Monitoring.
arXiv Detail & Related papers (2022-02-15T22:24:21Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Explainable Machine Learning with Prior Knowledge: An Overview [1.1045760002858451]
The complexity of machine learning models has spurred research aimed at making them more explainable.
We propose to harness prior knowledge to improve upon the explanation capabilities of machine learning models.
arXiv Detail & Related papers (2021-05-21T07:33:22Z)
- Explainable Machine Learning for Fraud Detection [0.47574189356217006]
The application of machine learning to support the processing of large datasets holds promise in many industries, including financial services.
In this paper, we explore explainability methods in the domain of real-time fraud detection by investigating the selection of appropriate background datasets and runtime trade-offs on both supervised and unsupervised models.
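As a hedged illustration of the background-dataset and runtime trade-offs mentioned above, the sketch below uses the shap package with a placeholder classifier and synthetic data; the paper's actual models and datasets are not reproduced here.

```python
# Sketch: how the choice of background dataset enters a model-agnostic
# SHAP explanation of a (placeholder) fraud classifier.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

X_train = np.random.rand(1000, 10)         # placeholder transaction features
y_train = np.random.randint(0, 2, 1000)    # placeholder fraud labels
model = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

# Runtime trade-off: a smaller or summarized background set makes
# KernelExplainer faster but the attributions noisier.
background = shap.sample(X_train, 100)     # alternative: shap.kmeans(X_train, 10)
explainer = shap.KernelExplainer(lambda X: model.predict_proba(X)[:, 1], background)
shap_values = explainer.shap_values(X_train[:5])   # explain five transactions
```

For an unsupervised model, the same pattern applies with an anomaly score in place of `predict_proba`.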
arXiv Detail & Related papers (2021-05-13T14:12:02Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt these models, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Counterfactual Explanations for Machine Learning: A Review [5.908471365011942]
We review and categorize research on counterfactual explanations in machine learning.
Modern approaches to counterfactual explainability in machine learning draw connections to established legal doctrine in many countries.
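To make the object of study concrete, here is a minimal gradient-based counterfactual search in the widely used style of Wachter et al.: a generic sketch assuming a differentiable PyTorch classifier, not the method of any one reviewed paper.

```python
# Minimal counterfactual search sketch (Wachter-style objective): find an
# input x_cf close to x such that the model predicts the desired class.
import torch

def counterfactual(model, x, target, lam=0.1, steps=500, lr=0.05):
    """Gradient search for a counterfactual of 1-D input x toward class `target`."""
    x_cf = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Push the prediction toward the target class...
        pred_loss = torch.nn.functional.cross_entropy(
            model(x_cf.unsqueeze(0)), torch.tensor([target]))
        # ...while staying close to the original input (L1 favors sparse edits).
        dist_loss = torch.norm(x_cf - x, p=1)
        (pred_loss + lam * dist_loss).backward()
        opt.step()
    return x_cf.detach()
```

The `lam` weight trades off flipping the prediction against staying close to the original input; the L1 distance encourages sparse, human-readable changes.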
arXiv Detail & Related papers (2020-10-20T20:08:42Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and come closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency [21.58324172085553]
We discuss the promises of Interactive Machine Learning for improved transparency of black-box systems.
We show how to personalise counterfactual explanations by interactively adjusting their conditional statements.
We argue that adjusting the explanation itself and its content is more important.
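One way to read "adjusting conditional statements" operationally: the user locks the features they consider immutable, and the search perturbs only the rest. A minimal black-box sketch follows; the function names and the random-search strategy are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch of personalising a counterfactual: the user "locks"
# features (the conditional statements they care about) and the search
# only perturbs the remaining ones. Model and data are placeholders.
import numpy as np

def personalised_cf(predict, x, locked, target=1, trials=5000, scale=0.5, seed=0):
    """Random search for a counterfactual that leaves `locked` features fixed."""
    rng = np.random.default_rng(seed)
    free = np.setdiff1d(np.arange(x.size), locked)
    best, best_dist = None, np.inf
    for _ in range(trials):
        cand = x.copy()
        cand[free] += rng.normal(0.0, scale, size=free.size)
        if predict(cand.reshape(1, -1))[0] == target:
            dist = np.abs(cand - x).sum()       # prefer the smallest change
            if dist < best_dist:
                best, best_dist = cand, dist
    return best  # None if no counterfactual exists under the user's constraints
```

Re-running the search with a different `locked` set is the interactive step: each choice yields a counterfactual conditioned on what the user is willing to change.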
arXiv Detail & Related papers (2020-01-27T13:10:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.