One-way Explainability Isn't The Message
- URL: http://arxiv.org/abs/2205.08954v1
- Date: Thu, 5 May 2022 09:15:53 GMT
- Title: One-way Explainability Isn't The Message
- Authors: Ashwin Srinivasan and Michael Bain and Enrico Coiera
- Abstract summary: We argue that requirements on both human and machine in this context differ significantly from those for ML in autonomous agents or in statistical data analysis.
The design of such human-machine systems should be driven by repeated, two-way intelligibility of information.
We propose operational principles -- we call them Intelligibility Axioms -- to guide the design of a collaborative decision-support system.
- Score: 2.618757282404254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent engineering developments in specialised computational hardware,
data-acquisition and storage technology have seen the emergence of Machine
Learning (ML) as a powerful form of data analysis with widespread applicability
beyond its historical roots in the design of autonomous agents. However --
possibly because of its origins in the development of agents capable of
self-discovery -- relatively little attention has been paid to the interaction
between people and ML. In this paper we are concerned with the use of ML in
automated or semi-automated tools that assist one or more human decision
makers. We argue that requirements on both human and machine in this context
are significantly different to the use of ML either as part of autonomous
agents for self-discovery or as part of statistical data analysis. Our principal
position is that the design of such human-machine systems should be driven by
repeated, two-way intelligibility of information rather than one-way
explainability of the ML-system's recommendations. Iterated rounds of
intelligible information exchange, we think, will characterise the kinds of
collaboration that will be needed to understand complex phenomena for which
neither man nor machine has complete answers. We propose operational principles
-- we call them Intelligibility Axioms -- to guide the design of a
collaborative decision-support system. The principles are concerned with: (a)
what it means for information provided by the human to be intelligible to the
ML system; and (b) what it means for an explanation provided by an ML system to
be intelligible to a human. Using examples from the literature on the use of ML
for drug-design and in medicine, we demonstrate cases where the conditions of
the axioms are met. We describe some additional requirements needed for the
design of a truly collaborative decision-support system.
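The iterated, two-way exchange the abstract argues for can be pictured as a message-passing loop in which each party checks whether the other's message is intelligible before responding. The Python sketch below is purely illustrative and is not the authors' formalism: the Message class, the two intelligibility predicates (standing in for axioms (a) and (b)), and the stopping rule are all assumptions invented for this example.

```python
# Illustrative sketch of iterated, two-way intelligible exchange.
# All class, function, and field names are hypothetical, not from the paper.

from dataclasses import dataclass

@dataclass
class Message:
    sender: str          # "human" or "machine"
    content: str         # recommendation, explanation, or feedback
    intelligible: bool   # did the receiver understand it?

def machine_step(feedback: str) -> str:
    """Stand-in for an ML system revising its recommendation and
    explanation in light of the human's latest input."""
    return f"revised recommendation given: {feedback!r}"

def human_step(explanation: str) -> str:
    """Stand-in for a human decision maker responding with domain
    knowledge the machine should take into account."""
    return f"domain constraint prompted by: {explanation!r}"

def intelligible_to_human(explanation: str) -> bool:
    # Axiom (b), placeholder check: the machine's explanation must be
    # in a form the human can act on.
    return len(explanation) > 0

def intelligible_to_machine(feedback: str) -> bool:
    # Axiom (a), placeholder check: the human's input must be in a
    # form the machine can incorporate.
    return len(feedback) > 0

def collaborate(rounds: int = 3) -> list[Message]:
    """Iterate machine and human turns until a message fails the
    receiver's intelligibility check or the round limit is reached."""
    transcript: list[Message] = []
    feedback = "initial problem description"
    for _ in range(rounds):
        explanation = machine_step(feedback)
        transcript.append(Message("machine", explanation,
                                  intelligible_to_human(explanation)))
        if not transcript[-1].intelligible:
            break  # exchange fails: explanation opaque to the human
        feedback = human_step(explanation)
        transcript.append(Message("human", feedback,
                                  intelligible_to_machine(feedback)))
        if not transcript[-1].intelligible:
            break  # exchange fails: feedback opaque to the machine
    return transcript

if __name__ == "__main__":
    for msg in collaborate():
        print(msg.sender, "->", msg.content)
```

Under this reading, one-way explainability is the degenerate case in which the loop terminates after the machine's first message, whatever the human makes of it.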
Related papers
- Automated Process Planning Based on a Semantic Capability Model and SMT [50.76251195257306]
In research of manufacturing systems and autonomous robots, the term capability is used for a machine-interpretable specification of a system function.
We present an approach that combines these two topics: starting from a semantic capability model, an AI planning problem is automatically generated.
arXiv Detail & Related papers (2023-12-14T10:37:34Z) - OmniForce: On Human-Centered, Large Model Empowered and Cloud-Edge
Collaborative AutoML System [85.8338446357469]
We introduce OmniForce, a human-centered AutoML system that yields both human-assisted ML and ML-assisted human techniques.
We show how OmniForce can put an AutoML system into practice and build adaptive AI in open-environment scenarios.
arXiv Detail & Related papers (2023-03-01T13:35:22Z) - A Model for Intelligible Interaction Between Agents That Predict and Explain [1.335664823620186]
We formalise the interaction model by taking agents to be automata with some special characteristics.
We define One- and Two-Way Intelligibility as properties that emerge at run-time by execution of the protocol.
We demonstrate using the formal model to: (a) identify instances of One- and Two-Way Intelligibility in literature reports on humans interacting with ML systems providing logic-based explanations, as is done in Inductive Logic Programming (ILP); and (b) map interactions between humans and machines in an elaborate natural-language based dialogue-model to One- or Two-Way Intelligibility.
arXiv Detail & Related papers (2023-01-04T20:48:22Z) - Interpretability and accessibility of machine learning in selected food
processing, agriculture and health applications [0.0]
Lack of interpretability of ML-based systems is a major hindrance to the widespread adoption of these powerful algorithms.
New techniques are emerging to improve ML accessibility through automated model design.
This paper provides a review of the work done to improve interpretability and accessibility of machine learning in the context of global problems.
arXiv Detail & Related papers (2022-11-30T02:44:13Z) - A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z) - An Objective Metric for Explainable AI: How and Why to Estimate the
Degree of Explainability [3.04585143845864]
We present a new model-agnostic metric to measure the Degree of eXplainability of correct information in an objective way.
We designed a few experiments and a user-study on two realistic AI-based systems for healthcare and finance.
arXiv Detail & Related papers (2021-09-11T17:44:13Z) - Automated Machine Learning, Bounded Rationality, and Rational
Metareasoning [62.997667081978825]
We will look at automated machine learning (AutoML) and related problems from the perspective of bounded rationality.
Taking actions under bounded resources requires an agent to reflect on how to use these resources in an optimal way.
arXiv Detail & Related papers (2021-09-10T09:10:20Z) - Understanding the Usability Challenges of Machine Learning In
High-Stakes Decision Making [67.72855777115772]
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z) - The Role of Individual User Differences in Interpretable and Explainable
Machine Learning Systems [0.3169089186688223]
We study how individual skills and personality traits predict interpretability, explainability, and knowledge discovery from machine learning generated model output.
Our work relies on Fuzzy Trace Theory, a leading theory of how humans process numerical stimuli.
arXiv Detail & Related papers (2020-09-14T18:15:00Z) - Explainable Empirical Risk Minimization [0.6299766708197883]
Successful application of machine learning (ML) methods becomes increasingly dependent on their interpretability or explainability.
This paper applies information-theoretic concepts to develop a novel measure for the subjective explainability of predictions delivered by a ML method.
Our main contribution is the explainable empirical risk minimization (EERM) principle of learning a hypothesis that optimally balances between the subjective explainability and risk.
arXiv Detail & Related papers (2020-09-03T07:16:34Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.