Beneficial and Harmful Explanatory Machine Learning
- URL: http://arxiv.org/abs/2009.06410v2
- Date: Thu, 25 Feb 2021 16:19:20 GMT
- Title: Beneficial and Harmful Explanatory Machine Learning
- Authors: Lun Ai and Stephen H. Muggleton and Céline Hocquette and Mark
Gromowski and Ute Schmid
- Abstract summary: This paper investigates the explanatory effects of a machine learned theory in the context of simple two person games.
It proposes a framework for identifying the harmfulness of machine explanations based on the Cognitive Science literature.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the recent successes of Deep Learning in AI, there has been
increased interest in the role and need for explanations in machine learned
theories. A distinct notion in this context is Michie's definition of
Ultra-Strong Machine Learning (USML). USML is demonstrated by a measurable
increase in a human's performance of a task after the human is provided with a
symbolic machine learned theory for performing that task. A recent paper
demonstrates the beneficial
effect of a machine learned logic theory for a classification task, yet no
existing work to our knowledge has examined the potential harmfulness of a
machine's involvement for human comprehension during learning. This paper
investigates the explanatory effects of a machine learned theory in the context
of simple two person games and proposes a framework for identifying the
harmfulness of machine explanations based on the Cognitive Science literature.
The approach involves a cognitive window consisting of two quantifiable bounds
and it is supported by empirical evidence collected from human trials. Our
quantitative and qualitative results indicate that human learning aided by a
symbolic machine learned theory which satisfies a cognitive window achieves
significantly higher performance than human self-learning. Results also
demonstrate that human learning aided by a symbolic machine learned theory that
fails to satisfy this window leads to significantly worse performance than
unaided human learning.
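
The abstract characterises the cognitive window only as "two quantifiable bounds" on a machine explanation. As a purely illustrative sketch, not the paper's actual method, the Python snippet below shows how such a window could be applied as a decision rule: a complexity measure of the learned theory is checked against a lower and an upper bound, and explanations falling outside that range are flagged as potentially harmful. The function names, the choice of complexity measure, and the placeholder bounds are all assumptions introduced here for illustration.

```python
# Illustrative sketch only: the paper's actual cognitive window, its bounds
# and its complexity measure are not specified in this abstract.

def within_cognitive_window(theory_complexity: float,
                            lower_bound: float,
                            upper_bound: float) -> bool:
    """True if the explanation's complexity lies inside the cognitive window.

    theory_complexity: hypothetical quantitative measure of the machine
        learned theory, e.g. the number of rule applications a human must
        mentally execute to apply it.
    lower_bound: below this, the theory is assumed to offer no advantage over
        what a human could learn unaided.
    upper_bound: above this, the theory is assumed to exceed the learner's
        cognitive capacity and may harm comprehension.
    """
    return lower_bound <= theory_complexity <= upper_bound


def predicted_effect(theory_complexity: float,
                     lower_bound: float = 2.0,          # placeholder value
                     upper_bound: float = 7.0) -> str:  # placeholder value
    """Classify an explanation as 'beneficial' or 'potentially harmful'."""
    if within_cognitive_window(theory_complexity, lower_bound, upper_bound):
        return "beneficial"
    return "potentially harmful"


if __name__ == "__main__":
    for complexity in (1.0, 5.0, 12.0):
        print(complexity, "->", predicted_effect(complexity))
```

Under these placeholder bounds, a theory with complexity 5.0 would be predicted to help a learner, while complexities of 1.0 and 12.0 would fall outside the window and be flagged as potentially harmful.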
Related papers
- A Multimodal Automated Interpretability Agent [63.8551718480664]
MAIA is a system that uses neural models to automate neural model understanding tasks.
We first characterize MAIA's ability to describe (neuron-level) features in learned representations of images.
We then show that MAIA can aid in two additional interpretability tasks: reducing sensitivity to spurious features, and automatically identifying inputs likely to be misclassified.
arXiv Detail & Related papers (2024-04-22T17:55:11Z) - Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - Explanatory machine learning for sequential human teaching [5.706360286474043]
We show that sequential teaching of concepts with increasing complexity has a beneficial effect on human comprehension.
We propose a framework for the effects of sequential teaching on comprehension based on an existing definition of comprehensibility.
arXiv Detail & Related papers (2022-05-20T15:23:46Z) - Machine Explanations and Human Understanding [31.047297225560566]
Explanations are hypothesized to improve human understanding of machine learning models.
However, empirical studies have found mixed and even negative results.
We show how human intuitions play a central role in enabling human understanding.
arXiv Detail & Related papers (2022-02-08T19:00:38Z) - Quality Metrics for Transparent Machine Learning With and Without Humans
In the Loop Are Not Correlated [0.0]
We investigate the quality of interpretable computer vision algorithms using techniques from psychophysics.
Our results demonstrate that psychophysical experiments allow for robust quality assessment of transparency in machine learning.
arXiv Detail & Related papers (2021-07-01T12:30:51Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - Five Points to Check when Comparing Visual Perception in Humans and
Machines [26.761191892051]
A growing amount of work is directed towards comparing information processing in humans and machines.
Here, we propose ideas on how to design, conduct and interpret experiments.
We demonstrate and apply these ideas through three case studies.
arXiv Detail & Related papers (2020-04-20T16:05:36Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z) - Explainable Active Learning (XAL): An Empirical Study of How Local
Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows benefits of AI explanations as interfaces for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as an anchoring effect on the model's judgment and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.