Explanatory machine learning for sequential human teaching
- URL: http://arxiv.org/abs/2205.10250v2
- Date: Sun, 26 Mar 2023 02:30:14 GMT
- Title: Explanatory machine learning for sequential human teaching
- Authors: Lun Ai and Johannes Langer and Stephen H. Muggleton and Ute Schmid
- Abstract summary: We show that sequential teaching of concepts with increasing complexity has a beneficial effect on human comprehension.
We propose a framework for the effects of sequential teaching on comprehension based on an existing definition of comprehensibility.
- Score: 5.706360286474043
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The topic of comprehensibility of machine-learned theories has recently drawn
increasing attention. Inductive Logic Programming (ILP) uses logic programming
to derive logic theories from small amounts of data via abduction and induction
techniques. Learned theories are represented as rules that serve as declarative
descriptions of the acquired knowledge. In earlier work, the authors
provided the first evidence of a measurable increase in human comprehension
based on machine-learned logic rules for simple classification tasks. In a
later study, it was found that the presentation of machine-learned explanations
to humans can produce both beneficial and harmful effects in the context of
game learning. We continue our investigation of comprehensibility by studying
how the order of concept presentation affects human comprehension. In this
work, we examine the explanatory effects of curriculum order and of the
presence of machine-learned explanations for sequential problem-solving. We
show that 1) there exist tasks A and B such that learning A before B leads to
better human comprehension than learning B before A, and 2) there exist tasks A
and B such that the presence of explanations when learning A contributes to
improved human comprehension when subsequently learning B. We propose a
framework for the effects of sequential teaching on comprehension based on an
existing definition of comprehensibility and provide supporting evidence from
data collected in human trials. Empirical results show that sequential teaching
of concepts with increasing complexity a) has a beneficial effect on human
comprehension, b) leads to human re-discovery of divide-and-conquer
problem-solving strategies, and c) studying machine-learned explanations allows
humans to adapt their problem-solving strategies and achieve better
performance.
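To make the rule-based representation concrete, the following is a minimal sketch, in Python, of how a learned theory expressed as declarative rules could be rendered as a human-readable explanation and also read procedurally. The task (list sorting), the rule text, and the helper names (`THEORY`, `explain`, `apply_theory`) are hypothetical illustrations chosen only because they exhibit the divide-and-conquer structure mentioned in the empirical results; they are not the theories, tasks, or tooling used in the paper.

```python
# Hypothetical sketch: a "learned theory" for sorting written as declarative
# rules, plus a renderer that turns the rules into a textual explanation.
# This mirrors the spirit of ILP-style rule output but is NOT the paper's
# actual learned program.

from typing import List

# Each rule is (head, body): a declarative description of one case.
THEORY = [
    ("sort(L, L)", "length(L) =< 1"),  # base case: a short list is sorted
    ("sort(L, S)", "split(L, A, B), sort(A, SA), sort(B, SB), merge(SA, SB, S)"),
]

def explain(theory) -> str:
    """Render the learned rules as a human-readable explanation."""
    return "\n".join(f"{head} holds if {body}." for head, body in theory)

def apply_theory(xs: List[int]) -> List[int]:
    """Procedural reading of the same divide-and-conquer rules."""
    if len(xs) <= 1:                 # rule 1: length <= 1 means already sorted
        return xs
    mid = len(xs) // 2               # rule 2: split, solve halves, merge
    left, right = apply_theory(xs[:mid]), apply_theory(xs[mid:])
    merged: List[int] = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

if __name__ == "__main__":
    print(explain(THEORY))
    print(apply_theory([5, 2, 4, 1, 3]))   # -> [1, 2, 3, 4, 5]
```

In a sequential-teaching setting of the kind studied here, the curriculum question would be, for example, whether presenting the explanation of a simpler concept (such as merging two sorted lists) before the more complex recursive concept improves comprehension of the latter.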
Related papers
- An Incomplete Loop: Deductive, Inductive, and Abductive Learning in Large Language Models [99.31449616860291]
Modern language models (LMs) can learn to perform new tasks in different ways.
In instruction following, the target task is described explicitly in natural language; in few-shot prompting, the task is specified implicitly.
In instruction inference, LMs are presented with in-context examples and are then prompted to generate a natural language task description.
arXiv Detail & Related papers (2024-04-03T19:31:56Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- In-Context Analogical Reasoning with Pre-Trained Language Models [10.344428417489237]
We explore the use of intuitive language-based abstractions to support analogy in AI systems.
Specifically, we apply large pre-trained language models (PLMs) to visual Raven's Progressive Matrices (RPM).
We find that PLMs exhibit a striking capacity for zero-shot relational reasoning, exceeding human performance and nearing supervised vision-based methods.
arXiv Detail & Related papers (2023-05-28T04:22:26Z)
- Machine Explanations and Human Understanding [31.047297225560566]
Explanations are hypothesized to improve human understanding of machine learning models.
However, empirical studies have found mixed and even negative results.
We show how human intuitions play a central role in enabling human understanding.
arXiv Detail & Related papers (2022-02-08T19:00:38Z)
- A Human-Centered Interpretability Framework Based on Weight of Evidence [26.94750208505883]
We take a human-centered approach to interpretable machine learning.
We propose a list of design principles for machine-generated explanations meaningful to humans.
We show that this method can be adapted to handle high-dimensional, multi-class settings.
arXiv Detail & Related papers (2021-04-27T16:13:35Z)
- Abduction and Argumentation for Explainable Machine Learning: A Position Survey [2.28438857884398]
This paper presents Abduction and Argumentation as two principled forms of reasoning.
It fleshes out the fundamental role that they can play within Machine Learning.
arXiv Detail & Related papers (2020-10-24T13:23:44Z)
- Bongard-LOGO: A New Benchmark for Human-Level Concept Learning and Reasoning [78.13740873213223]
Bongard problems (BPs) were introduced as an inspirational challenge for visual cognition in intelligent systems.
We propose a new benchmark Bongard-LOGO for human-level concept learning and reasoning.
arXiv Detail & Related papers (2020-10-02T03:19:46Z)
- Beneficial and Harmful Explanatory Machine Learning [5.223556562214077]
This paper investigates the explanatory effects of a machine learned theory in the context of simple two person games.
It proposes a framework for identifying the harmfulness of machine explanations based on the Cognitive Science literature.
arXiv Detail & Related papers (2020-09-09T19:14:38Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, including anchoring on the model's judgments and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)