Machine Guides, Human Supervises: Interactive Learning with Global
Explanations
- URL: http://arxiv.org/abs/2009.09723v1
- Date: Mon, 21 Sep 2020 09:55:30 GMT
- Title: Machine Guides, Human Supervises: Interactive Learning with Global
Explanations
- Authors: Teodora Popordanoska, Mohit Kumar, Stefano Teso
- Abstract summary: We introduce explanatory guided learning (XGL), a novel interactive learning strategy.
XGL is designed to be robust against cases in which the explanations supplied by the machine oversell the classifier's quality.
By drawing a link to interactive machine teaching, we show theoretically that global explanations are a viable approach for guiding supervisors.
- Score: 11.112120925113627
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce explanatory guided learning (XGL), a novel interactive learning
strategy in which a machine guides a human supervisor toward selecting
informative examples for a classifier. The guidance is provided by means of
global explanations, which summarize the classifier's behavior on different
regions of the instance space and expose its flaws. Compared to other
explanatory interactive learning strategies, which are machine-initiated and
rely on local explanations, XGL is designed to be robust against cases in which
the explanations supplied by the machine oversell the classifier's quality.
Moreover, XGL leverages global explanations to open up the black-box of
human-initiated interaction, enabling supervisors to select informative
examples that challenge the learned model. By drawing a link to interactive
machine teaching, we show theoretically that global explanations are a viable
approach for guiding supervisors. Our simulations show that explanatory guided
learning avoids overselling the model's quality and performs comparably or
better than machine- and human-initiated interactive learning strategies in
terms of model quality.
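To make the interaction protocol concrete, below is a minimal sketch of what an XGL-style loop could look like in Python with scikit-learn. It is an illustration under stated assumptions, not the authors' implementation: the shallow surrogate decision tree standing in for a global explanation, the simulated supervisor oracle that picks an example the surrogate gets wrong, and all variable names are assumptions introduced here.

```python
# Minimal sketch of an XGL-style interaction loop (illustrative, not the
# authors' implementation). Assumptions: the global explanation is
# approximated by a shallow surrogate decision tree fit on the current
# classifier's predictions, and the human supervisor is simulated by an
# oracle that labels an example the surrogate handles incorrectly.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Start from a small labeled seed containing both classes.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]
clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])

for _ in range(20):  # interaction rounds
    # 1. Machine builds a global explanation: a shallow surrogate tree that
    #    summarizes the classifier's behavior over the instance space.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, clf.predict(X))

    # 2. Supervisor inspects the explanation and picks an informative
    #    example: here, a pool instance the surrogate (and thus, roughly,
    #    the classifier) gets wrong; otherwise fall back to any instance.
    preds = surrogate.predict(X[pool])
    mistakes = [i for i, p in zip(pool, preds) if p != y[i]]
    chosen = mistakes[0] if mistakes else pool[0]

    # 3. The chosen example is labeled and the classifier is retrained.
    labeled.append(chosen)
    pool.remove(chosen)
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
```

In the paper's setting the supervisor is a human inspecting the global explanation and choosing examples that challenge the model; the oracle above merely stands in for that choice in simulation.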
Related papers
- Diffexplainer: Towards Cross-modal Global Explanations with Diffusion Models [51.21351775178525]
DiffExplainer is a novel framework that, leveraging language-vision models, enables multimodal global explainability.
It employs diffusion models conditioned on optimized text prompts, synthesizing images that maximize class outputs.
The analysis of generated visual descriptions allows for automatic identification of biases and spurious features.
arXiv Detail & Related papers (2024-04-03T10:11:22Z)
- Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective [68.20531518525273]
We take a closer look at existing self-supervised speech representation methods from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
arXiv Detail & Related papers (2024-01-16T21:13:22Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Leveraging Explanations in Interactive Machine Learning: An Overview [10.284830265068793]
Explanations have gained an increasing level of interest in the AI and Machine Learning (ML) communities.
This paper presents an overview of research where explanations are combined with interactive capabilities.
arXiv Detail & Related papers (2022-07-29T07:46:11Z)
- Homomorphism Autoencoder -- Learning Group Structured Representations from Observed Transitions [51.71245032890532]
We propose methods enabling an agent acting upon the world to learn internal representations of sensory information consistent with actions that modify it.
In contrast to existing work, our approach does not require prior knowledge of the group and does not restrict the set of actions the agent can perform.
arXiv Detail & Related papers (2022-07-25T11:22:48Z)
- DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models [36.50754934147469]
We exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models.
We design DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets.
arXiv Detail & Related papers (2020-08-19T09:44:47Z)
- Toward Machine-Guided, Human-Initiated Explanatory Interactive Learning [9.887110107270196]
Recent work has demonstrated the promise of combining local explanations with active learning for understanding and supervising black-box models.
Here we show that, under specific conditions, these algorithms may misrepresent the quality of the model being learned.
We address this narrative bias by introducing explanatory guided learning.
arXiv Detail & Related papers (2020-07-20T11:51:31Z)
- The Grammar of Interactive Explanatory Model Analysis [7.812073412066698]
We show how different Explanatory Model Analysis (EMA) methods complement each other.
We formalize the grammar of Interactive Explanatory Model Analysis (IEMA) to describe potential human-model dialogues.
IEMA is implemented in a widely used human-centered open-source software framework.
arXiv Detail & Related papers (2020-05-01T17:12:22Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and come closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching, namely supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, namely anchoring to the model's judgments and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)