A Protocol for Intelligible Interaction Between Agents That Learn and
Explain
- URL: http://arxiv.org/abs/2301.01819v1
- Date: Wed, 4 Jan 2023 20:48:22 GMT
- Title: A Protocol for Intelligible Interaction Between Agents That Learn and
Explain
- Authors: Ashwin Srinivasan, Michael Bain, A. Baskar, Enrico Coiera
- Abstract summary: We view the interaction between humans and Machine Learning (ML) systems within the broader context of interaction between agents capable of learning and explanation.
We formulate two-way intelligibility as a property of a communication protocol.
- Score: 2.3300629798610455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent engineering developments have seen the emergence of Machine Learning
(ML) as a powerful form of data analysis with widespread applicability beyond
its historical roots in the design of autonomous agents. However, relatively
little attention has been paid to the interaction between people and ML
systems. Recent developments on Explainable ML address this by providing visual
and textual information on how the ML system arrived at a conclusion. In this
paper we view the interaction between humans and ML systems within the broader
context of interaction between agents capable of learning and explanation.
Within this setting, we argue that it is more helpful to view the interaction
as characterised by two-way intelligibility of information rather than once-off
explanation of a prediction. We formulate two-way intelligibility as a property
of a communication protocol. Development of the protocol is motivated by a set
of `Intelligibility Axioms' for decision-support systems that use ML with a
human-in-the-loop. The axioms are intended as sufficient criteria to claim
that: (a) information provided by a human is intelligible to an ML system; and
(b) information provided by an ML system is intelligible to a human. The axioms
inform the design of a general synchronous interaction model between agents
capable of learning and explanation. We identify conditions of compatibility
between agents that result in bounded communication, and define Weak and Strong
Two-Way Intelligibility between agents as properties of the communication
protocol.
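The abstract describes the protocol only at a high level: synchronous turn-taking between two agents, tagged messages carrying predictions or explanations, and compatibility conditions that make communication bounded. A minimal sketch of that structure is below; the `Tag` names, `Message` fields, and toy agents are hypothetical illustrations, not the tag set or transition rules defined in the paper itself.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Tag(Enum):
    """Hypothetical message tags for a synchronous two-agent dialogue."""
    PREDICT = auto()   # sender communicates a prediction
    EXPLAIN = auto()   # sender communicates an explanation
    RATIFY = auto()    # receiver agrees with the message it received
    REFUTE = auto()    # receiver disagrees and answers with its own view

@dataclass
class Message:
    sender: str
    tag: Tag
    content: object

class EchoAgent:
    """Toy agent: ratifies any message whose content matches its own belief,
    otherwise refutes it and states its own belief."""
    def __init__(self, name, belief):
        self.name, self.belief = name, belief
    def initial_message(self):
        return Message(self.name, Tag.PREDICT, self.belief)
    def respond(self, msg):
        if msg.content == self.belief:
            return Message(self.name, Tag.RATIFY, msg.content)
        return Message(self.name, Tag.REFUTE, self.belief)

def run_protocol(agent_a, agent_b, max_rounds=4):
    """Alternate messages between two agents until one ratifies the other's
    message, or until the round bound is hit (bounded communication)."""
    sender, receiver = agent_a, agent_b
    msg = sender.initial_message()
    transcript = [msg]
    for _ in range(max_rounds):
        reply = receiver.respond(msg)
        transcript.append(reply)
        if reply.tag is Tag.RATIFY:   # agreement terminates the dialogue
            break
        sender, receiver = receiver, sender
        msg = reply
    return transcript
```

In this toy setting, compatible agents (shared belief) terminate in one exchange with a `RATIFY`, while incompatible agents trade `REFUTE` messages until the round bound stops them; the bound stands in for the compatibility conditions the paper uses to guarantee bounded communication.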
Related papers
- AntEval: Evaluation of Social Interaction Competencies in LLM-Driven
Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z)
- Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z)
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work [0.0]
The present structured literature analysis examines the requirements for the explainability and acceptance of AI.
Results indicate that one of the two main groups of users is developers, who require information about the internal operations of the model.
The acceptance of AI systems depends on information about the system's functions and performance, privacy and ethical considerations.
arXiv Detail & Related papers (2023-06-27T11:36:07Z)
- Neuro-Symbolic Causal Reasoning Meets Signaling Game for Emergent Semantic Communications [71.63189900803623]
A novel emergent SC system framework is proposed and is composed of a signaling game for emergent language design and a neuro-symbolic (NeSy) artificial intelligence (AI) approach for causal reasoning.
The ESC system is designed to enhance the novel metrics of semantic information, reliability, distortion and similarity.
arXiv Detail & Related papers (2022-10-21T15:33:37Z)
- Semantic Interactive Learning for Text Classification: A Constructive Approach for Contextual Interactions [0.0]
We propose a novel interaction framework called Semantic Interactive Learning for the text domain.
We frame the problem of incorporating constructive and contextual feedback into the learner as a task to find an architecture that enables more semantic alignment between humans and machines.
We introduce a technique called SemanticPush that is effective for translating conceptual corrections of humans to non-extrapolating training examples.
arXiv Detail & Related papers (2022-09-07T08:13:45Z)
- One-way Explainability Isn't The Message [2.618757282404254]
We argue that requirements on both human and machine in this context are significantly different.
The design of such human-machine systems should be driven by repeated, two-way intelligibility of information.
We propose operational principles -- we call them Intelligibility Axioms -- to guide the design of a collaborative decision-support system.
arXiv Detail & Related papers (2022-05-05T09:15:53Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Learning Multi-Agent Coordination through Connectivity-driven Communication [7.462336024223669]
In artificial multi-agent systems, the ability to learn collaborative policies is predicated upon the agents' communication skills.
We present a deep reinforcement learning approach, Connectivity Driven Communication (CDC).
CDC is able to learn effective collaborative policies and can outperform competing learning algorithms on cooperative navigation tasks.
arXiv Detail & Related papers (2020-02-12T20:58:33Z)
- Emergence of Pragmatics from Referential Game between Theory of Mind Agents [64.25696237463397]
We propose an algorithm with which agents can spontaneously learn the ability to "read between the lines" without any explicit hand-designed rules.
We integrate the theory of mind (ToM) in a cooperative multi-agent pedagogical situation and propose an adaptive reinforcement learning (RL) algorithm to develop a communication protocol.
arXiv Detail & Related papers (2020-01-21T19:37:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.