A Model for Intelligible Interaction Between Agents That Predict and Explain
- URL: http://arxiv.org/abs/2301.01819v2
- Date: Sun, 27 Oct 2024 07:08:57 GMT
- Title: A Model for Intelligible Interaction Between Agents That Predict and Explain
- Authors: A. Baskar, Ashwin Srinivasan, Michael Bain, Enrico Coiera
- Abstract summary: We formalise the interaction model by taking agents to be automata with some special characteristics.
We define One- and Two-Way Intelligibility as properties that emerge at run-time by execution of the protocol.
We demonstrate using the formal model to: (a) identify instances of One- and Two-Way Intelligibility in literature reports on humans interacting with ML systems providing logic-based explanations, as is done in Inductive Logic Programming (ILP); and (b) map interactions between humans and machines in an elaborate natural-language based dialogue-model to One- or Two-Way Intelligible interactions in the formal model.
- Score: 1.335664823620186
- Abstract: Machine Learning (ML) has emerged as a powerful form of data modelling with widespread applicability beyond its roots in the design of autonomous agents. However, relatively little attention has been paid to the interaction between people and ML systems. In this paper we view interaction between humans and ML systems within the broader context of communication between agents capable of prediction and explanation. We formalise the interaction model by taking agents to be automata with some special characteristics and define a protocol for communication between such agents. We define One- and Two-Way Intelligibility as properties that emerge at run-time by execution of the protocol. The formalisation allows us to identify conditions under which run-time sequences are bounded, and identify conditions under which the protocol can correctly implement an axiomatic specification of intelligible interaction between a human and an ML system. We also demonstrate using the formal model to: (a) identify instances of One- and Two-Way Intelligibility in literature reports on humans interacting with ML systems providing logic-based explanations, as is done in Inductive Logic Programming (ILP); and (b) map interactions between humans and machines in an elaborate natural-language based dialogue-model to One- or Two-Way Intelligible interactions in the formal model.
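The protocol itself is specified formally in the paper; as a rough intuition only, the sketch below (Python, with every name, message field, and the "ratification" test invented for illustration, not taken from the paper) shows how One- or Two-Way Intelligibility could be read off a run-time message sequence between two agents that predict and explain. Note that intelligibility here is a property of the run, not of either agent in isolation, mirroring the abstract's emphasis on properties that emerge at run-time.

```python
# Minimal, illustrative sketch -- NOT the paper's formalism. Two agents
# exchange messages carrying a prediction and an explanation; One- or
# Two-Way Intelligibility is recorded as a run-time property of the exchange.
# All fields, background sets, and the ratification test are assumptions.

from dataclasses import dataclass


@dataclass
class Message:
    sender: str        # "human" or "machine"
    prediction: str    # label the sender proposes
    explanation: str   # reason the sender offers for the prediction


def ratifies(background: set[str], explanation: str) -> bool:
    """Toy stand-in for 'the receiver can make sense of the explanation':
    the receiver ratifies it if it mentions a concept in its background."""
    return any(term in explanation for term in background)


def run_exchange() -> str:
    """One bounded round of the illustrative exchange."""
    machine_msg = Message("machine", "toxic", "the molecule contains a nitro group")
    human_msg = Message("human", "toxic", "nitro substituents are known structural alerts")

    human_background = {"nitro", "aromatic ring"}
    machine_background = {"nitro", "halogen"}

    human_ok = ratifies(human_background, machine_msg.explanation)
    machine_ok = ratifies(machine_background, human_msg.explanation)

    if human_ok and machine_ok:
        return "Two-Way Intelligibility: each agent ratified the other's explanation"
    if human_ok or machine_ok:
        return "One-Way Intelligibility: only one agent ratified"
    return "No intelligibility observed in this bounded exchange"


if __name__ == "__main__":
    print(run_exchange())
```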
Related papers
- Implementation and Application of an Intelligibility Protocol for Interaction with an LLM [0.9187505256430948]
Our interest is in constructing interactive systems involving a human expert interacting with a machine learning engine.
This is of relevance when addressing complex problems arising in areas of science, the environment, medicine and so on.
We present an algorithmic description of a general-purpose implementation, and conduct preliminary experiments on its use in two different areas.
arXiv Detail & Related papers (2024-10-27T21:20:18Z)
- Hello Again! LLM-powered Personalized Agent for Long-term Dialogue [63.65128176360345]
We introduce a model-agnostic framework, the Long-term Dialogue Agent (LD-Agent)
It incorporates three independently tunable modules dedicated to event perception, persona extraction, and response generation.
The effectiveness, generality, and cross-domain capabilities of LD-Agent are empirically demonstrated.
arXiv Detail & Related papers (2024-06-09T21:58:32Z)
- Cognitive Evolutionary Learning to Select Feature Interactions for Recommender Systems [59.117526206317116]
We show that CELL can adaptively evolve into different models for different tasks and data.
Experiments on four real-world datasets demonstrate that CELL significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-29T02:35:23Z) - Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z)
- Unpacking Human-AI interactions: From interaction primitives to a design space [6.778055454461106]
We show how these primitives can be combined into a set of interaction patterns.
The motivation behind this is to provide a compact generalisation of existing practices.
We discuss how this approach can be used towards a design space for Human-AI interactions.
arXiv Detail & Related papers (2024-01-10T12:27:18Z)
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- 'What are you referring to?' Evaluating the Ability of Multi-Modal Dialogue Models to Process Clarificational Exchanges [65.03196674816772]
Referential ambiguities arise in dialogue when a referring expression does not uniquely identify the intended referent for the addressee.
Addressees usually detect such ambiguities immediately and work with the speaker to repair them using meta-communicative Clarification Exchanges (CEs): a Clarification Request (CR) and a response.
Here, we argue that the ability to generate and respond to CRs imposes specific constraints on the architecture and objective functions of multi-modal, visually grounded dialogue models.
arXiv Detail & Related papers (2023-07-28T13:44:33Z)
- An Interleaving Semantics of the Timed Concurrent Language for Argumentation to Model Debates and Dialogue Games [0.0]
We propose a language for modelling concurrent interaction between agents.
Such a language exploits a shared memory used by the agents to communicate and reason about the acceptability of their beliefs.
We show how it can be used to model interactions such as debates and dialogue games taking place between intelligent agents.
arXiv Detail & Related papers (2023-06-13T10:41:28Z)
- One-way Explainability Isn't The Message [2.618757282404254]
We argue that requirements on both human and machine in this context are significantly different.
The design of such human-machine systems should be driven by repeated, two-way intelligibility of information.
We propose operational principles -- we call them Intelligibility Axioms -- to guide the design of a collaborative decision-support system.
arXiv Detail & Related papers (2022-05-05T09:15:53Z)
- Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
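The copula construction summarised above is a standard statistical device; as a rough illustration only, the sketch below assumes a Gaussian copula with hand-picked marginals (not the paper's learned model) to show how per-agent marginals and a shared dependence structure combine into coordinated joint behaviour.

```python
# Generic Gaussian-copula sketch (NOT the paper's learned model): marginals
# describe each agent's local action distribution; the copula captures the
# coordination between agents. All distributions and parameters are assumptions.
import numpy as np
from scipy.stats import norm, expon, beta

rng = np.random.default_rng(0)

# Dependence structure shared by two agents (the "copula" part).
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=5000)
u = norm.cdf(z)                                  # correlated uniforms on [0, 1]^2

# Per-agent marginals (the "local behaviour" part), chosen independently.
agent1_actions = expon(scale=2.0).ppf(u[:, 0])
agent2_actions = beta(a=2.0, b=5.0).ppf(u[:, 1])

# The joint sample is coordinated even though the marginals were fit separately.
print(np.corrcoef(agent1_actions, agent2_actions)[0, 1])
```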
arXiv Detail & Related papers (2021-07-10T03:49:41Z)
- An active inference model of collective intelligence [0.0]
This paper posits a minimal agent-based model that simulates the relationship between local individual-level interaction and collective intelligence.
Results show that stepwise cognitive transitions increase system performance by providing complementary mechanisms for alignment between agents' local and global optima.
arXiv Detail & Related papers (2021-04-02T14:32:01Z)