Investigating Human Response, Behaviour, and Preference in Joint-Task Interaction
- URL: http://arxiv.org/abs/2011.14016v1
- Date: Fri, 27 Nov 2020 22:16:59 GMT
- Title: Investigating Human Response, Behaviour, and Preference in Joint-Task Interaction
- Authors: Alan Lindsay, Bart Craenen, Sara Dalzel-Job, Robin L. Hill, Ronald P. A. Petrick
- Abstract summary: We have designed an experiment in order to examine human behaviour and response as they interact with Explainable Planning (XAIP) agents.
We also present the results from an empirical analysis where we examined the behaviour of the two agents for simulated users.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Human interaction relies on a wide range of signals, including non-verbal
cues. In order to develop effective Explainable Planning (XAIP) agents it is
important that we understand the range and utility of these communication
channels. Our starting point is existing results from joint task interaction
and their study in cognitive science. Our intention is that these lessons can
inform the design of interaction agents -- including those using planning
techniques -- whose behaviour is conditioned on the user's response, including
affective measures of the user (i.e., explicitly incorporating the user's
affective state within the planning model). We have identified several concepts
at the intersection of plan-based agent behaviour and joint task interaction
and have used these to design two agents: one reactive and the other partially
predictive. We have designed an experiment in order to examine human behaviour
and response as they interact with these agents. In this paper we present the
designed study and the key questions that are being investigated. We also
present the results from an empirical analysis where we examined the behaviour
of the two agents for simulated users.
Related papers
- Implementation and Application of an Intelligibility Protocol for Interaction with an LLM [0.9187505256430948]
Our interest is in constructing interactive systems involving a human-expert interacting with a machine learning engine.
This is of relevance when addressing complex problems arising in areas of science, the environment, medicine and so on.
We present an algorithmic description of a general-purpose implementation, and conduct preliminary experiments on its use in two different areas.
arXiv Detail & Related papers (2024-10-27T21:20:18Z)
- AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z)
- Disentangled Interaction Representation for One-Stage Human-Object Interaction Detection [70.96299509159981]
Human-Object Interaction (HOI) detection is a core task for human-centric image understanding.
Recent one-stage methods adopt a transformer decoder to collect image-wide cues that are useful for interaction prediction.
Traditional two-stage methods benefit significantly from their ability to compose interaction features in a disentangled and explainable manner.
arXiv Detail & Related papers (2023-12-04T08:02:59Z)
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals for inferring the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems [112.76941157194544]
We propose AgentCF for simulating user-item interactions in recommender systems through agent-based collaborative filtering.
We creatively consider not only users but also items as agents, and develop a collaborative learning approach that optimizes both kinds of agents together.
Overall, the optimized agents exhibit diverse interaction behaviors within our framework, including user-item, user-user, item-item, and collective interactions.
arXiv Detail & Related papers (2023-10-13T16:37:14Z)
- Automatic Context-Driven Inference of Engagement in HMI: A Survey [6.479224589451863]
This paper presents a survey on engagement inference for human-machine interaction.
It entails interdisciplinary definition, engagement components and factors, publicly available datasets, ground truth assessment, and most commonly used features and methods.
It serves as a guide for the development of future human-machine interaction interfaces with reliable context-aware engagement inference capability.
arXiv Detail & Related papers (2022-09-30T10:46:13Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue is speaking activity, the most common computational method is the support vector machine, and the most studied interaction setting is a meeting of 3-4 persons sensed with microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- CogIntAc: Modeling the Relationships between Intention, Emotion and Action in Interactive Process from Cognitive Perspective [15.797390372732973]
We propose a novel cognitive framework of individual interaction.
The core of the framework is that individuals achieve interaction through external action driven by their inner intention.
arXiv Detail & Related papers (2022-05-07T03:54:51Z)
- Assessing Human Interaction in Virtual Reality With Continually Learning Prediction Agents Based on Reinforcement Learning Algorithms: A Pilot Study [6.076137037890219]
We investigate how the interaction between a human and a continually learning prediction agent develops as the agent develops competency.
We develop a virtual reality environment and a time-based prediction task wherein learned predictions from a reinforcement learning (RL) algorithm augment human predictions.
Our findings suggest that human trust of the system may be influenced by early interactions with the agent, and that trust in turn affects strategic behaviour.
arXiv Detail & Related papers (2021-12-14T22:46:44Z)
- Learning Proxemic Behavior Using Reinforcement Learning with Cognitive Agents [1.0635883951034306]
Proxemics is a branch of non-verbal communication concerned with studying the spatial behavior of people and animals.
We study how agents behave in environments based on proxemic behavior.
arXiv Detail & Related papers (2021-08-08T20:45:34Z)
- SPA: Verbal Interactions between Agents and Avatars in Shared Virtual Environments using Propositional Planning [61.335252950832256]
Sense-Plan-Ask, or SPA, generates plausible verbal interactions between virtual human-like agents and user avatars in shared virtual environments.
We find that our algorithm creates a small runtime cost and enables agents to complete their goals more effectively than agents without the ability to leverage natural-language communication.
arXiv Detail & Related papers (2020-02-08T23:15:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.