Dialogue Policies for Confusion Mitigation in Situated HRI
- URL: http://arxiv.org/abs/2208.09367v1
- Date: Fri, 19 Aug 2022 14:28:13 GMT
- Title: Dialogue Policies for Confusion Mitigation in Situated HRI
- Authors: Na Li and Robert Ross
- Abstract summary: People may become confused while interacting with robots due to communicative or even task-centred challenges.
We present a linguistic design of dialogue policies that forms a dialogue framework for alleviating interlocutor confusion.
- Score: 6.997674465889922
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Confusion is a mental state triggered by cognitive disequilibrium that can
occur in many types of task-oriented interaction, including Human-Robot
Interaction (HRI). People may become confused while interacting with robots due
to communicative or even task-centred challenges. To build a smooth and
engaging HRI, it is insufficient for an agent to simply detect confusion;
instead, the system should aim to mitigate the confusion. In light of this, in
this paper we present a linguistic design of dialogue policies that forms a
dialogue framework for alleviating interlocutor confusion. We also outline a
sketch of this framework and discuss the challenges of operationalising it.
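The abstract does not specify the policy mechanics, but a confusion-mitigation dialogue policy can be pictured as a mapping from a detected confusion state to a mitigating dialogue move. A minimal sketch in Python, in which all state names and moves are hypothetical illustrations rather than the paper's actual taxonomy:

```python
from enum import Enum, auto

class ConfusionState(Enum):
    """Coarse confusion levels; hypothetical, the paper's taxonomy may differ."""
    NONE = auto()
    PRODUCTIVE = auto()      # mild confusion that can aid engagement
    UNPRODUCTIVE = auto()    # confusion that stalls the interaction

# Hypothetical mapping from detected state to a mitigation dialogue move.
MITIGATION_POLICY = {
    ConfusionState.NONE: "continue",            # proceed with the task dialogue
    ConfusionState.PRODUCTIVE: "hint",          # offer a scaffolding hint
    ConfusionState.UNPRODUCTIVE: "reformulate", # restate the instruction simply
}

def select_move(state: ConfusionState) -> str:
    """Return the next dialogue move for the detected confusion state."""
    return MITIGATION_POLICY[state]
```

A rule table like this is only the simplest possible policy; the framework the paper sketches would additionally have to decide *when* to intervene and how to phrase each move linguistically.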
Related papers
- Human-Robot Dialogue Annotation for Multi-Modal Common Ground [4.665414514091581]
We describe the development of symbolic representations annotated on human-robot dialogue data to make dimensions of meaning accessible to autonomous systems participating in collaborative, natural language dialogue, and to enable common ground with human partners.
A particular challenge for establishing common ground arises in remote dialogue, where a human and robot are engaged in a joint navigation and exploration task in an unfamiliar environment, but where the robot cannot immediately share high-quality visual information due to communication constraints.
Within this paradigm, we capture the propositional semantics and the illocutionary force of a single utterance within the dialogue through our Dialogue-AMR annotation, an augmentation of Abstract Meaning Representation (AMR).
arXiv Detail & Related papers (2024-11-19T19:33:54Z)
- Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations [58.65755268815283]
Many real dialogues are interactive, meaning an agent's utterances will influence their conversational partner, elicit information, or change their opinion.
We use this fact to rewrite and augment existing suboptimal data, and train via offline reinforcement learning (RL) an agent that outperforms both prompting and learning from unaltered human demonstrations.
Our results in a user study with real humans show that our approach greatly outperforms existing state-of-the-art dialogue agents.
arXiv Detail & Related papers (2024-11-07T21:37:51Z)
- GOMA: Proactive Embodied Cooperative Communication via Goal-Oriented Mental Alignment [72.96949760114575]
We propose a novel cooperative communication framework, Goal-Oriented Mental Alignment (GOMA)
GOMA formulates verbal communication as a planning problem that minimizes the misalignment between parts of agents' mental states that are relevant to the goals.
We evaluate our approach against strong baselines in two challenging environments, Overcooked (a multiplayer game) and VirtualHome (a household simulator).
arXiv Detail & Related papers (2024-03-17T03:52:52Z)
- Are cascade dialogue state tracking models speaking out of turn in spoken dialogues? [1.786898113631979]
This paper proposes a comprehensive analysis of the errors of state-of-the-art systems in complex settings such as Dialogue State Tracking.
Based on spoken MultiWOZ, we identify that errors on non-categorical slots' values are essential to address in order to bridge the gap between spoken and chat-based dialogue systems.
arXiv Detail & Related papers (2023-11-03T08:45:22Z)
- Emergent Communication in Interactive Sketch Question Answering [38.38087954142305]
Vision-based emergent communication (EC) aims to learn to communicate through sketches and demystify the evolution of human communication.
We first introduce a novel Interactive Sketch Question Answering (ISQA) task, where two collaborative players are interacting through sketches to answer a question about an image in a multi-round manner.
Our experimental results, including human evaluation, demonstrate that the multi-round interactive mechanism facilitates targeted and efficient communication between intelligent agents with decent human interpretability.
arXiv Detail & Related papers (2023-10-24T08:00:20Z)
- A Survey on Proactive Dialogue Systems: Problems, Methods, and Prospects [100.75759050696355]
We provide a comprehensive overview of the prominent problems and advanced designs for conversational agents' proactivity in different types of dialogues.
We discuss challenges that meet the real-world application needs but require a greater research focus in the future.
arXiv Detail & Related papers (2023-05-04T11:38:49Z)
- Detecting Interlocutor Confusion in Situated Human-Avatar Dialogue: A Pilot Study [8.452193618860356]
This paper examines a user-avatar dialogue scenario to study the manifestation of confusion and, in the long term, its mitigation.
We present a new definition of confusion that is particularly tailored to the requirements of intelligent conversational system development.
Three pre-trained deep learning models were deployed to estimate base emotion, head pose and eye gaze.
arXiv Detail & Related papers (2022-06-06T08:56:32Z)
- A Simulated Experiment to Explore Robotic Dialogue Strategies for People with Dementia [2.5412519393131974]
We propose a partially observable Markov decision process (POMDP) model for the PwD-robot interaction in the context of repetitive questioning.
We used Q-learning to learn an adaptive conversation strategy towards PwDs with different cognitive capabilities and different engagement levels.
This may be a useful step towards the application of conversational social robots to cope with repetitive questioning in PwDs.
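The abstract names Q-learning over a POMDP but gives no implementation details. A toy tabular Q-learning loop, with hypothetical engagement states, robot dialogue acts, and a simulated reward model standing in for real PwD responses, might look like:

```python
import random

# Hypothetical state and action spaces; the paper's POMDP model is richer,
# and its true states are only partially observable.
STATES = ["engaged", "confused", "disengaged"]   # observed PwD engagement level
ACTIONS = ["answer", "redirect", "prompt"]       # robot dialogue acts

# Toy reward model: each engagement level has one clearly preferred act.
PREFERRED = {"engaged": "answer", "confused": "redirect", "disengaged": "prompt"}

def reward(state: str, action: str) -> float:
    return 1.0 if PREFERRED[state] == action else -0.1

def q_learn(episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = rng.choice(STATES)
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        r = reward(state, action)
        next_state = rng.choice(STATES)  # toy transition model: uniform
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # Standard Q-learning update rule.
        q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
        state = next_state
    return q

q = q_learn()
# Greedy policy: best dialogue act per observed engagement level.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

Epsilon-greedy exploration ensures every act is occasionally tried in every state, so the learned greedy policy recovers the preferred act per state; adapting to individual cognitive capabilities, as the paper proposes, would amount to learning such a table per user profile.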
arXiv Detail & Related papers (2021-04-18T19:35:19Z)
- Disambiguating Affective Stimulus Associations for Robot Perception and Dialogue [67.89143112645556]
We provide a NICO robot with the ability to learn the associations between a perceived auditory stimulus and an emotional expression.
NICO is able to do this for both individual subjects and specific stimuli, with the aid of an emotion-driven dialogue system.
The robot is then able to use this information to determine a subject's enjoyment of perceived auditory stimuli in a real HRI scenario.
arXiv Detail & Related papers (2021-03-05T20:55:48Z)
- Contextualized Attention-based Knowledge Transfer for Spoken Conversational Question Answering [63.72278693825945]
Spoken conversational question answering (SCQA) requires machines to model complex dialogue flow.
We propose CADNet, a novel contextualized attention-based distillation approach.
We conduct extensive experiments on the Spoken-CoQA dataset and demonstrate that our approach achieves remarkable performance.
arXiv Detail & Related papers (2020-10-21T15:17:18Z)
- You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.