A Role-Selected Sharing Network for Joint Machine-Human Chatting Handoff
and Service Satisfaction Analysis
- URL: http://arxiv.org/abs/2109.08412v1
- Date: Fri, 17 Sep 2021 08:39:45 GMT
- Authors: Jiawei Liu, Kaisong Song, Yangyang Kang, Guoxiu He, Zhuoren Jiang,
Changlong Sun, Wei Lu, Xiaozhong Liu
- Abstract summary: We propose a novel model, Role-Selected Sharing Network (RSSN), which integrates dialogue satisfaction estimation and handoff prediction in one multi-task learning framework.
Unlike prior efforts in dialog mining, by utilizing local user satisfaction as a bridge, the global satisfaction detector and handoff predictor can effectively exchange critical information.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Chatbots are thriving in a growing range of domains; however, unexpected
discourse complexity and training-data sparsity can undermine their reliability
and erode user trust. Recently, Machine-Human Chatting Handoff (MHCH), which
predicts chatbot failure and enables human-algorithm collaboration to enhance
chatbot quality, has attracted increasing attention from industry and academia.
In this study, we propose a novel model, Role-Selected Sharing Network (RSSN),
which integrates both dialogue satisfaction estimation and handoff prediction
in one multi-task learning framework. Unlike prior efforts in dialog mining, by
utilizing local user satisfaction as a bridge, the global satisfaction detector
and handoff predictor can effectively exchange critical information.
Specifically, we decouple the relation and interaction between the two tasks by
role information after the shared encoder. Extensive experiments on two public
datasets demonstrate the effectiveness of our model.
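The role-selected multi-task idea in the abstract can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the encoder, head shapes, role gating, and the mean-pooling of local user satisfaction into a global score are all assumptions made for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(utterances, W):
    """Shared utterance encoder: a single linear projection plus tanh,
    standing in for the paper's shared encoder."""
    return np.tanh(utterances @ W)

def rssn_forward(utterances, roles, params):
    """Hypothetical sketch of role-selected sharing:
    after the shared encoder, utterance representations are routed to task
    heads according to the speaker role (0 = user, 1 = agent/chatbot).
    Handoff is scored per utterance; local user satisfaction is pooled into
    a global satisfaction score, acting as the 'bridge' between tasks."""
    h = shared_encoder(utterances, params["W_enc"])       # (T, d_hid)

    # Handoff head: one probability per utterance.
    handoff_probs = 1 / (1 + np.exp(-(h @ params["w_handoff"])))  # (T,)

    # Local satisfaction head: scored everywhere, but only user turns count.
    local_sat = 1 / (1 + np.exp(-(h @ params["w_sat"])))          # (T,)
    user_mask = (roles == 0)

    # Global satisfaction: mean of local user satisfaction (the bridge).
    global_sat = local_sat[user_mask].mean() if user_mask.any() else 0.5
    return handoff_probs, local_sat, global_sat

# Toy dialogue: 6 turns of 8-dim utterance embeddings, alternating roles.
d_in, d_hid, T = 8, 4, 6
params = {
    "W_enc": rng.standard_normal((d_in, d_hid)) * 0.1,
    "w_handoff": rng.standard_normal(d_hid),
    "w_sat": rng.standard_normal(d_hid),
}
utterances = rng.standard_normal((T, d_in))
roles = np.array([0, 1, 0, 1, 0, 1])   # 0 = user, 1 = agent

handoff, local, glob = rssn_forward(utterances, roles, params)
```

In the actual model the two heads would be trained jointly with separate losses, so gradients from handoff prediction and satisfaction estimation both shape the shared encoder; the sketch only shows the forward routing.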
Related papers
- Unveiling the Impact of Multi-Modal Interactions on User Engagement: A Comprehensive Evaluation in AI-driven Conversations (2024-06-21)
  This paper explores the impact of multi-modal interactions, which incorporate images and audio alongside text, on user engagement.
  Our findings reveal a significant enhancement in user engagement with multi-modal interactions compared to text-only dialogues.
  Results suggest that multi-modal interactions optimize cognitive processing and facilitate richer information comprehension.
- A Multi-Modal Explainability Approach for Human-Aware Robots in Multi-Party Conversation (2024-05-20)
  We present an addressee estimation model with improved performance in comparison with the previous SOTA.
  We also propose several ways to incorporate explainability and transparency into this architecture.
- AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents (2024-01-12)
  Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
  However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
  We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot (2023-11-09)
  Addressee estimation is a skill essential for social robots to interact smoothly with humans.
  Inspired by human perceptual skills, a deep-learning model for addressee estimation is designed, trained, and deployed on an iCub robot.
  The study presents the procedure of this implementation and the performance of the model deployed in real-time human-robot interaction.
- Qualitative Prediction of Multi-Agent Spatial Interactions (2023-06-30)
  We present and benchmark three new approaches to model and predict multi-agent interactions in dense scenes.
  The proposed solutions take into account static and dynamic context to predict individual interactions.
  They exploit an input-attention and a temporal-attention mechanism, and are tested on medium- and long-term time horizons.
- Automatic Context-Driven Inference of Engagement in HMI: A Survey (2022-09-30)
  This paper presents a survey on engagement inference for human-machine interaction.
  It covers interdisciplinary definitions, engagement components and factors, publicly available datasets, ground-truth assessment, and the most commonly used features and methods.
  It serves as a guide for the development of future human-machine interaction interfaces with reliable context-aware engagement-inference capability.
- Re-entry Prediction for Online Conversations via Self-Supervised Learning (2021-09-05)
  We propose three auxiliary tasks, namely Spread Pattern, Repeated Target User, and Turn Authorship, as self-supervised signals for re-entry prediction.
  Experimental results on two datasets newly collected from Twitter and Reddit show that our method outperforms the previous state of the art.
- Human Trajectory Forecasting in Crowds: A Deep Learning Perspective (2020-07-07)
  We present an in-depth analysis of existing deep learning-based methods for modelling social interactions.
  We propose two knowledge-based, data-driven methods to effectively capture these social interactions.
  We develop TrajNet++, a large-scale interaction-centric benchmark and a significant yet missing component in the field of human trajectory forecasting.
- Mining Implicit Relevance Feedback from User Behavior for Web Question Answering (2020-06-13)
  We make the first study to explore the correlation between user behavior and passage relevance.
  Our approach significantly improves the accuracy of passage ranking without extra human-labeled data.
  In practice, this work has proved effective in substantially reducing the human labeling cost for the QA service in a global commercial search engine.
- You Impress Me: Dialogue Generation via Mutual Persona Perception (2020-04-11)
  Research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
  Motivated by this, we propose P2 Bot, a transmitter-receiver based framework aiming to explicitly model understanding.
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.