A system for Human-AI collaboration for Online Customer Support
- URL: http://arxiv.org/abs/2301.12158v1
- Date: Sat, 28 Jan 2023 11:07:23 GMT
- Title: A system for Human-AI collaboration for Online Customer Support
- Authors: Debayan Banerjee, Mathis Poser, Christina Wiethof, Varun Shankar
Subramanian, Richard Paucar, Eva A. C. Bittner, Chris Biemann
- Abstract summary: We present a system where a human support agent collaborates in real-time with an AI agent to satisfactorily answer customer queries.
We describe the user interaction elements of the solution, along with the machine learning techniques involved in the AI agent.
- Score: 16.22226476879187
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI-enabled chatbots have recently been put to use to answer customer service
queries; however, users commonly report that bots lack a personal touch and are
often unable to understand the real intent of the user's question. To this end,
it is desirable to have human involvement in the customer service process. In
this work, we present a system where a human
support agent collaborates in real-time with an AI agent to satisfactorily
answer customer queries. We describe the user interaction elements of the
solution, along with the machine learning techniques involved in the AI agent.
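As an illustration of the kind of real-time collaboration the abstract describes, the following is a minimal sketch, not the paper's actual implementation: it assumes a draft-and-review flow in which the AI agent proposes an answer with a confidence score and the human support agent accepts or edits it before it reaches the customer. All names (Draft, ai_agent_suggest, human_review) and the confidence threshold are hypothetical.

```python
# Hypothetical sketch of a human-in-the-loop reply flow; the paper's actual
# interaction elements and ML techniques may differ.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Draft:
    query: str
    suggestion: str
    confidence: float  # assumed score produced by the AI agent's model


def ai_agent_suggest(query: str) -> Draft:
    """Stand-in for the AI agent; a real system would call an ML model here."""
    return Draft(query=query,
                 suggestion="Please try resetting your router.",
                 confidence=0.62)


def human_review(draft: Draft, edit: Callable[[str], str]) -> str:
    """The human support agent reviews the AI draft and may rewrite it."""
    if draft.confidence >= 0.8:   # assumed threshold for auto-acceptance
        return draft.suggestion   # accept the AI draft as-is
    return edit(draft.suggestion)  # low confidence: the human edits the draft


if __name__ == "__main__":
    draft = ai_agent_suggest("My internet keeps dropping every evening.")
    reply = human_review(
        draft,
        edit=lambda s: s + " If that fails, we will schedule a technician visit.",
    )
    print(reply)
```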
Related papers
- YETI (YET to Intervene) Proactive Interventions by Multimodal AI Agents in Augmented Reality Tasks [16.443149180969776]
Augmented Reality (AR) head-worn devices can uniquely improve the user experience of solving procedural day-to-day tasks.
Such AR capabilities let AI agents see and hear the actions users take, mirroring the multimodal capabilities of human users.
Proactive AI agents, in turn, can help the human user detect and correct mistakes in agent-observed tasks.
arXiv Detail & Related papers (2025-01-16T08:06:02Z)
- Enhancing Discoverability in Enterprise Conversational Systems with Proactive Question Suggestions [5.356008176627551]
This paper proposes a framework to enhance question suggestions in conversational enterprise AI systems.
Our approach combines periodic user intent analysis at the population level with chat session-based question generation.
We evaluate the framework using real-world data from the AI Assistant for Adobe Experience Platform.
arXiv Detail & Related papers (2024-12-14T19:04:16Z)
- "Ask Me Anything": How Comcast Uses LLMs to Assist Agents in Real Time [9.497432249460385]
We introduce "Ask Me Anything" (AMA) as an add-on feature to an agent-facing customer service interface.
AMA allows agents to ask questions to a large language model (LLM) on demand, as they are handling customer conversations.
We find that agents using AMA, compared with a traditional search experience, spend approximately 10% less time per conversation that contains a search.
arXiv Detail & Related papers (2024-05-01T18:31:36Z)
- Beyond Static Evaluation: A Dynamic Approach to Assessing AI Assistants' API Invocation Capabilities [48.922660354417204]
We propose Automated Dynamic Evaluation (AutoDE) to assess an assistant's API call capability without human involvement.
In our framework, we endeavor to closely mirror genuine human conversation patterns in human-machine interactions.
arXiv Detail & Related papers (2024-03-17T07:34:12Z)
- ChoiceMates: Supporting Unfamiliar Online Decision-Making with Multi-Agent Conversational Interactions [53.07022684941739]
We present ChoiceMates, an interactive multi-agent system designed to address these needs.
Unlike existing multi-agent systems in which agents automate tasks, ChoiceMates has the user orchestrate agents to assist their decision-making process.
arXiv Detail & Related papers (2023-10-02T16:49:39Z)
- Improving Grounded Language Understanding in a Collaborative Environment by Interacting with Agents Through Help Feedback [42.19685958922537]
We argue that human-AI collaboration should be interactive, with humans monitoring the work of AI agents and providing feedback that the agent can understand and utilize.
In this work, we explore these directions using the challenging task defined by the IGLU competition, an interactive grounded language understanding task in a Minecraft-like world.
arXiv Detail & Related papers (2023-04-21T05:37:59Z)
- Intent Recognition in Conversational Recommender Systems [0.0]
We introduce a pipeline to contextualize the input utterances in conversations.
We then take the next step of leveraging reverse feature engineering to link the contextualized input and the learning model in support of intent recognition.
arXiv Detail & Related papers (2022-12-06T11:02:42Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning-based and learning-based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and the user's perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.