A scalable framework for learning from implicit user feedback to improve
natural language understanding in large-scale conversational AI systems
- URL: http://arxiv.org/abs/2010.12251v2
- Date: Fri, 10 Sep 2021 06:42:20 GMT
- Authors: Sunghyun Park, Han Li, Ameen Patel, Sidharth Mudgal, Sungjin Lee,
Young-Bum Kim, Spyros Matsoukas, Ruhi Sarikaya
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Natural Language Understanding (NLU) is an established component within a
conversational AI or digital assistant system, and it is responsible for
producing semantic understanding of a user request. We propose a scalable and
automatic approach for improving NLU in a large-scale conversational AI system
by leveraging implicit user feedback, with the insight that user interaction
data and dialog context have rich information embedded from which user
satisfaction and intention can be inferred. In particular, we propose a general
domain-agnostic framework for curating new supervision data for improving NLU
from live production traffic. With an extensive set of experiments, we show the
results of applying the framework and improving NLU for a large-scale
production system and show its impact across 10 domains.
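The abstract describes curating new NLU supervision data from implicit feedback in live traffic. A minimal sketch of that idea is below; the specific feedback signals (rephrase, barge-in, task completion), the heuristic, and the labeling scheme are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch: infer user satisfaction from implicit feedback signals
# and turn live traffic into supervision data for NLU. Signal names and the
# labeling heuristic are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Turn:
    utterance: str
    predicted_intent: str
    rephrased_next_turn: bool   # user immediately rephrased the request
    interrupted_response: bool  # user barged in on the system response
    task_completed: bool        # the downstream action succeeded

def infer_satisfaction(turn: Turn) -> bool:
    """Heuristic: rephrase or barge-in suggests dissatisfaction;
    otherwise, task completion suggests satisfaction."""
    if turn.rephrased_next_turn or turn.interrupted_response:
        return False
    return turn.task_completed

def curate_supervision(traffic: list) -> list:
    """Map each live turn to an (utterance, intent, label) triple:
    label 1 confirms the predicted intent, 0 flags it for correction."""
    data = []
    for turn in traffic:
        label = 1 if infer_satisfaction(turn) else 0
        data.append((turn.utterance, turn.predicted_intent, label))
    return data

traffic = [
    Turn("play jazz", "PlayMusic", False, False, True),
    Turn("set a timer", "PlayMusic", True, False, False),
]
print(curate_supervision(traffic))
# → [('play jazz', 'PlayMusic', 1), ('set a timer', 'PlayMusic', 0)]
```

In a production setting, triples flagged with label 0 would be candidates for relabeling or retraining, which is the general shape of the domain-agnostic curation the abstract describes.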
Related papers
- Enhancing Smart Environments with Context-Aware Chatbots using Large Language Models [1.6672326114795073]
This work presents a novel architecture for context-aware interactions within smart environments, leveraging Large Language Models (LLMs) to enhance user experiences.
Our system integrates user location data obtained through UWB tags and sensor-equipped smart homes with real-time human activity recognition (HAR) to provide a comprehensive understanding of user context.
The results highlight the significant benefits of integrating LLM with real-time activity and location data to deliver personalised and contextually relevant user experiences.
arXiv Detail & Related papers (2025-02-20T11:46:51Z)
- Enhancing Discoverability in Enterprise Conversational Systems with Proactive Question Suggestions [5.356008176627551]
This paper proposes a framework to enhance question suggestions in conversational enterprise AI systems.
Our approach combines periodic user intent analysis at the population level with chat session-based question generation.
We evaluate the framework using real-world data from the AI Assistant for Adobe Experience Platform.
arXiv Detail & Related papers (2024-12-14T19:04:16Z)
- Unveiling User Preferences: A Knowledge Graph and LLM-Driven Approach for Conversational Recommendation [55.5687800992432]
We propose a plug-and-play framework that synergizes Large Language Models (LLMs) and Knowledge Graphs (KGs) to unveil user preferences.
This enables the LLM to transform KG entities into concise natural language descriptions, allowing it to comprehend domain-specific knowledge.
arXiv Detail & Related papers (2024-11-16T11:47:21Z)
- Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models [49.74265453289855]
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces.
This paper examines the affordances of interactive feedback features in ChatGPT's interface, analysing how they shape user input and participation in iteration.
arXiv Detail & Related papers (2024-08-27T13:50:37Z)
- Interpretable User Satisfaction Estimation for Conversational Systems with Large Language Models [35.95405294377247]
Existing approaches based on featurized ML models or text embeddings fall short in extracting generalizable patterns.
We show that LLMs can extract interpretable signals of user satisfaction from their natural language utterances more effectively than embedding-based approaches.
arXiv Detail & Related papers (2024-03-19T02:57:07Z)
- Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph [0.33554367023486936]
We introduce a comprehensive tool that provides an in-depth review of over 150 Large Language Models (LLMs).
Building on this foundation, we propose a novel functional architecture that seamlessly integrates the structured dynamics of Knowledge Graphs with the linguistic capabilities of LLMs.
Our architecture adeptly blends linguistic sophistication with factual rigour and further strengthens data security through Role-Based Access Control.
arXiv Detail & Related papers (2023-08-13T22:47:51Z)
- Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z)
- Leveraging Large Language Models in Conversational Recommender Systems [9.751217336860924]
A Conversational Recommender System (CRS) offers increased transparency and control to users by enabling them to engage with the system through a real-time multi-turn dialogue.
Large Language Models (LLMs) have exhibited an unprecedented ability to converse naturally and incorporate world knowledge and common-sense reasoning into language understanding.
arXiv Detail & Related papers (2023-05-13T16:40:07Z)
- NLU++: A Multi-Label, Slot-Rich, Generalisable Dataset for Natural Language Understanding in Task-Oriented Dialogue [53.54788957697192]
NLU++ is a novel dataset for natural language understanding (NLU) in task-oriented dialogue (ToD) systems.
NLU++ is divided into two domains (BANKING and HOTELS) and brings several crucial improvements over current commonly used NLU datasets.
arXiv Detail & Related papers (2022-04-27T16:00:23Z)
- An Adversarial Learning based Multi-Step Spoken Language Understanding System through Human-Computer Interaction [70.25183730482915]
We introduce a novel multi-step spoken language understanding system based on adversarial learning.
We demonstrate that the new system can improve parsing performance by at least 2.5% in terms of F1.
arXiv Detail & Related papers (2021-06-06T03:46:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.