Zero-Shot Prompting for Implicit Intent Prediction and Recommendation with Commonsense Reasoning
- URL: http://arxiv.org/abs/2210.05901v2
- Date: Tue, 6 Jun 2023 01:42:47 GMT
- Title: Zero-Shot Prompting for Implicit Intent Prediction and Recommendation with Commonsense Reasoning
- Authors: Hui-Chi Kuo, Yun-Nung Chen
- Abstract summary: This paper proposes a framework for multi-domain dialogue systems that automatically infers implicit intents from user utterances.
The proposed framework is shown to effectively realize implicit intents and recommend associated bots in a zero-shot manner.
- Score: 28.441725610692714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent virtual assistants are currently designed to perform only the tasks or
services explicitly mentioned by users, so multiple related domains or tasks must be handled
one by one through a long conversation with many explicit intents. Human assistants, in
contrast, can reason about (multiple) implicit intents behind user utterances via commonsense
knowledge, reducing complex interactions and improving practicality. This paper therefore
proposes a framework for multi-domain dialogue systems that automatically infers implicit
intents from user utterances and then performs zero-shot prompting with a large pre-trained
language model to trigger suitable single task-oriented bots. The proposed framework is shown
to effectively realize implicit intents and recommend associated bots in a zero-shot manner.
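To make the two-stage pipeline concrete, the sketch below shows one way such a framework could be wired together. It is a minimal sketch, not the paper's implementation: the prompt template, the candidate bot catalogue, and the specific Hugging Face pipelines and models (a text-generation model to verbalize implicit intents, an NLI-based zero-shot classifier to rank bots) are illustrative assumptions.

```python
# A minimal sketch of zero-shot implicit-intent prediction and bot recommendation.
# Assumptions (not from the paper): the prompt template, the candidate bot list,
# and the specific Hugging Face pipelines/models used below.

from transformers import pipeline

# Hypothetical catalogue of single task-oriented bots with short descriptions.
CANDIDATE_BOTS = {
    "FlightBot": "books flights and checks flight status",
    "HotelBot": "reserves hotel rooms",
    "RestaurantBot": "finds and books restaurants",
    "WeatherBot": "reports weather forecasts",
}

# Step 1: prompt a pre-trained LM to verbalize implicit intents behind the utterance.
generator = pipeline("text-generation", model="gpt2")

def infer_implicit_intents(utterance: str) -> str:
    prompt = (
        f'User: "{utterance}"\n'
        "Based on common sense, the user probably also wants to"
    )
    output = generator(prompt, max_new_tokens=30, num_return_sequences=1)
    completion = output[0]["generated_text"]
    return completion[len(prompt):].strip()  # keep only the continuation

# Step 2: rank candidate bots against the utterance plus the inferred intents,
# using an NLI-based zero-shot classifier as a stand-in recommender.
ranker = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def recommend_bots(utterance: str, top_k: int = 2):
    expanded_query = f"{utterance} {infer_implicit_intents(utterance)}"
    labels = [f"{name}: {desc}" for name, desc in CANDIDATE_BOTS.items()]
    result = ranker(expanded_query, candidate_labels=labels)
    return list(zip(result["labels"], result["scores"]))[:top_k]

if __name__ == "__main__":
    print(recommend_bots("I want to visit Tokyo next month."))
```

In practice a stronger instruction-tuned language model would replace gpt2 and the bot descriptions would come from the deployed task-oriented bots themselves; the sketch only illustrates the flow of commonsense intent expansion followed by zero-shot bot recommendation.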
Related papers
- Empowering Whisper as a Joint Multi-Talker and Target-Talker Speech Recognition System [73.34663391495616]
We propose a pioneering approach to tackle joint multi-talker and target-talker speech recognition tasks.
Specifically, we freeze Whisper and plug a Sidecar separator into its encoder to separate mixed embedding for multiple talkers.
We deliver acceptable zero-shot performance on multi-talker ASR on the AishellMix Mandarin dataset.
arXiv Detail & Related papers (2024-07-13T09:28:24Z)
- Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents [110.25679611755962]
Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions.
We introduce Intention-in-Interaction (IN3), a novel benchmark designed to inspect users' implicit intentions through explicit queries.
We empirically train Mistral-Interact, a powerful model that proactively assesses task vagueness, inquires user intentions, and refines them into actionable goals.
arXiv Detail & Related papers (2024-02-14T14:36:30Z)
- IntentDial: An Intent Graph based Multi-Turn Dialogue System with Reasoning Path Visualization [24.888848712778664]
We present a novel graph-based multi-turn dialogue system called IntentDial.
It identifies a user's intent by extracting intent elements and a standard query from an intent graph using reinforcement learning.
arXiv Detail & Related papers (2023-10-18T09:21:37Z)
- Towards Large-Scale Interpretable Knowledge Graph Reasoning for Dialogue Systems [109.16553492049441]
We propose a novel method to incorporate the knowledge reasoning capability into dialogue systems in a more scalable and generalizable manner.
To the best of our knowledge, this is the first work to have transformer models generate responses by reasoning over differentiable knowledge graphs.
arXiv Detail & Related papers (2022-03-20T17:51:49Z)
- A Persistent Spatial Semantic Representation for High-level Natural Language Instruction Execution [54.385344986265714]
We propose a persistent spatial semantic representation method to bridge the gap between language and robot actions.
We evaluate our approach on the ALFRED benchmark and achieve state-of-the-art results, despite completely avoiding the commonly used step-by-step instructions.
arXiv Detail & Related papers (2021-07-12T17:47:19Z)
- Continuous representations of intents for dialogue systems [10.031004070657122]
Until recently, the focus has been on detecting a fixed, discrete number of seen intents.
Recent years have seen some work done on unseen intent detection in the context of zero-shot learning.
This paper proposes a novel model where intents are continuous points placed in a specialist Intent Space.
arXiv Detail & Related papers (2021-05-08T15:08:20Z)
- Resolving Intent Ambiguities by Retrieving Discriminative Clarifying Questions [6.905422991603534]
We propose a novel method of generating discriminative questions using a simple rule-based system.
Our approach aims at discriminating between two intents but can easily be extended to clarification over multiple intents.
arXiv Detail & Related papers (2020-08-17T18:11:13Z)
- Intent Mining from past conversations for conversational agent [1.9754522186574608]
Bots are increasingly being deployed to provide round-the-clock support and to increase customer engagement.
Many of the commercial bot building frameworks follow a standard approach that requires one to build and train an intent model to recognize a user input.
We introduce a novel density-based clustering algorithm, ITERDB-LabelSCAN, for unbalanced data clustering.
arXiv Detail & Related papers (2020-05-22T05:29:13Z)
- Learning to Rank Intents in Voice Assistants [2.102846336724103]
We propose a novel Energy-based model for the intent ranking task.
We show that our approach outperforms existing state-of-the-art methods, reducing the error rate by 3.8%.
We also evaluate the robustness of our algorithm on the intent ranking task and show that it improves robustness by 33.3%.
arXiv Detail & Related papers (2020-04-30T21:51:26Z)
- Inferring Temporal Compositions of Actions Using Probabilistic Automata [61.09176771931052]
We propose to express temporal compositions of actions as semantic regular expressions and derive an inference framework using probabilistic automata.
Our approach is different from existing works that either predict long-range complex activities as unordered sets of atomic actions, or retrieve videos using natural language sentences.
arXiv Detail & Related papers (2020-04-28T00:15:26Z)
- Efficient Intent Detection with Dual Sentence Encoders [53.16532285820849]
We introduce intent detection methods backed by pretrained dual sentence encoders such as USE and ConveRT.
We demonstrate the usefulness and wide applicability of the proposed intent detectors, showing that they outperform intent detectors based on fine-tuning the full BERT-Large model.
We release our code, as well as a new challenging single-domain intent detection dataset.
arXiv Detail & Related papers (2020-03-10T15:33:54Z)
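For the dual-sentence-encoder approach above, a minimal sketch of the general recipe (a frozen pretrained sentence encoder feeding a lightweight classifier) follows; the encoder checkpoint, the toy training utterances, and the scikit-learn classifier are illustrative assumptions rather than the paper's USE/ConveRT setup.

```python
# A minimal sketch of intent detection on top of a pretrained sentence encoder.
# Assumptions: the encoder checkpoint, toy data, and classifier below are
# stand-ins for the USE/ConveRT + lightweight classifier setup in the paper.

from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # generic sentence encoder

# Tiny illustrative training set (hypothetical utterances and intent labels).
train_utterances = [
    "book me a table for two tonight",
    "what's the weather like tomorrow",
    "I need a flight to Paris",
]
train_intents = ["restaurant_booking", "weather_query", "flight_booking"]

# Encode once with the frozen encoder, then train only a light classifier on top.
clf = LogisticRegression(max_iter=1000)
clf.fit(encoder.encode(train_utterances), train_intents)

print(clf.predict(encoder.encode(["is it going to rain this weekend"])))
```

Training only the classifier head is what keeps this setup cheap compared with fine-tuning a full BERT-Large model.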