InstructTODS: Large Language Models for End-to-End Task-Oriented
Dialogue Systems
- URL: http://arxiv.org/abs/2310.08885v1
- Date: Fri, 13 Oct 2023 06:36:26 GMT
- Authors: Willy Chung, Samuel Cahyawijaya, Bryan Wilie, Holy Lovenia, Pascale
Fung
- Abstract summary: Large language models (LLMs) have been used for diverse tasks in natural language processing (NLP).
We present InstructTODS, a novel framework for zero-shot end-to-end task-oriented dialogue systems.
InstructTODS generates a proxy belief state that seamlessly translates user intentions into dynamic queries.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Large language models (LLMs) have been used for diverse tasks in natural
language processing (NLP), yet remain under-explored for task-oriented dialogue
systems (TODS), especially for end-to-end TODS. We present InstructTODS, a
novel off-the-shelf framework for zero-shot end-to-end task-oriented dialogue
systems that can adapt to diverse domains without fine-tuning. By leveraging
LLMs, InstructTODS generates a proxy belief state that seamlessly translates
user intentions into dynamic queries for efficient interaction with any KB. Our
extensive experiments demonstrate that InstructTODS achieves comparable
performance to fully fine-tuned TODS in guiding dialogues to successful
completion without prior knowledge or task-specific data. Furthermore, a
rigorous human evaluation of end-to-end TODS shows that InstructTODS produces
dialogue responses that notably outperform both the gold responses and the
state-of-the-art TODS in terms of helpfulness, informativeness, and humanness.
Moreover, the effectiveness of LLMs in TODS is further supported by our
comprehensive evaluations on TODS subtasks: dialogue state tracking, intent
classification, and response generation. Code and implementations can be found
at https://github.com/WillyHC22/InstructTODS/
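The proxy-belief-state idea described in the abstract can be sketched as a small pipeline: an LLM summarizes the dialogue history into slot-value pairs, which are then turned into a dynamic filter over a knowledge base. This is a minimal illustration, not the InstructTODS implementation; the `llm` stub, slot names, and toy KB are all assumptions.

```python
import json

# Tiny in-memory knowledge base standing in for "any KB".
RESTAURANT_KB = [
    {"name": "Curry Garden", "food": "indian", "area": "centre", "price": "cheap"},
    {"name": "Golden Wok", "food": "chinese", "area": "north", "price": "moderate"},
]

def llm(prompt: str) -> str:
    """Stub for an off-the-shelf LLM; a real system would call an API here."""
    # Hard-coded response a model might plausibly return for the history below.
    return json.dumps({"food": "indian", "area": "centre"})

def proxy_belief_state(dialogue_history: list[str]) -> dict:
    """Ask the LLM to distill user intent into slot-value pairs (the proxy belief state)."""
    prompt = (
        "Extract the user's constraints as JSON slot-value pairs.\n"
        + "\n".join(dialogue_history)
    )
    return json.loads(llm(prompt))

def query_kb(kb: list[dict], belief: dict) -> list[dict]:
    """Translate the proxy belief state into a dynamic filter over the KB."""
    return [row for row in kb if all(row.get(k) == v for k, v in belief.items())]

history = ["User: I'd like a cheap Indian place in the centre."]
belief = proxy_belief_state(history)
matches = query_kb(RESTAURANT_KB, belief)
```

Because the belief state is just structured data produced zero-shot, the same pipeline can point at a different KB schema without any fine-tuning, which is the adaptability the abstract claims.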
Related papers
- HierTOD: A Task-Oriented Dialogue System Driven by Hierarchical Goals [4.630232280155836]
Task-Oriented Dialogue (TOD) systems assist users in completing tasks through natural language interactions.
In this work, we introduce HierTOD, an enterprise TOD system that is driven by hierarchical goals and can support composite goals.
Our system implementation unifies two TOD paradigms: slot-filling for information collection and step-by-step guidance for task execution.
arXiv Detail & Related papers (2024-11-11T17:28:19Z)
- Training Zero-Shot Generalizable End-to-End Task-Oriented Dialog System Without Turn-level Dialog Annotations [2.757798192967912]
This work employs multi-task instruction fine-tuning to create more efficient and scalable task-oriented dialogue systems.
Our approach outperforms both state-of-the-art models trained on annotated data and billion-scale parameter off-the-shelf ChatGPT models.
arXiv Detail & Related papers (2024-07-21T04:52:38Z)
- A Systematic Study of Performance Disparities in Multilingual Task-Oriented Dialogue Systems [68.76102493999134]
We take stock of and empirically analyse task performance disparities that exist between multilingual task-oriented dialogue systems.
We demonstrate the existence of adaptation and intrinsic biases in current ToD systems.
Our analyses offer practical tips on how to approach ToD data collection and system development for new languages.
arXiv Detail & Related papers (2023-10-19T16:41:44Z)
- Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models [52.24756457516834]
We propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of Large Language Models (LLMs).
This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks.
Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts.
arXiv Detail & Related papers (2023-09-22T15:41:34Z)
- DiactTOD: Learning Generalizable Latent Dialogue Acts for Controllable Task-Oriented Dialogue Systems [15.087619144902776]
We present a novel end-to-end latent dialogue act model (DiactTOD) that represents dialogue acts in a latent space.
When pre-trained on a large corpus, DiactTOD is able to predict and control dialogue acts to generate controllable responses.
arXiv Detail & Related papers (2023-08-01T23:29:16Z)
- Knowledge-Retrieval Task-Oriented Dialog Systems with Semi-Supervision [22.249113574918034]
Most existing task-oriented dialog (TOD) systems track dialog states in terms of slots and values and use them to query a database to get relevant knowledge to generate responses.
In real-life applications, user utterances are noisier, making it harder to accurately track dialog states and retrieve relevant knowledge.
We propose a retrieval-based method to enhance knowledge selection in TOD systems, which outperforms the traditional database query method on real-life dialogs.
arXiv Detail & Related papers (2023-05-22T16:29:20Z)
- Enhancing Task Bot Engagement with Synthesized Open-Domain Dialog [89.35658776144638]
It is essential to build a system that can handle both task-oriented dialogs (TODs) and open-domain dialogs (ODDs) and access different knowledge sources.
We propose a framework for automatically generating dialogues that combine knowledge-grounded ODDs and TODs in various settings.
We introduce a unified model PivotBot that is capable of appropriately adopting TOD and ODD modes and accessing different knowledge sources.
arXiv Detail & Related papers (2022-12-20T05:51:47Z)
- Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System [26.837972034630003]
PPTOD is a unified plug-and-play model for task-oriented dialogue.
We extensively test our model on three benchmark TOD tasks, including end-to-end dialogue modelling, dialogue state tracking, and intent classification.
arXiv Detail & Related papers (2021-09-29T22:02:18Z)
- CINS: Comprehensive Instruction for Few-shot Learning in Task-oriented Dialog Systems [56.302581679816775]
This paper proposes Comprehensive Instruction (CINS) that exploits PLMs with task-specific instructions.
We design a schema (definition, constraint, prompt) of instructions and their customized realizations for three important downstream tasks in ToD.
Experiments are conducted on these ToD tasks in realistic few-shot learning scenarios with small validation data.
arXiv Detail & Related papers (2021-09-10T03:23:06Z)
- TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue [113.45485470103762]
In this work, we unify nine human-human and multi-turn task-oriented dialogue datasets for language modeling.
To better model dialogue behavior during pre-training, we incorporate user and system tokens into the masked language modeling.
arXiv Detail & Related papers (2020-04-15T04:09:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.