Dialog Policy Learning for Joint Clarification and Active Learning Queries
- URL: http://arxiv.org/abs/2006.05456v3
- Date: Mon, 14 Dec 2020 03:31:36 GMT
- Title: Dialog Policy Learning for Joint Clarification and Active Learning Queries
- Authors: Aishwarya Padmakumar and Raymond J. Mooney
- Abstract summary: We train a hierarchical dialog policy to jointly perform both clarification and active learning.
We show that jointly learning dialog policies for clarification and active learning is more effective than the use of static dialog policies for one or both of these functions.
- Score: 24.420113907842147
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Intelligent systems need to be able to recover from mistakes, resolve uncertainty, and adapt to novel concepts not seen during training. Dialog interaction can enable this through clarifications, which correct mistakes and resolve uncertainty, and active learning queries, which learn new concepts encountered during operation. Prior work on dialog systems has focused on exclusively learning either how to perform clarification/information seeking or how to perform active learning. In this work, we train a hierarchical dialog policy to jointly perform both clarification and active learning in the context of an interactive language-based image retrieval task motivated by an online shopping application, and we demonstrate that jointly learning dialog policies for clarification and active learning is more effective than using static dialog policies for one or both of these functions.
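The sketch below illustrates, under loose assumptions, how a hierarchical dialog policy of this kind might be organized: a high-level decision between a clarification question, an active learning query, and a final retrieval guess, with a low-level step that instantiates the chosen query. All names, features, and thresholds here are hypothetical stand-ins (the paper learns these decisions with reinforcement learning); this is not the authors' implementation.

```python
"""Minimal sketch of a hierarchical dialog policy for interactive image
retrieval. Everything here (state features, thresholds, action names) is an
illustrative assumption, not the paper's actual method."""

import math
from dataclasses import dataclass


@dataclass
class DialogState:
    candidate_probs: list          # retriever's belief over candidate images (assumed)
    classifier_confidence: float   # confidence of the concept/attribute classifier (assumed)
    turns: int                     # turns taken so far; queries carry a dialog-length cost


def entropy(probs):
    """Shannon entropy of the retrieval belief; high entropy = high uncertainty."""
    return -sum(p * math.log(p + 1e-12) for p in probs if p > 0)


def high_level_policy(state: DialogState) -> str:
    """Top level of the hierarchy: decide which kind of action to take.

    In the paper this choice is learned; here it is a hand-coded stand-in so
    the control flow is easy to see.
    """
    if entropy(state.candidate_probs) < 0.5 or state.turns >= 10:
        return "guess"                     # belief is peaked or turn budget exhausted
    if state.classifier_confidence < 0.4:
        return "active_learning_query"     # concept model is weak: request a label
    return "clarification_question"        # retrieval is uncertain: ask the user


def low_level_policy(action_type: str, state: DialogState) -> str:
    """Bottom level of the hierarchy: turn the chosen action type into an utterance."""
    if action_type == "guess":
        best = max(range(len(state.candidate_probs)),
                   key=lambda i: state.candidate_probs[i])
        return f"Is image #{best} the one you are looking for?"
    if action_type == "active_learning_query":
        return "Does this example image show the attribute you mentioned?"
    return "Can you describe the item in more detail (e.g., color or style)?"


if __name__ == "__main__":
    state = DialogState(candidate_probs=[0.3, 0.25, 0.25, 0.2],
                        classifier_confidence=0.35, turns=2)
    action = high_level_policy(state)
    print(action, "->", low_level_policy(action, state))
```

In the hand-coded version above the trade-off between asking more questions and guessing sooner is fixed by thresholds; the point of the paper is that learning this trade-off jointly for both query types outperforms keeping either policy static.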
Related papers
- Few-shot Dialogue Strategy Learning for Motivational Interviewing via Inductive Reasoning [21.078032718892498]
We consider the task of building a dialogue system that can motivate users to adopt positive lifestyle changes: Motivational Interviewing.
We propose DIIT, a framework that is capable of learning and applying conversation strategies in the form of natural language inductive rules from expert demonstrations.
arXiv Detail & Related papers (2024-03-23T06:03:37Z)
- Plug-and-Play Policy Planner for Large Language Model Powered Dialogue Agents [121.46051697742608]
We introduce a new dialogue policy planning paradigm to strategize dialogue problems with a tunable language model plug-in named PPDPP.
Specifically, we develop a novel training framework to facilitate supervised fine-tuning over available human-annotated data.
PPDPP consistently and substantially outperforms existing approaches on three different proactive dialogue applications.
arXiv Detail & Related papers (2023-11-01T03:20:16Z)
- Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models [52.24756457516834]
We propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of Large Language Models (LLMs).
This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks.
Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts.
arXiv Detail & Related papers (2023-09-22T15:41:34Z)
- KETOD: Knowledge-Enriched Task-Oriented Dialogue [77.59814785157877]
Existing studies in dialogue system research mostly treat task-oriented dialogue and chit-chat as separate domains.
We investigate how task-oriented dialogue and knowledge-grounded chit-chat can be effectively integrated into a single model.
arXiv Detail & Related papers (2022-05-11T16:01:03Z)
- Utterance Rewriting with Contrastive Learning in Multi-turn Dialogue [22.103162555263143]
We introduce contrastive learning and multi-task learning to jointly model the problem.
Our proposed model achieves state-of-the-art performance on several public datasets.
arXiv Detail & Related papers (2022-03-22T10:13:27Z)
- Hierarchical Inductive Transfer for Continual Dialogue Learning [32.35720663518357]
We propose a hierarchical inductive transfer framework to learn and deploy the dialogue skills continually and efficiently.
Since only a single module is trainable, the dialogue system on embedded devices can acquire new dialogue skills with negligible additional parameters.
arXiv Detail & Related papers (2022-03-20T08:06:44Z)
- Continual Learning in Task-Oriented Dialogue Systems [49.35627673523519]
Continual learning in task-oriented dialogue systems can allow us to add new domains and functionalities through time without incurring the high cost of a whole system retraining.
We propose a continual learning benchmark for task-oriented dialogue systems with 37 domains to be learned continuously in four settings.
arXiv Detail & Related papers (2020-12-31T08:44:25Z)
- Rethinking Supervised Learning and Reinforcement Learning in Task-Oriented Dialogue Systems [58.724629408229205]
We demonstrate how traditional supervised learning and a simulator-free adversarial learning method can be used to achieve performance comparable to state-of-the-art RL-based methods.
Our main goal is not to beat reinforcement learning with supervised learning, but to demonstrate the value of rethinking the role of reinforcement learning and supervised learning in optimizing task-oriented dialogue systems.
arXiv Detail & Related papers (2020-09-21T12:04:18Z)
- Dialog as a Vehicle for Lifelong Learning [24.420113907842147]
We present the problem of designing dialog systems that enable lifelong learning.
We include examples of prior work in this direction, and discuss challenges that remain to be addressed.
arXiv Detail & Related papers (2020-06-26T03:08:33Z)
- Adaptive Dialog Policy Learning with Hindsight and User Modeling [10.088347529930129]
We develop the LHUA algorithm, which, for the first time, enables dialog agents to adaptively learn with hindsight from both simulated and real users.
Experimental results suggest that, in success rate and policy quality, LHUA outperforms competitive baselines from the literature.
arXiv Detail & Related papers (2020-05-07T07:43:43Z)
- Masking Orchestration: Multi-task Pretraining for Multi-role Dialogue Representation Learning [50.5572111079898]
Multi-role dialogue understanding comprises a wide range of diverse tasks such as question answering, act classification, dialogue summarization, etc.
While dialogue corpora are abundantly available, labeled data for specific learning tasks can be highly scarce and expensive.
In this work, we investigate dialogue context representation learning with various types of unsupervised pretraining tasks.
arXiv Detail & Related papers (2020-02-27T04:36:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.