Prompting and Evaluating Large Language Models for Proactive Dialogues:
Clarification, Target-guided, and Non-collaboration
- URL: http://arxiv.org/abs/2305.13626v2
- Date: Sat, 14 Oct 2023 06:36:05 GMT
- Title: Prompting and Evaluating Large Language Models for Proactive Dialogues:
Clarification, Target-guided, and Non-collaboration
- Authors: Yang Deng, Lizi Liao, Liang Chen, Hongru Wang, Wenqiang Lei, Tat-Seng
Chua
- Abstract summary: This work focuses on three aspects of proactive dialogue systems: clarification, target-guided, and non-collaborative dialogues.
To trigger the proactivity of LLMs, we propose the Proactive Chain-of-Thought prompting scheme.
- Score: 72.04629217161656
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conversational systems based on Large Language Models (LLMs), such as
ChatGPT, show exceptional proficiency in context understanding and response
generation. However, despite their impressive capabilities, they still possess
limitations, such as providing randomly-guessed answers to ambiguous queries or
failing to refuse users' requests, both of which are considered aspects of a
conversational agent's proactivity. This raises the question of whether
LLM-based conversational systems are equipped to handle proactive dialogue
problems. In this work, we conduct a comprehensive analysis of LLM-based
conversational systems, specifically focusing on three aspects of proactive
dialogue systems: clarification, target-guided, and non-collaborative
dialogues. To trigger the proactivity of LLMs, we propose the Proactive
Chain-of-Thought prompting scheme, which augments LLMs with the goal planning
capability over descriptive reasoning chains. Empirical findings are discussed
to promote future studies on LLM-based proactive dialogue systems.
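The abstract describes the Proactive Chain-of-Thought (ProCoT) scheme only at a high level: the model is asked to plan a dialogue action over a descriptive reasoning chain before producing the surface response. As a rough illustration, the sketch below shows how such a prompt might be assembled for the three dialogue settings studied (clarification, target-guided, non-collaboration). The template wording, the action set, and the helper names are illustrative assumptions, not the paper's verbatim prompt.

```python
# Minimal sketch of a ProCoT-style prompt builder.
# The template wording, action set, and helper names are illustrative
# assumptions, not the paper's exact prompting scheme.

PROCOT_TEMPLATE = """You are a proactive dialogue assistant.
Task background: {background}

Conversation so far:
{history}

First, reason about the dialogue goal and decide which action to take next
(choose one: {actions}). Then generate the response.

Thought:"""


def build_procot_prompt(background: str, history: list[str], actions: list[str]) -> str:
    """Fill the template with the task description, dialogue history, and action set."""
    return PROCOT_TEMPLATE.format(
        background=background,
        history="\n".join(history),
        actions=", ".join(actions),
    )


if __name__ == "__main__":
    prompt = build_procot_prompt(
        background="Answer user questions about a product catalogue.",
        history=["User: How much does it cost?"],
        actions=["ask a clarifying question", "answer directly", "refuse the request"],
    )
    print(prompt)  # send this string to an LLM of your choice
```

The intended contrast with standard chain-of-thought prompting is that the reasoning chain ends in an explicit action decision (clarify, move toward the target, or refuse) rather than going straight to an answer.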
Related papers
- Learning to Clarify: Multi-turn Conversations with Action-Based Contrastive Self-Training [33.57497419019826]
Action-Based Contrastive Self-Training allows for sample-efficient dialogue policy learning in multi-turn conversation.
ACT demonstrates substantial conversation modeling improvements over standard approaches to supervised fine-tuning and DPO.
arXiv Detail & Related papers (2024-05-31T22:44:48Z)
- A Survey on Recent Advances in LLM-Based Multi-turn Dialogue Systems [12.999001024463453]
This paper summarizes existing LLMs and approaches for adapting them to downstream tasks.
It elaborates on recent advances in multi-turn dialogue systems, covering both LLM-based open-domain dialogue (ODD) and task-oriented dialogue (TOD) systems.
arXiv Detail & Related papers (2024-02-28T03:16:44Z)
- Reasoning in Conversation: Solving Subjective Tasks through Dialogue Simulation for Large Language Models [56.93074140619464]
We propose RiC (Reasoning in Conversation), a method that focuses on solving subjective tasks through dialogue simulation.
The motivation of RiC is to mine useful contextual information by simulating dialogues instead of supplying chain-of-thought style rationales.
We evaluate both API-based and open-source LLMs including GPT-4, ChatGPT, and OpenChat across twelve tasks.
arXiv Detail & Related papers (2024-02-27T05:37:10Z)
- Are LLMs Effective Negotiators? Systematic Evaluation of the Multifaceted Capabilities of LLMs in Negotiation Dialogues [4.738985706520995]
This work aims to systematically analyze the multifaceted capabilities of LLMs across diverse dialogue scenarios.
Our analysis highlights GPT-4's superior performance in many tasks while identifying specific challenges.
arXiv Detail & Related papers (2024-02-21T06:11:03Z)
- Plug-and-Play Policy Planner for Large Language Model Powered Dialogue Agents [121.46051697742608]
We introduce a new dialogue policy planning paradigm that equips LLM-powered dialogue agents with a tunable language model plug-in, named PPDPP, to strategize dialogue problems.
Specifically, we develop a novel training framework to facilitate supervised fine-tuning over available human-annotated data.
PPDPP consistently and substantially outperforms existing approaches on three different proactive dialogue applications.
arXiv Detail & Related papers (2023-11-01T03:20:16Z)
- Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models [52.24756457516834]
We propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of Large Language Models (LLMs)
This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks.
Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts.
arXiv Detail & Related papers (2023-09-22T15:41:34Z)
- Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs [59.74002011562726]
We propose a novel linguistic cue-based chain-of-thought prompting method (Cue-CoT) to provide more personalized and engaging responses.
We build a benchmark with in-depth dialogue questions, consisting of 6 datasets in both Chinese and English.
Empirical results demonstrate our proposed Cue-CoT method outperforms standard prompting methods in terms of both helpfulness and acceptability on all datasets.
arXiv Detail & Related papers (2023-05-19T16:27:43Z)
- A Survey on Proactive Dialogue Systems: Problems, Methods, and Prospects [100.75759050696355]
We provide a comprehensive overview of the prominent problems and advanced designs for conversational agents' proactivity in different types of dialogues.
We discuss challenges posed by real-world application needs that warrant greater research focus in the future.
arXiv Detail & Related papers (2023-05-04T11:38:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.