Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue
Questions with LLMs
- URL: http://arxiv.org/abs/2305.11792v2
- Date: Sun, 15 Oct 2023 12:54:48 GMT
- Title: Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue
Questions with LLMs
- Authors: Hongru Wang, Rui Wang, Fei Mi, Yang Deng, Zezhong Wang, Bin Liang,
Ruifeng Xu, Kam-Fai Wong
- Abstract summary: We propose a novel linguistic cue-based chain-of-thoughts (Cue-CoT) to provide a more personalized and engaging response.
We build a benchmark with in-depth dialogue questions, consisting of 6 datasets in both Chinese and English.
Empirical results demonstrate our proposed Cue-CoT method outperforms standard prompting methods in terms of both helpfulness and acceptability on all datasets.
- Score: 59.74002011562726
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large Language Models (LLMs), such as \texttt{ChatGPT}, greatly empower
dialogue systems with strong language understanding and generation
capabilities. However, most of the previous works prompt the LLMs to directly
generate a response based on the dialogue context, overlooking the underlying
linguistic cues about the user status exhibited in the context. Such in-depth
dialogue scenarios are challenging for existing LLMs to figure out the user's
hidden needs and respond satisfactorily through a single-step inference. To
this end, we propose a novel linguistic cue-based chain-of-thoughts
(\textit{Cue}-CoT), which enhances the LLMs' inference with an intermediate
reasoning step to find cues exhibited in the dialogue, aiming to provide a more
personalized and engaging response. To evaluate the approach, we build a
benchmark with in-depth dialogue questions, consisting of 6 datasets in both
Chinese and English, targeting 3 major linguistic cues during the conversation:
\textit{personality}, \textit{emotion}, and \textit{psychology}. We conduct
extensive experiments on the proposed benchmark with 5 LLMs under both
zero-shot and one-shot settings. Empirical results demonstrate our proposed
\textit{Cue}-CoT method outperforms standard prompting methods in terms of both
\textit{helpfulness} and \textit{acceptability} on all datasets.
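As a rough illustration of the Cue-CoT idea described in the abstract, the sketch below splits response generation into two LLM calls: an intermediate reasoning step that infers the user's status (personality, emotion, psychology) from the dialogue context, followed by a response step conditioned on those inferred cues. The prompt wordings and the generic llm callable are illustrative assumptions, not the paper's actual prompts or released code.

from typing import Callable

def cue_cot_respond(dialogue_context: str, llm: Callable[[str], str]) -> str:
    """Two-step Cue-CoT-style prompting sketch (zero-shot setting).

    `llm` is any function mapping a prompt string to a completion string,
    e.g. a thin wrapper around a chat API or a local model.
    """
    # Step 1: intermediate reasoning -- surface the linguistic cues about the
    # user (personality, emotion, psychological state) exhibited in the context.
    cue_prompt = (
        "Given the dialogue below, briefly describe the user's personality, "
        "current emotion, and underlying psychological needs.\n\n"
        f"Dialogue:\n{dialogue_context}\n\nUser status:"
    )
    user_status = llm(cue_prompt)

    # Step 2: generate the final response conditioned on both the dialogue
    # context and the inferred user status, rather than the context alone.
    response_prompt = (
        f"Dialogue:\n{dialogue_context}\n\n"
        f"Inferred user status:\n{user_status}\n\n"
        "Write a helpful, personalized, and engaging reply to the user's last turn:"
    )
    return llm(response_prompt)

A one-shot variant of this sketch would simply prepend a worked example (dialogue, inferred user status, response) to each prompt, matching the zero-shot and one-shot settings evaluated in the paper.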
Related papers
- Selective Prompting Tuning for Personalized Conversations with LLMs [31.28284591597932]
We propose Selective Prompt Tuning (SPT), which softly prompts large language models (LLMs) for personalized conversations in a selective way.
SPT significantly enhances response diversity by up to 90%, along with improvements in other critical performance indicators.
arXiv Detail & Related papers (2024-06-26T09:03:52Z) - Can LLMs Understand the Implication of Emphasized Sentences in Dialogue? [64.72966061510375]
Emphasis is a crucial component in human communication, which indicates the speaker's intention and implication beyond pure text in dialogue.
This paper introduces Emphasized-Talk, a benchmark with emphasis-annotated dialogue samples capturing the implications of emphasis.
We evaluate various Large Language Models (LLMs), both open-source and commercial, to measure their performance in understanding emphasis.
arXiv Detail & Related papers (2024-06-16T20:41:44Z) - Interactive Text-to-Image Retrieval with Large Language Models: A Plug-and-Play Approach [33.231639257323536]
In this paper, we address the issue of dialogue-form context query within the interactive text-to-image retrieval task.
By reformulating the dialogue-form context, we eliminate the necessity of fine-tuning a retrieval model on existing visual dialogue data.
We construct the LLM questioner to generate non-redundant questions about the attributes of the target image.
arXiv Detail & Related papers (2024-06-05T16:09:01Z) - Self-Explanation Prompting Improves Dialogue Understanding in Large
Language Models [52.24756457516834]
We propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of Large Language Models (LLMs)
This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks.
Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts.
arXiv Detail & Related papers (2023-09-22T15:41:34Z) - SSP: Self-Supervised Post-training for Conversational Search [63.28684982954115]
We propose Self-Supervised Post-training (SSP), a new post-training paradigm with three self-supervised tasks to efficiently initialize the conversational search model.
To verify the effectiveness of our proposed method, we apply the conversational encoder post-trained by SSP on the conversational search task using two benchmark datasets: CAsT-19 and CAsT-20.
arXiv Detail & Related papers (2023-07-02T13:36:36Z) - Prompting and Evaluating Large Language Models for Proactive Dialogues:
Clarification, Target-guided, and Non-collaboration [72.04629217161656]
This work focuses on three aspects of proactive dialogue systems: clarification, target-guided, and non-collaborative dialogues.
To trigger the proactivity of LLMs, we propose the Proactive Chain-of-Thought prompting scheme.
arXiv Detail & Related papers (2023-05-23T02:49:35Z) - Contextual Dynamic Prompting for Response Generation in Task-oriented
Dialog Systems [8.419582942080927]
Response generation is one of the critical components in task-oriented dialog systems.
We propose an approach that performs dynamic prompting, where the prompts are learnt from dialog contexts.
We show that contextual dynamic prompts improve response generation in terms of combined score (Mehri et al., 2019) by 3 absolute points.
arXiv Detail & Related papers (2023-01-30T20:26:02Z) - GRASP: Guiding model with RelAtional Semantics using Prompt [3.1275060062551208]
We propose a Guiding model with RelAtional Semantics using Prompt (GRASP)
We adopt a prompt-based fine-tuning approach and capture relational semantic clues of a given dialogue with an argument-aware prompt marker strategy.
In the experiments, GRASP achieves state-of-the-art performance in terms of both F1 and F1c scores on the DialogRE dataset.
arXiv Detail & Related papers (2022-08-26T08:19:28Z) - In-Context Learning for Few-Shot Dialogue State Tracking [55.91832381893181]
We propose an in-context (IC) learning framework for few-shot dialogue state tracking (DST)
A large pre-trained language model (LM) takes a test instance and a few annotated examples as input, and directly decodes the dialogue states without any parameter updates.
This makes the LM more flexible and scalable compared to prior few-shot DST work when adapting to new domains and scenarios.
arXiv Detail & Related papers (2022-03-16T11:58:24Z)