Meet Your Favorite Character: Open-domain Chatbot Mimicking Fictional
Characters with only a Few Utterances
- URL: http://arxiv.org/abs/2204.10825v1
- Date: Fri, 22 Apr 2022 17:11:17 GMT
- Title: Meet Your Favorite Character: Open-domain Chatbot Mimicking Fictional
Characters with only a Few Utterances
- Authors: Seungju Han, Beomsu Kim, Jin Yong Yoo, Seokjun Seo, Sangbum Kim,
Enkhbayar Erdenee, Buru Chang
- Abstract summary: We present a new practical task where only a few utterances of each fictional character are available to generate responses mimicking them.
We propose a new method named Pseudo Dialog Prompting (PDP) that generates responses by leveraging the power of large-scale language models.
- Score: 23.219930429306352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider mimicking fictional characters as a promising
direction for building engaging conversation models. To this end, we present a
new practical task where only a few utterances of each fictional character are
available to generate responses mimicking them. Furthermore, we propose a new
method named Pseudo Dialog Prompting (PDP) that generates responses by
leveraging the power of large-scale language models with prompts containing the
target character's utterances. To better reflect the style of the character,
PDP builds the prompts in the form of dialog that includes the character's
utterances as dialog history. Since only utterances of the characters are
available in the proposed task, PDP matches each utterance with an appropriate
pseudo-context from a predefined set of context candidates using a retrieval
model. Through human and automatic evaluation, we show that PDP generates
responses that better reflect the style of fictional characters than baseline
methods.
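The abstract describes PDP concretely enough to sketch: pair each of the character's utterances with the most suitable pseudo-context drawn from a predefined candidate pool via a retrieval model, lay the pairs out as dialog history, and let a large-scale language model continue the dialog. The Python sketch below illustrates this flow under two explicit assumptions: the lexical-overlap scorer is only a stand-in for the paper's learned retrieval model, and the `User:`/`Character:` prompt template is hypothetical rather than the authors' exact format.

```python
# Minimal sketch of PDP-style prompt construction.
# Assumptions: overlap_score() is a crude stand-in for the paper's retrieval
# model, and the dialog template below is hypothetical, not the authors' format.

def overlap_score(context: str, utterance: str) -> float:
    """Lexical-overlap proxy for a learned context-utterance retrieval model."""
    ctx_tokens = set(context.lower().split())
    utt_tokens = set(utterance.lower().split())
    if not ctx_tokens or not utt_tokens:
        return 0.0
    return len(ctx_tokens & utt_tokens) / len(ctx_tokens | utt_tokens)

def build_pdp_prompt(character_utterances, context_candidates, user_message,
                     character_name="Character"):
    """Pair each character utterance with its best pseudo-context and lay the
    pairs out as dialog history, ending with the user's actual message."""
    turns = []
    for utterance in character_utterances:
        # Retrieve the pseudo-context that best matches this utterance.
        pseudo_context = max(context_candidates,
                             key=lambda c: overlap_score(c, utterance))
        turns.append(f"User: {pseudo_context}")
        turns.append(f"{character_name}: {utterance}")
    turns.append(f"User: {user_message}")
    turns.append(f"{character_name}:")  # the language model continues from here
    return "\n".join(turns)

if __name__ == "__main__":
    utterances = [
        "Elementary, my dear friend, the mud on his boots tells the whole story.",
        "I never guess; I observe, and then I deduce.",
    ]
    candidates = [
        "How did you figure out where he had been?",
        "Do you ever just guess the answer?",
        "What is your favorite food?",
    ]
    prompt = build_pdp_prompt(utterances, candidates,
                              "What do you make of this letter?", "Holmes")
    print(prompt)  # would then be sent to a large-scale language model
```

Feeding the resulting string to a frozen large-scale language model and taking its continuation as the response mirrors the prompting setup the abstract describes; in this sketch the pseudo-contexts, and hence the retrieval model, are what carry the character's style into the generation.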
Related papers
- Selective Prompting Tuning for Personalized Conversations with LLMs [31.28284591597932]
We propose Selective Prompt Tuning (SPT), which softly prompts large language models (LLMs) for personalized conversations in a selective way.
SPT significantly enhances response diversity by up to 90%, along with improvements in other critical performance indicators.
arXiv Detail & Related papers (2024-06-26T09:03:52Z) - CHIRON: Rich Character Representations in Long-Form Narratives [98.273323001781]
We propose CHIRON, a new 'character sheet'-based representation that organizes and filters textual information about characters.
We validate CHIRON via the downstream task of masked-character prediction, where our experiments show CHIRON is better and more flexible than comparable summary-based baselines.
We also show that metrics derived from CHIRON can be used to automatically infer character-centricity in stories, and that these metrics align with human judgments.
arXiv Detail & Related papers (2024-06-14T17:23:57Z) - Attribute Controlled Dialogue Prompting [31.09791656949115]
We present a novel, instance-specific prompt-tuning algorithm for dialogue generation.
Our method is superior to prompting baselines and comparable to fine-tuning with only 5%-6% of total parameters.
arXiv Detail & Related papers (2023-07-11T12:48:55Z) - Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue
Questions with LLMs [59.74002011562726]
We propose a novel linguistic cue-based chain-of-thought prompting method (Cue-CoT) to provide a more personalized and engaging response.
We build a benchmark with in-depth dialogue questions, consisting of 6 datasets in both Chinese and English.
Empirical results demonstrate that our proposed Cue-CoT method outperforms standard prompting methods in terms of both helpfulness and acceptability on all datasets.
arXiv Detail & Related papers (2023-05-19T16:27:43Z) - Contextual Dynamic Prompting for Response Generation in Task-oriented
Dialog Systems [8.419582942080927]
Response generation is one of the critical components in task-oriented dialog systems.
We propose an approach that performs dynamic prompting, where the prompts are learned from dialog contexts.
We show that contextual dynamic prompts improve response generation in terms of combined score (Mehri et al., 2019) by 3 absolute points.
arXiv Detail & Related papers (2023-01-30T20:26:02Z) - DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization [127.714919036388]
DIONYSUS is a pre-trained encoder-decoder model for summarizing dialogues in any new domain.
Our experiments show that DIONYSUS outperforms existing methods on six datasets.
arXiv Detail & Related papers (2022-12-20T06:21:21Z) - Large Language Models Meet Harry Potter: A Bilingual Dataset for
Aligning Dialogue Agents with Characters [70.84938803753062]
We introduce the Harry Potter Dialogue dataset, designed to advance the study of dialogue agents and character alignment.
The dataset encompasses all dialogue sessions (in both English and Chinese) from the Harry Potter series.
It is annotated with vital background information, including dialogue scenes, speakers, character relationships, and attributes.
arXiv Detail & Related papers (2022-11-13T10:16:39Z) - Dialogue History Matters! Personalized Response Selection in Multi-turn
Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z) - Prototype-to-Style: Dialogue Generation with Style-Aware Editing on
Retrieval Memory [65.98002918470543]
We introduce a new prototype-to-style framework to tackle the challenge of stylistic dialogue generation.
The framework uses an Information Retrieval (IR) system and extracts a response prototype from the retrieved response.
A stylistic response generator then takes the prototype and the desired language style as model input to obtain a high-quality and stylistic response.
arXiv Detail & Related papers (2020-04-05T14:36:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.