Grounding in social media: An approach to building a chit-chat dialogue
model
- URL: http://arxiv.org/abs/2206.05696v1
- Date: Sun, 12 Jun 2022 09:01:57 GMT
- Title: Grounding in social media: An approach to building a chit-chat dialogue
model
- Authors: Ritvik Choudhary, Daisuke Kawahara
- Abstract summary: Building open-domain dialogue systems capable of rich human-like conversational ability is one of the fundamental challenges in language generation.
Current work on knowledge-grounded dialogue generation primarily focuses on persona incorporation or searching a fact-based structured knowledge source such as Wikipedia.
Our method takes a broader and simpler approach, which aims to improve the raw conversation ability of the system by mimicking the human response behavior on social media.
- Score: 9.247397520986999
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Building open-domain dialogue systems capable of rich human-like
conversational ability is one of the fundamental challenges in language
generation. However, even with recent advancements in the field, existing
open-domain generative models fail to capture and utilize external knowledge,
leading to repetitive or generic responses to unseen utterances. Current work
on knowledge-grounded dialogue generation primarily focuses on persona
incorporation or searching a fact-based structured knowledge source such as
Wikipedia. Our method takes a broader and simpler approach, which aims to
improve the raw conversation ability of the system by mimicking the human
response behavior through casual interactions found on social media. Utilizing
a joint retriever-generator setup, the model queries a large set of filtered
comment data from Reddit to act as additional context for the seq2seq
generator. Automatic and human evaluations on open-domain dialogue datasets
demonstrate the effectiveness of our approach.
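The joint retriever-generator setup described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the bag-of-words retriever, the `<sep>` separator token, and the function names are all assumptions, and the seq2seq generator itself is left abstract.

```python
import math
import re
from collections import Counter

def bow_vector(text):
    """Lowercased bag-of-words term counts for a piece of text."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, comments, k=2):
    """Return the k comments most similar to the query (the retriever stage)."""
    qv = bow_vector(query)
    ranked = sorted(comments, key=lambda c: cosine(qv, bow_vector(c)), reverse=True)
    return ranked[:k]

def build_generator_input(query, comments, k=2, sep=" <sep> "):
    """Prepend retrieved comments to the utterance as additional context
    for a seq2seq generator (the generator itself is not shown here)."""
    context = retrieve(query, comments, k)
    return sep.join(context + [query])

# Toy stand-in for a filtered Reddit comment pool.
comments = [
    "I love hiking on weekends, the fresh air is great",
    "My favorite pizza topping is mushrooms",
    "Hiking trails near the coast are the best",
]
print(build_generator_input("do you like hiking?", comments))
```

In a real system the bag-of-words scorer would be replaced by a dense or learned retriever over the Reddit comment pool, but the data flow is the same: retrieve, concatenate, then generate.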
Related papers
- A Static and Dynamic Attention Framework for Multi Turn Dialogue Generation [37.79563028123686]
In open-domain multi-turn dialogue generation, it is essential to model the contextual semantics of the dialogue history.
Previous research has verified the effectiveness of the hierarchical recurrent encoder-decoder framework for open-domain multi-turn dialogue generation.
We propose a static and dynamic attention-based approach to model the dialogue history and then generate open-domain multi-turn dialogue responses.
arXiv Detail & Related papers (2024-10-28T06:05:34Z)
- Social Commonsense-Guided Search Query Generation for Open-Domain Knowledge-Powered Conversations [66.16863141262506]
We present a novel approach that focuses on generating internet search queries guided by social commonsense.
Our proposed framework addresses passive user interactions by integrating topic tracking, commonsense response generation and instruction-driven query generation.
arXiv Detail & Related papers (2023-10-22T16:14:56Z)
- ChatPLUG: Open-Domain Generative Dialogue System with Internet-Augmented Instruction Tuning for Digital Human [76.62897301298699]
ChatPLUG is a Chinese open-domain dialogue system for digital human applications, instruction-tuned on a wide range of dialogue tasks in a unified internet-augmented format.
We show that ChatPLUG outperforms state-of-the-art Chinese dialogue systems on both automatic and human evaluation.
We deploy ChatPLUG to real-world applications such as Smart Speaker and Instant Message applications with fast inference.
arXiv Detail & Related papers (2023-04-16T18:16:35Z)
- PK-Chat: Pointer Network Guided Knowledge Driven Generative Dialogue Model [79.64376762489164]
PK-Chat is a pointer-network-guided generative dialogue model, incorporating a unified pretrained language model and a pointer network over knowledge graphs.
The words generated by PK-Chat are derived both from prediction over word lists and from direct prediction over the external knowledge graph.
Based on the PK-Chat, a dialogue system is built for academic scenarios in the case of geosciences.
arXiv Detail & Related papers (2023-04-02T18:23:13Z)
- Knowledge-Grounded Conversational Data Augmentation with Generative Conversational Networks [76.11480953550013]
We take a step towards automatically generating conversational data using Generative Conversational Networks.
We evaluate our approach on conversations with and without knowledge on the Topical Chat dataset.
arXiv Detail & Related papers (2022-07-22T22:37:14Z)
- GODEL: Large-Scale Pre-Training for Goal-Directed Dialog [119.1397031992088]
We introduce GODEL, a large pre-trained language model for dialog.
We show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups.
A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses.
arXiv Detail & Related papers (2022-06-22T18:19:32Z)
- Commonsense-Focused Dialogues for Response Generation: An Empirical Study [39.49727190159279]
We present an empirical study of commonsense in dialogue response generation.
We first auto-extract commonsensical dialogues from existing dialogue datasets by leveraging ConceptNet.
We then collect a new dialogue dataset with 25K dialogues aimed at exhibiting social commonsense in an interactive setting.
arXiv Detail & Related papers (2021-09-14T04:32:09Z)
- Viola: A Topic Agnostic Generate-and-Rank Dialogue System [14.896200668918583]
We present Viola, an open-domain dialogue system for spoken conversation.
Viola fetches a batch of response candidates from various neural dialogue models.
Viola's response ranker is a fine-tuned polyencoder that chooses the best response given the dialogue history.
arXiv Detail & Related papers (2021-08-25T06:20:34Z)
- A Taxonomy of Empathetic Response Intents in Human Social Conversations [1.52292571922932]
Open-domain conversational agents are becoming increasingly popular in the natural language processing community.
One of the challenges is enabling them to converse in an empathetic manner.
Current neural response generation methods rely solely on end-to-end learning from large-scale conversation data to generate dialogues.
Recent work has shown the promise of combining dialogue act/intent modelling and neural response generation.
arXiv Detail & Related papers (2020-12-07T21:56:45Z)
- Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries [3.593955557310285]
Most dialogue systems rely on hybrid approaches for generating a set of ranked responses.
We design a neural approach which generates responses that are contextually aware of the user query.
Our simple approach makes use of rules over dependency parses and a text-to-text transformer fine-tuned on synthetic data of question-response pairs.
arXiv Detail & Related papers (2020-12-03T12:34:22Z)
- Knowledge Injection into Dialogue Generation via Language Models [85.65843021510521]
InjK is a two-stage approach to inject knowledge into a dialogue generation model.
First, we train a large-scale language model and query it as textual knowledge.
Second, we frame a dialogue generation model to sequentially generate textual knowledge and a corresponding response.
arXiv Detail & Related papers (2020-04-30T07:31:24Z)
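The two-stage knowledge-then-response idea in the InjK entry above can be sketched as follows. This is a hypothetical illustration only: the function names and the toy stand-in "models" are assumptions, not the paper's actual components.

```python
def query_knowledge(utterance, lm):
    """Stage 1: query a language model for textual knowledge about the utterance."""
    return lm(f"Background facts about: {utterance}")

def generate_response(utterance, knowledge, dialogue_model):
    """Stage 2: condition response generation on the generated knowledge."""
    return dialogue_model(f"{knowledge} <sep> {utterance}")

# Toy stand-in "models" so the sketch runs end to end; real systems would
# use trained language and dialogue models here.
toy_lm = lambda prompt: "Jazz originated in New Orleans."
toy_dialogue_model = lambda ctx: f"[response conditioned on: {ctx}]"

knowledge = query_knowledge("tell me about jazz", toy_lm)
reply = generate_response("tell me about jazz", knowledge, toy_dialogue_model)
print(reply)
```

The point of the two-stage framing is that the knowledge text is generated first and then treated as ordinary conditioning input, so no structured knowledge base is required at inference time.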
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.