Generating Dialogue Responses from a Semantic Latent Space
- URL: http://arxiv.org/abs/2010.01658v1
- Date: Sun, 4 Oct 2020 19:06:16 GMT
- Title: Generating Dialogue Responses from a Semantic Latent Space
- Authors: Wei-Jen Ko and Avik Ray and Yilin Shen and Hongxia Jin
- Abstract summary: We propose an alternative to end-to-end classification over the vocabulary.
We instead learn the relationship between prompts and responses as a regression task in a latent space.
Human evaluation showed that learning the task in a continuous space yields responses that are both relevant and informative.
- Score: 75.18449428414736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing open-domain dialogue generation models are usually trained to mimic
the gold response in the training set using cross-entropy loss on the
vocabulary. However, a good response does not need to resemble the gold
response, since there are multiple possible responses to a given prompt. In
this work, we hypothesize that the current models are unable to integrate
information from multiple semantically similar valid responses of a prompt,
resulting in the generation of generic and uninformative responses. To address
this issue, we propose an alternative to the end-to-end classification on
vocabulary. We learn the pair relationship between the prompts and responses as
a regression task on a latent space instead. In our novel dialog generation
model, the representations of semantically related sentences are close to each
other on the latent space. Human evaluation showed that learning the task on a
continuous space can generate responses that are both relevant and informative.
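As a rough illustration of the regression-on-a-latent-space idea, the sketch below trains a small regressor to map a prompt embedding to the embedding of its response, using a cosine-distance loss so that any semantically similar response is rewarded. The encoder, module names, and dimensions are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch, assuming pre-computed sentence embeddings from any
# off-the-shelf encoder; module names and dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptToResponseRegressor(nn.Module):
    """Maps a prompt's latent vector to a predicted response latent vector."""
    def __init__(self, latent_dim: int = 768, hidden_dim: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, prompt_emb: torch.Tensor) -> torch.Tensor:
        return self.net(prompt_emb)

def regression_loss(pred_emb: torch.Tensor, gold_emb: torch.Tensor) -> torch.Tensor:
    # Because semantically related sentences sit close together in the latent
    # space, pulling the prediction toward the gold embedding also rewards
    # valid paraphrases of the gold response.
    return (1.0 - F.cosine_similarity(pred_emb, gold_emb, dim=-1)).mean()

regressor = PromptToResponseRegressor()
optimizer = torch.optim.Adam(regressor.parameters(), lr=1e-4)

prompt_emb = torch.randn(32, 768)    # stand-in for encoded prompts
response_emb = torch.randn(32, 768)  # stand-in for encoded gold responses

optimizer.zero_grad()
loss = regression_loss(regressor(prompt_emb), response_emb)
loss.backward()
optimizer.step()
```

At inference time, a response whose embedding lies closest to the predicted vector would be decoded or retrieved; the paper's exact decoding procedure is not reproduced here.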
Related papers
- Hi Model, generating 'nice' instead of 'good' is not as bad as generating 'rice'! Towards Context and Semantic Infused Dialogue Generation Loss Function and Evaluation Metric [46.26506372710482]
We propose a new loss function, the Semantic Infused Contextualized diaLogue (SemTextualLogue) loss.
We also formulate an evaluation metric called Dialuation, incorporating both context and semantic relevance.
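The summary does not spell out the exact form of the loss, so the following is only a plausible sketch of blending a cross-entropy term with a context- and semantics-aware term; the weighting scheme and embedding inputs are assumptions, not the paper's SemTextualLogue formulation.

```python
# Hypothetical sketch of a semantics-infused dialogue loss.
import torch
import torch.nn.functional as F

def semantic_infused_loss(logits, gold_ids, gen_emb, gold_emb, ctx_emb, alpha=0.5):
    # Standard token-level cross-entropy against the gold response.
    ce = F.cross_entropy(logits.view(-1, logits.size(-1)), gold_ids.view(-1))
    # Semantic term: encourage generations that are close to the gold response
    # and relevant to the dialogue context in embedding space.
    sim_gold = F.cosine_similarity(gen_emb, gold_emb, dim=-1).mean()
    sim_ctx = F.cosine_similarity(gen_emb, ctx_emb, dim=-1).mean()
    semantic_penalty = 1.0 - 0.5 * (sim_gold + sim_ctx)
    return ce + alpha * semantic_penalty
```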
arXiv Detail & Related papers (2023-09-11T20:16:38Z)
- Promoting Open-domain Dialogue Generation through Learning Pattern Information between Contexts and Responses [5.936682548344234]
This paper improves the quality of generated responses by learning the implicit pattern information between contexts and responses in the training samples.
We also design a response-aware mechanism for mining the implicit pattern information between contexts and responses, so that the generated replies are more diverse and closer to human replies.
arXiv Detail & Related papers (2023-09-06T08:11:39Z)
- Answering Ambiguous Questions via Iterative Prompting [84.3426020642704]
In open-domain question answering, due to the ambiguity of questions, multiple plausible answers may exist.
One approach is to directly predict all valid answers, but this can struggle with balancing relevance and diversity.
We present AmbigPrompt to address the imperfections of existing approaches to answering ambiguous questions.
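A hedged sketch of the iterative-prompting idea: each round conditions on the answers collected so far and asks for a distinct one, trading off relevance and diversity turn by turn. The prompt wording, stopping rule, and `generate_answer` interface are assumptions rather than AmbigPrompt's actual design.

```python
from typing import Callable, List

def iterative_answers(question: str,
                      generate_answer: Callable[[str], str],
                      max_rounds: int = 5) -> List[str]:
    """Collect multiple plausible answers to an ambiguous question."""
    answers: List[str] = []
    for _ in range(max_rounds):
        prompt = question
        if answers:
            prompt += "\nAnswers found so far: " + "; ".join(answers)
            prompt += "\nGive a different valid answer, or say DONE."
        candidate = generate_answer(prompt).strip()
        if candidate.upper() == "DONE" or candidate in answers:
            break  # no new reading of the question remains
        answers.append(candidate)
    return answers
```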
arXiv Detail & Related papers (2023-07-08T04:32:17Z)
- Phrase Retrieval for Open-Domain Conversational Question Answering with Conversational Dependency Modeling via Contrastive Learning [54.55643652781891]
Open-Domain Conversational Question Answering (ODConvQA) aims at answering questions through a multi-turn conversation.
We propose a method to directly predict answers with a phrase retrieval scheme for a sequence of words.
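As a minimal sketch of answering by phrase retrieval, the snippet below folds the conversation history into the query and returns the highest-scoring phrase from a precomputed dense index; the paper's contrastive training for conversational dependencies is not reproduced, and `encode_query` is a hypothetical encoder.

```python
import numpy as np

def retrieve_answer(history, question, encode_query, phrase_index, phrases):
    # Fold earlier turns into the query so they can disambiguate the question.
    query = " [SEP] ".join(list(history) + [question])
    q = encode_query(query)            # (d,) dense query vector
    scores = phrase_index @ q          # inner product with every indexed phrase
    return phrases[int(np.argmax(scores))]
```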
arXiv Detail & Related papers (2023-06-07T09:46:38Z)
- Contextual Dynamic Prompting for Response Generation in Task-oriented Dialog Systems [8.419582942080927]
Response generation is one of the critical components in task-oriented dialog systems.
We propose an approach that performs dynamic prompting, where the prompts are learnt from dialog contexts.
We show that contextual dynamic prompts improve response generation by 3 absolute points in terms of combined score (Mehri et al., 2019).
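A minimal sketch of this idea, assuming the prompts are realized as soft prompt vectors produced by a small network from a pooled dialog-context representation and prepended to the decoder's input embeddings; module names and dimensions are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class ContextualPromptEncoder(nn.Module):
    """Maps a pooled dialog-context vector to a set of soft prompt vectors."""
    def __init__(self, ctx_dim: int = 768, n_prompts: int = 10, model_dim: int = 768):
        super().__init__()
        self.proj = nn.Linear(ctx_dim, n_prompts * model_dim)
        self.n_prompts, self.model_dim = n_prompts, model_dim

    def forward(self, ctx_emb: torch.Tensor) -> torch.Tensor:
        return self.proj(ctx_emb).view(-1, self.n_prompts, self.model_dim)

# Usage: prepend the context-dependent soft prompts to the token embeddings
# fed into a (typically frozen) seq2seq response generator.
ctx_emb = torch.randn(4, 768)          # pooled dialog-context representations
token_embs = torch.randn(4, 20, 768)   # decoder input token embeddings
soft_prompts = ContextualPromptEncoder()(ctx_emb)
decoder_inputs = torch.cat([soft_prompts, token_embs], dim=1)  # (4, 30, 768)
```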
arXiv Detail & Related papers (2023-01-30T20:26:02Z)
- AutoReply: Detecting Nonsense in Dialogue Introspectively with Discriminative Replies [71.62832112141913]
We show that dialogue models can detect errors in their own messages introspectively, by calculating the likelihood of replies that are indicative of poor messages.
We first show that hand-crafted replies can be effective for the task of detecting nonsense in applications as complex as Diplomacy.
We find that AutoReply-generated replies outperform handcrafted replies and perform on par with carefully fine-tuned large supervised models.
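The sketch below illustrates the introspective scoring idea: a candidate message is flagged when the dialogue model assigns unusually high likelihood to replies that signal confusion. The probe replies, threshold, and `reply_log_likelihood` wrapper are assumptions, not the paper's actual replies or scores.

```python
from typing import Callable, List

# Example probe replies indicative of a nonsensical previous message.
PROBE_REPLIES: List[str] = [
    "Sorry, that doesn't make any sense.",
    "What do you mean by that?",
]

def is_nonsense(message: str,
                reply_log_likelihood: Callable[[str, str], float],
                threshold: float = -2.0) -> bool:
    # reply_log_likelihood(message, reply) wraps any dialogue LM's conditional
    # log-probability of `reply` following `message`.
    best = max(reply_log_likelihood(message, reply) for reply in PROBE_REPLIES)
    return best > threshold
```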
arXiv Detail & Related papers (2022-11-22T22:31:34Z)
- Reasoning in Dialog: Improving Response Generation by Context Reading Comprehension [49.92173751203827]
In multi-turn dialog, utterances do not always take the full form of sentences.
We propose to improve the response generation performance by examining the model's ability to answer a reading comprehension question.
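One way to read this is as a multi-task objective in which a reading-comprehension question about the dialog is answered alongside response generation; the weighting below is purely illustrative, not the paper's formulation.

```python
def joint_loss(generation_loss, reading_comprehension_loss, aux_weight=0.5):
    # Sharing an encoder across both objectives pushes the model to actually
    # "read" the multi-turn context before producing a response.
    return generation_loss + aux_weight * reading_comprehension_loss
```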
arXiv Detail & Related papers (2020-12-14T10:58:01Z)
- Controlling Dialogue Generation with Semantic Exemplars [55.460082747572734]
We present an Exemplar-based Dialogue Generation model, EDGE, that uses the semantic frames present in exemplar responses to guide generation.
We show that controlling dialogue generation based on the semantic frames of exemplars, rather than words in the exemplar itself, improves the coherence of generated responses.
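A minimal sketch of frame-level conditioning: the generator sees the dialogue context plus the semantic frames evoked by an exemplar response, not the exemplar's surface words. The `extract_frames` parser and the control-token format are stand-ins, not EDGE's exact pipeline.

```python
from typing import Callable, List

def build_generator_input(context: str,
                          exemplar: str,
                          extract_frames: Callable[[str], List[str]]) -> str:
    frames = extract_frames(exemplar)            # e.g. ["Greeting", "Request"]
    frame_tokens = " ".join(f"[FRAME:{name}]" for name in frames)
    # Only the abstract frames, not the exemplar's wording, reach the generator.
    return f"{context} [SEP] {frame_tokens}"
```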
arXiv Detail & Related papers (2020-08-20T17:02:37Z)
- Speaker Sensitive Response Evaluation Model [17.381658875470638]
We propose an automatic evaluation model based on the similarity of the generated response with the conversational context.
We learn the model parameters from an unlabeled conversation corpus.
We show that our model can be applied to movie dialogues without any additional training.
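As a rough sketch of reference-free scoring by context similarity: the actual model is trained on unlabeled conversations with speaker-sensitive negative sampling, whereas the snippet below only shows the scoring interface with a generic sentence encoder.

```python
import numpy as np

def response_score(context_turns, response, encode):
    # Average the encoded context turns and compare with the response.
    ctx = np.mean([encode(turn) for turn in context_turns], axis=0)
    resp = encode(response)
    denom = np.linalg.norm(ctx) * np.linalg.norm(resp) + 1e-8
    return float(np.dot(ctx, resp) / denom)
```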
arXiv Detail & Related papers (2020-06-12T08:59:10Z)
- Diversifying Dialogue Generation with Non-Conversational Text [38.03510529185192]
We propose a new perspective to diversify dialogue generation by leveraging non-conversational text.
We collect a large-scale non-conversational corpus from multiple sources, including forum comments, idioms, and book snippets.
The resulting model is tested on two conversational datasets and is shown to produce significantly more diverse responses without sacrificing the relevance with context.
arXiv Detail & Related papers (2020-05-09T02:16:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.