Enhancing Self-Disclosure In Neural Dialog Models By Candidate
Re-ranking
- URL: http://arxiv.org/abs/2109.05090v3
- Date: Mon, 28 Aug 2023 10:22:14 GMT
- Title: Enhancing Self-Disclosure In Neural Dialog Models By Candidate
Re-ranking
- Authors: Mayank Soni, Benjamin Cowan, Vincent Wade
- Abstract summary: Social penetration theory (SPT) proposes that communication between two people moves from shallow to deeper levels as the relationship progresses, primarily through self-disclosure.
In this paper, a Self-Disclosure Enhancement Architecture (SDEA) is introduced that utilizes a Self-Disclosure Topic Model (SDTM) to re-rank response candidates, enhancing self-disclosure in single-turn responses from the model.
- Score: 0.7059472280274008
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural language modelling has advanced the state of the art in many
downstream Natural Language Processing (NLP) tasks. One such area is
open-domain dialog modelling: neural dialog models based on GPT-2, such as
DialoGPT, have shown promising performance in single-turn conversation. However,
such (neural) dialog models have been criticized for generating responses that,
although relevant to the previous human utterance, tend to quickly dissipate
human interest and descend into trivial conversation. One reason for this is
the lack of an explicit conversation strategy in human-machine conversation.
Humans employ a range of conversation strategies while engaging in a
conversation; one key social strategy is self-disclosure (SD), the phenomenon
of revealing information about oneself to others. Social penetration theory
(SPT) proposes that communication between two people moves from shallow to
deeper levels as the relationship progresses, primarily through
self-disclosure. Disclosure helps create rapport among the participants in a
conversation. In this paper, a Self-Disclosure Enhancement Architecture (SDEA)
is introduced that utilizes a Self-Disclosure Topic Model (SDTM) during the
inference stage of a neural dialog model to re-rank response candidates,
enhancing self-disclosure in the model's single-turn responses.
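The re-ranking idea above can be sketched in a few lines: a generator proposes N candidate responses, each candidate is scored for self-disclosure, and the candidates are returned in descending score order. This is a minimal illustration, not the authors' implementation; in particular, the first-person-pronoun scorer below is a hypothetical stand-in for SDTM, which assigns self-disclosure levels to utterances.

```python
# Sketch of inference-time candidate re-ranking for self-disclosure.
# The scorer is a rough proxy (fraction of first-person pronouns), standing
# in for a trained self-disclosure topic model such as SDTM.

from typing import List, Tuple

# Hypothetical lexicon used by the stand-in scorer.
FIRST_PERSON = {"i", "me", "my", "mine", "myself", "we", "us", "our"}

def self_disclosure_score(utterance: str) -> float:
    """Fraction of tokens that are first-person pronouns (proxy score)."""
    tokens = [t.strip(".,!?;:") for t in utterance.lower().split()]
    if not tokens:
        return 0.0
    return sum(t in FIRST_PERSON for t in tokens) / len(tokens)

def rerank(candidates: List[str]) -> List[Tuple[str, float]]:
    """Order candidate responses by descending self-disclosure score."""
    scored = [(c, self_disclosure_score(c)) for c in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

candidates = [
    "That sounds interesting.",
    "I love hiking too; my favorite trail is near my hometown.",
    "Why do you ask?",
]
best, score = rerank(candidates)[0]
print(best)
```

In the actual architecture the scorer would be the topic model's self-disclosure level for each candidate, but the re-ranking step itself reduces to this sort-by-score pattern.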
Related papers
- REALTALK: A 21-Day Real-World Dataset for Long-Term Conversation [51.97224538045096]
We introduce REALTALK, a 21-day corpus of authentic messaging app dialogues.
We compare EI attributes and persona consistency to understand the challenges posed by real-world dialogues.
Our findings reveal that models struggle to simulate a user solely from dialogue history, while fine-tuning on specific user chats improves persona emulation.
arXiv Detail & Related papers (2025-02-18T20:29:01Z)
- Applying General Turn-taking Models to Conversational Human-Robot Interaction [3.8673630752805446]
This paper investigates the application of general turn-taking models, specifically TurnGPT and Voice Activity Projection (VAP), to improve conversational dynamics in HRI.
We propose methods for using these models in tandem to predict when a robot should begin preparing responses, take turns, and handle potential interruptions.
arXiv Detail & Related papers (2025-01-15T16:49:22Z)
- Channel-aware Decoupling Network for Multi-turn Dialogue Comprehension [81.47133615169203]
We propose compositional learning for holistic interaction across utterances beyond the sequential contextualization from PrLMs.
We employ domain-adaptive training strategies to help the model adapt to the dialogue domains.
Experimental results show that our method substantially boosts the strong PrLM baselines in four public benchmark datasets.
arXiv Detail & Related papers (2023-01-10T13:18:25Z)
- Emotion Recognition in Conversation using Probabilistic Soft Logic [17.62924003652853]
Emotion recognition in conversation (ERC) is a sub-field of emotion recognition that focuses on conversations containing two or more utterances.
We implement our approach in a framework called Probabilistic Soft Logic (PSL), a declarative templating language.
PSL provides functionality for the incorporation of results from neural models into PSL models.
We compare our method with state-of-the-art purely neural ERC systems, and see almost a 20% improvement.
arXiv Detail & Related papers (2022-07-14T23:59:06Z)
- GODEL: Large-Scale Pre-Training for Goal-Directed Dialog [119.1397031992088]
We introduce GODEL, a large pre-trained language model for dialog.
We show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups.
A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses.
arXiv Detail & Related papers (2022-06-22T18:19:32Z)
- Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects [82.81964713263483]
A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli.
Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli.
arXiv Detail & Related papers (2021-10-12T15:30:21Z)
- Emotion Dynamics Modeling via BERT [7.3785751096660555]
We develop a series of BERT-based models to capture the inter-interlocutor and intra-interlocutor dependencies of the conversational emotion dynamics.
Our proposed models can attain around 5% and 10% improvement over the state-of-the-art baselines, respectively.
arXiv Detail & Related papers (2021-04-15T05:58:48Z)
- A Taxonomy of Empathetic Response Intents in Human Social Conversations [1.52292571922932]
Open-domain conversational agents are becoming increasingly popular in the natural language processing community.
One of the challenges is enabling them to converse in an empathetic manner.
Current neural response generation methods rely solely on end-to-end learning from large scale conversation data to generate dialogues.
Recent work has shown the promise of combining dialogue act/intent modelling and neural response generation.
arXiv Detail & Related papers (2020-12-07T21:56:45Z)
- Are Neural Open-Domain Dialog Systems Robust to Speech Recognition Errors in the Dialog History? An Empirical Study [10.636793932473426]
We study the effects of various types of synthetic and actual ASR hypotheses in the dialog history on TransferTransfo.
To the best of our knowledge, this is the first study to evaluate the effects of synthetic and actual ASR hypotheses on a state-of-the-art neural open-domain dialog system.
arXiv Detail & Related papers (2020-08-18T00:36:57Z)
- Knowledge Injection into Dialogue Generation via Language Models [85.65843021510521]
InjK is a two-stage approach to inject knowledge into a dialogue generation model.
First, we train a large-scale language model and query it as textual knowledge.
Second, we frame a dialogue generation model to sequentially generate textual knowledge and a corresponding response.
arXiv Detail & Related papers (2020-04-30T07:31:24Z)
- You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
The research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.