Modeling Dyadic Conversations for Personality Inference
- URL: http://arxiv.org/abs/2009.12496v1
- Date: Sat, 26 Sep 2020 01:25:42 GMT
- Title: Modeling Dyadic Conversations for Personality Inference
- Authors: Qiang Liu
- Abstract summary: We propose a novel augmented Gated Recurrent Unit (GRU) model for learning unsupervised Personal Conversational Embeddings (PCE) based on dyadic conversations between individuals.
We conduct experiments on the Movie Script dataset, which is collected from conversations between characters in movie scripts.
- Score: 8.19277339277905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, automatic personality inference is drawing extensive attention
from both academia and industry. Conventional methods are mainly based on an
individual's user-generated content on social media, e.g., profiles, likes, and
texts, which is not very reliable. In contrast, dyadic conversations between
individuals capture not only how one expresses oneself but also how one reacts
to different situations. Rich contextual information in dyadic conversations can
explain an individual's responses during a conversation. In this paper, we propose a novel
augmented Gated Recurrent Unit (GRU) model for learning unsupervised Personal
Conversational Embeddings (PCE) based on dyadic conversations between
individuals. We adjust the formulation of each layer of a conventional GRU with
sequence-to-sequence learning and the personal information of both sides of the
conversation. Based on the learned PCE, we can infer the personality of each
individual. We conduct experiments on the Movie Script dataset, which is
collected from conversations between characters in movie scripts. We find that
modeling dyadic conversations between individuals can significantly improve
personality inference accuracy. Experimental results demonstrate the
effectiveness of our proposed method.
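As a concrete illustration of the idea described in the abstract, the snippet below shows one plausible way a GRU update could be augmented with the personal information of both conversation partners. This is a minimal, hedged sketch under our own assumptions, not the authors' implementation: the function and parameter names (`augmented_gru_step`, `p_self`, `p_other`, the `V` persona weights) and the exact way the persona vectors enter each gate are illustrative.

```python
# Minimal sketch (not the authors' code): a GRU-style update augmented with
# persona embeddings of both speakers, as suggested by the abstract.
# All names, and how the persona vectors enter each gate, are assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def augmented_gru_step(x_t, h_prev, p_self, p_other, params):
    """One recurrent step conditioned on the personal information of both sides."""
    W, U, V = params["W"], params["U"], params["V"]  # input, recurrent, persona weights
    p = np.concatenate([p_self, p_other])            # personal info of both speakers
    z = sigmoid(W["z"] @ x_t + U["z"] @ h_prev + V["z"] @ p)              # update gate
    r = sigmoid(W["r"] @ x_t + U["r"] @ h_prev + V["r"] @ p)              # reset gate
    h_tilde = np.tanh(W["h"] @ x_t + U["h"] @ (r * h_prev) + V["h"] @ p)  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde          # new hidden state

# Toy usage: random weights, one utterance embedding, two persona vectors.
rng = np.random.default_rng(0)
d_x, d_h, d_p = 8, 16, 4
params = {
    k: {g: rng.normal(scale=0.1, size=(d_h, d_in)) for g in ("z", "r", "h")}
    for k, d_in in (("W", d_x), ("U", d_h), ("V", 2 * d_p))
}
h = augmented_gru_step(rng.normal(size=d_x), np.zeros(d_h),
                       rng.normal(size=d_p), rng.normal(size=d_p), params)
print(h.shape)  # (16,)
```

In this reading, the hidden states accumulated per speaker over a conversation would serve as the Personal Conversational Embeddings, which a downstream classifier could then map to personality traits.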
Related papers
- The Effects of Embodiment and Personality Expression on Learning in LLM-based Educational Agents [0.7499722271664147]
This work investigates how personality expression and embodiment affect personality perception and learning in educational conversational agents.
We extend an existing personality-driven conversational agent framework by integrating LLM-based conversation support tailored to an educational application.
For each personality style, we assess three models: (1) a dialogue-only model that conveys personality through dialogue, (2) an animated human model that expresses personality solely through dialogue, and (3) an animated human model that expresses personality through both dialogue and body and facial animations.
arXiv Detail & Related papers (2024-06-24T09:38:26Z) - PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for
Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z) - Editing Personality for Large Language Models [73.59001811199823]
This paper introduces an innovative task focused on editing the personality traits of Large Language Models (LLMs).
We construct PersonalityEdit, a new benchmark dataset to address this task.
arXiv Detail & Related papers (2023-10-03T16:02:36Z) - MPCHAT: Towards Multimodal Persona-Grounded Conversation [54.800425322314105]
We extend persona-based dialogue to the multimodal domain and make two main contributions.
First, we present the first multimodal persona-based dialogue dataset named MPCHAT.
Second, we empirically show that incorporating multimodal persona, as measured by three proposed multimodal persona-grounded dialogue tasks, leads to statistically significant performance improvements.
arXiv Detail & Related papers (2023-05-27T06:46:42Z) - Enhancing Personalized Dialogue Generation with Contrastive Latent
Variables: Combining Sparse and Dense Persona [16.90863217077699]
Existing personalized dialogue agents model persona profiles from three resources: sparse persona descriptions, dense persona descriptions, and dialogue histories.
We combine the advantages of the three resources to obtain a richer and more accurate persona.
Experimental results on Chinese and English datasets demonstrate our model's superiority in personalization.
arXiv Detail & Related papers (2023-05-19T07:24:27Z) - DialogueNeRF: Towards Realistic Avatar Face-to-Face Conversation Video
Generation [54.84137342837465]
Face-to-face conversations account for the vast majority of daily conversations.
Most existing methods focus on single-person talking-head generation.
We propose a novel unified framework based on neural radiance fields (NeRF).
arXiv Detail & Related papers (2022-03-15T14:16:49Z) - Learning to Predict Persona Information for Dialogue Personalization
without Explicit Persona Description [10.17868476063421]
We propose a novel approach that learns to predict persona information based on the dialogue history to personalize the dialogue agent.
Experimental results on the PersonaChat dataset show that the proposed method can improve the consistency of generated responses.
A trained persona prediction model can be successfully transferred to other datasets and help generate more relevant responses.
arXiv Detail & Related papers (2021-11-30T03:19:24Z) - DLVGen: A Dual Latent Variable Approach to Personalized Dialogue
Generation [28.721411816698563]
We propose a Dual Latent Variable Generator (DLVGen) capable of generating personalized dialogue.
Unlike prior work, DLVGen models the latent distribution over potential responses as well as the latent distribution over the agent's potential persona.
Empirical results show that DLVGen is capable of generating diverse responses which accurately incorporate the agent's persona.
arXiv Detail & Related papers (2021-11-22T17:21:21Z) - Know Deeper: Knowledge-Conversation Cyclic Utilization Mechanism for
Open-domain Dialogue Generation [11.72386584395626]
End-to-end neural dialogue systems suffer from the problem of generating inconsistent and repetitive responses.
Existing dialogue models unilaterally incorporate personal knowledge into the dialogue, ignoring the fact that feeding personality-related conversation information back into personal knowledge, as a bilateral information flow, boosts the quality of the subsequent conversation.
We propose a conversation-adaption multi-view persona-aware response generation model that aims at enhancing conversation consistency and alleviating repetition in two respects.
arXiv Detail & Related papers (2021-07-16T08:59:06Z) - Dialogue History Matters! Personalized Response Selectionin Multi-turn
Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z) - Vyaktitv: A Multimodal Peer-to-Peer Hindi Conversations based Dataset
for Personality Assessment [50.15466026089435]
We present a novel peer-to-peer Hindi conversation dataset- Vyaktitv.
It consists of high-quality audio and video recordings of the participants, with Hinglish textual transcriptions for each conversation.
The dataset also contains a rich set of socio-demographic features for all participants, such as income and cultural orientation, amongst several others.
arXiv Detail & Related papers (2020-08-31T17:44:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.