Understanding How People Rate Their Conversations
- URL: http://arxiv.org/abs/2206.00167v1
- Date: Wed, 1 Jun 2022 00:45:32 GMT
- Title: Understanding How People Rate Their Conversations
- Authors: Alexandros Papangelis, Nicole Chartier, Pankaj Rajan, Julia Hirschberg, Dilek Hakkani-Tur
- Abstract summary: We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
- Score: 73.17730062864314
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User ratings play a significant role in spoken dialogue systems. Typically,
such ratings tend to be averaged across all users and then utilized as feedback
to improve the system or personalize its behavior. While this method can be
useful to understand broad, general issues with the system and its behavior, it
does not take into account differences between users that affect their ratings.
In this work, we conduct a study to better understand how people rate their
interactions with conversational agents. One macro-level characteristic that
has been shown to correlate with how people perceive their inter-personal
communication is personality. We specifically focus on agreeableness and
extraversion as variables that may explain variation in ratings and therefore
provide a more meaningful signal for training or personalization. In order to
elicit those personality traits during an interaction with a conversational
agent, we designed and validated a fictional story, grounded in prior work in
psychology. We then implemented the story into an experimental conversational
agent that allowed users to opt-in to hearing the story. Our results suggest
that for human-conversational agent interactions, extraversion may play a role
in user ratings, but more data is needed to determine if the relationship is
significant. Agreeableness, on the other hand, plays a statistically
significant role in conversation ratings: users who are more agreeable are more
likely to provide a higher rating for their interaction. In addition, we found
that users who opted to hear the story were, in general, more likely to rate
their conversational experience higher than those who did not.
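To make the kind of analysis described above concrete, here is a minimal sketch of regressing conversation ratings on personality traits and story opt-in. This is not the authors' actual pipeline: the column names, 1-5 scales, synthetic data, and the choice of ordinary least squares are all illustrative assumptions.

```python
# Minimal sketch: regress conversation ratings on personality traits
# and story opt-in. Data, column names, and the OLS model choice are
# illustrative assumptions, not the paper's actual analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-in for per-user data: trait scores on a 1-5 scale,
# whether the user opted in to hearing the story, and their rating.
agreeableness = rng.uniform(1, 5, n)
extraversion = rng.uniform(1, 5, n)
heard_story = rng.integers(0, 2, n)
rating = np.clip(
    2.0 + 0.4 * agreeableness + 0.1 * extraversion
    + 0.3 * heard_story + rng.normal(0, 0.8, n),
    1, 5,
)

df = pd.DataFrame({
    "rating": rating,
    "agreeableness": agreeableness,
    "extraversion": extraversion,
    "heard_story": heard_story,
})

# OLS with per-coefficient p-values: a significant positive coefficient
# on agreeableness would mirror the paper's reported finding.
model = smf.ols(
    "rating ~ agreeableness + extraversion + heard_story", data=df
).fit()
print(model.summary())
```

On the synthetic data the agreeableness coefficient is positive by construction; on real ratings, the same summary table is where one would read off whether each effect is statistically significant.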
Related papers
- Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations [58.65755268815283]
Many real dialogues are interactive, meaning an agent's utterances will influence their conversational partner, elicit information, or change their opinion.
We use this fact to rewrite and augment existing suboptimal data, and to train, via offline reinforcement learning (RL), an agent that outperforms both prompting and learning from unaltered human demonstrations.
Our results in a user study with real humans show that our approach greatly outperforms existing state-of-the-art dialogue agents.
arXiv Detail & Related papers (2024-11-07T21:37:51Z)
- Large Language Models Can Infer Personality from Free-Form User Interactions [0.0]
GPT-4 can infer personality with moderate accuracy, outperforming previous approaches.
Results show that the direct focus on personality assessment did not result in a less positive user experience.
Preliminary analyses suggest that the accuracy of personality inferences varies only marginally across different socio-demographic subgroups.
arXiv Detail & Related papers (2024-05-19T20:33:36Z)
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of conversational turns has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of the turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z)
- CloChat: Understanding How People Customize, Interact, and Experience Personas in Large Language Models [15.915071948354466]
CloChat is an interface supporting easy and accurate customization of agent personas in large language models.
Results indicate that participants formed emotional bonds with the customized agents, engaged in more dynamic dialogues, and showed interest in sustaining interactions.
arXiv Detail & Related papers (2024-02-23T11:25:17Z)
- An Analysis of User Behaviors for Objectively Evaluating Spoken Dialogue Systems [26.003947740875482]
We investigate the relationship between user behaviors and subjective evaluation scores in social dialogue tasks.
The results reveal that in dialogue tasks where user utterances are primary, such as attentive listening and job interviews, indicators like the number of utterances and words play a significant role in evaluation; a minimal sketch of computing such indicators follows after this list.
arXiv Detail & Related papers (2024-01-10T01:02:26Z)
- Towards Building a Personalized Dialogue Generator via Implicit User Persona Detection [0.0]
We consider that high-quality communication is essentially built on apprehending the persona of the other party.
Motivated by this, we propose a novel personalized dialogue generator by detecting implicit user persona.
arXiv Detail & Related papers (2022-04-15T08:12:10Z)
- Revealing Persona Biases in Dialogue Systems [64.96908171646808]
We present the first large-scale study on persona biases in dialogue systems.
We conduct analyses on personas of different social classes, sexual orientations, races, and genders.
In our studies of the Blender and DialoGPT dialogue systems, we show that the choice of personas can affect the degree of harms in generated responses.
arXiv Detail & Related papers (2021-04-18T05:44:41Z)
- You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)
- IART: Intent-aware Response Ranking with Transformers in Information-seeking Conversation Systems [80.0781718687327]
We analyze user intent patterns in information-seeking conversations and propose an intent-aware neural response ranking model, IART.
IART is built on top of the integration of user intent modeling and language representation learning with the Transformer architecture.
arXiv Detail & Related papers (2020-02-03T05:59:52Z)
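To illustrate the behavioral indicators highlighted in the "An Analysis of User Behaviors" entry above, here is a minimal sketch of counting user utterances and words per dialogue and correlating them with subjective scores. The dialogue format, the scores, and the choice of Pearson correlation are assumptions for illustration, not that paper's setup.

```python
# Minimal sketch of objective behavioral indicators: utterance and word
# counts per dialogue, correlated with subjective evaluation scores.
# The data layout and scores below are illustrative assumptions.
from scipy.stats import pearsonr

# Each dialogue: the user's utterances plus a subjective score (1-7).
dialogues = [
    {"user_utterances": ["I walk every morning.", "Mostly in the park."],
     "score": 6},
    {"user_utterances": ["Not much."], "score": 3},
    {"user_utterances": ["I changed jobs last year.", "It was stressful.",
                         "But things are better now."], "score": 7},
    {"user_utterances": ["Fine.", "Yes."], "score": 4},
]

num_utterances = [len(d["user_utterances"]) for d in dialogues]
num_words = [sum(len(u.split()) for u in d["user_utterances"])
             for d in dialogues]
scores = [d["score"] for d in dialogues]

# Pearson correlation between each behavioral indicator and the score:
# a strong positive r would echo the reported role of these indicators.
for name, values in [("utterances", num_utterances), ("words", num_words)]:
    r, p = pearsonr(values, scores)
    print(f"{name}: r={r:.2f}, p={p:.3f}")
```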