Affective Conversational Agents: Understanding Expectations and Personal
Influences
- URL: http://arxiv.org/abs/2310.12459v1
- Date: Thu, 19 Oct 2023 04:33:18 GMT
- Title: Affective Conversational Agents: Understanding Expectations and Personal
Influences
- Authors: Javier Hernandez, Jina Suh, Judith Amores, Kael Rowan, Gonzalo Ramos,
and Mary Czerwinski
- Abstract summary: We surveyed 745 respondents to understand the expectations and preferences regarding affective skills in various applications.
Our results indicate a preference for scenarios that involve human interaction, emotional support, and creative tasks.
Overall, the desired affective skills in AI agents depend largely on the application's context and nature.
- Score: 17.059654991560105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rise of AI conversational agents has broadened opportunities to enhance
human capabilities across various domains. As these agents become more
prevalent, it is crucial to investigate the impact of different affective
abilities on their performance and user experience. In this study, we surveyed
745 respondents to understand the expectations and preferences regarding
affective skills in various applications. Specifically, we assessed preferences
concerning AI agents that can perceive, respond to, and simulate emotions
across 32 distinct scenarios. Our results indicate a preference for scenarios
that involve human interaction, emotional support, and creative tasks, with
influences from factors such as emotional reappraisal and personality traits.
Overall, the desired affective skills in AI agents depend largely on the
application's context and nature, emphasizing the need for adaptability and
context-awareness in the design of affective AI conversational agents.
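Because the abstract's central claim is that the desired affective skills vary with the application's context, a minimal illustrative sketch of how a designer might encode such a context-aware policy is given below. It is not from the paper: the AffectivePolicy structure, the scenario categories, and the defaults are assumptions, loosely inspired by the reported preference for human-interaction, emotional-support, and creative scenarios.

```python
# Illustrative sketch only: a hypothetical policy that enables or disables
# affective capabilities per scenario category, reflecting the survey's
# finding that desired affective skills depend on the application's context.
# All names and defaults below are assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class AffectivePolicy:
    perceive_emotion: bool    # detect the user's emotional state
    respond_to_emotion: bool  # acknowledge and adapt to that state
    simulate_emotion: bool    # express emotion in the agent's own replies

# Hypothetical defaults for a few scenario categories.
POLICIES = {
    "emotional_support": AffectivePolicy(True, True, True),
    "creative_writing":  AffectivePolicy(True, True, True),
    "factual_lookup":    AffectivePolicy(False, False, False),
}

def policy_for(scenario: str) -> AffectivePolicy:
    """Fall back to a conservative, non-affective policy for unknown scenarios."""
    return POLICIES.get(scenario, AffectivePolicy(False, False, False))

if __name__ == "__main__":
    print(policy_for("emotional_support"))  # full affective behaviour
    print(policy_for("tax_filing"))         # unknown scenario -> conservative default
```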
Related papers
- Exploring Personality-Aware Interactions in Salesperson Dialogue Agents [21.282523537612477]
This study explores the influence of user personas, defined using the Myers-Briggs Type Indicator (MBTI), on the interaction quality and performance of sales-oriented dialogue agents.
Our findings reveal significant patterns in interaction dynamics, task completion rates, and dialogue naturalness, underscoring the future potential for dialogue agents to refine their strategies.
arXiv Detail & Related papers (2025-04-25T04:10:25Z)
- Exploring the Impact of Personality Traits on Conversational Recommender Systems: A Simulation with Large Language Models [70.180385882195]
This paper introduces a personality-aware user simulation for Conversational Recommender Systems (CRSs).
The user agent incorporates customizable personality traits and preferences, while the system agent possesses persuasion capabilities to simulate realistic interactions in CRSs.
Experimental results demonstrate that state-of-the-art LLMs can effectively generate diverse user responses aligned with specified personality traits.
arXiv Detail & Related papers (2025-04-09T13:21:17Z)
- Persona Dynamics: Unveiling the Impact of Personality Traits on Agents in Text-Based Games [14.443840118369176]
We introduce PANDA: Personality Adapted Neural Decision Agents, a novel method for projecting human personality traits onto agents.
We deploy 16 distinct personality types across 25 text-based games and analyze their trajectories.
These findings underscore the promise of personality-adapted agents for fostering more aligned, effective, and human-centric decision-making in interactive environments.
arXiv Detail & Related papers (2025-04-09T13:17:00Z)
- Human Decision-making is Susceptible to AI-driven Manipulation [87.24007555151452]
AI systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes.
This study examined human susceptibility to such manipulation in financial and emotional decision-making contexts.
arXiv Detail & Related papers (2025-02-11T15:56:22Z)
- CAPE: A Chinese Dataset for Appraisal-based Emotional Generation using Large Language Models [30.40159858361768]
We introduce a two-stage automatic data generation framework to create CAPE, a Chinese Cognitive Appraisal theory-based Emotional corpus.
This corpus facilitates the generation of dialogues with contextually appropriate emotional responses by accounting for diverse personal and situational factors.
Our study shows the potential for advancing emotional expression in conversational agents, paving the way for more nuanced and meaningful human-computer interactions.
arXiv Detail & Related papers (2024-10-18T03:33:18Z)
- Empathy Through Multimodality in Conversational Interfaces [1.360649555639909]
Conversational Health Agents (CHAs) are redefining healthcare by offering nuanced support that transcends textual analysis to incorporate emotional intelligence.
This paper introduces an LLM-based CHA engineered for rich, multimodal dialogue, particularly in the realm of mental health support.
It adeptly interprets and responds to users' emotional states by analyzing multimodal cues, thus delivering contextually aware and empathetically resonant verbal responses.
arXiv Detail & Related papers (2024-05-08T02:48:29Z)
- Social Life Simulation for Non-Cognitive Skills Learning [7.730401608473805]
We introduce Simulife++, an interactive platform enabled by a large language model (LLM).
The system allows users to act as protagonists, creating stories with one or multiple AI-based characters in diverse social scenarios.
In particular, we expanded the Human-AI interaction to a Human-AI-AI collaboration by including a Sage Agent, who acts as a bystander.
arXiv Detail & Related papers (2024-05-01T01:45:50Z)
- AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z)
- Large Language Models Understand and Can be Enhanced by Emotional Stimuli [53.53886609012119]
We take the first step towards exploring the ability of Large Language Models to understand emotional stimuli.
Our experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts.
Our human study results demonstrate that EmotionPrompt, which appends emotional stimulus sentences to the original prompt, significantly boosts performance on generative tasks (a minimal prompt-augmentation sketch appears after this list).
arXiv Detail & Related papers (2023-07-14T00:57:12Z)
- Expanding the Role of Affective Phenomena in Multimodal Interaction Research [57.069159905961214]
We examined over 16,000 papers from selected conferences in multimodal interaction, affective computing, and natural language processing.
We identify 910 affect-related papers and present our analysis of the role of affective phenomena in these papers.
We find limited research on how affect and emotion predictions might be used by AI systems to enhance machine understanding of human social behaviors and cognitive states.
arXiv Detail & Related papers (2023-05-18T09:08:39Z)
- e-Genia3 An AgentSpeak extension for empathic agents [0.0]
e-Genia3 is an extension of AgentSpeak that supports the development of empathic agents.
e-Genia3 modifies the agent's reasoning processes to select plans according to the analyzed event and the affective state and personality of the agent.
arXiv Detail & Related papers (2022-08-01T10:53:25Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most frequently used nonverbal cue is speaking activity, the most common computational method is support vector machines, the typical interaction environment is a meeting of 3-4 persons, and the dominant sensing approach is microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- Towards Emotion-Aware Agents For Negotiation Dialogues [2.1454205511807234]
Negotiation is a complex social interaction that encapsulates emotional encounters in human decision-making.
Virtual agents that can negotiate with humans are useful in pedagogy and conversational AI.
We analyze the extent to which emotion attributes extracted from the negotiation help in predicting negotiation outcomes.
arXiv Detail & Related papers (2021-07-28T04:42:36Z)
- Disambiguating Affective Stimulus Associations for Robot Perception and Dialogue [67.89143112645556]
We provide a NICO robot with the ability to learn the associations between a perceived auditory stimulus and an emotional expression.
NICO is able to do this for both individual subjects and specific stimuli, with the aid of an emotion-driven dialogue system.
The robot is then able to use this information to determine a subject's enjoyment of perceived auditory stimuli in a real human-robot interaction (HRI) scenario.
arXiv Detail & Related papers (2021-03-05T20:55:48Z)
- SPA: Verbal Interactions between Agents and Avatars in Shared Virtual Environments using Propositional Planning [61.335252950832256]
Sense-Plan-Ask, or SPA, generates plausible verbal interactions between virtual human-like agents and user avatars in shared virtual environments.
We find that our algorithm incurs only a small runtime cost and enables agents to complete their goals more effectively than agents without the ability to leverage natural-language communication.
arXiv Detail & Related papers (2020-02-08T23:15:06Z)
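To make the EmotionPrompt idea from the entry above concrete, here is a minimal sketch of the prompt-augmentation step: an emotional stimulus sentence is appended to the original prompt before it is sent to the LLM. The helper function and the exact stimulus wording are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of EmotionPrompt-style augmentation: append an emotional
# stimulus sentence to the original prompt before sending it to an LLM.
# The helper and the stimulus wording below are illustrative assumptions.
EMOTIONAL_STIMULI = [
    "This is very important to my career.",
    "Believe in your abilities and strive for excellence.",
]

def emotion_prompt(original_prompt: str, stimulus_index: int = 0) -> str:
    """Return the original prompt followed by an emotional stimulus sentence."""
    return f"{original_prompt} {EMOTIONAL_STIMULI[stimulus_index]}"

if __name__ == "__main__":
    base = "Summarize the following meeting notes in three bullet points."
    print(emotion_prompt(base))
```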
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.