Utilizing Mood-Inducing Background Music in Human-Robot Interaction
- URL: http://arxiv.org/abs/2308.14269v1
- Date: Mon, 28 Aug 2023 02:54:05 GMT
- Title: Utilizing Mood-Inducing Background Music in Human-Robot Interaction
- Authors: Elad Liebman, Peter Stone
- Abstract summary: This paper reports the results of an experiment in which human participants were required to complete a task in the presence of an autonomous agent while listening to background music.
Our results clearly indicate that such background information can be effectively incorporated in an agent's world representation in order to better predict people's behavior.
- Score: 46.05195048997979
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Past research has clearly established that music can affect mood and that
mood affects emotional and cognitive processing, and thus decision-making. It
follows that if a robot interacting with a person needs to predict the person's
behavior, knowledge of the music the person is listening to when acting is a
potentially relevant feature. To date, however, there has not been any concrete
evidence that a robot can improve its human-interactive decision-making by
taking into account what the person is listening to. This research fills this
gap by reporting the results of an experiment in which human participants were
required to complete a task in the presence of an autonomous agent while
listening to background music. Specifically, the participants drove a simulated
car through an intersection while listening to music. The intersection was not
empty, as another simulated vehicle, controlled autonomously, was also crossing
the intersection in a different direction. Our results clearly indicate that
such background information can be effectively incorporated in an agent's world
representation in order to better predict people's behavior. We subsequently
analyze how knowledge of music impacted both participant behavior and the
resulting learned policy. (An earlier version of part of the material in this
paper appeared originally in the first author's Ph.D. dissertation (Liebman,
2020), but it has not appeared in any peer-reviewed conference or journal.)
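The paper does not include an implementation listing; the sketch below is only a hedged illustration of its core idea, that the music a person is hearing can be appended to the agent's world-state features when predicting their driving behavior. Every name here (MOOD_SCORE, make_state, the feature set, the toy data, and the logistic-regression model) is an assumption for illustration, not the authors' code.

```python
# Minimal sketch: augmenting an agent's world state with a music-mood feature
# to predict whether the human driver yields at the intersection.
# All names, features, and data here are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical encoding of the background-music condition.
MOOD_SCORE = {"none": 0.0, "calm": -1.0, "aggressive": 1.0}

def make_state(speed_mps, dist_to_intersection_m, music):
    """World-state features, with the music condition as one extra dimension."""
    return np.array([speed_mps, dist_to_intersection_m, MOOD_SCORE[music]])

# Toy logged interactions: (state, did_the_human_yield).
X = np.array([
    make_state(8.0, 30.0, "calm"),
    make_state(14.0, 25.0, "aggressive"),
    make_state(10.0, 40.0, "none"),
    make_state(15.0, 20.0, "aggressive"),
    make_state(7.0, 35.0, "calm"),
    make_state(12.0, 28.0, "none"),
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = yielded, 0 = drove through

clf = LogisticRegression().fit(X, y)

# The autonomous agent can now condition its prediction on the music.
state = make_state(11.0, 30.0, "aggressive")
print("P(yield) =", clf.predict_proba([state])[0, 1])
```

Dropping the mood column from make_state would give the music-agnostic baseline; the paper's claim is that predictions conditioned on the music condition improve on such a baseline.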
Related papers
- SIFToM: Robust Spoken Instruction Following through Theory of Mind [51.326266354164716]
We present a cognitively inspired model, Speech Instruction Following through Theory of Mind (SIFToM), to enable robots to pragmatically follow human instructions under diverse speech conditions.
Results show that the SIFToM model outperforms state-of-the-art speech and language models, approaching human-level accuracy on challenging speech instruction following tasks.
arXiv Detail & Related papers (2024-09-17T02:36:10Z)
- Exploring and Applying Audio-Based Sentiment Analysis in Music [0.0]
The ability of a computational model to interpret musical emotions is largely unexplored.
This study seeks to (1) predict the emotion of a musical clip over time and (2) determine the next emotion value after the music in a time series to ensure seamless transitions.
arXiv Detail & Related papers (2024-02-22T22:34:06Z)
- Learning to Influence Human Behavior with Offline Reinforcement Learning [70.7884839812069]
We focus on influence in settings where there is a need to capture human suboptimality.
Experimenting online with humans is potentially unsafe, and creating a high-fidelity simulator of the environment is often impractical.
We show that offline reinforcement learning can learn to effectively influence suboptimal humans by extending and combining elements of observed human-human behavior (a minimal sketch of this batch setting appears after this list).
arXiv Detail & Related papers (2023-03-03T23:41:55Z)
- Affective Idiosyncratic Responses to Music [63.969810774018775]
We develop methods to measure affective responses to music from over 403M listener comments on a Chinese social music platform.
We test for musical, lyrical, contextual, demographic, and mental health effects that drive listener affective responses.
arXiv Detail & Related papers (2022-10-17T19:57:46Z)
- The world seems different in a social context: a neural network analysis of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions.
arXiv Detail & Related papers (2022-03-03T17:19:12Z)
- Towards a Real-time Measure of the Perception of Anthropomorphism in Human-robot Interaction [5.112850258732114]
We conducted an online human-robot interaction experiment in an educational use-case scenario.
43 English-speaking participants took part in the study.
We found that the degree of subjective and objective perception of anthropomorphism positively correlates with acoustic-prosodic entrainment.
arXiv Detail & Related papers (2022-01-24T11:10:37Z)
- Responsive Listening Head Generation: A Benchmark Dataset and Baseline [58.168958284290156]
We define the responsive listening head generation task as the synthesis of a non-verbal head with motions and expressions reacting to multiple inputs.
Unlike speech-driven gesture or talking head generation, we introduce more modalities in this task, hoping to benefit several research fields.
arXiv Detail & Related papers (2021-12-27T07:18:50Z) - Emotion Recognition of the Singing Voice: Toward a Real-Time Analysis
Tool for Singers [0.0]
Current computational-emotion research has focused on applying acoustic properties to analyze how emotions are perceived mathematically.
This paper seeks to reflect and expand upon the findings of related research and present a stepping-stone toward this end goal.
arXiv Detail & Related papers (2021-05-01T05:47:15Z) - Let's be friends! A rapport-building 3D embodied conversational agent
for the Human Support Robot [0.0]
Partial subtle mirroring of nonverbal behaviors during conversations (also known as mimicking or parallel empathy) is essential for rapport building.
Our research question is whether integrating an ECA able to mirror its interlocutor's facial expressions and head movements with a human-service robot will improve the user's experience.
Our contribution is the complex integration of an expressive ECA, able to track its interlocutor's face, and to mirror his/her facial expressions and head movements in real time, integrated with a human support robot.
arXiv Detail & Related papers (2021-03-08T01:02:41Z)
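The offline reinforcement learning entry above ("Learning to Influence Human Behavior with Offline Reinforcement Learning") reports its result without an algorithm listing. As a hedged, minimal sketch of the batch setting it describes, tabular Q-learning can be run entirely on a fixed log of past interactions, with no online experimentation on humans; the toy MDP, logged tuples, rewards, and hyperparameters below are all invented for illustration and are not the paper's method.

```python
# Minimal sketch of offline (batch) Q-learning from logged interaction data.
# The tiny tabular MDP, transitions, and rewards are invented illustrations.
import numpy as np

N_STATES, N_ACTIONS = 3, 2
GAMMA, ALPHA, SWEEPS = 0.9, 0.1, 200

# Logged (state, action, reward, next_state) tuples; no online interaction.
dataset = [
    (0, 1, 0.0, 1),
    (1, 0, 1.0, 2),
    (0, 0, 0.0, 2),
    (1, 1, -1.0, 0),
    (2, 0, 0.0, 2),
]

Q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(SWEEPS):
    for s, a, r, s_next in dataset:
        # Standard Q-learning target, computed only from the fixed batch.
        Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])

print("Greedy policy per state:", Q.argmax(axis=1))
```

The design point is that the value updates never query the environment: everything is learned from the fixed log, which is what makes the batch setting relevant when online experiments with humans are unsafe.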
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.