Human Emotion Detection from Audio and Video Signals
- URL: http://arxiv.org/abs/2006.11871v1
- Date: Sun, 21 Jun 2020 18:36:23 GMT
- Title: Human Emotion Detection from Audio and Video Signals
- Authors: Sai Nikhil Chennoor, B.R.K. Madhur, Moujiz Ali, T. Kishore Kumar
- Abstract summary: The ability of a machine to understand human emotion and act accordingly has been a subject of great interest in today's world.
This model explicitly targets users who are troubled and fail to express it.
Also, the model's speech processing techniques provide an estimate of the emotion when the video quality is poor, and vice versa.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The primary objective is to teach a machine about human emotions, which has
become an essential requirement in the field of social intelligence and also
expedites the progress of human-machine interaction. The ability of a machine
to understand human emotion and act accordingly has been a subject of great
interest in today's world. Future generations of computers must therefore be
able to interact with a human being just as another person would. For example, people who
have autism often find it difficult to talk to someone about their state of
mind. This model explicitly targets users who are troubled and fail to
express it. Also, the model's speech processing techniques provide an estimate
of the emotion when the video quality is poor, and vice versa.
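The abstract describes falling back between modalities but gives no implementation details. A minimal sketch of one way such behavior could work is quality-weighted late fusion, in which each modality's class probabilities are weighted by a per-modality quality score; all names and numbers below are illustrative assumptions, not the paper's method.

```python
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad"]  # illustrative label set

def fuse_predictions(p_audio, p_video, q_audio, q_video):
    """Quality-weighted late fusion of per-modality emotion probabilities.

    p_audio, p_video: probability vectors over EMOTIONS from each model.
    q_audio, q_video: scalar quality scores in [0, 1], e.g. derived from
    SNR for audio or a face-detection score for video (assumptions).
    When one modality's quality is near zero, the fused estimate falls
    back to the other modality, as the abstract describes.
    """
    w_audio = q_audio / (q_audio + q_video + 1e-9)
    w_video = 1.0 - w_audio
    fused = w_audio * np.asarray(p_audio) + w_video * np.asarray(p_video)
    return fused / fused.sum()

# Example: poor video quality, so the audio estimate dominates.
p = fuse_predictions([0.7, 0.1, 0.1, 0.1], [0.25, 0.25, 0.25, 0.25],
                     q_audio=0.9, q_video=0.1)
print(EMOTIONS[int(np.argmax(p))])  # -> "angry"
```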
Related papers
- Speech Emotion Recognition Using CNN and Its Use Case in Digital Healthcare [0.0]
The process of identifying human emotions and affective states from speech is known as speech emotion recognition (SER).
My research uses a Convolutional Neural Network (CNN) to distinguish emotions in audio recordings and label them according to a range of different emotions.
I have developed a machine learning model that identifies the emotion in supplied audio files.
arXiv Detail & Related papers (2024-06-15T21:33:03Z)
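The summary names a CNN over audio recordings but no architecture. Below is a hypothetical minimal 1-D convolutional classifier over MFCC features in PyTorch; the label count, layer sizes, and input shape are assumptions, not the paper's actual model.

```python
import torch
import torch.nn as nn

NUM_EMOTIONS = 8  # e.g., the eight RAVDESS classes; an assumption

class AudioEmotionCNN(nn.Module):
    """Minimal 1-D CNN over an MFCC sequence of shape (batch, n_mfcc, frames)."""
    def __init__(self, n_mfcc=40, num_classes=NUM_EMOTIONS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_mfcc, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

# A ~3-second clip at 22.05 kHz yields roughly 130 MFCC frames.
logits = AudioEmotionCNN()(torch.randn(1, 40, 130))
print(logits.shape)  # torch.Size([1, 8])
```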
- Improved Emotional Alignment of AI and Humans: Human Ratings of Emotions Expressed by Stable Diffusion v1, DALL-E 2, and DALL-E 3 [10.76478480925475]
Generative AI systems are increasingly capable of expressing emotions via text and imagery.
We measure the alignment between emotions expressed by generative AI and human perceptions.
We show that the alignment significantly depends upon the AI model used and the emotion itself.
arXiv Detail & Related papers (2024-05-28T18:26:57Z)
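The summary does not state how alignment is quantified. One simple possibility is the agreement rate between the emotion a generator was asked to express and the emotions human raters report; the function below is illustrative only, not the paper's protocol.

```python
def emotion_alignment(intended, rated):
    """Fraction of human ratings matching the intended emotion.

    intended: the emotion label the image was generated to express.
    rated: emotion labels assigned by human raters.
    An illustrative metric, not the paper's exact measure.
    """
    return sum(r == intended for r in rated) / len(rated)

print(emotion_alignment("joy", ["joy", "joy", "surprise", "joy"]))  # 0.75
```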
- Humane Speech Synthesis through Zero-Shot Emotion and Disfluency Generation [0.6964027823688135]
Modern conversational systems lack the emotional depth and disfluent character of human interactions.
To address this shortcoming, we have designed an innovative speech synthesis pipeline.
Within this framework, a cutting-edge language model introduces both human-like emotion and disfluencies in a zero-shot setting.
arXiv Detail & Related papers (2024-03-31T00:38:02Z)
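Only the pipeline's stages are named in the summary; a hypothetical sketch of such a pipeline is shown below, where `call_llm` and `synthesize` are placeholders for whichever language model and TTS engine are used, not the authors' components.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for any instruction-following language model."""
    raise NotImplementedError("wire up your LLM of choice here")

def synthesize(text: str) -> bytes:
    """Placeholder for a text-to-speech backend returning audio bytes."""
    raise NotImplementedError("wire up your TTS engine here")

def humane_tts(text: str, emotion: str) -> bytes:
    # Zero-shot: the instruction alone requests emotion and disfluencies;
    # no fine-tuning or in-context examples are assumed.
    prompt = (
        f"Rewrite the following sentence so it sounds {emotion}, adding "
        f"natural human disfluencies (fillers like 'um', restarts, pauses) "
        f"without changing its meaning:\n{text}"
    )
    return synthesize(call_llm(prompt))
```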
- The Good, The Bad, and Why: Unveiling Emotions in Generative AI [73.94035652867618]
We show that EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain.
arXiv Detail & Related papers (2023-12-18T11:19:45Z)
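As an illustration, EmotionPrompt-style prompting amounts to appending an emotional stimulus sentence to a task prompt. The sketch below uses one stimulus in the spirit of the authors' earlier EmotionPrompt work; the helper name and default are assumptions for illustration.

```python
def emotion_prompt(task: str,
                   stimulus: str = "This is very important to my career.") -> str:
    """Append an emotional stimulus to a task prompt (EmotionPrompt-style).

    The default stimulus is one example in the spirit of the paper;
    the original work evaluates a whole set of such sentences.
    """
    return f"{task} {stimulus}"

print(emotion_prompt("Determine whether the review is positive or negative."))
```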
- Socratis: Are large multimodal models emotionally aware? [63.912414283486555]
Existing emotion prediction benchmarks do not consider the diversity of emotions that an image and text can elicit in humans for a variety of reasons.
We propose Socratis, a societal reactions benchmark, where each image-caption (IC) pair is annotated with multiple emotions and the reasons for feeling them.
We benchmark the capability of state-of-the-art multimodal large language models to generate the reasons for feeling an emotion given an IC pair.
arXiv Detail & Related papers (2023-08-31T13:59:35Z)
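The summary does not give the dataset schema; a hypothetical Python record for one annotated image-caption (IC) pair might look like the following, with every field name and value assumed for illustration.

```python
# Hypothetical shape of one Socratis-style record; fields are assumptions.
example_record = {
    "image": "flood_street.jpg",
    "caption": "Residents wade through a flooded street after the storm.",
    "annotations": [
        {"emotion": "sadness", "reason": "People have lost access to their homes."},
        {"emotion": "fear", "reason": "The water could keep rising."},
    ],
}

# One IC pair elicits multiple emotions, each paired with a free-text reason.
for ann in example_record["annotations"]:
    print(ann["emotion"], "-", ann["reason"])
```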
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate either carefulness or its absence during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- HICEM: A High-Coverage Emotion Model for Artificial Emotional Intelligence [9.153146173929935]
Next-generation artificial emotional intelligence (AEI) is taking center stage to address users' desire for deeper, more meaningful human-machine interaction.
Unlike theories of emotion, which have been the historical focus in psychology, emotion models are descriptive tools.
This work has broad implications in social robotics, human-machine interaction, mental healthcare, and computational psychology.
arXiv Detail & Related papers (2022-06-15T15:21:30Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Emotion-aware Chat Machine: Automatic Emotional Response Generation for Human-like Emotional Interaction [55.47134146639492]
This article proposes a unified end-to-end neural architecture that is capable of simultaneously encoding the semantics and the emotions in a post.
Experiments on real-world data demonstrate that the proposed method outperforms the state-of-the-art methods in terms of both content coherence and emotion appropriateness.
arXiv Detail & Related papers (2021-06-06T06:26:15Z)
- Disambiguating Affective Stimulus Associations for Robot Perception and Dialogue [67.89143112645556]
We provide a NICO robot with the ability to learn the associations between a perceived auditory stimulus and an emotional expression.
NICO is able to do this for both individual subjects and specific stimuli, with the aid of an emotion-driven dialogue system.
The robot is then able to use this information to determine a subject's enjoyment of perceived auditory stimuli in a real HRI scenario.
arXiv Detail & Related papers (2021-03-05T20:55:48Z)
- Modeling emotion for human-like behavior in future intelligent robots [0.913755431537592]
We show how neuroscience can help advance the current state of the art.
We argue that a stronger integration of emotion-related processes in robot models is critical for the design of human-like behavior.
arXiv Detail & Related papers (2020-09-30T17:32:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.