Performance Analysis and Evaluation of Cloud Vision Emotion APIs
- URL: http://arxiv.org/abs/2303.12974v1
- Date: Thu, 23 Mar 2023 00:47:43 GMT
- Title: Performance Analysis and Evaluation of Cloud Vision Emotion APIs
- Authors: Salik Ram Khanal, Prabin Sharma, Hugo Fernandes, João Barroso, Vítor Manuel de Jesus Filipe
- Abstract summary: The performances of two well-known APIs were compared using a public dataset of 980 images of facial emotions.
It has been found that the prediction accuracy for each emotion varies according to the cloud service being used.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Facial expression is a form of communication that can be used to interact
with computers and other electronic devices, and recognizing emotion from faces is an
emerging practice with applications in many fields. Many cloud-based vision application
programming interfaces (APIs) are available that recognize emotion from facial images
and video. In this article, the performances of two well-known APIs were compared using
a public dataset of 980 images of facial emotions. For these experiments, a client
program was developed which iterates over the image set, calls the cloud services, and
caches the results of the emotion detection for each image. Performance was evaluated
for each emotion class using prediction accuracy. It was found that the prediction
accuracy for each emotion varies according to the cloud service being used. Similarly,
each service provider shows strong variation in performance across the emotion classes,
as is detailed in this article.
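The paper does not include its client code, but the workflow it describes (iterate over
the image set, call a cloud emotion API, cache each response, then score per-class
prediction accuracy) is straightforward to sketch. The snippet below is a minimal
illustration only: the function call_emotion_api, the folder layout (one subfolder per
ground-truth emotion), and the JSON response format are assumptions made for this
sketch, not details taken from the paper or from any specific provider's API.

```python
# Minimal sketch (not the authors' code): iterate over an image set, call a
# cloud emotion API, cache each JSON response, and compute per-class accuracy.
# `call_emotion_api` and the folder layout (one subfolder per true emotion)
# are assumptions made for illustration.
import json
from collections import defaultdict
from pathlib import Path

CACHE_DIR = Path("cache")          # one cached JSON response per image
DATASET_DIR = Path("dataset")      # dataset/<true_emotion>/<image>.jpg

def call_emotion_api(image_bytes: bytes) -> dict:
    """Placeholder for the provider-specific REST call.
    Assumed to return scores such as {"anger": 0.1, "happiness": 0.8, ...}."""
    raise NotImplementedError

def predict(image_path: Path) -> str:
    cache_file = CACHE_DIR / (image_path.stem + ".json")
    if cache_file.exists():                      # reuse the cached result
        scores = json.loads(cache_file.read_text())
    else:                                        # call the service and cache it
        scores = call_emotion_api(image_path.read_bytes())
        CACHE_DIR.mkdir(exist_ok=True)
        cache_file.write_text(json.dumps(scores))
    return max(scores, key=scores.get)           # highest-scoring emotion

def per_class_accuracy() -> dict:
    correct, total = defaultdict(int), defaultdict(int)
    for image_path in DATASET_DIR.glob("*/*.jpg"):
        true_label = image_path.parent.name      # folder name is the ground truth
        total[true_label] += 1
        if predict(image_path) == true_label:
            correct[true_label] += 1
    return {label: correct[label] / total[label] for label in total}

if __name__ == "__main__":
    for emotion, accuracy in sorted(per_class_accuracy().items()):
        print(f"{emotion}: {accuracy:.2%}")
```

Scoring accuracy per emotion class, rather than over the whole dataset, is what exposes
the per-provider variation reported in the abstract.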
Related papers
- Multi-Branch Network for Imagery Emotion Prediction [4.618814297494939]
We present a novel Multi-Branch Network (MBN) to predict both discrete and continuous emotions in an image.
Our proposed method significantly outperforms state-of-the-art methods with 28.4% in mAP and 0.93 in MAE.
arXiv Detail & Related papers (2023-12-12T18:34:56Z)
- High-Level Context Representation for Emotion Recognition in Images [4.987022981158291]
We propose an approach for high-level context representation extraction from images.
The model relies on a single cue and a single encoding stream to correlate this representation with emotions.
Our approach is more efficient than previous models and can be easily deployed to address real-world problems related to emotion recognition.
arXiv Detail & Related papers (2023-05-05T13:20:41Z)
- Real-time Emotion and Gender Classification using Ensemble CNN [0.0]
This paper presents the implementation of an Ensemble CNN for building a real-time system that can detect the emotion and gender of a person.
Our work can predict emotion and gender on single face images as well as multiple face images.
arXiv Detail & Related papers (2021-11-15T13:51:35Z)
- Multi-Cue Adaptive Emotion Recognition Network [4.570705738465714]
We propose a new deep learning approach for emotion recognition based on adaptive multi-cues.
We compare the proposed approach with state-of-the-art approaches on the CAER-S dataset.
arXiv Detail & Related papers (2021-11-03T15:08:55Z)
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z)
- Affective Image Content Analysis: Two Decades Review and New Perspectives [132.889649256384]
We will comprehensively review the development of affective image content analysis (AICA) over the past two decades.
We will focus on the state-of-the-art methods with respect to three main challenges -- the affective gap, perception subjectivity, and label noise and absence.
We discuss some challenges and promising research directions in the future, such as image content and context understanding, group emotion clustering, and viewer-image interaction.
arXiv Detail & Related papers (2021-06-30T15:20:56Z)
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and shows the state-of-the-art result for classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z)
- Affect2MM: Affective Analysis of Multimedia Content Using Emotion Causality [84.69595956853908]
We present Affect2MM, a learning method for time-series emotion prediction for multimedia content.
Our goal is to automatically capture the varying emotions depicted by characters in real-life human-centric situations and behaviors.
arXiv Detail & Related papers (2021-03-11T09:07:25Z)
- Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)
- Real-time Facial Expression Recognition "In The Wild" by Disentangling 3D Expression from Identity [6.974241731162878]
This paper proposes a novel method for human emotion recognition from a single RGB image.
We construct a large-scale dataset of facial videos, rich in facial dynamics, identities, expressions, appearance and 3D pose variations.
Our proposed framework runs at 50 frames per second and is capable of robustly estimating parameters of 3D expression variation.
arXiv Detail & Related papers (2020-05-12T01:32:55Z)
- Emotion Recognition From Gait Analyses: Current Research and Future Directions [48.93172413752614]
Gait conveys information about the walker's emotion.
The mapping between various emotions and gait patterns provides a new source for automated emotion recognition.
Gait is remotely observable, more difficult to imitate, and requires less cooperation from the subject.
arXiv Detail & Related papers (2020-03-13T08:22:33Z)