Weaknesses of Facial Emotion Recognition Systems
- URL: http://arxiv.org/abs/2601.12402v1
- Date: Sun, 18 Jan 2026 13:27:01 GMT
- Title: Weaknesses of Facial Emotion Recognition Systems
- Authors: Aleksandra Jamróz, Patrycja Wysocka, Piotr Garbat
- Abstract summary: Emotion detection from faces is one of the machine learning problems needed for human-computer interaction. Three of the most interesting and best solutions are selected, followed by the selection of three datasets that stood out for the diversity and number of images in them. The selected neural networks are trained, and then a series of experiments are performed to compare their performance.
- Score: 41.99844472131922
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emotion detection from faces is one of the machine learning problems needed for human-computer interaction. The variety of methods in use is enormous, which motivated an in-depth review of articles and scientific studies. Three of the most interesting and best-performing solutions are selected, followed by three datasets that stand out for the diversity and number of images they contain. The selected neural networks are trained, and a series of experiments is then performed to compare their performance, including testing each model on datasets other than the one it was trained on. This reveals weaknesses in existing solutions, including differences between datasets, unequal difficulty in recognizing certain emotions, and challenges in differentiating between closely related emotions.
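The cross-dataset protocol described in the abstract can be sketched as a simple train-on-one, test-on-all loop that produces an accuracy matrix; the off-diagonal entries are what expose the generalization gaps between datasets. The function and dataset names below are illustrative assumptions, not from the paper, and the "model" is a toy majority-label classifier standing in for a real network.

```python
# Minimal sketch of cross-dataset evaluation: train a model on each
# dataset and evaluate it on every dataset, including the others.
def cross_dataset_matrix(datasets, train_fn, eval_fn):
    """Return {(train_name, test_name): accuracy} for all dataset pairs."""
    results = {}
    for train_name, train_data in datasets.items():
        model = train_fn(train_data)  # fit on a single dataset
        for test_name, test_data in datasets.items():
            results[(train_name, test_name)] = eval_fn(model, test_data)
    return results

# Toy stand-ins: the "model" just memorizes the majority label.
def train_majority(data):
    labels = [label for _, label in data]
    return max(set(labels), key=labels.count)

def eval_majority(model, data):
    return sum(1 for _, label in data if label == model) / len(data)

# Hypothetical miniature datasets of (image, label) pairs.
datasets = {
    "A": [("img1", "happy"), ("img2", "happy"), ("img3", "sad")],
    "B": [("img4", "sad"), ("img5", "sad"), ("img6", "happy")],
}
m = cross_dataset_matrix(datasets, train_majority, eval_majority)
```

In this toy setup, each model scores 2/3 on its own dataset but only 1/3 on the other, mirroring (in caricature) the cross-dataset degradation the paper investigates.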
Related papers
- REFS: Robust EEG feature selection with missing multi-dimensional annotation for emotion recognition [6.8109977763829885]
The affective brain-computer interface is a crucial technology for affective interaction and emotional intelligence. The high dimensionality of multi-type EEG features, combined with the relatively small number of high-quality EEG samples, poses challenges in emotion recognition. This study proposes a novel EEG feature selection method for emotion recognition with missing multi-dimensional annotation.
arXiv Detail & Related papers (2025-08-08T01:53:46Z) - Dynamic Modality and View Selection for Multimodal Emotion Recognition with Missing Modalities [46.543216927386005]
Multiple channels, such as speech (voice) and facial expressions (image) are crucial in understanding human emotions.
One significant hurdle is how AI models manage the absence of a particular modality.
This study's central focus is assessing the performance and resilience of two strategies when confronted with the lack of one modality.
arXiv Detail & Related papers (2024-04-18T15:18:14Z) - Deep Imbalanced Learning for Multimodal Emotion Recognition in Conversations [15.705757672984662]
Multimodal Emotion Recognition in Conversations (MERC) is a significant development direction for machine intelligence.
Much of the data in MERC naturally exhibits an imbalanced distribution of emotion categories, yet researchers often ignore the negative impact of imbalanced data on emotion recognition.
We propose the Class Boundary Enhanced Representation Learning (CBERL) model to address the imbalanced distribution of emotion categories in raw data.
We have conducted extensive experiments on the IEMOCAP and MELD benchmark datasets, and the results show that CBERL achieves a clear improvement in emotion recognition performance.
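One standard remedy for the class imbalance described above (a sketch of the general technique, not of the CBERL method itself) is to weight each emotion class by its inverse frequency and feed those weights into a weighted loss:

```python
# Inverse-frequency class weights: rare emotion classes (e.g. "fear")
# receive larger weights so a weighted cross-entropy loss does not
# collapse onto the majority class.
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by total / (num_classes * count)."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

# Hypothetical imbalanced label distribution.
labels = ["neutral"] * 6 + ["anger"] * 3 + ["fear"] * 1
w = inverse_frequency_weights(labels)
```

Here "fear", the rarest class, gets the largest weight (10/3), while the majority class "neutral" is down-weighted below 1.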
arXiv Detail & Related papers (2023-12-11T12:35:17Z) - Implicit Design Choices and Their Impact on Emotion Recognition Model Development and Evaluation [5.534160116442057]
The subjectivity of emotions poses significant challenges in developing accurate and robust computational models.
This thesis examines critical facets of emotion recognition, beginning with the collection of diverse datasets.
To handle the challenge of non-representative training data, this work collects the Multimodal Stressed Emotion dataset.
arXiv Detail & Related papers (2023-09-06T02:45:42Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance over six different datasets, each with a very distinct affective representation.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - A cross-corpus study on speech emotion recognition [29.582678406878568]
This study investigates whether information learnt from acted emotions is useful for detecting natural emotions.
Four adult English datasets covering acted, elicited and natural emotions are considered.
A state-of-the-art model is proposed in order to accurately measure the degradation in performance.
arXiv Detail & Related papers (2022-07-05T15:15:22Z) - SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z) - Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first work to introduce a stimuli selection process into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z) - Affective Image Content Analysis: Two Decades Review and New Perspectives [132.889649256384]
We will comprehensively review the development of affective image content analysis (AICA) over the past two decades.
We will focus on the state-of-the-art methods with respect to three main challenges -- the affective gap, perception subjectivity, and label noise and absence.
We discuss some challenges and promising research directions in the future, such as image content and context understanding, group emotion clustering, and viewer-image interaction.
arXiv Detail & Related papers (2021-06-30T15:20:56Z) - Variants of BERT, Random Forests and SVM approach for Multimodal Emotion-Target Sub-challenge [11.71437054341057]
We present and discuss our classification methodology for MuSe-Topic Sub-challenge.
We ensemble two language models which are ALBERT and RoBERTa to predict 10 classes of topics.
arXiv Detail & Related papers (2020-07-28T01:15:50Z)
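Ensembling two language models as in the entry above is commonly done at the probability level; the following sketch (an assumption about the general technique, not the paper's exact method) averages the per-class probabilities of two classifiers and takes the argmax:

```python
# Probability-level ensemble of two classifiers (e.g. ALBERT and
# RoBERTa over topic classes): mix the probability vectors with a
# weight, then predict the class with the highest mixed probability.
def ensemble_predict(probs_a, probs_b, weight_a=0.5):
    """Weighted average of two probability vectors; returns argmax index."""
    mixed = [weight_a * pa + (1 - weight_a) * pb
             for pa, pb in zip(probs_a, probs_b)]
    return max(range(len(mixed)), key=mixed.__getitem__)

# Illustrative 3-class outputs (real MuSe-Topic models would emit 10).
albert_probs = [0.1, 0.7, 0.2]
roberta_probs = [0.2, 0.3, 0.5]
pred = ensemble_predict(albert_probs, roberta_probs)
```

With equal weights the mixed vector is [0.15, 0.5, 0.35], so the ensemble follows ALBERT's confident vote for class 1 even though RoBERTa alone would have picked class 2.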
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.