Emotion Recognition for In-the-wild Videos
- URL: http://arxiv.org/abs/2002.05447v1
- Date: Thu, 13 Feb 2020 11:29:46 GMT
- Title: Emotion Recognition for In-the-wild Videos
- Authors: Hanyu Liu, Jiabei Zeng, Shiguang Shan and Xilin Chen
- Abstract summary: This paper is a brief introduction to our submission to the seven basic expression classification track of Affective Behavior Analysis in-the-wild Competition held in conjunction with the IEEE International Conference on Automatic Face and Gesture Recognition (FG) 2020.
Our method combines Deep Residual Network (ResNet) and Bidirectional Long Short-Term Memory Network (BLSTM), achieving 64.3% accuracy and 43.4% final metric on the validation set.
- Score: 92.01434273996097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
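The pipeline summarized above (per-frame ResNet features aggregated by a BLSTM) can be sketched in NumPy. This is an illustrative toy, not the authors' implementation: the feature dimension, hidden size, sequence length, and random weights below are placeholders standing in for a trained ResNet backbone and trained recurrent layers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gate pre-activations stacked as [i, f, g, o]."""
    H = h.size
    z = W @ x + U @ h + b
    i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])
    g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])
    c = f * c + i * g
    return o * np.tanh(c), c

def run_lstm(frames, W, U, b, reverse=False):
    """Run one direction over (T, D) frames; return (T, H) hidden states."""
    T, H = len(frames), b.size // 4
    h, c = np.zeros(H), np.zeros(H)
    out = np.zeros((T, H))
    for t in (range(T - 1, -1, -1) if reverse else range(T)):
        h, c = lstm_step(frames[t], h, c, W, U, b)
        out[t] = h
    return out

def blstm_head(frames, fwd, bwd, W_out, b_out):
    """Concatenate forward/backward states, project to 7 expression logits."""
    states = np.concatenate([run_lstm(frames, *fwd),
                             run_lstm(frames, *bwd, reverse=True)], axis=1)
    return states @ W_out + b_out  # (T, 7) per-frame logits

# Toy run: 16 frames of 512-d "ResNet" features (all sizes are placeholders).
rng = np.random.default_rng(0)
D, H, T = 512, 64, 16
make = lambda: (rng.normal(0, 0.1, (4 * H, D)),   # input weights W
                rng.normal(0, 0.1, (4 * H, H)),   # recurrent weights U
                np.zeros(4 * H))                  # bias b
logits = blstm_head(rng.normal(size=(T, D)), make(), make(),
                    rng.normal(0, 0.1, (2 * H, 7)), np.zeros(7))
print(logits.shape)  # (16, 7): one score per frame per basic expression
```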
Related papers
- Enhancing Facial Expression Recognition through Dual-Direction Attention Mixed Feature Networks and CLIP: Application to 8th ABAW Challenge [1.0374615809135401]
We present our contribution to the 8th ABAW challenge at CVPR 2025.
We tackle valence-arousal estimation, emotion recognition, and facial action unit detection as three independent challenges.
Our approach leverages the well-known Dual-Direction Attention Mixed Feature Network (DDAMFN) for all three tasks, achieving results that surpass the proposed baselines.
arXiv Detail & Related papers (2025-03-15T21:03:03Z)
- Design of an Expression Recognition Solution Based on the Global Channel-Spatial Attention Mechanism and Proportional Criterion Fusion [11.506800500772734]
This paper aims to introduce the method we will adopt in the 8th Affective and Behavioral Analysis in the Wild (ABAW) Competition.
We design feature extraction models for image and audio sequences, based on a residual hybrid convolutional neural network and a multi-branch convolutional neural network, respectively.
In the facial expression recognition task of the 8th ABAW Competition, our method ranked third on the official validation set.
arXiv Detail & Related papers (2025-03-15T00:59:34Z)
- The 6th Affective Behavior Analysis in-the-wild (ABAW) Competition [53.718777420180395]
This paper describes the 6th Affective Behavior Analysis in-the-wild (ABAW) Competition.
The 6th ABAW Competition addresses contemporary challenges in understanding human emotions and behaviors.
arXiv Detail & Related papers (2024-02-29T16:49:38Z)
- ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection & Multi-Task Learning Challenges [4.273075747204267]
This paper describes the third Affective Behavior Analysis in-the-wild (ABAW) Competition, held in conjunction with IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
arXiv Detail & Related papers (2022-02-22T04:02:17Z)
- Real-time EEG-based Emotion Recognition using Discrete Wavelet Transforms on Full and Reduced Channel Signals [0.0]
Real-time EEG-based Emotion Recognition (EEG-ER) with consumer-grade EEG devices involves classification of emotions using a reduced number of channels.
These devices typically provide only four or five channels, far fewer than the channel counts used in most current state-of-the-art research.
We propose to use Discrete Wavelet Transforms (DWT) to extract time-frequency domain features, and we use time-windows of a few seconds to perform EEG-ER classification.
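The feature-extraction step can be sketched with a Haar wavelet. The paper does not specify its wavelet family, window length, or sampling rate, so all of those are illustrative assumptions here: each DWT level halves the frequency band, and per-band coefficient energies serve as time-frequency features.

```python
import numpy as np

def haar_dwt_step(signal):
    """One Haar DWT level: scaled pairwise sums (approximation) and
    differences (detail)."""
    even, odd = signal[0::2], signal[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def dwt_band_energies(window, levels=4):
    """Decompose a 1-D EEG window and return per-band energies,
    coarsest band first. Energy = mean squared coefficient."""
    approx, energies = np.asarray(window, dtype=float), []
    for _ in range(levels):
        approx, detail = haar_dwt_step(approx)
        energies.append(np.mean(detail ** 2))
    energies.append(np.mean(approx ** 2))
    return np.array(energies[::-1])

# Toy example: a 2-second window containing a 10 Hz (alpha-band) tone.
fs = 128                              # assumed consumer-device rate (Hz)
t = np.arange(2 * fs) / fs            # 2-second time window
window = np.sin(2 * np.pi * 10 * t)
feats = dwt_band_energies(window, levels=4)
print(feats.shape)  # (5,): one approximation band plus 4 detail bands
```

One feature vector per channel per time-window would then feed any standard classifier.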
arXiv Detail & Related papers (2021-10-11T22:28:43Z)
- G-DetKD: Towards General Distillation Framework for Object Detectors via Contrastive and Semantic-guided Feature Imitation [49.421099172544196]
We propose a novel semantic-guided feature imitation technique, which automatically performs soft matching between feature pairs across all pyramid levels.
We also introduce contrastive distillation to effectively capture the information encoded in the relationship between different feature regions.
Our method consistently outperforms existing detection KD techniques, both when the components of the framework are used separately and when they are used in conjunction.
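A minimal sketch of the soft-matching idea, not the paper's exact formulation: each student feature imitates a similarity-weighted mixture of teacher features pooled across all pyramid levels, rather than a hard per-level assignment. The cosine similarity and softmax weighting below are illustrative assumptions.

```python
import numpy as np

def soft_matching_loss(student_feats, teacher_levels):
    """Similarity-weighted feature imitation (illustrative only).

    student_feats: (N, D) student region features.
    teacher_levels: list of (M_k, D) teacher features, one per pyramid level.
    """
    teacher = np.vstack(teacher_levels)                       # (sum M_k, D)
    s = student_feats / np.linalg.norm(student_feats, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    sim = s @ t.T                                             # cosine similarity
    w = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)  # soft assignment
    target = w @ teacher                                      # (N, D) soft targets
    return np.mean((student_feats - target) ** 2)             # imitation loss

rng = np.random.default_rng(1)
student = rng.normal(size=(8, 32))                            # 8 student regions
pyramid = [rng.normal(size=(m, 32)) for m in (16, 8, 4)]      # 3 teacher levels
loss = soft_matching_loss(student, pyramid)
print(round(float(loss), 3))
```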
arXiv Detail & Related papers (2021-08-17T07:44:27Z)
- Spatial and Temporal Networks for Facial Expression Recognition in the Wild Videos [14.760435737320744]
The paper describes our proposed methodology for the seven basic expression classification track of Affective Behavior Analysis in-the-wild (ABAW) Competition 2021.
Our ensemble model achieved an F1 score of 0.4133, an accuracy of 0.6216, and a final metric of 0.4821 on the validation set.
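The "final metric" in the ABAW expression track is the weighted combination 0.67 × F1 + 0.33 × accuracy, which reproduces the figure reported above up to rounding:

```python
def abaw_expression_metric(f1, accuracy):
    """ABAW expression-track score: 0.67 * macro F1 + 0.33 * total accuracy."""
    return 0.67 * f1 + 0.33 * accuracy

score = abaw_expression_metric(f1=0.4133, accuracy=0.6216)
print(round(score, 4))  # 0.482, matching the reported 0.4821 up to rounding
```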
arXiv Detail & Related papers (2021-07-12T01:41:23Z)
- Deep Convolutional Neural Network Based Facial Expression Recognition in the Wild [0.0]
We used our proposed deep convolutional neural network (CNN) model to perform automatic facial expression recognition (AFER) on the given dataset.
Our proposed model has achieved an accuracy of 50.77% and an F1 score of 29.16% on the validation set.
arXiv Detail & Related papers (2020-10-03T08:17:00Z) - Towards a Competitive End-to-End Speech Recognition for CHiME-6 Dinner
Party Transcription [73.66530509749305]
In this paper, we argue that, even in difficult cases, some end-to-end approaches show performance close to the hybrid baseline.
We experimentally compare and analyze CTC-Attention versus RNN-Transducer approaches along with RNN versus Transformer architectures.
Our best end-to-end model, based on RNN-Transducer together with an improved beam search, trails the LF-MMI TDNN-F CHiME-6 Challenge baseline by only 3.8% absolute WER.
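WER, the figure of merit quoted above, is word-level edit distance normalized by the number of reference words; a minimal implementation for illustration:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] = edit distance between the first i-1 ref words and first j hyp words
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution or match
        prev = curr
    return prev[-1] / len(ref)

# One deletion ("the") and one insertion ("please") over 4 reference words.
print(wer("turn the lights off", "turn lights off please"))  # 0.5
```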
arXiv Detail & Related papers (2020-04-22T19:08:33Z) - Recognizing Families In the Wild: White Paper for the 4th Edition Data
Challenge [91.55319616114943]
This paper summarizes the supported tasks (i.e., kinship verification, tri-subject verification, and search & retrieval of missing children) in the Recognizing Families In the Wild (RFIW) evaluation.
The purpose of this paper is to describe the 2020 RFIW challenge, end-to-end, along with forecasts in promising future directions.
arXiv Detail & Related papers (2020-02-15T02:22:42Z) - Analysing Affective Behavior in the First ABAW 2020 Competition [49.90617840789334]
The Affective Behavior Analysis in-the-wild (ABAW) 2020 Competition is the first competition aiming at the automatic analysis of three main behavior tasks: valence-arousal estimation, basic expression recognition, and action unit detection.
We describe this Competition, to be held in conjunction with the IEEE Conference on Face and Gesture Recognition, May 2020, in Buenos Aires, Argentina.
We outline the evaluation metrics, present both the baseline system and the top-3 performing teams' methodologies per Challenge and finally present their obtained results.
arXiv Detail & Related papers (2020-01-30T15:41:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.