Domain Adaptation for Facial Expression Classifier via Domain
Discrimination and Gradient Reversal
- URL: http://arxiv.org/abs/2106.01467v1
- Date: Wed, 2 Jun 2021 20:58:24 GMT
- Title: Domain Adaptation for Facial Expression Classifier via Domain
Discrimination and Gradient Reversal
- Authors: Kamil Akhmetov
- Abstract summary: The research in the field of Facial Expression Recognition (FER) has acquired increased interest over the past decade.
We propose a new architecture for the task of FER and examine the impact of domain discrimination loss regularization on the learning process.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Bringing empathy to a computerized system could significantly
improve the quality of human-computer communication: once machines can
understand customer intentions, they can better serve their needs. According
to different studies (see the Literature Review), visual information is one of
the most important channels of human interaction and carries significant
behavioral signals that can be captured from facial expressions. It is
therefore natural that research in the field of Facial Expression Recognition
(FER) has attracted increasing interest over the past decade, given its
diverse application areas, including health care, sociology, psychology,
driver safety, virtual reality, cognitive sciences, security, entertainment,
and marketing. We propose a new architecture for the task of FER and examine
the impact of domain discrimination loss regularization on the learning
process. From observations covering both classical training conditions and
unsupervised domain adaptation scenarios, we trace the important aspects of
integrating the considered domain adaptation approach. The results may serve
as a foundation for further research in the field.
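The title and abstract name the two ingredients of the approach, a domain
discrimination loss and gradient reversal, without spelling out the mechanics.
Below is a minimal PyTorch-style sketch of a DANN-style setup for FER,
assuming a shared feature extractor with an expression head and a domain head
behind a gradient reversal layer; the class names, layer sizes, and the
`lambd` scaling factor are illustrative assumptions, not the paper's actual
architecture.

```python
# Sketch only: gradient reversal + domain discriminator as a regularizer.
# Names and layer sizes are hypothetical, not taken from the paper.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the feature extractor.
        return -ctx.lambd * grad_output, None


class DannStyleFER(nn.Module):
    """Shared feature extractor with an expression classifier and a domain
    discriminator trained through a gradient reversal layer."""

    def __init__(self, feat_dim=256, n_expressions=7, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.expression_head = nn.Linear(feat_dim, n_expressions)
        self.domain_head = nn.Linear(feat_dim, 2)  # source vs. target domain

    def forward(self, x):
        f = self.features(x)
        expr_logits = self.expression_head(f)
        # Domain discrimination loss acts as a regularizer: reversed gradients
        # push the shared features toward domain-invariant representations.
        domain_logits = self.domain_head(GradientReversal.apply(f, self.lambd))
        return expr_logits, domain_logits
```

In such a setup, the expression loss is computed on labeled source images
only, while the domain loss is computed on both source and unlabeled target
images; the reversed gradient is what turns domain discrimination into a
regularizer during unsupervised domain adaptation.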
Related papers
- Object recognition in primates: What can early visual areas contribute? [0.0]
We investigate how signals carried by early visual processing areas could be used for object recognition in the periphery.
Models of V1 simple or complex cells could provide quite reliable information, resulting in performance better than 80% in realistic scenarios.
We propose that object recognition should be seen as a parallel process, with high-accuracy foveal modules operating in parallel with lower-accuracy and faster modules that can operate across the visual field.
arXiv Detail & Related papers (2024-07-05T18:57:09Z)
- Towards Subject Agnostic Affective Emotion Recognition [8.142798657174332]
EEG signals manifest subject instability in subject-agnostic affective brain-computer interfaces (aBCIs).
We propose a novel framework, meta-learning based augmented domain adaptation for subject-agnostic aBCIs.
Our proposed approach is shown to be effective in experiments on a public aBCI dataset.
arXiv Detail & Related papers (2023-10-20T23:44:34Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- Facial Expression Recognition using Squeeze and Excitation-powered Swin Transformers [0.0]
We propose a framework that employs Swin Vision Transformers (SwinT) and Squeeze-and-Excitation (SE) blocks to address vision tasks.
Our focus was to create an efficient FER model based on SwinT architecture that can recognize facial emotions using minimal data.
We trained our model on a hybrid dataset and evaluated its performance on the AffectNet dataset, achieving an F1-score of 0.5420.
arXiv Detail & Related papers (2023-01-26T02:29:17Z)
- I am Only Happy When There is Light: The Impact of Environmental Changes on Affective Facial Expressions Recognition [65.69256728493015]
We study the impact of different image conditions on the recognition of arousal from human facial expressions.
Our results show how the interpretation of human affective states can differ greatly in either the positive or negative direction.
arXiv Detail & Related papers (2022-10-28T16:28:26Z)
- Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel multi-task and holistic framework is presented which is able to jointly learn and effectively generalize and perform affect recognition.
arXiv Detail & Related papers (2021-03-29T17:36:20Z)
- Learning Emotional-Blinded Face Representations [77.7653702071127]
We propose two face representations that are blind to facial expressions associated to emotional responses.
This work is motivated by new international regulations for personal data protection.
arXiv Detail & Related papers (2020-09-18T09:24:10Z)
- Introducing Representations of Facial Affect in Automated Multimodal Deception Detection [18.16596562087374]
Automated deception detection systems can enhance health, justice, and security in society.
This paper presents a novel analysis of the power of dimensional representations of facial affect for automated deception detection.
We used a video dataset of people communicating truthfully or deceptively in real-world, high-stakes courtroom situations.
arXiv Detail & Related papers (2020-08-31T05:12:57Z)
- Adversarial Graph Representation Adaptation for Cross-Domain Facial Expression Recognition [86.25926461936412]
We propose a novel Adversarial Graph Representation Adaptation (AGRA) framework that unifies graph representation propagation with adversarial learning for cross-domain holistic-local feature co-adaptation.
We conduct extensive and fair experiments on several popular benchmarks and show that the proposed AGRA framework achieves superior performance over previous state-of-the-art methods.
arXiv Detail & Related papers (2020-08-03T13:27:24Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.