Introducing Representations of Facial Affect in Automated Multimodal Deception Detection
- URL: http://arxiv.org/abs/2008.13369v1
- Date: Mon, 31 Aug 2020 05:12:57 GMT
- Title: Introducing Representations of Facial Affect in Automated Multimodal Deception Detection
- Authors: Leena Mathur and Maja J. Matarić
- Abstract summary: Automated deception detection systems can enhance health, justice, and security in society.
This paper presents a novel analysis of the discriminative power of dimensional representations of facial affect for automated deception detection.
We used a video dataset of people communicating truthfully or deceptively in real-world, high-stakes courtroom situations.
- Score: 18.16596562087374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated deception detection systems can enhance health, justice, and
security in society by helping humans detect deceivers in high-stakes
situations across medical and legal domains, among others. This paper presents
a novel analysis of the discriminative power of dimensional representations of
facial affect for automated deception detection, along with interpretable
features from visual, vocal, and verbal modalities. We used a video dataset of
people communicating truthfully or deceptively in real-world, high-stakes
courtroom situations. We leveraged recent advances in automated emotion
recognition in-the-wild by implementing a state-of-the-art deep neural network
trained on the Aff-Wild database to extract continuous representations of
facial valence and facial arousal from speakers. We experimented with unimodal
Support Vector Machines (SVM) and SVM-based multimodal fusion methods to
identify effective features, modalities, and modeling approaches for detecting
deception. Unimodal models trained on facial affect achieved an AUC of 80%, and
facial affect contributed towards the highest-performing multimodal approach
(adaptive boosting) that achieved an AUC of 91% when tested on speakers who
were not part of training sets. This approach achieved a higher AUC than
existing automated machine learning approaches that used interpretable visual,
vocal, and verbal features to detect deception in this dataset, but did not use
facial affect. Across all videos, deceptive and truthful speakers exhibited
significant differences in facial valence and facial arousal, contributing
computational support to existing psychological theories on affect and
deception. The demonstrated importance of facial affect in our models informs
and motivates the future development of automated, affect-aware machine
learning approaches for modeling and detecting deception and other social
behaviors in-the-wild.
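As a concrete illustration of the modeling setup above, the following is a minimal scikit-learn sketch, not the authors' code: the feature arrays are random placeholders for the paper's facial-affect and visual/vocal/verbal features, and treating the adaptive-boosting fusion as AdaBoost over concatenated features is an assumption. What it does show faithfully is the speaker-independent evaluation: GroupKFold keeps every video from a given speaker out of the training data used to test on that speaker.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_videos = 120
X_affect = rng.normal(size=(n_videos, 8))      # placeholder: valence/arousal statistics per video
X_other = rng.normal(size=(n_videos, 40))      # placeholder: visual + vocal + verbal features
y = rng.integers(0, 2, size=n_videos)          # 1 = deceptive, 0 = truthful
speakers = rng.integers(0, 30, size=n_videos)  # speaker ID for each video

# Speaker-independent cross-validation: all of a speaker's videos fall in one fold.
cv = GroupKFold(n_splits=5)

# Unimodal model: SVM trained on facial-affect features alone.
svm_affect = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
auc_affect = cross_val_score(svm_affect, X_affect, y, groups=speakers,
                             cv=cv, scoring="roc_auc")

# Multimodal model: AdaBoost over concatenated features (one plausible
# reading of the adaptive-boosting fusion; the paper's exact scheme may differ).
X_all = np.hstack([X_affect, X_other])
fusion = AdaBoostClassifier(n_estimators=200, random_state=0)
auc_fusion = cross_val_score(fusion, X_all, y, groups=speakers,
                             cv=cv, scoring="roc_auc")

print(f"facial-affect SVM AUC: {auc_affect.mean():.2f}")
print(f"multimodal AdaBoost AUC: {auc_fusion.mean():.2f}")
```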
Related papers
- Facial Expression Recognition using Squeeze and Excitation-powered Swin Transformers [0.0]
We propose a framework that employs Swin Vision Transformers (SwinT) and a squeeze-and-excitation (SE) block to address vision tasks.
Our focus is to create an efficient FER model based on the SwinT architecture that can recognize facial emotions using minimal data.
We trained our model on a hybrid dataset and evaluated its performance on the AffectNet dataset, achieving an F1-score of 0.5420.
arXiv Detail & Related papers (2023-01-26T02:29:17Z)
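The squeeze-and-excitation block named in this entry (and reused in the multi-resolution entry further down) is compact enough to sketch. A minimal PyTorch version, with an illustrative reduction ratio not taken from the paper:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global spatial average per channel
        self.fc = nn.Sequential(              # excitation: small gating MLP over channels
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # reweight channels by learned importance

feats = torch.randn(2, 64, 14, 14)            # dummy feature map
print(SEBlock(64)(feats).shape)               # torch.Size([2, 64, 14, 14])
```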
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory AdaptatiOn (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics of different datasets.
CIAO improves facial expression recognition performance across six datasets, each with a distinct affective representation.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
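A minimal PyTorch sketch of the late-fusion step described in this entry; the linear heads stand in for the transfer-learned speech branch and the BERT-based text branch, and averaging logits is one plausible fusion choice, not necessarily the paper's:

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, speech_dim=512, text_dim=768, n_emotions=4):
        super().__init__()
        self.speech_head = nn.Linear(speech_dim, n_emotions)  # stand-in for the speech branch
        self.text_head = nn.Linear(text_dim, n_emotions)      # stand-in for the BERT branch

    def forward(self, speech_emb, text_emb):
        # Late fusion: combine per-branch emotion logits after each branch
        # has produced its own prediction.
        return 0.5 * (self.speech_head(speech_emb) + self.text_head(text_emb))

model = LateFusion()
logits = model(torch.randn(8, 512), torch.randn(8, 768))
print(logits.shape)  # torch.Size([8, 4])
```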
- Affect-Aware Deep Belief Network Representations for Multimodal Unsupervised Deception Detection [3.04585143845864]
This paper presents a novel affect-aware approach based on unsupervised Deep Belief Networks (DBN) for detecting real-world, high-stakes deception in videos without requiring labels.
In addition to using facial affect as a feature on which DBN models are trained, we also introduce a DBN training procedure that uses facial affect as an aligner of audio-visual representations.
arXiv Detail & Related papers (2021-08-17T22:07:26Z)
- I Only Have Eyes for You: The Impact of Masks On Convolutional-Based Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts to recognizing facial expressions from people wearing masks.
We also perform feature-level visualizations to demonstrate how the FaceChannel's inherent capability to learn and combine facial features changes in this constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z)
- Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel, holistic multi-task framework is presented that jointly learns all three affect-recognition tasks (valence-arousal, expressions, and action units) and generalizes effectively.
arXiv Detail & Related papers (2021-03-29T17:36:20Z)
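A minimal PyTorch sketch of what a unified multi-task affect head can look like, with separate outputs for the valence-arousal, expression, and action-unit tasks named in the title; the dimensions and unweighted loss sum are illustrative assumptions, not the paper's design:

```python
import torch
import torch.nn as nn

class MultiTaskAffectHead(nn.Module):
    def __init__(self, feat_dim=512, n_expressions=7, n_aus=12):
        super().__init__()
        self.va = nn.Linear(feat_dim, 2)                # valence, arousal in [-1, 1]
        self.expr = nn.Linear(feat_dim, n_expressions)  # categorical expression logits
        self.au = nn.Linear(feat_dim, n_aus)            # per-action-unit logits

    def forward(self, feats):
        return torch.tanh(self.va(feats)), self.expr(feats), self.au(feats)

head = MultiTaskAffectHead()
feats = torch.randn(4, 512)                             # dummy backbone features
va, expr, au = head(feats)

# Joint training sums one loss per task (dummy targets shown):
loss = (nn.functional.mse_loss(va, torch.zeros(4, 2))
        + nn.functional.cross_entropy(expr, torch.zeros(4, dtype=torch.long))
        + nn.functional.binary_cross_entropy_with_logits(au, torch.zeros(4, 12)))
print(loss.item())
```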
- A Multi-resolution Approach to Expression Recognition in the Wild [9.118706387430883]
We propose a multi-resolution approach to the Facial Expression Recognition task.
We ground our intuition in the observation that face images are often acquired at different resolutions.
To this end, we use a ResNet-like architecture equipped with Squeeze-and-Excitation blocks, trained on the Affect-in-the-Wild 2 dataset.
arXiv Detail & Related papers (2021-03-09T21:21:02Z)
- Learning Emotional-Blinded Face Representations [77.7653702071127]
We propose two face representations that are blind to the facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
arXiv Detail & Related papers (2020-09-18T09:24:10Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that a machine be able to recognize the user's emotional state with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
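A minimal scikit-learn sketch of the two-stage idea in this last entry: learn a compact representation with an autoencoder, then regress continuous emotion values with a Support Vector Regressor. The small MLP autoencoder and random arrays are stand-ins for the paper's deep convolutional autoencoder and face images:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))       # stand-in for flattened face crops
valence = rng.uniform(-1, 1, 200)    # continuous emotion labels

# Stage 1: autoencoder (input -> bottleneck -> reconstructed input).
ae = MLPRegressor(hidden_layer_sizes=(32, 8, 32), max_iter=500, random_state=0)
ae.fit(X, X)

# Encode by applying the first two (ReLU) layers manually; MLPRegressor
# exposes coefs_/intercepts_ but has no encode() method, so this is a
# workaround, not a library API.
def encode(X):
    h = np.maximum(X @ ae.coefs_[0] + ae.intercepts_[0], 0)     # hidden layer 1
    return np.maximum(h @ ae.coefs_[1] + ae.intercepts_[1], 0)  # bottleneck codes

# Stage 2: SVR regresses continuous valence from the bottleneck codes.
svr = SVR(kernel="rbf").fit(encode(X), valence)
print(svr.predict(encode(X[:3])))
```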
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.