Affect-Aware Deep Belief Network Representations for Multimodal
Unsupervised Deception Detection
- URL: http://arxiv.org/abs/2108.07897v1
- Date: Tue, 17 Aug 2021 22:07:26 GMT
- Title: Affect-Aware Deep Belief Network Representations for Multimodal
Unsupervised Deception Detection
- Authors: Leena Mathur and Maja J Matarić
- Abstract summary: We propose the first unsupervised approach for detecting real-world, high-stakes deception in videos without requiring labels.
This paper presents our novel approach for affect-aware unsupervised Deep Belief Networks (DBN) to learn discriminative representations of deceptive and truthful behavior.
In addition to using facial affect as a feature on which DBN models are trained, we also introduce a DBN training procedure that uses facial affect as an aligner of audio-visual representations.
- Score: 3.04585143845864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated systems that detect the social behavior of deception can enhance
human well-being across medical, social work, and legal domains. Labeled
datasets to train supervised deception detection models can rarely be collected
for real-world, high-stakes contexts. To address this challenge, we propose the
first unsupervised approach for detecting real-world, high-stakes deception in
videos without requiring labels. This paper presents our novel approach for
affect-aware unsupervised Deep Belief Networks (DBN) to learn discriminative
representations of deceptive and truthful behavior. Drawing on psychology
theories that link affect and deception, we experimented with unimodal and
multimodal DBN-based approaches trained on facial valence, facial arousal,
audio, and visual features. In addition to using facial affect as a feature on
which DBN models are trained, we also introduce a DBN training procedure that
uses facial affect as an aligner of audio-visual representations. We conducted
classification experiments with unsupervised Gaussian Mixture Model clustering
to evaluate our approaches. Our best unsupervised approach (trained on facial
valence and visual features) achieved an AUC of 80%, outperforming human
ability and performing comparably to fully-supervised models. Our results
motivate future work on unsupervised, affect-aware computational approaches for
detecting deception and other social behaviors in the wild.
Related papers
- Closely Interactive Human Reconstruction with Proxemics and Physics-Guided Adaption [64.07607726562841]
Existing multi-person human reconstruction approaches mainly focus on recovering accurate poses or avoiding penetration.
In this work, we tackle the task of reconstructing closely interactive humans from a monocular video.
We propose to leverage knowledge from proxemic behavior and physics to compensate for the lack of visual information.
arXiv Detail & Related papers (2024-04-17T11:55:45Z)
- Unsupervised Video Anomaly Detection for Stereotypical Behaviours in Autism [20.09315869162054]
This paper focuses on automatically detecting stereotypical behaviours with computer vision techniques.
We propose a Dual Stream deep model for Stereotypical Behaviours Detection, DS-SBD, based on the temporal trajectory of human poses and the repetition patterns of human actions.
arXiv Detail & Related papers (2023-02-27T13:24:08Z)
- BI AVAN: Brain inspired Adversarial Visual Attention Network [67.05560966998559]
We propose a brain-inspired adversarial visual attention network (BI-AVAN) to characterize human visual attention directly from functional brain activity.
Our model imitates the biased competition process between attention-related/neglected objects to identify and locate, in an unsupervised manner, the visual objects in a movie frame that the human brain focuses on.
arXiv Detail & Related papers (2022-10-27T22:20:36Z)
- Guiding Visual Attention in Deep Convolutional Neural Networks Based on Human Eye Movements [0.0]
Deep Convolutional Neural Networks (DCNNs) were originally inspired by principles of biological vision.
Recent advances in deep learning seem to decrease this similarity.
We investigate a purely data-driven approach to obtain useful models.
arXiv Detail & Related papers (2022-06-21T17:59:23Z)
- Behind the Machine's Gaze: Biologically Constrained Neural Networks Exhibit Human-like Visual Attention [40.878963450471026]
We propose the Neural Visual Attention (NeVA) algorithm to generate visual scanpaths in a top-down manner.
We show that the proposed method outperforms state-of-the-art unsupervised human attention models in terms of similarity to human scanpaths.
arXiv Detail & Related papers (2022-04-19T18:57:47Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Emotion Recognition Network (IERN) to alleviate the negative effects brought by the dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Unsupervised Audio-Visual Subspace Alignment for High-Stakes Deception Detection [3.04585143845864]
Automated systems that detect deception in high-stakes situations can enhance societal well-being across medical, social work, and legal domains.
Existing models for detecting high-stakes deception in videos have been supervised, but labeled datasets to train models can rarely be collected for most real-world applications.
We propose the first multimodal unsupervised transfer learning approach that detects real-world, high-stakes deception in videos without using high-stakes labels.
arXiv Detail & Related papers (2021-02-06T21:53:12Z)
- Introducing Representations of Facial Affect in Automated Multimodal Deception Detection [18.16596562087374]
Automated deception detection systems can enhance health, justice, and security in society.
This paper presents a novel analysis of the power of dimensional representations of facial affect for automated deception detection.
We used a video dataset of people communicating truthfully or deceptively in real-world, high-stakes courtroom situations.
arXiv Detail & Related papers (2020-08-31T05:12:57Z)
- Noisy Agents: Self-supervised Exploration by Predicting Auditory Events [127.82594819117753]
We propose a novel type of intrinsic motivation for Reinforcement Learning (RL) that encourages the agent to understand the causal effect of its actions.
We train a neural network to predict the auditory events and use the prediction errors as intrinsic rewards to guide RL exploration.
Experimental results on Atari games show that our new intrinsic motivation significantly outperforms several state-of-the-art baselines.
arXiv Detail & Related papers (2020-07-27T17:59:08Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to User and Entity Behaviour Analytics (UEBA) for cyber-security.
In this paper, we present a solution that mitigates such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.