Deception Detection in Videos using the Facial Action Coding System
- URL: http://arxiv.org/abs/2105.13659v1
- Date: Fri, 28 May 2021 08:10:21 GMT
- Title: Deception Detection in Videos using the Facial Action Coding System
- Authors: Hammad Ud Din Ahmed, Usama Ijaz Bajwa, Fan Zhang, Muhammad Waqas Anwar
- Abstract summary: In our approach, we extract facial action units using the Facial Action Coding System and use them as parameters for training a deep learning model.
We specifically use a long short-term memory (LSTM) network, which we trained on the Real-life Trial dataset.
We also performed cross-dataset validation using the Real-life Trial dataset, the Silesian Deception dataset, and the Bag-of-Lies Deception dataset.
- Score: 4.641678530055641
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Facts are important in decision making in every situation, which is why it is
important to catch deceptive information before it is accepted as fact.
Deception detection in videos has gained traction in recent times for its
various real-life applications. In our approach, we extract facial action units
using the Facial Action Coding System and use them as parameters for training a
deep learning model. We specifically use a long short-term memory (LSTM)
network, which we trained on the Real-life Trial dataset, and it provides one of
the best facial-only approaches to deception detection. We also performed
cross-dataset validation using the Real-life Trial dataset, the Silesian
Deception Dataset, and the Bag-of-Lies Deception Dataset, which has not
previously been attempted for a deception detection system. We tested and
compared all datasets against each other, individually and collectively, using
the same deep learning training model. The results show that adding different
datasets to the training worsens the accuracy of the model, primarily because
the nature of these datasets differs vastly from one another.
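As a rough illustration of the pipeline described in the abstract (per-frame facial action units fed to an LSTM that makes a binary deceptive/truthful decision), a minimal Keras sketch follows. The number of AU channels, the sequence length, the layer sizes, and the use of OpenFace-style AU intensities are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): classify a clip as deceptive vs. truthful
# from a sequence of per-frame facial Action Unit (AU) intensities, e.g. as
# exported by a FACS-based tool such as OpenFace. All sizes below are assumptions.
import numpy as np
from tensorflow.keras import layers, models

NUM_AUS = 17   # assumed number of AU intensity channels per frame
SEQ_LEN = 300  # assumed fixed number of frames per clip (pad or truncate to this)

def build_model() -> models.Model:
    """LSTM over a sequence of AU vectors, ending in a binary deception output."""
    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, NUM_AUS)),
        layers.Masking(mask_value=0.0),        # ignore zero-padded frames
        layers.LSTM(64),                       # temporal summary of the AU sequence
        layers.Dense(1, activation="sigmoid"), # P(deceptive)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Placeholder arrays standing in for AU sequences and deceptive/truthful labels.
    X = np.random.rand(32, SEQ_LEN, NUM_AUS).astype("float32")
    y = np.random.randint(0, 2, size=(32, 1))
    model = build_model()
    model.fit(X, y, epochs=2, batch_size=8, validation_split=0.25)
```

Under this setup, each clip would be converted to a fixed-length matrix of AU intensities (padding or truncating frames), and cross-dataset validation would amount to swapping which dataset supplies the training sequences and which supplies the held-out test sequences.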
Related papers
- Corrective Machine Unlearning [22.342035149807923]
We formalize Corrective Machine Unlearning as the problem of mitigating the impact of data affected by unknown manipulations on a trained model.
We find most existing unlearning methods, including retraining-from-scratch without the deletion set, require most of the manipulated data to be identified for effective corrective unlearning.
One approach, Selective Synaptic Dampening, achieves limited success, unlearning adverse effects with just a small portion of the manipulated samples in our setting.
arXiv Detail & Related papers (2024-02-21T18:54:37Z) - Reinforcement Learning from Passive Data via Latent Intentions [86.4969514480008]
We show that passive data can still be used to learn features that accelerate downstream RL.
Our approach learns from passive data by modeling intentions.
Our experiments demonstrate the ability to learn from many forms of passive data, including cross-embodiment video data and YouTube videos.
arXiv Detail & Related papers (2023-04-10T17:59:05Z) - Multi-dataset Training of Transformers for Robust Action Recognition [75.5695991766902]
We study the task of robust feature representations, aiming to generalize well on multiple datasets for action recognition.
Here, we propose a novel multi-dataset training paradigm, MultiTrain, with the design of two new loss terms, namely informative loss and projection loss.
We verify the effectiveness of our method on five challenging datasets, Kinetics-400, Kinetics-700, Moments-in-Time, Activitynet and Something-something-v2.
arXiv Detail & Related papers (2022-09-26T01:30:43Z) - What Stops Learning-based 3D Registration from Working in the Real
World? [53.68326201131434]
This work identifies the sources of 3D point cloud registration failures, analyzes the reasons behind them, and proposes solutions.
Ultimately, this translates to a best-practice 3D registration network (BPNet), constituting the first learning-based method able to handle previously-unseen objects in real-world data.
Our model generalizes to real data without any fine-tuning, reaching an accuracy of up to 67% on point clouds of unseen objects obtained with a commercial sensor.
arXiv Detail & Related papers (2021-11-19T19:24:27Z) - Facial Emotion Recognition using Deep Residual Networks in Real-World
Environments [5.834678345946704]
We propose a facial feature extractor model trained on an in-the-wild and massively collected video dataset.
The dataset consists of a million labelled frames and 2,616 thousand subjects.
As temporal information is important to the emotion recognition domain, we utilise LSTM cells to capture the temporal dynamics in the data.
arXiv Detail & Related papers (2021-11-04T10:08:22Z) - Shuffled Patch-Wise Supervision for Presentation Attack Detection [12.031796234206135]
Face anti-spoofing is essential to prevent false facial verification by using a photo, video, mask, or a different substitute for an authorized person's face.
Most presentation attack detection systems suffer from overfitting, where they achieve near-perfect scores on a single dataset but fail on a different dataset with more realistic data.
We propose a new PAD approach, which combines pixel-wise binary supervision with patch-based CNN.
arXiv Detail & Related papers (2021-09-08T08:14:13Z) - SSSE: Efficiently Erasing Samples from Trained Machine Learning Models [103.43466657962242]
We propose an efficient and effective algorithm, SSSE, for samples erasure.
In certain cases SSSE can erase samples almost as well as the optimal, yet impractical, gold standard of training a new model from scratch with only the permitted data.
arXiv Detail & Related papers (2021-07-08T14:17:24Z) - Data Impressions: Mining Deep Models to Extract Samples for Data-free
Applications [26.48630545028405]
"Data Impressions" act as proxy to the training data and can be used to realize a variety of tasks.
We show the applicability of data impressions in solving several computer vision tasks.
arXiv Detail & Related papers (2021-01-15T11:37:29Z) - Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
To tackle this, we propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z) - Towards Universal Representation Learning for Deep Face Recognition [106.21744671876704]
We propose a universal representation learning framework that can deal with larger variation unseen in the given training data without leveraging target domain knowledge.
Experiments show that our method achieves top performance on general face recognition datasets such as LFW and MegaFace.
arXiv Detail & Related papers (2020-02-26T23:29:57Z)