How Facial Features Convey Attention in Stationary Environments
- URL: http://arxiv.org/abs/2111.14931v1
- Date: Mon, 29 Nov 2021 20:11:57 GMT
- Title: How Facial Features Convey Attention in Stationary Environments
- Authors: Janelle Domantay
- Abstract summary: This paper aims to extend previous research on distraction detection by analyzing which visual features contribute most to predicting awareness and fatigue.
We utilized the open source facial analysis toolkit OpenFace in order to analyze visual data of subjects at varying levels of attentiveness.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Awareness detection technologies have been gaining traction in a variety of enterprises; while most often used for driver fatigue detection, recent research has shifted towards using computer vision technologies to analyze user attention in environments such as online classrooms. This paper aims to extend previous research on distraction detection by analyzing which visual features contribute most to predicting awareness and fatigue. We utilized the open-source facial analysis toolkit OpenFace to analyze visual data of subjects at varying levels of attentiveness. Then, using a Support-Vector Machine (SVM), we created several prediction models for user attention and identified Histogram of Oriented Gradients (HOG) features and Action Units as the strongest predictors among the features we tested. We also compared the performance of this SVM to deep learning approaches that utilize Convolutional and/or Recurrent Neural Networks (CNNs and CRNNs). Interestingly, CRNNs did not appear to perform significantly better than their CNN counterparts. While the deep learning methods achieved greater prediction accuracy, the SVMs used fewer resources and, with certain parameters, approached the performance of the deep learning methods.
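To make the feature-to-classifier step in the abstract concrete, here is a minimal sketch of training an SVM on OpenFace-style Action Unit features with scikit-learn. It is not the paper's actual pipeline: the file name openface_features.csv, the binary "attentive" label column, and the kernel and C values are illustrative assumptions.

```python
# Minimal sketch, not the paper's pipeline: an SVM over OpenFace-derived features.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical CSV of per-frame OpenFace output plus a manually added label column.
df = pd.read_csv("openface_features.csv")

# Use Action Unit intensity columns (OpenFace names them like "AU01_r") as
# predictors; HOG features exported by OpenFace could be concatenated the same way.
feature_cols = [c for c in df.columns if c.startswith("AU") and c.endswith("_r")]
X = df[feature_cols].to_numpy()
y = df["attentive"].to_numpy()  # hypothetical label: 1 = attentive, 0 = distracted

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Standardize features, then fit an RBF-kernel SVM; kernel and C are
# illustrative defaults, not the parameters reported in the paper.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

With a pipeline like this, feature groups (e.g. Action Units only, HOG only) can be swapped in and out to compare their predictive value, mirroring the feature comparison described in the abstract.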
Related papers
- Underwater Object Detection in the Era of Artificial Intelligence: Current, Challenge, and Future [119.88454942558485]
Underwater object detection (UOD) aims to identify and localise objects in underwater images or videos.
In recent years, artificial intelligence (AI) based methods, especially deep learning methods, have shown promising performance in UOD.
arXiv Detail & Related papers (2024-10-08T00:25:33Z) - A Critical Analysis on Machine Learning Techniques for Video-based Human Activity Recognition of Surveillance Systems: A Review [1.3693860189056777]
The upsurge of abnormal activities in crowded locations underscores the need for intelligent surveillance systems.
Video-based human activity recognition has attracted many researchers because of its pressing open issues.
This paper provides a critical survey of video-based Human Activity Recognition (HAR) techniques.
arXiv Detail & Related papers (2024-09-01T14:43:57Z) - UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and are not limited to forgery-specific artifacts, and therefore generalize more strongly.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z) - Comparative Analysis of Predicting Subsequent Steps in Hénon Map [0.0]
This study evaluates the performance of different machine learning models in predicting the evolution of the Hénon map (a minimal data-generation sketch for this map appears after this list).
Results indicate that LSTM networks demonstrate superior predictive accuracy, particularly in extreme-event prediction.
This research underscores the significance of machine learning in elucidating chaotic dynamics.
arXiv Detail & Related papers (2024-05-15T17:32:31Z) - Comprehensive evaluation of Mal-API-2019 dataset by machine learning in malware detection [0.5475886285082937]
This study conducts a thorough examination of malware detection using machine learning techniques.
The aim is to advance cybersecurity capabilities by identifying and mitigating threats more effectively.
arXiv Detail & Related papers (2024-03-04T17:22:43Z) - What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z) - Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks [6.44397009982949]
We introduce a novel method for backdoor detection that extracts features from the weights of pre-trained DNNs.
In comparison to other detection techniques, this has a number of benefits, such as not requiring any training data.
Our method outperforms the competing algorithms in terms of efficiency and is more accurate, helping to ensure the safe application of deep learning and AI.
arXiv Detail & Related papers (2022-12-15T20:20:18Z) - Initial Study into Application of Feature Density and Linguistically-backed Embedding to Improve Machine Learning-based Cyberbullying Detection [54.83707803301847]
The research was conducted on a Formspring dataset provided in a Kaggle competition on automatic cyberbullying detection.
The study confirmed the effectiveness of Neural Networks in cyberbullying detection and the correlation between classifier performance and Feature Density.
arXiv Detail & Related papers (2022-06-04T03:17:15Z) - Neural Networks for Semantic Gaze Analysis in XR Settings [0.0]
We present a novel approach that minimizes the time and information necessary to annotate volumes of interest.
We train convolutional neural networks (CNNs) on synthetic data sets derived from virtual models using image augmentation techniques.
We evaluate our method in real and virtual environments, showing that the method can compete with state-of-the-art approaches.
arXiv Detail & Related papers (2021-03-18T18:05:01Z) - Variational Structured Attention Networks for Deep Visual Representation Learning [49.80498066480928]
We propose a unified deep framework to jointly learn both spatial attention maps and channel attention in a principled manner.
Specifically, we integrate the estimation and the interaction of the attentions within a probabilistic representation learning framework.
We implement the inference rules within the neural network, thus allowing for end-to-end learning of the probabilistic and the CNN front-end parameters.
arXiv Detail & Related papers (2021-03-05T07:37:24Z) - The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a light-weight neural network with far fewer parameters than common deep neural networks.
We demonstrate that our model achieves performance comparable to, if not better than, the current state of the art in FER.
arXiv Detail & Related papers (2020-09-15T09:25:37Z)
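For the Hénon map paper listed above, the short sketch below (referenced in that entry) generates a trajectory of the map and frames next-step prediction as a supervised learning problem. The map equations and the classic parameters a = 1.4, b = 0.3 are standard; the windowing and any downstream model choice are illustrative assumptions, not the cited paper's setup.

```python
import numpy as np

def henon_trajectory(n_steps: int, a: float = 1.4, b: float = 0.3) -> np.ndarray:
    """Iterate the Hénon map x' = 1 - a*x^2 + y, y' = b*x from (0, 0)."""
    states = np.zeros((n_steps, 2))
    x, y = 0.0, 0.0
    for i in range(n_steps):
        # Both updates use the current (x, y); tuple assignment keeps them in sync.
        x, y = 1.0 - a * x * x + y, b * x
        states[i] = (x, y)
    return states

traj = henon_trajectory(10_000)

# Supervised next-step pairs: predict state n+1 from state n. A sequence model
# such as an LSTM would instead consume a window of past states.
X, Y = traj[:-1], traj[1:]
print(X.shape, Y.shape)  # (9999, 2) (9999, 2)
```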
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.