Py-Feat: Python Facial Expression Analysis Toolbox
- URL: http://arxiv.org/abs/2104.03509v1
- Date: Thu, 8 Apr 2021 04:52:21 GMT
- Title: Py-Feat: Python Facial Expression Analysis Toolbox
- Authors: Jin Hyun Cheong, Tiankang Xie, Sophie Byrne, Luke J. Chang
- Abstract summary: We introduce Py-Feat, an open-source Python toolbox that provides support for detecting, preprocessing, analyzing, and visualizing facial expression data.
We hope this platform will facilitate increased use of facial expression data in human behavior research.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Studying facial expressions is a notoriously difficult endeavor. Recent
advances in the field of affective computing have yielded impressive progress
in automatically detecting facial expressions from pictures and videos.
However, much of this work has yet to be widely disseminated in social science
domains such as psychology. Current state-of-the-art models require
considerable domain expertise that is not traditionally incorporated into
social science training programs. Furthermore, there is a notable absence of
user-friendly and open-source software that provides a comprehensive set of
tools and functions that support facial expression research. In this paper, we
introduce Py-Feat, an open-source Python toolbox that provides support for
detecting, preprocessing, analyzing, and visualizing facial expression data.
Py-Feat makes it easy for domain experts to disseminate and benchmark computer
vision models and also for end users to quickly process, analyze, and visualize
facial expression data. We hope this platform will facilitate increased use of
facial expression data in human behavior research.
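The workflow the abstract describes (detect facial expressions frame by frame, then analyze the results) can be illustrated with a minimal sketch. The code below does not call Py-Feat itself: the frame scores are made-up numbers standing in for detector output, and `mean_score` is a hypothetical helper, so the sketch runs with no dependencies.

```python
# Hypothetical downstream analysis of facial-expression detector output.
# A detector such as Py-Feat produces per-frame emotion predictions; here we
# mimic that output with plain dictionaries so the sketch is self-contained.
frames = [
    {"frame": 0, "happiness": 0.82, "anger": 0.03},
    {"frame": 1, "happiness": 0.75, "anger": 0.05},
    {"frame": 2, "happiness": 0.10, "anger": 0.70},
]

def mean_score(rows, emotion):
    """Average a predicted emotion score across all video frames."""
    return sum(row[emotion] for row in rows) / len(rows)

# Aggregate frame-level predictions into a per-video summary.
summary = {e: round(mean_score(frames, e), 2) for e in ("happiness", "anger")}
print(summary)  # {'happiness': 0.56, 'anger': 0.26}
```

In practice Py-Feat returns richer, pandas-based tables that also include action units and facial landmarks, but the aggregation idea is the same: reduce frame-level predictions to summary statistics for downstream behavioral analysis.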
Related papers
- psifx -- Psychological and Social Interactions Feature Extraction Package
psifx is a plug-and-play multi-modal feature extraction toolkit.
It aims to facilitate and democratize the use of state-of-the-art machine learning techniques for human sciences research.
arXiv Detail & Related papers (2024-07-14T16:20:42Z)
- Computer Vision for Primate Behavior Analysis in the Wild
Video-based behavioral monitoring has great potential for transforming how we study animal cognition and behavior.
There is still a fairly large gap between the exciting prospects and what can actually be achieved in practice today.
arXiv Detail & Related papers (2024-01-29T18:59:56Z)
- What Makes Pre-Trained Visual Representations Successful for Robust Manipulation?
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z)
- LibreFace: An Open-Source Toolkit for Deep Facial Expression Analysis
We introduce LibreFace, an open-source toolkit for facial expression analysis.
It offers real-time and offline analysis of facial behavior through deep learning models.
Our model also demonstrates competitive performance to state-of-the-art facial expression analysis methods.
arXiv Detail & Related papers (2023-08-18T00:33:29Z)
- Muscle Vision: Real Time Keypoint Based Pose Classification of Physical Exercises
3D human pose recognition extrapolated from video has advanced to the point of enabling real-time software applications.
We propose a new machine learning pipeline and web interface that performs human pose recognition on a live video feed to detect when common exercises are performed and classify them accordingly.
arXiv Detail & Related papers (2022-03-23T00:55:07Z)
- Towards a General Deep Feature Extractor for Facial Expression Recognition
We propose a new deep learning-based approach that learns a visual feature extractor general enough to be applied to any other facial emotion recognition task or dataset.
DeepFEVER outperforms state-of-the-art results on the AffectNet and Google Facial Expression Comparison datasets.
arXiv Detail & Related papers (2022-01-19T18:42:23Z)
- Pre-training strategies and datasets for facial representation learning
We show how to find a universal face representation that can be adapted to several facial analysis tasks and datasets.
We systematically investigate two ways of large-scale representation learning applied to faces: supervised and unsupervised pre-training.
One of our main findings is that unsupervised pre-training on completely in-the-wild, uncurated data provides consistent and, in some cases, significant accuracy improvements.
arXiv Detail & Related papers (2021-03-30T17:57:25Z)
- FaceX-Zoo: A PyTorch Toolbox for Face Recognition
We introduce FaceX-Zoo, a novel open-source framework oriented toward the face recognition research and development community.
FaceX-Zoo provides a training module with various supervisory heads and backbones towards state-of-the-art face recognition.
A simple yet fully functional face SDK is provided for the validation and primary application of the trained models.
arXiv Detail & Related papers (2021-01-12T11:06:50Z)
- Emotion Detection using Image Processing in Python
The work has been implemented using Python 2.7, the Open Source Computer Vision Library (OpenCV), and NumPy.
The objective of this paper is to develop a system which can analyze the image and predict the expression of the person.
arXiv Detail & Related papers (2020-12-01T17:34:35Z)
- Learning to Augment Expressions for Few-shot Fine-grained Facial Expression Recognition
We present a novel Fine-grained Facial Expression Database - F2ED.
It includes more than 200k images with 54 facial expressions from 119 persons.
Since uneven data distribution and a lack of samples are common in real-world scenarios, we evaluate several few-shot expression learning tasks.
We propose a unified task-driven framework - Compositional Generative Adversarial Network (Comp-GAN) learning to synthesize facial images.
arXiv Detail & Related papers (2020-01-17T03:26:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.