Face-GPS: A Comprehensive Technique for Quantifying Facial Muscle
Dynamics in Videos
- URL: http://arxiv.org/abs/2401.05625v1
- Date: Thu, 11 Jan 2024 02:32:17 GMT
- Title: Face-GPS: A Comprehensive Technique for Quantifying Facial Muscle
Dynamics in Videos
- Authors: Juni Kim, Zhikang Dong, Pawel Polak
- Abstract summary: We introduce a novel method that combines differential geometry, kernel smoothing, and spectral analysis to quantify facial muscle activity.
It has significant potential for applications in national security and plastic surgery.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel method that combines differential geometry, kernel
smoothing, and spectral analysis to quantify facial muscle activity from widely
accessible video recordings, such as those captured on personal smartphones.
Our approach emphasizes practicality and accessibility. It has significant
potential for applications in national security and plastic surgery.
Additionally, it offers remote diagnosis and monitoring for medical conditions
such as stroke, Bell's palsy, and acoustic neuroma. Moreover, it is adept at
detecting and classifying emotions, from the overt to the subtle. The proposed
face muscle analysis technique is an explainable alternative to deep learning
methods and a non-invasive substitute for facial electromyography (fEMG).
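The abstract's combination of kernel smoothing and spectral analysis can be illustrated on a single landmark trajectory. This is a minimal sketch on a synthetic signal, not the authors' implementation; the Gaussian kernel, the bandwidth value, and FFT peak-picking are all assumptions:

```python
import numpy as np

def kernel_smooth(signal, fps, bandwidth=0.1):
    """Nadaraya-Watson smoothing of a 1-D landmark trajectory.

    bandwidth is in seconds; each output sample is a Gaussian-weighted
    average of its temporal neighbours.
    """
    t = np.arange(len(signal)) / fps
    diffs = t[:, None] - t[None, :]               # pairwise time offsets
    weights = np.exp(-0.5 * (diffs / bandwidth) ** 2)
    weights /= weights.sum(axis=1, keepdims=True)  # rows sum to one
    return weights @ signal

def dominant_frequency(signal, fps):
    """Strongest non-DC frequency (Hz) in the FFT power spectrum."""
    centered = signal - signal.mean()
    power = np.abs(np.fft.rfft(centered)) ** 2
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / fps)
    return freqs[np.argmax(power[1:]) + 1]        # skip the DC bin

# Synthetic example: a landmark oscillating at 2 Hz plus noise,
# standing in for a tracked facial point in a smartphone video.
fps = 30
t = np.arange(0, 4, 1 / fps)
rng = np.random.default_rng(0)
trajectory = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.standard_normal(t.size)

smoothed = kernel_smooth(trajectory, fps)
f_peak = dominant_frequency(smoothed, fps)
```

Smoothing first suppresses broadband tracking noise so that the spectral peak reflects muscle motion rather than jitter; the recovered `f_peak` lands at the 2 Hz oscillation of the synthetic signal.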
Related papers
- Electromyography-Informed Facial Expression Reconstruction for Physiological-Based Synthesis and Analysis [6.129164512102711]
The relationship between muscle activity and resulting facial expressions is crucial for various fields, including psychology, medicine, and entertainment.
Existing methods for facial analysis cannot handle electrode occlusion, rendering them ineffective.
We propose a novel method that faithfully restores faces under sEMG electrode occlusion in an adversarial manner.
We validate the effectiveness of our approach through experiments on a dataset of synchronized sEMG recordings and facial mimicry.
arXiv Detail & Related papers (2025-03-12T17:21:10Z)
- CFCPalsy: Facial Image Synthesis with Cross-Fusion Cycle Diffusion Model for Facial Paralysis Individuals [3.2688425993442696]
This study aims to synthesize a high-quality facial paralysis dataset to address this gap.
A novel Cross-Fusion Cycle Palsy Expression Generative Model (PalsyCFC) based on the diffusion model is proposed.
We have qualitatively and quantitatively evaluated the proposed method on the commonly used public clinical datasets of facial paralysis.
arXiv Detail & Related papers (2024-09-11T13:46:35Z)
- Orientation-conditioned Facial Texture Mapping for Video-based Facial Remote Photoplethysmography Estimation [23.199005573530194]
We leverage the 3D facial surface to construct a novel orientation-conditioned video representation.
Our proposed method achieves a significant 18.2% performance improvement in cross-dataset testing on MMPD.
We demonstrate significant performance improvements of up to 29.6% in all tested motion scenarios.
arXiv Detail & Related papers (2024-04-14T23:30:35Z)
- Multimodal Adaptive Fusion of Face and Gait Features using Keyless attention based Deep Neural Networks for Human Identification [67.64124512185087]
Soft biometrics such as gait are widely used with face in surveillance tasks like person recognition and re-identification.
We propose a novel adaptive multi-biometric fusion strategy for the dynamic incorporation of gait and face biometric cues by leveraging keyless attention deep neural networks.
arXiv Detail & Related papers (2023-03-24T05:28:35Z)
- Pose-Controllable 3D Facial Animation Synthesis using Hierarchical Audio-Vertex Attention [52.63080543011595]
A novel pose-controllable 3D facial animation synthesis method is proposed by utilizing hierarchical audio-vertex attention.
The proposed method can produce more realistic facial expressions and head posture movements.
arXiv Detail & Related papers (2023-02-24T09:36:31Z)
- fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z)
- Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition [49.42127182149948]
Recent studies have revealed the vulnerability of face recognition models against physical adversarial patches.
We propose to simulate the complex transformations of faces in the physical world via 3D-face modeling.
We further propose a Face3DAdv method considering the 3D face transformations and realistic physical variations.
arXiv Detail & Related papers (2022-03-09T10:21:40Z)
- Neural Emotion Director: Speech-preserving semantic control of facial expressions in "in-the-wild" videos [31.746152261362777]
We introduce a novel deep learning method for photo-realistic manipulation of the emotional state of actors in "in-the-wild" videos.
The proposed method is based on a parametric 3D face representation of the actor in the input scene that offers a reliable disentanglement of the facial identity from the head pose and facial expressions.
It then uses a novel deep domain translation framework that alters the facial expressions in a consistent and plausible manner, taking into account their dynamics.
arXiv Detail & Related papers (2021-12-01T15:55:04Z)
- Preserving Privacy in Human-Motion Affect Recognition [4.753703852165805]
This work evaluates the effectiveness of existing methods at recognising emotions using both 3D temporal joint signals and manually extracted features.
We propose a cross-subject transfer learning technique for training a multi-encoder autoencoder deep neural network to learn disentangled latent representations of human motion features.
arXiv Detail & Related papers (2021-05-09T15:26:21Z)
- I Only Have Eyes for You: The Impact of Masks On Convolutional-Based Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts towards recognizing facial expressions from persons with masks.
We also perform specific feature-level visualization to demonstrate how the inherent capabilities of the FaceChannel to learn and combine facial features change when in a constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of face muscles movements.
We determine if there are time-related differences on expressions among emotional groups by using a functional F-test.
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- A Supervised Learning Approach for Robust Health Monitoring using Face Videos [32.157163136267954]
Non-contact, device-free human sensing methods can eliminate the need for specialized heart and blood pressure monitoring equipment.
In this paper, we used a non-contact method that only requires face videos recorded using commercially-available webcams.
The proposed approach detected the face in each frame of the video using facial landmarks.
arXiv Detail & Related papers (2021-01-30T22:03:16Z)
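The per-frame pipeline in the last entry (detect the face, then extract a signal from the face region) can be sketched as follows. This is a self-contained illustration: the fixed-box `detect_face` stub and the synthetic frames are assumptions, standing in for a real landmark detector such as dlib or MediaPipe and for webcam video:

```python
import numpy as np

def detect_face(frame):
    """Stub detector: assumes the face occupies the central half of the frame.

    A real pipeline would run a facial-landmark detector here; the fixed
    box keeps this sketch self-contained.
    """
    h, w = frame.shape[:2]
    return h // 4, h * 3 // 4, w // 4, w * 3 // 4  # top, bottom, left, right

def green_channel_signal(frames):
    """Mean green intensity of the face ROI per frame.

    The green channel is commonly used as the raw trace in camera-based
    physiological monitoring because it carries the strongest pulse signal.
    """
    trace = []
    for frame in frames:
        top, bottom, left, right = detect_face(frame)
        roi = frame[top:bottom, left:right, 1]     # green channel only
        trace.append(roi.mean())
    return np.asarray(trace)

# Synthetic "video": 60 frames whose green channel pulses at 1.2 Hz (72 bpm).
fps = 30
t = np.arange(60) / fps
frames = [
    np.full((64, 64, 3), 128.0) + 2.0 * np.sin(2 * np.pi * 1.2 * ti)
    for ti in t
]
trace = green_channel_signal(frames)
```

The resulting `trace` is one scalar per frame; downstream methods band-pass filter it and read the heart rate off its spectral peak.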
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.