Magnifying Subtle Facial Motions for Effective 4D Expression Recognition
- URL: http://arxiv.org/abs/2105.02319v1
- Date: Wed, 5 May 2021 20:47:43 GMT
- Title: Magnifying Subtle Facial Motions for Effective 4D Expression Recognition
- Authors: Qingkai Zhen, Di Huang, Yunhong Wang, Hassen Drira, Boulbaba Ben Amor,
Mohamed Daoudi
- Abstract summary: The flow of 3D faces is first analyzed to capture the spatial deformations.
The obtained temporal evolution of these deformations is then fed into a magnification method.
The latter, the main contribution of this paper, reveals subtle (hidden) deformations that enhance emotion classification performance.
- Score: 56.806738404887824
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, an effective pipeline for automatic 4D Facial Expression
Recognition (4D FER) is proposed. It combines two growing but so-far disparate
ideas in Computer Vision -- computing spatial facial deformations using tools
from Riemannian geometry and magnifying them using temporal filtering. The flow
of 3D faces is first analyzed to capture the spatial deformations, based on a
recently-developed Riemannian approach in which registration and comparison of
neighboring 3D faces are performed jointly. The temporal evolution of these
deformations is then fed into a magnification method that amplifies facial
activity over time. The latter, the main contribution of this paper, reveals
subtle (hidden) deformations that enhance emotion classification performance.
We evaluated our approach on the BU-4DFE dataset, achieving a state-of-the-art
average accuracy of 94.18% and an improvement exceeding 10% in classification
accuracy after magnifying the extracted geometric features (deformations).
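To make the magnification step concrete, the sketch below applies a temporal band-pass filter to per-frame deformation features and adds the amplified band back, in the spirit of Eulerian video magnification. It is a minimal illustration under stated assumptions, not the paper's implementation: the function name `magnify_deformations`, the filter band, the filter order, and the amplification factor `alpha` are all assumptions for this example, and the deformation features are assumed to be precomputed (e.g., by the Riemannian analysis described above).

```python
# Minimal sketch: temporal magnification of per-frame deformation features.
# Assumes features are already extracted; all parameter values are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_deformations(feats, fps=25.0, low=0.5, high=4.0, alpha=10.0):
    """Amplify subtle temporal variations in deformation features.

    feats : (T, D) array, one D-dimensional deformation descriptor per frame.
    Returns a (T, D) array with the band-passed component amplified.
    """
    # Band-pass filter along the time axis to isolate subtle motions
    # in the chosen frequency band (here 0.5-4 Hz at `fps` frames/s).
    b, a = butter(2, [low, high], btype="bandpass", fs=fps)
    subtle = filtfilt(b, a, feats, axis=0)
    # Add the amplified band back onto the original signal.
    return feats + alpha * subtle

# Toy usage: 100 frames of 64-dimensional deformation features.
rng = np.random.default_rng(0)
feats = np.cumsum(0.01 * rng.standard_normal((100, 64)), axis=0)
magnified = magnify_deformations(feats)
print(magnified.shape)  # (100, 64)
```

In this toy usage the slow trend of the features is preserved while oscillations inside the pass band are amplified, which is the intuition behind revealing subtle expression dynamics before classification.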
Related papers
- Ig3D: Integrating 3D Face Representations in Facial Expression Inference [12.975434103690812]
This study aims to investigate the impacts of integrating 3D representations into the facial expression inference task.
We first assess the performance of two 3D face representations (both based on the 3D morphable model, FLAME) for the FEI tasks.
We then explore two fusion architectures, intermediate fusion and late fusion, for integrating the 3D face representations with existing 2D inference frameworks (a sketch contrasting the two appears after this list).
Our proposed method outperforms the state of the art on the AffectNet valence-arousal (VA) estimation and RAF-DB classification tasks.
arXiv Detail & Related papers (2024-08-29T21:08:07Z)
- AnimateMe: 4D Facial Expressions via Diffusion Models [72.63383191654357]
Recent advances in diffusion models have enhanced the capabilities of generative models in 2D animation.
We employ Graph Neural Networks (GNNs) as denoising diffusion models in a novel approach, formulating the diffusion process directly on the mesh space.
This facilitates the generation of facial deformations through a mesh-diffusion-based model.
arXiv Detail & Related papers (2024-03-25T21:40:44Z)
- Semantic-aware One-shot Face Re-enactment with Dense Correspondence Estimation [100.60938767993088]
One-shot face re-enactment is a challenging task due to the identity mismatch between source and driving faces.
This paper proposes to use 3D Morphable Model (3DMM) for explicit facial semantic decomposition and identity disentanglement.
arXiv Detail & Related papers (2022-11-23T03:02:34Z)
- LoRD: Local 4D Implicit Representation for High-Fidelity Dynamic Human Modeling [69.56581851211841]
We propose LoRD, a novel Local 4D implicit Representation for Dynamic clothed humans.
Our key insight is to encourage the network to learn latent codes for local part-level representations.
LoRD has a strong capability for representing 4D humans and outperforms state-of-the-art methods in practical applications.
arXiv Detail & Related papers (2022-08-18T03:49:44Z)
- Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and state-of-the-art methods in face reconstruction quality.
arXiv Detail & Related papers (2021-12-05T07:02:53Z)
- Learning Parallel Dense Correspondence from Spatio-Temporal Descriptors for Efficient and Robust 4D Reconstruction [43.60322886598972]
This paper focuses on the task of 4D shape reconstruction from a sequence of point clouds.
We present a novel pipeline that learns the temporal evolution of 3D human shape by capturing continuous transformation functions among cross-frame occupancy fields.
arXiv Detail & Related papers (2021-03-30T13:36:03Z)
- Deep learning with 4D spatio-temporal data representations for OCT-based force estimation [59.405210617831656]
We extend the problem of deep learning-based force estimation to 4D volumetric-temporal data with streams of 3D OCT volumes.
We show that using 4D spatio-temporal data outperforms all previously used data representations, with a mean absolute error of 10.7 mN.
arXiv Detail & Related papers (2020-05-20T13:30:36Z)
- Towards Reading Beyond Faces for Sparsity-Aware 4D Affect Recognition [55.15661254072032]
We present a sparsity-aware deep network for automatic 4D facial expression recognition (FER).
We first propose a novel augmentation method to combat the data limitation problem for deep learning.
We then present a sparsity-aware deep network to compute sparse representations of convolutional features over multiple views.
arXiv Detail & Related papers (2020-02-08T13:09:11Z)
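As referenced in the Ig3D entry above, the sketch below contrasts intermediate fusion (combining feature vectors before a shared prediction head) with late fusion (combining per-modality predictions). It is a hypothetical illustration under assumed dimensions (512-d 2D image features, 100-d FLAME-style 3D features, 2 outputs for valence/arousal); it is not the Ig3D architecture.

```python
# Minimal sketch of intermediate vs. late fusion for 2D + 3D face features.
# All module sizes are assumptions for this example.
import torch
import torch.nn as nn

class IntermediateFusion(nn.Module):
    """Concatenate 2D and 3D features, then predict from the joint vector."""
    def __init__(self, dim2d=512, dim3d=100, n_out=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(dim2d + dim3d, 256), nn.ReLU(), nn.Linear(256, n_out)
        )

    def forward(self, feat2d, feat3d):
        return self.head(torch.cat([feat2d, feat3d], dim=-1))

class LateFusion(nn.Module):
    """Predict separately from each modality, then average the outputs."""
    def __init__(self, dim2d=512, dim3d=100, n_out=2):
        super().__init__()
        self.head2d = nn.Linear(dim2d, n_out)
        self.head3d = nn.Linear(dim3d, n_out)

    def forward(self, feat2d, feat3d):
        return 0.5 * (self.head2d(feat2d) + self.head3d(feat3d))

# Toy usage with a batch of 8 samples.
feat2d, feat3d = torch.randn(8, 512), torch.randn(8, 100)
print(IntermediateFusion()(feat2d, feat3d).shape)  # torch.Size([8, 2])
print(LateFusion()(feat2d, feat3d).shape)          # torch.Size([8, 2])
```

The design trade-off is that intermediate fusion lets the head model cross-modal interactions, while late fusion keeps the modalities independent and is easier to bolt onto an existing 2D pipeline.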