A Dynamic 3D Spontaneous Micro-expression Database: Establishment and
Evaluation
- URL: http://arxiv.org/abs/2108.00166v1
- Date: Sat, 31 Jul 2021 07:04:16 GMT
- Title: A Dynamic 3D Spontaneous Micro-expression Database: Establishment and
Evaluation
- Authors: Fengping Wang, Jie Li, Chun Qi, Yun Zhang, Danmin Miao
- Abstract summary: Micro-expressions are spontaneous, unconscious facial movements that show people's true inner emotions.
The occurrence of an expression induces spatial deformation of the face.
We propose a new micro-expression database containing 2D video sequences and 3D point cloud sequences.
- Score: 14.994232615123337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Micro-expressions are spontaneous, unconscious facial movements that show
people's true inner emotions and have great potential in related fields such as
psychological testing. Since the face is a deformable 3D object, the occurrence
of an expression induces spatial deformation of the face; however, the
available databases are limited to 2D videos, which lack a description of the
3D spatial information of micro-expressions. Therefore, we propose a new
micro-expression database containing 2D video sequences and 3D point cloud
sequences. The database includes 259 micro-expression sequences, and these
samples were classified using an objective method based on the facial action
coding system, as well as a non-objective method that combines video content
and participants' self-reports. We extracted facial 2D and 3D features using
local binary patterns on three orthogonal planes (LBP-TOP) and curvature
descriptors, respectively, and performed baseline evaluations of the two
features and their fusion with leave-one-subject-out (LOSO) and 10-fold
cross-validation. The best fusion performances were 58.84% and 73.03% for
non-objective classification and 66.36% and 77.42% for objective
classification, both improving on LBP-TOP features alone. The database offers
original and cropped micro-expression samples, which will facilitate
exploration and research on the 3D spatio-temporal features of
micro-expressions.
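As a concrete illustration of the evaluation protocol, the following is a minimal sketch of leave-one-subject-out (LOSO) cross-validation with feature-level (concatenation) fusion of precomputed 2D LBP-TOP and 3D curvature feature vectors. The function name, the linear SVM classifier, and the per-fold scaling are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of LOSO cross-validation with feature-level fusion.
# Assumes precomputed per-sample feature arrays; the LBP-TOP and
# curvature extractors themselves are outside the scope of this sketch.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def loso_fusion_accuracy(lbp_top_feats, curvature_feats, labels, subject_ids):
    """Evaluate concatenation fusion of 2D and 3D features with LOSO.

    lbp_top_feats:   (n_samples, d1) LBP-TOP features from 2D videos
    curvature_feats: (n_samples, d2) curvature features from 3D point clouds
    labels:          (n_samples,) emotion class per sequence
    subject_ids:     (n_samples,) participant id, used as the LOSO group
    """
    fused = np.hstack([lbp_top_feats, curvature_feats])  # feature-level fusion
    logo = LeaveOneGroupOut()  # each fold holds out one subject entirely
    correct = 0
    for train_idx, test_idx in logo.split(fused, labels, groups=subject_ids):
        scaler = StandardScaler().fit(fused[train_idx])
        clf = SVC(kernel="linear").fit(scaler.transform(fused[train_idx]),
                                       labels[train_idx])
        preds = clf.predict(scaler.transform(fused[test_idx]))
        correct += int((preds == labels[test_idx]).sum())
    return correct / len(labels)
```

The 10-fold protocol from the abstract would replace LeaveOneGroupOut with a StratifiedKFold over samples, keeping the rest of the loop unchanged.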
Related papers
- 2D or not 2D: How Does the Dimensionality of Gesture Representation Affect 3D Co-Speech Gesture Generation? [5.408549711581793]
We study the effect of using either 2D or 3D joint coordinates as training data on the performance of speech-to-gesture deep generative models.
We employ a lifting model for converting generated 2D pose sequences into 3D and assess how gestures created directly in 3D compare with those initially generated in 2D and then converted to 3D (a minimal sketch of such a lifting model appears after this list).
arXiv Detail & Related papers (2024-09-16T15:06:12Z)
- Ig3D: Integrating 3D Face Representations in Facial Expression Inference [12.975434103690812]
This study investigates the impact of integrating 3D representations into the facial expression inference (FEI) task.
We first assess the performance of two 3D face representations (both based on the FLAME 3D morphable model) for the FEI task.
We then explore two fusion architectures, intermediate fusion and late fusion, for integrating the 3D face representations with existing 2D inference frameworks (a minimal sketch of the two fusion styles appears after this list).
Our proposed method outperforms the state-of-the-art on the AffectNet VA estimation and RAF-DB classification tasks.
arXiv Detail & Related papers (2024-08-29T21:08:07Z)
- Memorize What Matters: Emergent Scene Decomposition from Multitraverse [54.487589469432706]
We introduce 3D Gaussian Mapping (3DGM), a camera-only offline mapping framework grounded in 3D Gaussian Splatting.
3DGM converts multitraverse RGB videos from the same region into a Gaussian-based environmental map while concurrently performing 2D ephemeral object segmentation.
We build the Mapverse benchmark, sourced from the Ithaca365 and nuPlan datasets, to evaluate our method in unsupervised 2D segmentation, 3D reconstruction, and neural rendering.
arXiv Detail & Related papers (2024-05-27T14:11:17Z)
- ID-to-3D: Expressive ID-guided 3D Heads via Score Distillation Sampling [96.87575334960258]
ID-to-3D is a method to generate identity- and text-guided 3D human heads with disentangled expressions.
The results achieve an unprecedented level of identity consistency and high-quality texture and geometry generation.
arXiv Detail & Related papers (2024-05-26T13:36:45Z)
- Decaf: Monocular Deformation Capture for Face and Hand Interactions [77.75726740605748]
This paper introduces the first method that allows tracking human hands interacting with human faces in 3D from single monocular RGB videos.
We model hands as articulated objects inducing non-rigid face deformations during an active interaction.
Our method relies on a new hand-face motion and interaction capture dataset with realistic face deformations acquired with a markerless multi-view camera system.
arXiv Detail & Related papers (2023-09-28T17:59:51Z)
- Multi-scale multi-modal micro-expression recognition algorithm based on transformer [17.980579727286518]
A micro-expression is a spontaneous, unconscious facial muscle movement that can reveal the true emotions people attempt to hide.
We propose a multi-modal, multi-scale algorithm based on a transformer network to learn local multi-grained features of micro-expressions.
The results show that the accuracy of the proposed algorithm reaches 78.73% on the SMIC database alone, and the F1 value on CASME II of the combined database reaches 0.9071.
arXiv Detail & Related papers (2023-01-08T03:45:23Z)
- Video-based Facial Micro-Expression Analysis: A Survey of Datasets, Features and Algorithms [52.58031087639394]
Micro-expressions are involuntary and transient facial expressions.
They can provide important information in a broad range of applications such as lie detection and criminal detection.
Since micro-expressions are transient and of low intensity, their detection and recognition are difficult and rely heavily on expert experience.
arXiv Detail & Related papers (2022-01-30T05:14:13Z)
- MMNet: Muscle motion-guided network for micro-expression recognition [2.032432845751978]
We propose a robust micro-expression recognition framework, namely the muscle motion-guided network (MMNet).
Specifically, a continuous attention (CA) block is introduced to focus on modeling local subtle muscle motion patterns with little identity information.
Our approach outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2022-01-14T04:05:49Z)
- DensePose 3D: Lifting Canonical Surface Maps of Articulated Objects to the Third Dimension [71.71234436165255]
We contribute DensePose 3D, a method that can learn such reconstructions in a weakly supervised fashion from 2D image annotations only.
Because it does not require 3D scans, DensePose 3D can be used for learning a wide range of articulated categories such as different animal species.
We show significant improvements compared to state-of-the-art non-rigid structure-from-motion baselines on both synthetic and real data on categories of humans and animals.
arXiv Detail & Related papers (2021-08-31T18:33:55Z)
- MERANet: Facial Micro-Expression Recognition using 3D Residual Attention Network [14.285700243381537]
We propose a facial micro-expression recognition model using 3D residual attention, called MERANet.
The proposed model also encompasses both spatial and temporal information.
Superior performance is observed compared to the state-of-the-art for facial micro-expression recognition.
arXiv Detail & Related papers (2020-12-07T16:41:42Z)
- Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation [87.54570024320354]
State-of-the-art methods for large-scale driving-scene LiDAR semantic segmentation often project and process the point clouds in the 2D space.
A straightforward solution to tackle the issue of 3D-to-2D projection is to keep the 3D representation and process the points in the 3D space.
We develop a framework based on 3D cylinder partition and 3D cylinder convolution, termed Cylinder3D, which exploits the 3D topology relations and structures of driving-scene point clouds.
arXiv Detail & Related papers (2020-08-04T13:56:19Z)
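To make the Cylinder3D entry above concrete, here is a minimal sketch of a cylindrical voxel partition for LiDAR points: points are mapped from Cartesian to cylindrical coordinates and binned, so cells cover more area at long range where points are sparse. The grid resolution and coordinate ranges are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of a cylindrical voxel partition for LiDAR points.
# Resolutions and ranges are illustrative, not Cylinder3D's settings.
import numpy as np

def cylindrical_voxel_indices(points,
                              rho_range=(0.0, 50.0),
                              z_range=(-4.0, 2.0),
                              grid=(480, 360, 32)):
    """Map (N, 3) xyz points to integer (rho, phi, z) voxel indices."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x**2 + y**2)          # radial distance from the sensor
    phi = np.arctan2(y, x)              # azimuth angle in [-pi, pi]
    # Normalize each coordinate into [0, 1), then scale to the grid size.
    r = (rho - rho_range[0]) / (rho_range[1] - rho_range[0])
    p = (phi + np.pi) / (2 * np.pi)
    h = (z - z_range[0]) / (z_range[1] - z_range[0])
    coords = np.stack([r, p, h], axis=1)
    coords = np.clip(coords, 0.0, 1.0 - 1e-6)   # keep indices in range
    return (coords * np.array(grid)).astype(np.int64)
```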
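The "2D or not 2D" entry above mentions a lifting model that converts 2D pose sequences into 3D. A minimal sketch of one common form, a small MLP regressing 3D joint coordinates from 2D keypoints, follows; the joint count and layer sizes are illustrative assumptions, not that paper's model.

```python
# Minimal sketch of an MLP that lifts 2D joint coordinates to 3D.
# Joint count and layer sizes are illustrative assumptions.
import torch.nn as nn

class PoseLifter(nn.Module):
    """Map flattened (batch, n_joints * 2) 2D keypoints to (batch, n_joints * 3)."""
    def __init__(self, n_joints=17, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints * 2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_joints * 3),  # 3D coordinates per joint
        )

    def forward(self, joints_2d):
        return self.net(joints_2d)
```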
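The Ig3D entry above contrasts intermediate and late fusion. The sketch below illustrates the two patterns in their generic form: concatenating the 3D representation with 2D backbone features before a shared head, versus averaging the predictions of per-modality heads. The dimensions and the equal-weight averaging are illustrative assumptions, not Ig3D's architecture.

```python
# Minimal sketch contrasting intermediate vs. late fusion of a 2D image
# feature with a 3D face representation (e.g., FLAME parameters).
# Dimensions and modules are illustrative, not Ig3D's architecture.
import torch
import torch.nn as nn

class IntermediateFusion(nn.Module):
    """Concatenate 3D features with 2D backbone features before one head."""
    def __init__(self, dim_2d=512, dim_3d=100, n_classes=8):
        super().__init__()
        self.head = nn.Linear(dim_2d + dim_3d, n_classes)

    def forward(self, feat_2d, feat_3d):
        return self.head(torch.cat([feat_2d, feat_3d], dim=-1))

class LateFusion(nn.Module):
    """Run a separate head per modality, then average their predictions."""
    def __init__(self, dim_2d=512, dim_3d=100, n_classes=8):
        super().__init__()
        self.head_2d = nn.Linear(dim_2d, n_classes)
        self.head_3d = nn.Linear(dim_3d, n_classes)

    def forward(self, feat_2d, feat_3d):
        return 0.5 * (self.head_2d(feat_2d) + self.head_3d(feat_3d))
```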