FED-PsyAU: Privacy-Preserving Micro-Expression Recognition via Psychological AU Coordination and Dynamic Facial Motion Modeling
- URL: http://arxiv.org/abs/2507.20557v1
- Date: Mon, 28 Jul 2025 06:42:15 GMT
- Title: FED-PsyAU: Privacy-Preserving Micro-Expression Recognition via Psychological AU Coordination and Dynamic Facial Motion Modeling
- Authors: Jingting Li, Yu Qian, Lin Zhao, Su-Jing Wang
- Abstract summary: Micro-expressions (MEs) are brief, low-intensity, often localized facial expressions. They can reveal genuine emotions that individuals may attempt to conceal, which is valuable in contexts like criminal interrogation and psychological counseling. ME recognition (MER) faces challenges, such as small sample sizes and subtle features, which hinder efficient modeling.
- Score: 14.30453182275959
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Micro-expressions (MEs) are brief, low-intensity, often localized facial expressions. They can reveal genuine emotions that individuals may attempt to conceal, which is valuable in contexts such as criminal interrogation and psychological counseling. However, ME recognition (MER) faces challenges, such as small sample sizes and subtle features, which hinder efficient modeling. Additionally, real-world applications encounter ME data privacy issues, leaving the task of enhancing recognition across settings under privacy constraints largely unexplored. To address these issues, we propose the FED-PsyAU research framework. We begin with a psychological study on the coordination of upper and lower facial action units (AUs) to provide structured prior knowledge of facial muscle dynamics. We then develop a DPK-GAT network that combines these psychological priors with statistical AU patterns, enabling hierarchical learning of facial motion features from regional to global levels and effectively enhancing MER performance. Additionally, our federated learning framework advances MER capabilities across multiple clients without data sharing, preserving privacy and alleviating the limited-sample issue for each client. Extensive experiments on commonly used ME databases demonstrate the effectiveness of our approach.
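As a rough illustration of the two mechanisms the abstract describes, the sketch below implements (1) a single graph-attention pass over AU node features whose attention is masked by a psychological AU-coordination prior, and (2) sample-weighted federated averaging of client parameters without sharing raw data. All names, shapes, and the specific masked-attention and FedAvg formulations are assumptions for illustration, not the paper's actual DPK-GAT network or federated protocol.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def au_graph_attention(h, prior_adj, W, a):
    """One attention pass over AU node features h of shape (num_aus, d_in).

    prior_adj: 0/1 matrix marking which AU pairs may attend to each other
               (e.g. psychologically coordinated upper/lower-face AUs).
    W: (d_in, d_out) projection; a: (2*d_out,) attention vector.
    """
    z = h @ W                                        # project node features
    d = z.shape[1]
    # pairwise attention logits a^T [z_i || z_j], computed additively
    logits = (z @ a[:d])[:, None] + (z @ a[d:])[None, :]
    logits = np.where(prior_adj > 0, logits, -1e9)   # mask non-coordinated pairs
    alpha = softmax(logits, axis=1)                  # row-normalized attention
    return alpha @ z                                 # aggregate neighbours

def fed_avg(client_weights, client_sizes):
    """Sample-weighted average of client parameter vectors (no raw data shared)."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 8))                          # 5 AU nodes, 8-dim features
prior = np.eye(5) + np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
out = au_graph_attention(h, prior, rng.normal(size=(8, 4)), rng.normal(size=8))
print(out.shape)                                      # (5, 4)

global_w = fed_avg([np.ones(3), np.zeros(3)], [3, 1])
print(global_w)                                       # [0.75 0.75 0.75]
```

In a full system each client would run the hierarchical attention locally and only exchange model parameters for aggregation, which is what keeps the raw facial data private.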
Related papers
- FSFM: A Generalizable Face Security Foundation Model via Self-Supervised Facial Representation Learning [27.34249750803211]
We propose a self-supervised pretraining framework to learn fundamental representations of real face images. Our model transfers better than supervised pretraining, visual and facial self-supervised learning arts, and even outperforms task-specialized SOTA methods.
arXiv Detail & Related papers (2024-12-16T17:58:45Z) - EMO-LLaMA: Enhancing Facial Emotion Understanding with Instruction Tuning [27.790079451103065]
We propose a novel MLLM, named EMO-LLaMA, which incorporates facial priors from a pretrained facial analysis network to enhance human facial information.
EMO-LLaMA achieves SOTA-comparable or competitive results across both static and dynamic FER datasets.
arXiv Detail & Related papers (2024-08-21T08:28:40Z) - AdaFedFR: Federated Face Recognition with Adaptive Inter-Class Representation Learning [14.139432393751298]
We propose a simple yet effective federated face recognition framework called AdaFedFR.
Our work leverages feature representations of public identities as learnable negative knowledge to optimize the local objective.
Experimental results demonstrate that our method outperforms previous approaches on several prevalent face recognition benchmarks within fewer than 3 communication rounds.
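The idea of using public-identity features as negative knowledge in a local objective can be sketched as an InfoNCE-style loss, where shared public embeddings push local anchors apart without any private data leaving the client. The function name, loss form, and shapes below are illustrative assumptions, not AdaFedFR's actual implementation.

```python
import numpy as np

def local_contrastive_loss(anchor, positive, public_negatives, temperature=0.1):
    """Pull the anchor toward its positive, push it away from shared
    public-identity embeddings (the 'learnable negative knowledge')."""
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    a, p = norm(anchor), norm(positive)
    negs = norm(public_negatives)
    pos_sim = (a * p).sum() / temperature          # anchor-positive similarity
    neg_sims = negs @ a / temperature              # anchor-negative similarities
    logits = np.concatenate([[pos_sim], neg_sims])
    # cross-entropy with the positive as the target class
    return -pos_sim + np.log(np.exp(logits).sum())

rng = np.random.default_rng(1)
anchor = rng.normal(size=4)
negs = rng.normal(size=(8, 4))                     # shared public embeddings
loss_easy = local_contrastive_loss(anchor, anchor, negs)
loss_hard = local_contrastive_loss(anchor, -anchor, negs)
print(loss_easy < loss_hard)                       # matching pair gives lower loss
```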
arXiv Detail & Related papers (2024-05-22T09:19:25Z) - Facial Affective Behavior Analysis with Instruction Tuning [58.332959295770614]
Facial affective behavior analysis (FABA) is crucial for understanding human mental states from images.
Traditional approaches primarily deploy models to discriminate among discrete emotion categories, and lack the fine granularity and reasoning capability for complex facial behaviors.
We introduce an instruction-following dataset for two FABA tasks, emotion and action unit recognition, and a benchmark FABA-Bench with a new metric considering both recognition and generation ability.
We also introduce a facial prior expert module with face structure knowledge and a low-rank adaptation module into pre-trained MLLM.
arXiv Detail & Related papers (2024-04-07T19:23:28Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate the face privacy protection from a technology standpoint based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z) - Towards Privacy-Preserving Affect Recognition: A Two-Level Deep Learning Architecture [2.9392867898439006]
We propose a two-level deep learning architecture for affect recognition.
The architecture consists of recurrent neural networks to capture the temporal relationships amongst the features.
arXiv Detail & Related papers (2021-11-14T13:52:57Z) - A Multi-resolution Approach to Expression Recognition in the Wild [9.118706387430883]
We propose a multi-resolution approach to solve the Facial Expression Recognition task.
We ground our intuition on the observation that face images are often acquired at different resolutions.
To this end, we use a ResNet-like architecture, equipped with Squeeze-and-Excitation blocks, trained on the Affect-in-the-Wild 2 dataset.
arXiv Detail & Related papers (2021-03-09T21:21:02Z) - Learning Emotional-Blinded Face Representations [77.7653702071127]
We propose two face representations that are blind to facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
arXiv Detail & Related papers (2020-09-18T09:24:10Z) - Micro-expression spotting: A new benchmark [74.69928316848866]
Micro-expressions (MEs) are brief and involuntary facial expressions that occur when people are trying to hide their true feelings or conceal their emotions.
In the computer vision field, the study of MEs can be divided into two main tasks, spotting and recognition.
This paper introduces an extension of the SMIC-E database, namely the SMIC-E-Long database, which is a new challenging benchmark for ME spotting.
arXiv Detail & Related papers (2020-07-24T09:18:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.