Multi-label Relation Modeling in Facial Action Units Detection
- URL: http://arxiv.org/abs/2002.01105v2
- Date: Sat, 8 Feb 2020 10:39:30 GMT
- Title: Multi-label Relation Modeling in Facial Action Units Detection
- Authors: Xianpeng Ji, Yu Ding, Lincheng Li, Yu Chen, Changjie Fan
- Abstract summary: This paper describes an approach to facial action unit detection.
The involved action units (AUs) include AU1 (Inner Brow Raiser), AU2 (Outer Brow Raiser), AU4 (Brow Lowerer), AU6 (Cheek Raiser), AU12 (Lip Corner Puller), AU15 (Lip Corner Depressor), AU20 (Lip Stretcher), and AU25 (Lips Part).
- Score: 32.27835075990971
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes an approach to facial action unit detection. The
involved action units (AUs) include AU1 (Inner Brow Raiser), AU2 (Outer Brow
Raiser), AU4 (Brow Lowerer), AU6 (Cheek Raiser), AU12 (Lip Corner Puller), AU15
(Lip Corner Depressor), AU20 (Lip Stretcher), and AU25 (Lips Part). Our work
relies on the dataset released by the FG-2020 Competition: Affective Behavior
Analysis In-the-Wild (ABAW). The proposed method consists of data
preprocessing, feature extraction, and AU classification. The data
preprocessing includes the detection of face texture and landmarks. Static
texture and dynamic landmark features are extracted through neural networks and
then fused into a latent feature representation. Finally, the fused feature is
taken as the initial hidden state of a recurrent neural network with a
trainable lookup AU table. The output of the RNN is the result of the AU
classification. Detection performance is evaluated as 0.5$\times$accuracy +
0.5$\times$F1. Our method achieves 0.56 on the validation data specified by the
organizing committee.
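The 0.5$\times$accuracy + 0.5$\times$F1 challenge metric can be sketched as follows. This is a minimal reading of the formula, assuming accuracy is computed over all (sample, AU) entries and F1 is an unweighted macro average over the 8 AUs; the abstract does not spell out the averaging.

```python
import numpy as np

def au_challenge_score(y_true, y_pred):
    """Sketch of the ABAW AU metric: 0.5*accuracy + 0.5*F1.

    y_true, y_pred: binary arrays of shape (n_samples, n_aus).
    Assumes element-wise accuracy and macro-averaged F1 (an assumption;
    the paper only states the 0.5/0.5 combination).
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # accuracy over all label entries
    acc = (y_true == y_pred).mean()
    # per-AU F1, then unweighted mean across AUs
    f1s = []
    for k in range(y_true.shape[1]):
        tp = np.sum((y_pred[:, k] == 1) & (y_true[:, k] == 1))
        fp = np.sum((y_pred[:, k] == 1) & (y_true[:, k] == 0))
        fn = np.sum((y_pred[:, k] == 0) & (y_true[:, k] == 1))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return 0.5 * acc + 0.5 * float(np.mean(f1s))
```

A perfect multi-label prediction yields a score of 1.0; the reported 0.56 would sit between chance-level and perfect under this reading.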
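The RNN-with-lookup-table design described above could be unrolled roughly as below. This is a speculative sketch: the cell type, dimensions, and exactly how the AU table feeds the recurrence are not given in the abstract, so a vanilla tanh RNN cell and hypothetical sizes are assumed, with one step per AU.

```python
import numpy as np

rng = np.random.default_rng(0)
n_aus, feat_dim, hid = 8, 16, 16  # hypothetical dimensions

# trainable lookup table: one embedding per AU (AU1, AU2, AU4, ...)
au_table = rng.normal(size=(n_aus, feat_dim))
# fused texture+landmark feature initializes the hidden state
fused_feature = rng.normal(size=(hid,))

# hypothetical RNN parameters (would be learned in practice)
W_x = rng.normal(size=(hid, feat_dim)) * 0.1
W_h = rng.normal(size=(hid, hid)) * 0.1
w_out = rng.normal(size=(hid,)) * 0.1

h = fused_feature
preds = []
for k in range(n_aus):
    # one recurrence step per AU, driven by that AU's embedding
    h = np.tanh(W_x @ au_table[k] + W_h @ h)
    # sigmoid head gives a per-AU activation probability
    preds.append(1.0 / (1.0 + np.exp(-(w_out @ h))))
preds = np.array(preds)  # shape (8,): one probability per AU
```

Unrolling over the AU table lets each AU's prediction condition on the hidden state left by the previous AUs, which is one plausible way the method models AU co-occurrence.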
Related papers
- Facial Action Unit Detection by Adaptively Constraining Self-Attention and Causally Deconfounding Sample [53.23474626420103]
Facial action unit (AU) detection remains a challenging task, due to the subtlety, dynamics, and diversity of AUs.
We propose a novel AU detection framework called AC2D by adaptively constraining self-attention weight distribution.
Our method achieves competitive performance compared to state-of-the-art AU detection approaches on challenging benchmarks.
arXiv Detail & Related papers (2024-10-02T05:51:24Z) - SIRST-5K: Exploring Massive Negatives Synthesis with Self-supervised
Learning for Robust Infrared Small Target Detection [53.19618419772467]
Single-frame infrared small target (SIRST) detection aims to recognize small targets against cluttered backgrounds.
With the development of Transformer, the scale of SIRST models is constantly increasing.
With a rich diversity of infrared small target data, our algorithm significantly improves the model performance and convergence speed.
arXiv Detail & Related papers (2024-03-08T16:14:54Z) - FG-Net: Facial Action Unit Detection with Generalizable Pyramidal
Features [13.176011491885664]
Previous AU detection methods tend to overfit the dataset, resulting in a significant performance loss when evaluated across corpora.
We propose FG-Net for generalizable facial action unit detection.
Specifically, FG-Net extracts feature maps from a StyleGAN2 model pre-trained on a large and diverse face image dataset.
arXiv Detail & Related papers (2023-08-23T18:51:11Z) - FAN-Trans: Online Knowledge Distillation for Facial Action Unit
Detection [45.688712067285536]
Leveraging the online knowledge distillation framework, we propose the "FAN-Trans" method for AU detection.
Our model consists of a hybrid network of convolution and transformer blocks to learn per-AU features and to model AU co-occurrences.
arXiv Detail & Related papers (2022-11-11T11:35:33Z) - AU-Supervised Convolutional Vision Transformers for Synthetic Facial
Expression Recognition [12.661683851729679]
The paper describes our proposed methodology for the six basic expression classification track of Affective Behavior Analysis in-the-wild (ABAW) Competition 2022.
Because of the ambiguity of the synthetic data and the objectivity of facial Action Units (AUs), we resort to AU information to boost performance.
arXiv Detail & Related papers (2022-07-20T09:33:39Z) - An Attention-based Method for Action Unit Detection at the 3rd ABAW
Competition [6.229820412732652]
This paper describes our submission to the third Affective Behavior Analysis in-the-wild (ABAW) competition 2022.
We proposed a method for detecting facial action units in the video.
We achieved a macro F1 score of 0.48 on the ABAW challenge validation set compared to 0.39 from the baseline model.
arXiv Detail & Related papers (2022-03-23T14:07:39Z) - Action Unit Detection with Joint Adaptive Attention and Graph Relation [3.98807633060402]
We present our submission to the Affective Behavior Analysis in-the-Wild (ABAW) 2021 competition.
The proposed method uses the pre-trained JAA model as the feature extractor.
Our model achieves 0.674 on the challenging Aff-Wild2 database.
arXiv Detail & Related papers (2021-07-09T12:33:38Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA)
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - J$\hat{\text{A}}$A-Net: Joint Facial Action Unit Detection and Face
Alignment via Adaptive Attention [57.51255553918323]
We propose a novel end-to-end deep learning framework for joint AU detection and face alignment.
Our framework significantly outperforms the state-of-the-art AU detection methods on the challenging BP4D, DISFA, GFT and BP4D+ benchmarks.
arXiv Detail & Related papers (2020-03-18T12:50:19Z) - High-Order Information Matters: Learning Relation and Topology for
Occluded Person Re-Identification [84.43394420267794]
We propose a novel framework by learning high-order relation and topology information for discriminative features and robust alignment.
Our framework significantly outperforms the state-of-the-art by 6.5% mAP on the Occluded-Duke dataset.
arXiv Detail & Related papers (2020-03-18T12:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.