Action Unit Detection with Joint Adaptive Attention and Graph Relation
- URL: http://arxiv.org/abs/2107.04389v1
- Date: Fri, 9 Jul 2021 12:33:38 GMT
- Title: Action Unit Detection with Joint Adaptive Attention and Graph Relation
- Authors: Chenggong Zhang and Juan Song and Qingyang Zhang and Weilong Dong and Ruomeng Ding and Zhilei Liu
- Abstract summary: We present our submission to the Affective Behavior Analysis in-the-wild (ABAW) 2021 competition.
The proposed method uses the pre-trained JAA model as the feature extractor.
Our model achieves 0.674 on the challenging Aff-Wild2 database.
- Score: 3.98807633060402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes an approach to facial action unit (AU) detection. In
this work, we present our submission to the Affective Behavior Analysis
in-the-wild (ABAW) 2021 competition. The proposed method uses the pre-trained
JAA model as the feature extractor and extracts global features, face alignment
features, and AU local features on the basis of multi-scale features. We take
the AU local features as the input of a graph convolution to further model the
correlations between AUs, and finally use the fused features to classify AUs.
Detection performance was evaluated as 0.5 * accuracy + 0.5 * F1. Our model
achieves 0.674 on the challenging Aff-Wild2 database.
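As a rough illustration of the pipeline described in this abstract, the sketch below runs per-AU local features from a pre-trained extractor through a graph convolution and classifies the fused features. The feature sizes, the learnable adjacency, and fusion by concatenation are illustrative assumptions, not the authors' implementation; the challenge metric is included as stated.

```python
import torch
import torch.nn as nn

NUM_AUS, FEAT_DIM = 12, 256  # assumed sizes, not taken from the paper

class AUGraphHead(nn.Module):
    """Graph convolution over per-AU local features, fused with a global feature."""
    def __init__(self, num_aus=NUM_AUS, dim=FEAT_DIM):
        super().__init__()
        # Learnable AU-correlation adjacency; a stand-in for however the
        # paper builds its AU relation graph.
        self.adj = nn.Parameter(torch.eye(num_aus))
        self.gc = nn.Linear(dim, dim)      # graph-convolution weight
        self.cls = nn.Linear(2 * dim, 1)   # per-AU classifier on fused features

    def forward(self, au_feats, global_feat):
        # au_feats: (B, num_aus, dim) AU local features from the extractor
        # global_feat: (B, dim) global feature
        a = torch.softmax(self.adj, dim=-1)     # row-normalized adjacency
        h = torch.relu(self.gc(a @ au_feats))   # propagate AU correlations
        g = global_feat.unsqueeze(1).expand_as(h)
        return self.cls(torch.cat([h, g], dim=-1)).squeeze(-1)  # (B, num_aus) logits

def challenge_score(accuracy, f1):
    # Evaluation metric from the abstract: 0.5 * accuracy + 0.5 * F1
    return 0.5 * accuracy + 0.5 * f1

logits = AUGraphHead()(torch.randn(4, NUM_AUS, FEAT_DIM), torch.randn(4, FEAT_DIM))
```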
Related papers
- Facial Action Unit Detection by Adaptively Constraining Self-Attention and Causally Deconfounding Sample [53.23474626420103]
Facial action unit (AU) detection remains a challenging task, due to the subtlety, dynamics, and diversity of AUs.
We propose a novel AU detection framework called AC2D by adaptively constraining the self-attention weight distribution.
Our method achieves competitive performance compared to state-of-the-art AU detection approaches on challenging benchmarks.
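The AC2D summary is terse, so the following is only a hedged sketch of what constraining a self-attention weight distribution can look like: an attention map is penalized for straying from a predefined spatial prior. The KL form and the prior itself are assumptions for illustration.

```python
import torch

def attention_constraint_loss(attn, prior, eps=1e-8):
    """KL(attn || prior): penalize attention that strays from a spatial prior.

    attn:  (B, num_aus, H*W) softmax-normalized self-attention weights
    prior: (num_aus, H*W) predefined distributions (e.g. built from landmarks)
    """
    prior = prior.unsqueeze(0)  # broadcast over the batch
    kl = attn * ((attn + eps).log() - (prior + eps).log())
    return kl.sum(dim=-1).mean()

attn = torch.softmax(torch.randn(2, 12, 49), dim=-1)
prior = torch.softmax(torch.randn(12, 49), dim=-1)
loss = attention_constraint_loss(attn, prior)
```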
arXiv Detail & Related papers (2024-10-02T05:51:24Z)
- Self-Training with Pseudo-Label Scorer for Aspect Sentiment Quad Prediction [54.23208041792073]
Aspect Sentiment Quad Prediction (ASQP) aims to predict all quads (aspect term, aspect category, opinion term, sentiment polarity) for a given review.
A key challenge in the ASQP task is the scarcity of labeled data, which limits the performance of existing methods.
We propose a self-training framework with a pseudo-label scorer, wherein a scorer assesses the match between reviews and their pseudo-labels.
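A schematic of the self-training loop with a pseudo-label scorer described above; the `fit`/`score` interfaces, the 0.9 keep-threshold, and the dummy demo are assumptions rather than the paper's setup.

```python
from typing import Callable, List, Tuple

def self_train(
    fit: Callable[[List[Tuple[str, str]]], Callable[[str], str]],
    score: Callable[[str, str], float],
    labeled: List[Tuple[str, str]],
    unlabeled: List[str],
    rounds: int = 3,
    threshold: float = 0.9,
) -> Callable[[str], str]:
    """Self-training: grow the training set with scorer-approved pseudo-labels."""
    predictor = fit(labeled)
    for _ in range(rounds):
        pseudo = [(x, predictor(x)) for x in unlabeled]
        # Keep only pseudo-labels the scorer judges to match their reviews.
        labeled = labeled + [(x, y) for x, y in pseudo if score(x, y) >= threshold]
        predictor = fit(labeled)
    return predictor

predictor = self_train(
    fit=lambda data: (lambda review: "(food, quality, tasty, positive)"),  # dummy
    score=lambda review, quad: 1.0,                                        # dummy
    labeled=[("the food was tasty", "(food, quality, tasty, positive)")],
    unlabeled=["great service"],
)
```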
arXiv Detail & Related papers (2024-06-26T05:30:21Z)
- Keypoint Description by Symmetry Assessment -- Applications in Biometrics [49.547569925407814]
We present a model-based feature extractor to describe neighborhoods around keypoints by finite expansion.
The iso-curves of such functions are highly symmetric w.r.t. the origin (a keypoint), and the estimated parameters have well-defined geometric interpretations.
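As a loose illustration of describing a keypoint neighborhood by a finite expansion whose basis functions have origin-symmetric iso-curves, the sketch below projects a patch onto Gaussian-windowed harmonic functions. The basis choice, orders, and scale are assumptions, not the paper's exact construction.

```python
import numpy as np

def symmetry_coefficients(patch, orders=(0, 1, 2), sigma=2.0):
    """Project a square patch onto basis functions (x + iy)^n * Gaussian.

    The iso-curves of these functions are symmetric about the patch center
    (the keypoint); the complex coefficients serve as the descriptor.
    """
    h, w = patch.shape
    y, x = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    z = x + 1j * y
    window = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return np.array([np.sum(patch * np.conj(z**n) * window) for n in orders])

coeffs = symmetry_coefficients(np.random.rand(15, 15))
```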
arXiv Detail & Related papers (2023-11-03T00:49:25Z)
- Local Region Perception and Relationship Learning Combined with Feature Fusion for Facial Action Unit Detection [12.677143408225167]
We introduce our submission to the CVPR 2023 Competition on Affective Behavior Analysis in-the-wild (ABAW).
We propose a single-stage trained AU detection framework. Specifically, in order to effectively extract facial local region features related to AU detection, we use a local region perception module.
We also use a graph neural network-based relational learning module to capture the relationship between AUs.
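A minimal sketch of the local region perception step, assuming it amounts to cropping landmark-centered windows from a backbone feature map; the window size and coordinate convention are illustrative. The relational module would then operate on these per-region features, much as in the graph sketches elsewhere on this page.

```python
import torch
import torch.nn.functional as F

def crop_local_regions(feature_map, landmarks, size=3):
    """Crop landmark-centered windows from a feature map.

    feature_map: (C, H, W); landmarks: (N, 2) integer (row, col) coordinates
    in feature-map space, one per AU-related region. Returns (N, C, size, size).
    """
    pad = size // 2
    fm = F.pad(feature_map, (pad, pad, pad, pad))  # guard against border crops
    crops = [fm[:, r:r + size, c:c + size] for r, c in landmarks.tolist()]
    return torch.stack(crops)

regions = crop_local_regions(torch.randn(64, 28, 28), torch.tensor([[5, 7], [20, 14]]))
```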
arXiv Detail & Related papers (2023-03-15T11:59:24Z)
- Self-supervised Facial Action Unit Detection with Region and Relation Learning [5.182661263082065]
We propose a novel self-supervised framework for AU detection with the region and relation learning.
An improved Optimal Transport (OT) algorithm is introduced to exploit the correlation characteristics among AUs.
Swin Transformer is exploited to model the long-distance dependencies within each AU region during feature learning.
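The paper's OT algorithm is described as improved; the sketch below is only the standard entropy-regularized Sinkhorn iteration, shown as a baseline for how a transport plan relating two AU feature sets can be computed from a pairwise cost matrix.

```python
import numpy as np

def sinkhorn(cost, r, c, reg=0.1, iters=200):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    cost: (n, m) pairwise cost matrix; r: (n,) and c: (m,) marginals summing
    to 1. Returns the (n, m) transport plan.
    """
    K = np.exp(-cost / reg)
    u = np.ones_like(r)
    for _ in range(iters):
        v = c / (K.T @ u)
        u = r / (K @ v)
    return u[:, None] * K * v[None, :]

n = m = 12  # e.g. one node per AU
plan = sinkhorn(np.random.rand(n, m), np.full(n, 1 / n), np.full(m, 1 / m))
```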
arXiv Detail & Related papers (2023-03-10T05:22:45Z)
- End-to-End Zero-Shot HOI Detection via Vision and Language Knowledge Distillation [86.41437210485932]
We aim to advance zero-shot HOI detection to detect both seen and unseen HOIs simultaneously.
We propose a novel end-to-end zero-shot HOI Detection framework via vision-language knowledge distillation.
Our method outperforms the previous SOTA by 8.92% on unseen mAP and 10.18% on overall mAP.
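A hedged sketch of the vision-language knowledge distillation summarized above: the detector's HOI region embeddings are pulled toward a vision-language teacher's embeddings of the same regions. The L1 loss on normalized features is one common choice and an assumption here, not necessarily the paper's objective.

```python
import torch
import torch.nn.functional as F

def vl_distillation_loss(region_embs, teacher_embs):
    """Align detector region embeddings with a VL teacher's (e.g. CLIP) features.

    Both tensors: (N, D). Matching the teacher's embedding space is what lets
    unseen HOI categories be scored against text embeddings at test time.
    """
    return F.l1_loss(F.normalize(region_embs, dim=-1),
                     F.normalize(teacher_embs, dim=-1))

loss = vl_distillation_loss(torch.randn(8, 512), torch.randn(8, 512))
```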
arXiv Detail & Related papers (2022-04-01T07:27:19Z)
- An Attention-based Method for Action Unit Detection at the 3rd ABAW Competition [6.229820412732652]
This paper describes our submission to the third Affective Behavior Analysis in-the-wild (ABAW) competition in 2022.
We propose a method for detecting facial action units in video.
We achieved a macro F1 score of 0.48 on the ABAW challenge validation set compared to 0.39 from the baseline model.
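The macro F1 reported above averages per-AU F1 scores with equal weight; a minimal reference implementation, assuming binary frame-level labels:

```python
import numpy as np

def macro_f1(y_true, y_pred):
    """Macro F1 over AUs: y_true, y_pred are (num_frames, num_aus) binary arrays."""
    scores = []
    for k in range(y_true.shape[1]):
        tp = np.sum((y_pred[:, k] == 1) & (y_true[:, k] == 1))
        fp = np.sum((y_pred[:, k] == 1) & (y_true[:, k] == 0))
        fn = np.sum((y_pred[:, k] == 0) & (y_true[:, k] == 1))
        scores.append(2 * tp / max(2 * tp + fp + fn, 1))  # per-AU F1
    return float(np.mean(scores))

print(macro_f1(np.random.randint(0, 2, (100, 12)), np.random.randint(0, 2, (100, 12))))
```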
arXiv Detail & Related papers (2022-03-23T14:07:39Z)
- AutoAssign: Differentiable Label Assignment for Dense Object Detection [94.24431503373884]
AutoAssign is an anchor-free detector for object detection.
It achieves appearance-aware label assignment through a fully differentiable weighting mechanism.
Our best model achieves 52.1% AP on MS COCO, outperforming all existing one-stage detectors.
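A toy sketch of the differentiable-assignment idea: instead of a hard positive/negative split, each candidate location's loss is weighted by a softmax over its confidence, so the assignment itself receives gradients. This is a simplification for illustration, not AutoAssign's actual formulation.

```python
import torch

def weighted_positive_loss(cls_losses, confidences):
    """Soft label assignment for one ground-truth box.

    cls_losses, confidences: (num_locations,) per-location classification
    losses and confidence scores inside the box.
    """
    w = torch.softmax(confidences, dim=0)  # differentiable "assignment" weights
    return (w * cls_losses).sum()

loss = weighted_positive_loss(torch.rand(100), torch.randn(100))
```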
arXiv Detail & Related papers (2020-07-07T14:32:21Z)
- J$\hat{\text{A}}$A-Net: Joint Facial Action Unit Detection and Face Alignment via Adaptive Attention [57.51255553918323]
We propose a novel end-to-end deep learning framework for joint AU detection and face alignment.
Our framework significantly outperforms the state-of-the-art AU detection methods on the challenging BP4D, DISFA, GFT and BP4D+ benchmarks.
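A schematic of the joint objective implied above, assuming a shared backbone trained with an AU classification loss plus a landmark regression loss; the MSE alignment term and the balancing weight are assumptions.

```python
import torch
import torch.nn.functional as F

def joint_loss(au_logits, au_labels, lmk_pred, lmk_gt, lam=0.5):
    """Joint AU detection + face alignment loss (schematic).

    au_logits/au_labels: (B, num_aus); lmk_pred/lmk_gt: (B, num_landmarks, 2).
    In JAA-Net the alignment branch also drives the adaptive attention.
    """
    au = F.binary_cross_entropy_with_logits(au_logits, au_labels)
    align = F.mse_loss(lmk_pred, lmk_gt)
    return au + lam * align

loss = joint_loss(torch.randn(4, 12), torch.randint(0, 2, (4, 12)).float(),
                  torch.randn(4, 68, 2), torch.randn(4, 68, 2))
```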
arXiv Detail & Related papers (2020-03-18T12:50:19Z)
- Multi-label Relation Modeling in Facial Action Units Detection [32.27835075990971]
This paper describes an approach to facial action unit detection.
The involved action units (AUs) include AU1 (Inner Brow Raiser), AU2 (Outer Brow Raiser), AU4 (Brow Lowerer), AU6 (Cheek Raiser), AU12 (Lip Corner Puller), AU15 (Lip Corner Depressor), AU20 (Lip Stretcher), and AU25 (Lips Part).
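The listed AUs map directly onto a multi-label classification head; a small sketch (the 0.5 threshold and a head producing one logit per AU are assumed):

```python
import torch

# The eight AUs named above, indexed for a multi-label head.
AU_NAMES = {1: "Inner Brow Raiser", 2: "Outer Brow Raiser", 4: "Brow Lowerer",
            6: "Cheek Raiser", 12: "Lip Corner Puller", 15: "Lip Corner Depressor",
            20: "Lip Stretcher", 25: "Lips Part"}
AU_IDS = sorted(AU_NAMES)

def decode(logits, threshold=0.5):
    """Turn one frame's logits (length 8, ordered as AU_IDS) into active AU names."""
    probs = torch.sigmoid(logits)
    return [AU_NAMES[au] for au, p in zip(AU_IDS, probs.tolist()) if p >= threshold]

print(decode(torch.randn(len(AU_IDS))))
```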
arXiv Detail & Related papers (2020-02-04T03:33:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.