Facial Action Unit Recognition With Multi-models Ensembling
- URL: http://arxiv.org/abs/2203.13046v1
- Date: Thu, 24 Mar 2022 12:50:02 GMT
- Title: Facial Action Unit Recognition With Multi-models Ensembling
- Authors: Wenqiang Jiang, Yannan Wu, Fengsheng Qiao, Liyu Meng, Yuanyuan Deng,
Chuanhe Liu
- Abstract summary: We present our method for the AU challenge of the Affective Behavior Analysis in-the-wild (ABAW) 2022 Competition.
We use an improved IResNet100 as the backbone. We then fine-tune three models, pretrained on our private AU and expression datasets and on Glint360K respectively, on the AU dataset in Aff-Wild2.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Affective Behavior Analysis in-the-wild (ABAW) 2022 Competition gives
Affective Computing a large boost. In this paper, we present our method for the
AU challenge in this Competition. We use an improved IResNet100 as the backbone. We
then fine-tune three models, pretrained on our private AU and expression datasets
and on Glint360K respectively, on the AU dataset in Aff-Wild2. Finally, we
ensemble the results of our models and achieve a macro F1 score of 0.731 on the AU
validation set.
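The ensembling and scoring steps can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes each model outputs per-AU probabilities that are averaged and thresholded at 0.5, and that the macro F1 is the unweighted mean of per-AU F1 scores; all function names and the data layout are assumptions.

```python
def ensemble_predict(prob_sets, threshold=0.5):
    """Average per-AU probabilities across models and binarize.

    prob_sets: one [n_samples][n_aus] probability table per model.
    """
    n_models = len(prob_sets)
    n_samples = len(prob_sets[0])
    n_aus = len(prob_sets[0][0])
    preds = []
    for i in range(n_samples):
        row = []
        for j in range(n_aus):
            avg = sum(p[i][j] for p in prob_sets) / n_models
            row.append(1 if avg >= threshold else 0)
        preds.append(row)
    return preds

def macro_f1(y_true, y_pred):
    """Macro F1: unweighted mean of the per-AU binary F1 scores."""
    n_aus = len(y_true[0])
    f1s = []
    for j in range(n_aus):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t[j] == 1 and p[j] == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t[j] == 0 and p[j] == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t[j] == 1 and p[j] == 0)
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / n_aus
```

For example, averaging three models' probability tables `[[0.9, 0.2], [0.1, 0.8]]`, `[[0.8, 0.4], [0.3, 0.6]]`, and `[[0.7, 0.3], [0.2, 0.7]]` yields predictions `[[1, 0], [0, 1]]`.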
Related papers
- Improving Generalization of Alignment with Human Preferences through Group Invariant Learning [56.19242260613749]
Reinforcement Learning from Human Feedback (RLHF) enables the generation of responses more aligned with human preferences.
Previous work shows that Reinforcement Learning (RL) often exploits shortcuts to attain high rewards and overlooks challenging samples.
We propose a novel approach that can learn a consistent policy via RL across various data groups or domains.
arXiv Detail & Related papers (2023-10-18T13:54:15Z)
- Multi-modal Facial Affective Analysis based on Masked Autoencoder [7.17338843593134]
We introduce our submission to the CVPR 2023: ABAW5 competition: Affective Behavior Analysis in-the-wild.
Our approach involves several key components. First, we utilize the visual information from a Masked Autoencoder (MAE) model that has been pre-trained on a large-scale face image dataset in a self-supervised manner.
Our approach achieves impressive results in the ABAW5 competition, with an average F1 score of 55.49% and 41.21% in the AU and EXPR tracks, respectively.
arXiv Detail & Related papers (2023-03-20T03:58:03Z)
- AU-Supervised Convolutional Vision Transformers for Synthetic Facial Expression Recognition [12.661683851729679]
The paper describes our proposed methodology for the six basic expression classification track of Affective Behavior Analysis in-the-wild (ABAW) Competition 2022.
Because of the ambiguity of the synthetic data and the objectivity of the facial Action Unit (AU), we resort to the AU information for a performance boost.
arXiv Detail & Related papers (2022-07-20T09:33:39Z)
- MSeg: A Composite Dataset for Multi-domain Semantic Segmentation [100.17755160696939]
We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains.
We reconcile the taxonomies and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images.
A model trained on MSeg ranks first on the WildDash-v1 leaderboard for robust semantic segmentation, with no exposure to WildDash data during training.
arXiv Detail & Related papers (2021-12-27T16:16:35Z)
- Action Unit Detection with Joint Adaptive Attention and Graph Relation [3.98807633060402]
We present our submission to the Affective Behavior Analysis in-the-wild (ABAW) 2021 competition.
The proposed method uses the pre-trained JAA model as the feature extractor.
Our model achieves 0.674 on the challenging Aff-Wild2 database.
arXiv Detail & Related papers (2021-07-09T12:33:38Z)
- A Multi-modal and Multi-task Learning Method for Action Unit and Expression Recognition [18.478011167414223]
We introduce a multi-modal and multi-task learning method by using both visual and audio information.
We achieve an AU score of 0.712 and an expression score of 0.477 on the validation set.
arXiv Detail & Related papers (2021-07-09T03:28:17Z)
- NTIRE 2021 Multi-modal Aerial View Object Classification Challenge [88.89190054948325]
We introduce the first Challenge on Multi-modal Aerial View Object Classification (MAVOC) in conjunction with the NTIRE 2021 workshop at CVPR.
This challenge is composed of two different tracks using EO and SAR imagery.
We discuss the top methods submitted for this competition and evaluate their results on our blind test set.
arXiv Detail & Related papers (2021-07-02T16:55:08Z)
- Analysing Affective Behavior in the second ABAW2 Competition [70.86998050535944]
The Affective Behavior Analysis in-the-wild (ABAW2) 2021 Competition, which aims at automatically analyzing affect, is the second such Competition, following the very successful first ABAW Competition held in conjunction with IEEE FG 2020.
arXiv Detail & Related papers (2021-06-14T11:30:19Z)
- Troubleshooting Blind Image Quality Models in the Wild [99.96661607178677]
Group maximum differentiation competition (gMAD) has been used to improve blind image quality assessment (BIQA) models.
We construct a set of "self-competitors," as random ensembles of pruned versions of the target model to be improved.
Diverse failures can then be efficiently identified via self-gMAD competition.
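The self-competitor idea above can be sketched in toy form: build an ensemble of randomly pruned copies of a target model and use its disagreement with the target to flag candidate failures. Everything here is illustrative, not the paper's method: the "model" is a stand-in linear scorer, and pruning by randomly zeroing weights is an assumption.

```python
import random

def prune(weights, rate, rng):
    """Create a pruned copy by zeroing a random fraction of weights."""
    return [0.0 if rng.random() < rate else w for w in weights]

def predict(weights, x):
    """Stand-in linear scorer in place of a real BIQA model."""
    return sum(w * xi for w, xi in zip(weights, x))

def self_competitor_gap(weights, x, n_copies=5, rate=0.3, seed=0):
    """Disagreement between the target model and an ensemble of its
    pruned copies; large gaps mark inputs worth inspecting."""
    rng = random.Random(seed)
    ensemble = [prune(weights, rate, rng) for _ in range(n_copies)]
    avg = sum(predict(w, x) for w in ensemble) / n_copies
    return abs(predict(weights, x) - avg)
```

With a pruning rate of 0 the copies equal the target, so the gap is exactly 0; raising the rate makes the ensemble diverge and the gap grow.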
arXiv Detail & Related papers (2021-05-14T10:10:48Z)
- TAL EmotioNet Challenge 2020 Rethinking the Model Chosen Problem in Multi-Task Learning [24.365090805937083]
We pose the AU recognition problem as a multi-task learning problem.
The co-occurrence of the expression features and the head pose features are explored.
By choosing the optimal checkpoint for each AU, the recognition results are improved.
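The per-AU checkpoint selection described above can be sketched as follows, assuming a validation F1 score is available for every saved checkpoint and every AU; the data layout and names are assumptions for illustration.

```python
def best_checkpoint_per_au(val_scores):
    """Pick, for each AU, the checkpoint with the best validation F1.

    val_scores: {checkpoint_name: [f1_for_au0, f1_for_au1, ...]}
    Returns {au_index: checkpoint_name}.
    """
    n_aus = len(next(iter(val_scores.values())))
    choice = {}
    for au in range(n_aus):
        choice[au] = max(val_scores, key=lambda ck: val_scores[ck][au])
    return choice
```

For example, with `{"epoch3": [0.60, 0.70], "epoch5": [0.65, 0.60]}`, AU 0 is taken from `epoch5` and AU 1 from `epoch3`; final predictions would then be assembled AU by AU from the chosen checkpoints.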
arXiv Detail & Related papers (2020-04-21T09:39:38Z)
- Analysing Affective Behavior in the First ABAW 2020 Competition [49.90617840789334]
The Affective Behavior Analysis in-the-wild (ABAW) 2020 Competition is the first Competition aiming at automatic analysis of the three main behavior tasks.
We describe this Competition, to be held in conjunction with the IEEE Conference on Face and Gesture Recognition, May 2020, in Buenos Aires, Argentina.
We outline the evaluation metrics, present both the baseline system and the top-3 performing teams' methodologies per Challenge and finally present their obtained results.
arXiv Detail & Related papers (2020-01-30T15:41:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.