Multi-Task Learning for Emotion Descriptors Estimation at the fourth ABAW Challenge
- URL: http://arxiv.org/abs/2207.09716v1
- Date: Wed, 20 Jul 2022 07:39:12 GMT
- Title: Multi-Task Learning for Emotion Descriptors Estimation at the fourth ABAW Challenge
- Authors: Yanan Chang, Yi Wu, Xiangyu Miao, Jiahe Wang, Shangfei Wang
- Abstract summary: We introduce a multi-task learning framework to enhance the performance of three related facial analysis tasks in the wild.
We conduct experiments on the provided training and validating data.
- Score: 24.529527087437202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial valence/arousal estimation, expression recognition, and action unit detection are related tasks in facial affective analysis. However, these tasks achieve only limited performance in the wild because of the varied conditions under which the data are collected. The 4th competition on affective behavior analysis in the wild (ABAW) provided images with valence/arousal, expression, and action unit labels. In this paper, we introduce a multi-task learning framework to enhance the performance of the three related tasks in the wild. Feature sharing and label fusion are used to exploit the relations among the tasks. We conduct experiments on the provided training and validation data.
Related papers
- Affective Behaviour Analysis via Progressive Learning [23.455163723584427]
We present our methods and experimental results for the two competition tracks.
We train a Masked Autoencoder in a self-supervised manner to attain high-quality facial features.
We utilize curriculum learning to transition the model from recognizing single expressions to recognizing compound expressions.
arXiv Detail & Related papers (2024-07-24T02:24:21Z) - Facial Affective Behavior Analysis with Instruction Tuning [58.332959295770614]
Facial affective behavior analysis (FABA) is crucial for understanding human mental states from images.
Traditional approaches primarily deploy models to discriminate among discrete emotion categories, and lack the fine granularity and reasoning capability for complex facial behaviors.
We introduce an instruction-following dataset for two FABA tasks, emotion and action unit recognition, and a benchmark FABA-Bench with a new metric considering both recognition and generation ability.
We also introduce a facial prior expert module with face structure knowledge and a low-rank adaptation module into pre-trained MLLM.
arXiv Detail & Related papers (2024-04-07T19:23:28Z) - The 6th Affective Behavior Analysis in-the-wild (ABAW) Competition [53.718777420180395]
This paper describes the 6th Affective Behavior Analysis in-the-wild (ABAW) Competition.
The 6th ABAW Competition addresses contemporary challenges in understanding human emotions and behaviors.
arXiv Detail & Related papers (2024-02-29T16:49:38Z) - Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis [72.9124467710526]
Generative approaches have been proposed to extract all four elements as (one or more) quadruplets from text in a single task.
We propose a unified framework for solving ABSA, and the associated sub-tasks to improve the performance in few-shot scenarios.
arXiv Detail & Related papers (2022-10-12T23:38:57Z) - Two-Aspect Information Fusion Model For ABAW4 Multi-task Challenge [41.32053075381269]
The task of ABAW is to predict frame-level emotion descriptors from videos.
We propose a novel end-to-end architecture to achieve full integration of different types of information.
arXiv Detail & Related papers (2022-07-23T01:48:51Z) - An Ensemble Approach for Multiple Emotion Descriptors Estimation Using Multi-task Learning [12.589338141771385]
This paper illustrates our submission method to the fourth Affective Behavior Analysis in-the-Wild (ABAW) Competition.
Instead of using only face information, we employ the full information from the provided dataset, which contains both the face and the context around it.
The proposed system achieves the performance of 0.917 on the MTL Challenge validation dataset.
arXiv Detail & Related papers (2022-07-22T04:57:56Z) - Multi-task Cross Attention Network in Facial Behavior Analysis [7.910908058662372]
We present our solution for the Multi-Task Learning challenge of the Affective Behavior Analysis in-the-wild competition.
The challenge is a combination of three tasks: action unit detection, facial expression recognition, and valence-arousal estimation.
We introduce a cross-attentive module to improve multi-task learning performance.
arXiv Detail & Related papers (2022-07-21T04:07:07Z) - Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z) - Prior Aided Streaming Network for Multi-task Affective Recognition at the 2nd ABAW2 Competition [9.188777864190204]
We introduce our submission to the 2nd Affective Behavior Analysis in-the-wild (ABAW2) Competition.
In dealing with different emotion representations, we propose a multi-task streaming network.
We leverage an advanced facial expression embedding as prior knowledge.
arXiv Detail & Related papers (2021-07-08T09:35:08Z) - Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z) - An Iterative Multi-Knowledge Transfer Network for Aspect-Based Sentiment Analysis [73.7488524683061]
We propose a novel Iterative Multi-Knowledge Transfer Network (IMKTN) for end-to-end ABSA.
Our IMKTN transfers the task-specific knowledge from any two of the three subtasks to another one at the token level by utilizing a well-designed routing algorithm.
Experimental results on three benchmark datasets demonstrate the effectiveness and superiority of our approach.
arXiv Detail & Related papers (2020-04-04T13:49:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.