SS-MFAR : Semi-supervised Multi-task Facial Affect Recognition
- URL: http://arxiv.org/abs/2207.09012v1
- Date: Tue, 19 Jul 2022 01:38:15 GMT
- Title: SS-MFAR : Semi-supervised Multi-task Facial Affect Recognition
- Authors: Darshan Gera, Badveeti Naveen Siva Kumar, Bobbili Veerendra Raj Kumar,
S Balasubramanian
- Abstract summary: We introduce our submission to the Multi-Task-Learning Challenge at the 4th Affective Behavior Analysis in-the-wild (ABAW) 2022 Competition.
Our method, Semi-supervised Multi-task Facial Affect Recognition (SS-MFAR), uses a deep residual network with task-specific classifiers for each task.
- Score: 3.823356975862006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic affect recognition has applications in many areas such as
education, gaming, software development, automotive systems, and medical care, but
achieving appreciable performance on in-the-wild data sets is a non-trivial task.
Although in-the-wild data sets represent real-world scenarios better than
synthetic data sets, they suffer from the problem of incomplete labels.
Inspired by semi-supervised learning, in this paper we introduce our
submission to the Multi-Task-Learning Challenge at the 4th Affective Behavior
Analysis in-the-wild (ABAW) 2022 Competition. The three tasks that are
considered in this challenge are valence-arousal (VA) estimation; classification
of expressions into the 6 basic categories (anger, disgust, fear, happiness, sadness,
surprise), neutral, and the 'other' category; and detection of 12 action units (AU),
numbered AU-{1,2,4,6,7,10,12,15,23,24,25,26}. Our method, Semi-supervised Multi-task
Facial Affect Recognition, titled \textbf{SS-MFAR}, uses a deep residual network
with task-specific classifiers for each task, along with adaptive
thresholds for each expression class and semi-supervised learning for the
incomplete labels. Source code is available at
https://github.com/1980x/ABAW2022DMACS.
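The adaptive per-class thresholds mentioned in the abstract suggest a pseudo-labeling scheme in which confident predictions on samples with missing expression labels become training targets. The following is a minimal sketch of that idea; the FlexMatch-style scaling rule, the function names, and the `base` parameter are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def adaptive_thresholds(pred_classes, n_classes, base=0.95):
    """Per-class confidence thresholds scaled by how often each class is
    currently predicted, so rare/hard expression classes are not starved
    of pseudo-labels (FlexMatch-style scaling; an assumption here)."""
    counts = np.bincount(pred_classes, minlength=n_classes).astype(float)
    return base * counts / max(counts.max(), 1.0)

def select_pseudo_labels(probs, n_classes, base=0.95):
    """Keep an unlabeled sample only if its top-class probability clears
    that class's adaptive threshold; return labels and kept indices."""
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    thr = adaptive_thresholds(preds, n_classes, base)
    keep = conf >= thr[preds]
    return preds[keep], np.where(keep)[0]
```

Under this rule, the majority class keeps the full `base` threshold while rarer classes are admitted at lower confidence, which counteracts the class imbalance typical of in-the-wild expression data.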
Related papers
- Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework in which multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks that have little, or even non-overlapping, annotation.
We propose a novel approach in which knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z) - Identify ambiguous tasks combining crowdsourced labels by weighting
Areas Under the Margin [13.437403258942716]
Ambiguous tasks might fool expert workers, which is often harmful to the learning step.
We adapt the Area Under the Margin (AUM) to identify mislabeled data in crowdsourced learning scenarios.
We show that the WAUM can help discard ambiguous tasks from the training set, leading to better generalization performance.
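The Area Under the Margin statistic referenced above averages, over training epochs, the gap between the logit of the assigned label and the largest competing logit; low or negative values flag likely mislabeled or ambiguous samples. A minimal sketch (the function name and flagging interpretation are assumptions based on the AUM idea, not this paper's weighted variant):

```python
import numpy as np

def area_under_margin(logits_over_epochs, assigned_label):
    """Average margin between the assigned-label logit and the largest
    competing logit across training epochs; low or negative AUM suggests
    a mislabeled or ambiguous sample."""
    margins = []
    for logits in logits_over_epochs:
        logits = np.asarray(logits, dtype=float)
        competing = np.delete(logits, assigned_label).max()
        margins.append(logits[assigned_label] - competing)
    return float(np.mean(margins))
```

The weighted variant (WAUM) additionally weights worker-provided labels, but the per-sample margin above is the core quantity.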
arXiv Detail & Related papers (2022-09-30T11:16:20Z) - Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) tries to tackle scenarios in which the test data does not fully follow the same distribution as the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z) - X-Learner: Learning Cross Sources and Tasks for Universal Visual
Representation [71.51719469058666]
We propose a representation learning framework called X-Learner.
X-Learner learns the universal feature of multiple vision tasks supervised by various sources.
X-Learner achieves strong performance on different tasks without extra annotations, modalities, or computational cost.
arXiv Detail & Related papers (2022-03-16T17:23:26Z) - Multitask Multi-database Emotion Recognition [1.52292571922932]
We introduce our submission to the 2nd Affective Behavior Analysis in-the-wild (ABAW) 2021 competition.
We train a unified deep learning model on multiple databases to perform two tasks.
Experiment results show that the network achieves promising results on the validation set of the AffWild2 database.
arXiv Detail & Related papers (2021-07-08T21:57:58Z) - SCARF: Self-Supervised Contrastive Learning using Random Feature
Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
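The random-feature-corruption view construction described above can be sketched as replacing a random subset of a sample's features with values drawn from each feature's empirical marginal, i.e., from random other rows of the data. A minimal sketch (the 0.6 corruption rate and the function name are illustrative assumptions):

```python
import numpy as np

def scarf_view(x, data, corruption_rate=0.6, rng=None):
    """Build a SCARF-style corrupted view: each selected feature of x is
    replaced by that feature's value from a randomly chosen row of data,
    i.e., a draw from that feature's empirical marginal."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = data.shape
    view = x.copy()
    mask = rng.random(d) < corruption_rate   # which features to corrupt
    donors = rng.integers(0, n, size=d)      # donor row per feature
    cols = np.arange(d)
    view[mask] = data[donors[mask], cols[mask]]
    return view
```

In contrastive training, the original sample and its corrupted view form a positive pair; corrupting from the marginal keeps each feature's values realistic while breaking within-row correlations.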
arXiv Detail & Related papers (2021-06-29T08:08:33Z) - Automated Self-Supervised Learning for Graphs [37.14382990139527]
This work aims to investigate how to automatically leverage multiple pretext tasks effectively.
We make use of a key principle of many real-world graphs, i.e., homophily, as the guidance to effectively search various self-supervised pretext tasks.
We propose the AutoSSL framework which can automatically search over combinations of various self-supervised tasks.
arXiv Detail & Related papers (2021-06-10T03:09:20Z) - Distribution Matching for Heterogeneous Multi-Task Learning: a
Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z) - Deep Multi-task Multi-label CNN for Effective Facial Attribute
Classification [53.58763562421771]
We propose a novel deep multi-task multi-label CNN, termed DMM-CNN, for effective Facial Attribute Classification (FAC).
Specifically, DMM-CNN jointly optimizes two closely related tasks (i.e., facial landmark detection and FAC) to improve the performance of FAC by taking advantage of multi-task learning.
Two different network architectures are respectively designed to extract features for two groups of attributes, and a novel dynamic weighting scheme is proposed to automatically assign the loss weight to each facial attribute during training.
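One simple way to realize the dynamic per-attribute loss weighting described above is to reweight each attribute's loss by its current magnitude, so harder attributes receive more emphasis during training. This is a hypothetical rule for illustration, not necessarily DMM-CNN's exact scheme:

```python
import numpy as np

def dynamic_loss_weights(attr_losses, eps=1e-8):
    """Assign each facial attribute a loss weight proportional to its
    current loss, normalized to sum to 1, so harder attributes are
    emphasized. Hypothetical scheme for illustration."""
    losses = np.asarray(attr_losses, dtype=float)
    return (losses + eps) / (losses + eps).sum()
```

The total training loss would then be the weighted sum of per-attribute losses, with weights recomputed each step or epoch.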
arXiv Detail & Related papers (2020-02-10T12:34:16Z) - Multitask Emotion Recognition with Incomplete Labels [7.811459544911892]
We train a unified model to perform three tasks: facial action unit detection, expression classification, and valence-arousal estimation.
Most existing datasets do not contain labels for all three tasks.
We find that most of the student models outperform their teacher model on all three tasks.
arXiv Detail & Related papers (2020-02-10T05:32:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.