Multi-Label Class Balancing Algorithm for Action Unit Detection
- URL: http://arxiv.org/abs/2002.03238v1
- Date: Sat, 8 Feb 2020 21:56:28 GMT
- Title: Multi-Label Class Balancing Algorithm for Action Unit Detection
- Authors: Jaspar Pahl, Ines Rieger, Dominik Seuss
- Abstract summary: Isolated facial movements, so-called Action Units, can describe combined emotions or physical states such as pain.
This submission is subject to the Affective Behavior Analysis in-the-wild (ABAW) challenge at the IEEE Conference on Face and Gesture Recognition 2020.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Isolated facial movements, so-called Action Units, can describe combined
emotions or physical states such as pain. As datasets are limited and mostly
imbalanced, we present an approach incorporating a multi-label class balancing
algorithm. This submission is subject to the Action Unit detection task of the
Affective Behavior Analysis in-the-wild (ABAW) challenge at the IEEE Conference
on Face and Gesture Recognition 2020.
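The abstract names a multi-label class balancing algorithm but does not describe it. As a rough point of reference, the sketch below shows one common multi-label balancing strategy (oversampling samples that carry under-represented Action Unit labels); it is a minimal illustrative sketch under stated assumptions, not the authors' algorithm, and the function name and thresholds are invented for illustration.

```python
import numpy as np

def multilabel_oversample(labels: np.ndarray, rng=None) -> np.ndarray:
    """Return sample indices whose per-label counts are roughly balanced.

    labels: binary matrix of shape (n_samples, n_labels), 1 = label present.
    Generic oversampling heuristic for illustration only, not the ABAW
    submission's balancing algorithm.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_samples, _ = labels.shape
    label_counts = labels.sum(axis=0).astype(float)
    target = label_counts.max()                       # balance towards the most frequent label

    indices = list(range(n_samples))                  # start with every sample once
    for k in np.argsort(label_counts):                # visit the rarest labels first
        deficit = int(target - labels[indices].sum(axis=0)[k])
        if deficit <= 0:
            continue
        candidates = np.flatnonzero(labels[:, k] == 1)
        if candidates.size == 0:
            continue
        extra = rng.choice(candidates, size=deficit, replace=True)
        indices.extend(extra.tolist())                # duplicate samples carrying the rare label
    return np.asarray(indices)
```

Because every duplicated frame activates all of its co-occurring Action Units at once, naive oversampling of this kind tends to re-inflate frequent labels as a side effect; handling that interaction is exactly what a dedicated multi-label balancing algorithm has to address.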
Related papers
- The impact of Compositionality in Zero-shot Multi-label action recognition for Object-based tasks [4.971065912401385]
We propose Dual-VCLIP, a unified approach for zero-shot multi-label action recognition.
Dual-VCLIP enhances VCLIP, a zero-shot action recognition method, with the DualCoOp method for multi-label image classification.
We validate our method on the Charades dataset that includes a majority of object-based actions.
arXiv Detail & Related papers (2024-05-14T15:28:48Z)
- Learning Disentangled Identifiers for Action-Customized Text-to-Image Generation [34.11373539564126]
This study focuses on a novel task in text-to-image (T2I) generation, namely action customization.
The objective of this task is to learn the co-existing action from limited data and generalize it to unseen humans or even animals.
arXiv Detail & Related papers (2023-11-27T14:07:13Z)
- AIMS: All-Inclusive Multi-Level Segmentation [93.5041381700744]
We propose a new task, All-Inclusive Multi-Level Segmentation (AIMS), which segments visual regions into three levels: part, entity, and relation.
We also build a unified AIMS model through multi-dataset multi-task training to address the two major challenges of annotation inconsistency and task correlation.
arXiv Detail & Related papers (2023-05-28T16:28:49Z)
- Multi-modal Multi-label Facial Action Unit Detection with Transformer [7.30287060715476]
This paper describes our submission to the third Affective Behavior Analysis in-the-wild (ABAW) 2022 competition.
We propose a transformer-based model to detect facial action units (FAUs) in videos.
arXiv Detail & Related papers (2022-03-24T18:59:31Z)
- The Overlooked Classifier in Human-Object Interaction Recognition [82.20671129356037]
We encode the semantic correlation among classes into the classification head by initializing the weights with language embeddings of HOIs.
We propose a new loss named LSE-Sign to enhance multi-label learning on a long-tailed dataset.
Our simple yet effective method enables detection-free HOI classification, outperforming state-of-the-art methods that require object detection and human pose estimation by a clear margin.
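The summary above describes seeding the classification head with language embeddings of the HOI classes. The fragment below is a minimal PyTorch-style sketch of that general idea: `encode_text` is a hypothetical stand-in for whatever text encoder the paper actually uses, and the LSE-Sign loss is not reproduced here.

```python
import torch
import torch.nn as nn

def embedding_initialized_head(class_names, encode_text, feat_dim):
    """Linear classifier whose weight rows start as class-name text embeddings.

    encode_text is a hypothetical callable mapping a list of class-name strings
    to a (num_classes, feat_dim) tensor, e.g. a frozen language model; it stands
    in for the paper's actual text encoder.
    """
    head = nn.Linear(feat_dim, len(class_names), bias=False)
    with torch.no_grad():
        emb = encode_text(class_names)                # (num_classes, feat_dim)
        emb = nn.functional.normalize(emb, dim=-1)    # unit-norm rows
        head.weight.copy_(emb)                        # class semantics baked into the weights
    return head
```

Since the head scores features against these rows, classes whose names embed similarly start out with correlated predictions, which matches the intuition of encoding class correlation described above.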
arXiv Detail & Related papers (2022-03-10T23:35:00Z)
- Few-Shot Fine-Grained Action Recognition via Bidirectional Attention and Contrastive Meta-Learning [51.03781020616402]
Fine-grained action recognition is attracting increasing attention due to the emerging demand of specific action understanding in real-world applications.
We propose a few-shot fine-grained action recognition problem, aiming to recognize novel fine-grained actions with only a few samples given for each class.
Although progress has been made on coarse-grained actions, existing few-shot recognition methods encounter two issues when handling fine-grained actions.
arXiv Detail & Related papers (2021-08-15T02:21:01Z)
- Seeing Differently, Acting Similarly: Imitation Learning with Heterogeneous Observations [126.78199124026398]
In many real-world imitation learning tasks, the demonstrator and the learner have to act in different but full observation spaces.
In this work, we model the above learning problem as Heterogeneous Observations Imitation Learning (HOIL).
We propose the Importance Weighting with REjection (IWRE) algorithm based on the techniques of importance-weighting, learning with rejection, and active querying to solve the key challenge of occupancy measure matching.
arXiv Detail & Related papers (2021-06-17T05:44:04Z)
- Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection [69.2370349274216]
Few-shot Intent Detection is challenging due to the scarcity of available annotated utterances.
Semantic components are distilled from utterances via multi-head self-attention.
Our method provides a comprehensive matching measure to enhance representations of both labeled and unlabeled instances.
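The summary mentions distilling semantic components from utterances with multi-head self-attention. Below is a generic, self-contained illustration of that building block only; the dimensions and class name are arbitrary assumptions, and the paper's matching and aggregation layers are not modeled.

```python
import torch
import torch.nn as nn

class SemanticComponentExtractor(nn.Module):
    """Plain multi-head self-attention over utterance token embeddings."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, dim) token embeddings of an utterance
        components, _ = self.attn(tokens, tokens, tokens)   # self-attention: Q = K = V
        return components


# toy usage: two utterances of 12 tokens each
extractor = SemanticComponentExtractor()
print(extractor(torch.randn(2, 12, 256)).shape)             # torch.Size([2, 12, 256])
```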
arXiv Detail & Related papers (2020-10-06T05:16:38Z)
- Multi-label Learning with Missing Values using Combined Facial Action Unit Datasets [0.0]
Facial action units allow an objective, standardized description of facial micro movements which can be used to describe emotions in human faces.
Annotating data for action units is an expensive and time-consuming task, which leads to a scarce data situation.
We present our approach to create a combined database and an algorithm capable of learning under the presence of missing labels.
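The summary does not spell out the algorithm itself; a common way to learn from combined datasets with missing Action Unit annotations is to mask unannotated entries out of a per-label loss. The sketch below assumes missing labels are encoded as -1, which is an illustrative convention rather than the paper's actual format.

```python
import torch
import torch.nn.functional as F

def masked_bce_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy over Action Units that ignores missing annotations.

    logits:  (batch, num_aus) raw model outputs.
    targets: (batch, num_aus) with 1 = active, 0 = inactive, -1 = missing
             (the -1 encoding is an assumption for this sketch).
    """
    observed = targets >= 0                           # mask of annotated label entries
    if not observed.any():
        return logits.new_zeros(())                   # no supervision available in this batch
    return F.binary_cross_entropy_with_logits(
        logits[observed], targets[observed].float(), reduction="mean"
    )
```

Only the observed entries contribute to the gradient, so a frame annotated in one source dataset but not in another still provides a training signal for the labels it does carry.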
arXiv Detail & Related papers (2020-08-17T11:58:06Z)
- FineGym: A Hierarchical Video Dataset for Fine-grained Action Understanding [118.32912239230272]
FineGym is a new action recognition dataset built on top of gymnastic videos.
It provides temporal annotations at both action and sub-action levels with a three-level semantic hierarchy.
This new level of granularity presents significant challenges for action recognition.
arXiv Detail & Related papers (2020-04-14T17:55:21Z)
- Unique Class Group Based Multi-Label Balancing Optimizer for Action Unit Detection [0.0]
We show how optimized balancing and then augmentation can improve Action Unit detection.
We ranked third in the Affective Behavior Analysis in-the-wild (ABAW) challenge for the Action Unit detection task.
arXiv Detail & Related papers (2020-03-05T15:34:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.