Impact of Action Unit Occurrence Patterns on Detection
- URL: http://arxiv.org/abs/2010.07982v1
- Date: Thu, 15 Oct 2020 19:03:05 GMT
- Title: Impact of Action Unit Occurrence Patterns on Detection
- Authors: Saurabh Hinduja, Shaun Canavan, Saandeep Aathreya
- Abstract summary: We investigate the impact of action unit occurrence patterns on detection of action units.
Our findings suggest that action unit occurrence patterns strongly impact evaluation metrics.
We propose a new approach to explicitly train deep neural networks using the occurrence patterns to boost the accuracy of action unit detection.
- Score: 0.3670422696827526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting action units is an important task in face analysis, especially in
facial expression recognition. This is due, in part, to the idea that
expressions can be decomposed into multiple action units. In this paper we
investigate the impact of action unit occurrence patterns on detection of
action units. To facilitate this investigation, we review the state-of-the-art
literature on AU detection for two face databases commonly used for this task,
namely DISFA and BP4D. Our findings, from this
literature review, suggest that action unit occurrence patterns strongly impact
evaluation metrics (e.g., F1-binary). Along with the literature review, we also
conduct multi- and single-action unit detection, as well as propose a new
approach to explicitly train deep neural networks using the occurrence patterns
to boost the accuracy of action unit detection. These experiments validate that
action unit patterns directly impact the evaluation metrics.
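To make the affected metric concrete, the sketch below is a hypothetical illustration, not the authors' implementation: it shows one common way to turn AU occurrence patterns in a binary label matrix into per-AU positive-class weights for training, and how the per-AU F1-binary score cited in the abstract is computed. All function names, variable names, and the 12-AU setup are assumptions made for this example.

```python
# Hypothetical sketch, not the paper's method: derive per-AU training weights from
# occurrence patterns and compute per-AU F1-binary, the metric cited in the abstract.
import numpy as np

def occurrence_pos_weights(labels: np.ndarray) -> np.ndarray:
    """Per-AU positive-class weight from occurrence counts: rarer AUs get larger weights."""
    pos = labels.sum(axis=0)             # how often each AU occurs
    neg = labels.shape[0] - pos          # how often each AU is absent
    return (neg + 1.0) / (pos + 1.0)     # add-one smoothing avoids division by zero

def f1_binary_per_au(y_true: np.ndarray, y_pred: np.ndarray) -> np.ndarray:
    """F1 on the positive (AU present) class, computed independently for each AU column."""
    tp = ((y_pred == 1) & (y_true == 1)).sum(axis=0)
    fp = ((y_pred == 1) & (y_true == 0)).sum(axis=0)
    fn = ((y_pred == 0) & (y_true == 1)).sum(axis=0)
    return 2.0 * tp / np.maximum(2.0 * tp + fp + fn, 1)   # F1 = 2TP / (2TP + FP + FN)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base_rates = np.linspace(0.05, 0.5, 12)               # skewed occurrence pattern over 12 AUs
    y_true = (rng.random((1000, 12)) < base_rates).astype(int)
    y_pred = (rng.random((1000, 12)) < 0.3).astype(int)   # placeholder predictions
    print("positive-class weights:", occurrence_pos_weights(y_true).round(2))
    print("F1-binary per AU:      ", f1_binary_per_au(y_true, y_pred).round(3))
```

With skewed base rates like these, per-AU F1-binary values differ sharply between rare and frequent AUs, which is the kind of sensitivity to occurrence patterns that the abstract describes.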
Related papers
- Uncertainty-Guided Appearance-Motion Association Network for Out-of-Distribution Action Detection [4.938957922033169]
Out-of-distribution (OOD) detection aims to detect and reject test samples with semantic shifts.
We propose a novel Uncertainty-Guided Appearance-Motion Association Network (UAAN).
We show that UAAN beats state-of-the-art methods by a significant margin, illustrating its effectiveness.
arXiv Detail & Related papers (2024-09-16T02:53:49Z)
- ODAM: Gradient-based instance-specific visual explanations for object detection [51.476702316759635]
We propose gradient-weighted Object Detector Activation Maps (ODAM).
ODAM produces heat maps that show the influence of regions on the detector's decision for each predicted attribute.
We propose ODAM-NMS, which considers the information of the model's explanation for each prediction to distinguish duplicate detected objects.
arXiv Detail & Related papers (2023-04-13T09:20:26Z)
- DOAD: Decoupled One Stage Action Detection Network [77.14883592642782]
Localizing people and recognizing their actions from videos is a challenging task towards high-level video understanding.
Existing methods are mostly two-stage based, with one stage for person bounding box generation and the other stage for action recognition.
We present a decoupled one-stage network, dubbed DOAD, to improve the efficiency of spatio-temporal action detection.
arXiv Detail & Related papers (2023-04-01T08:06:43Z)
- Impact of Video Processing Operations in Deepfake Detection [13.334500258498798]
Digital face manipulation in video has attracted extensive attention due to the increased risk to public trust.
Deep learning-based deepfake detection methods have been developed and have shown impressive results.
The performance of these detectors is often evaluated using benchmarks that hardly reflect real-world situations.
arXiv Detail & Related papers (2023-03-30T09:24:17Z)
- ReAct: Temporal Action Detection with Relational Queries [84.76646044604055]
This work aims at advancing temporal action detection (TAD) using an encoder-decoder framework with action queries.
We first propose a relational attention mechanism in the decoder, which guides the attention among queries based on their relations.
Lastly, we propose to predict the localization quality of each action query at inference in order to distinguish high-quality queries.
arXiv Detail & Related papers (2022-07-14T17:46:37Z)
- Multi-modal Multi-label Facial Action Unit Detection with Transformer [7.30287060715476]
This paper describes our submission to the third Affective Behavior Analysis (ABAW) 2022 competition.
We propose a transformer-based model to detect facial action units (FAU) in video.
arXiv Detail & Related papers (2022-03-24T18:59:31Z)
- SegTAD: Precise Temporal Action Detection via Semantic Segmentation [65.01826091117746]
We formulate the task of temporal action detection from the novel perspective of semantic segmentation.
Owing to the 1-dimensional property of TAD, we are able to convert the coarse-grained detection annotations to fine-grained semantic segmentation annotations for free.
We propose an end-to-end framework, SegTAD, composed of a 1D semantic segmentation network (1D-SSN) and a proposal detection network (PDN).
arXiv Detail & Related papers (2022-03-03T06:52:13Z)
- Measuring the Contribution of Multiple Model Representations in Detecting Adversarial Instances [0.0]
Our paper describes two approaches that incorporate representations from multiple models for detecting adversarial examples.
For many of the scenarios we consider, the results show that performance increases with the number of underlying models used for extracting representations.
arXiv Detail & Related papers (2021-11-13T04:24:57Z)
- Technical Report: Disentangled Action Parsing Networks for Accurate Part-level Action Parsing [65.87931036949458]
Part-level Action Parsing aims at part state parsing for boosting action recognition in videos.
We present a simple yet effective approach, named disentangled action parsing (DAP).
arXiv Detail & Related papers (2021-11-05T02:29:32Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show it obtains top performance in both speed and accuracy when compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)