A transformer-based approach to video frame-level prediction in Affective Behaviour Analysis In-the-wild
- URL: http://arxiv.org/abs/2303.09293v2
- Date: Sun, 19 Mar 2023 05:27:26 GMT
- Title: A transformer-based approach to video frame-level prediction in Affective Behaviour Analysis In-the-wild
- Authors: Dang-Khanh Nguyen, Ngoc-Huynh Ho, Sudarshan Pant, Hyung-Jeong Yang
- Abstract summary: We propose a transformer-based model to handle the Emotion Classification Task in the 5th Affective Behavior Analysis In-the-wild Competition.
By leveraging the attention-based model and a synthetic dataset, we attain a score of 0.4775 on the validation set of Aff-Wild2.
- Score: 5.161531917413708
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, the transformer architecture has been a dominant paradigm in many applications, including affective computing. In this report, we propose a transformer-based model to handle the Emotion Classification Task in the 5th Affective Behavior Analysis In-the-wild Competition. By leveraging the attention-based model and a synthetic dataset, we attain a score of 0.4775 on the validation set of Aff-Wild2, the dataset provided by the organizers.
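The report above does not spell out the architecture, so the following is only a minimal sketch of the general idea under stated assumptions: per-frame features are assumed to come from some visual backbone, a transformer encoder attends over the frame sequence, and a linear head emits one emotion label per frame. All module names, dimensions, and the class count are illustrative, and positional encodings are omitted for brevity.

```python
import torch
import torch.nn as nn

class FrameEmotionTransformer(nn.Module):
    """Sketch: a transformer encoder over per-frame features with a
    per-frame classification head (positional encoding omitted)."""
    def __init__(self, feat_dim=512, d_model=256, n_heads=4, n_layers=4, n_classes=8):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)   # map backbone features to model width
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)  # one emotion logit vector per frame

    def forward(self, frame_feats):                # (batch, time, feat_dim)
        x = self.encoder(self.proj(frame_feats))   # temporal self-attention
        return self.head(x)                        # (batch, time, n_classes)

model = FrameEmotionTransformer()
video = torch.randn(2, 64, 512)                    # 2 clips, 64 frames, 512-d features
print(model(video).shape)                          # torch.Size([2, 64, 8])
```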
Related papers
- Learning on Transformers is Provable Low-Rank and Sparse: A One-layer Analysis [63.66763657191476]
We show that efficient numerical training and inference algorithms, such as low-rank computation, perform impressively when learning Transformer-based adaptation.
We analyze how magnitude-based pruning affects generalization while improving adaptation.
We conclude that proper magnitude-based pruning has only a slight effect on testing performance.
arXiv Detail & Related papers (2024-06-24T23:00:58Z)
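Magnitude-based pruning, the operation this analysis concerns, is simple to state in code. A minimal sketch using PyTorch's built-in pruning utility; the layer size and the 30% pruning amount are arbitrary choices for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(128, 128)
# Zero out the 30% of weights with the smallest magnitude (L1 criterion).
prune.l1_unstructured(layer, name="weight", amount=0.3)
sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.2f}")          # ~0.30
```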
- ExpPoint-MAE: Better interpretability and performance for self-supervised point cloud transformers [7.725095281624494]
We evaluate the effectiveness of Masked Autoencoding as a pretraining scheme, and explore Momentum Contrast as an alternative.
We observe that the transformer learns to attend to semantically meaningful regions, indicating that pretraining leads to a better understanding of the underlying geometry.
arXiv Detail & Related papers (2023-06-19T09:38:21Z)
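The masked-autoencoding objective mentioned above can be summarized generically: hide a random subset of patch embeddings, reconstruct them, and penalize errors only on the masked positions. A toy sketch under that reading (real MAE implementations encode only the visible tokens; zeroing masked ones here is a simplification, and all shapes are illustrative):

```python
import torch

def mae_loss(patches, encoder, decoder, mask_ratio=0.6):
    """Generic masked-autoencoding objective over (B, N, D) patch embeddings."""
    B, N, D = patches.shape
    mask = torch.rand(B, N) < mask_ratio                 # True = hidden from the encoder
    visible = patches.masked_fill(mask.unsqueeze(-1), 0.0)
    recon = decoder(encoder(visible))
    return ((recon - patches) ** 2)[mask].mean()         # MSE on masked positions only

# Toy usage with identity modules standing in for real networks.
enc = dec = torch.nn.Identity()
print(mae_loss(torch.randn(2, 100, 32), enc, dec))
```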
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- Spatial-temporal Transformer for Affective Behavior Analysis [11.10521339384583]
We propose a Transformer framework with Multi-Head Attention to learn the distribution of both spatial and temporal features.
The results on the Aff-Wild2 dataset fully demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2023-03-19T04:34:17Z)
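The abstract does not say how spatial and temporal attention are arranged, so the sketch below assumes the common factorized layout: attend across spatial positions within each frame, then across time for each spatial position. Everything here, including the module layout, is an assumption for illustration:

```python
import torch
import torch.nn as nn

class SpatialTemporalBlock(nn.Module):
    """Sketch of factorized attention: spatial within frames, then temporal."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                               # (B, T, S, D): time x space tokens
        B, T, S, D = x.shape
        s = x.reshape(B * T, S, D)                      # attend across space per frame
        s = self.spatial(s, s, s)[0].reshape(B, T, S, D)
        t = s.permute(0, 2, 1, 3).reshape(B * S, T, D)  # attend across time per position
        t = self.temporal(t, t, t)[0].reshape(B, S, T, D).permute(0, 2, 1, 3)
        return t

x = torch.randn(2, 8, 49, 256)                          # 8 frames, a 7x7 spatial grid
print(SpatialTemporalBlock()(x).shape)                  # torch.Size([2, 8, 49, 256])
```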
- Multi-dataset Training of Transformers for Robust Action Recognition [75.5695991766902]
We study the task of learning robust feature representations that generalize well across multiple datasets for action recognition.
We propose a novel multi-dataset training paradigm, MultiTrain, built on two new loss terms: an informative loss and a projection loss.
We verify the effectiveness of our method on five challenging datasets: Kinetics-400, Kinetics-700, Moments-in-Time, ActivityNet, and Something-Something-v2.
arXiv Detail & Related papers (2022-09-26T01:30:43Z)
- Transforming Model Prediction for Tracking [109.08417327309937]
Transformers capture global relations with little inductive bias, allowing them to learn the prediction of more powerful target models.
We train the proposed tracker end-to-end and validate its performance by conducting comprehensive experiments on multiple tracking datasets.
Our tracker sets a new state of the art on three benchmarks, achieving an AUC of 68.5% on the challenging LaSOT dataset.
arXiv Detail & Related papers (2022-03-21T17:59:40Z)
- Automatic Pharma News Categorization [0.0]
We use a text dataset consisting of 23 news categories relevant to pharma information science.
We compare the fine-tuning performance of multiple transformer models in a classification task.
We propose an ensemble model consisting of the top-performing individual predictors.
arXiv Detail & Related papers (2021-12-28T08:42:16Z)
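The abstract does not describe how the ensemble combines its members; a simple and common choice is soft voting, i.e. averaging the class probabilities of the individually fine-tuned models. A minimal sketch with toy linear classifiers standing in for fine-tuned transformers (the 23 categories match the dataset described above; everything else is illustrative):

```python
import torch

def ensemble_predict(models, inputs):
    """Soft-voting ensemble: average softmax probabilities across members."""
    probs = [torch.softmax(m(inputs), dim=-1) for m in models]
    return torch.stack(probs).mean(dim=0).argmax(dim=-1)

# Toy usage: three linear "classifiers" over 23 news categories.
members = [torch.nn.Linear(768, 23) for _ in range(3)]
feats = torch.randn(4, 768)                        # 4 documents, 768-d text features
print(ensemble_predict(members, feats))            # one class index per document
```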
- End-to-End Trainable Multi-Instance Pose Estimation with Transformers [68.93512627479197]
We propose a new end-to-end trainable approach for multi-instance pose estimation by combining a convolutional neural network with a transformer.
Inspired by recent work on end-to-end trainable object detection with transformers, we use a transformer encoder-decoder architecture together with a bipartite matching scheme to directly regress the pose of all individuals in a given image.
Our model, called POse Estimation Transformer (POET), is trained using a novel set-based global loss that consists of a keypoint loss, a keypoint visibility loss, a center loss and a class loss.
arXiv Detail & Related papers (2021-03-22T18:19:22Z)
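The bipartite matching step can be sketched with SciPy's Hungarian solver. The cost below is a plain L1 distance over keypoint coordinates, whereas POET's full set-based loss also includes visibility, center, and class terms; shapes and the matching cost are illustrative:

```python
import torch
from scipy.optimize import linear_sum_assignment

def matched_keypoint_loss(pred, gt):
    """Sketch of a set-based keypoint loss: Hungarian-match predictions
    to ground truth by L1 cost, then average the matched distances.
    pred: (P, K, 2) predicted poses; gt: (G, K, 2) ground-truth poses."""
    cost = torch.cdist(pred.flatten(1), gt.flatten(1), p=1)     # (P, G) pairwise L1 costs
    rows, cols = linear_sum_assignment(cost.detach().numpy())   # optimal one-to-one match
    return cost[torch.as_tensor(rows), torch.as_tensor(cols)].mean()

pred = torch.randn(5, 17, 2, requires_grad=True)   # 5 predicted poses, 17 keypoints
gt = torch.randn(3, 17, 2)                         # 3 people actually in the image
print(matched_keypoint_loss(pred, gt))             # differentiable scalar loss
```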
- Variational Transformers for Diverse Response Generation [71.53159402053392]
The Variational Transformer (VT) is a variational self-attentive feed-forward sequence model.
VT combines the parallelizability and global receptive field computation of the Transformer with the variational nature of the CVAE.
We explore two types of VT: 1) modeling the discourse-level diversity with a global latent variable; and 2) augmenting the Transformer decoder with a sequence of fine-grained latent variables.
arXiv Detail & Related papers (2020-03-28T07:48:02Z)
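The global-latent-variable variant can be sketched as a standard CVAE-style reparameterization attached to a pooled transformer encoder state. The module below is a generic illustration, not the paper's exact parameterization:

```python
import torch
import torch.nn as nn

class GlobalLatent(nn.Module):
    """Sketch: infer one latent z per sequence from a pooled encoder state,
    with the usual KL penalty toward a standard-normal prior."""
    def __init__(self, d_model=256, d_latent=64):
        super().__init__()
        self.to_mu = nn.Linear(d_model, d_latent)
        self.to_logvar = nn.Linear(d_model, d_latent)

    def forward(self, pooled):                     # (B, d_model) pooled encoder output
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
        return z, kl

z, kl = GlobalLatent()(torch.randn(4, 256))
print(z.shape, kl.item())                          # torch.Size([4, 64]) and a scalar KL
```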
- Affective Expression Analysis in-the-wild using Multi-Task Temporal Statistical Deep Learning Model [6.024865915538501]
We present an affective expression analysis model that deals with the challenges of in-the-wild data.
We experimented on the Aff-Wild2 dataset, a large-scale dataset for the ABAW Challenge.
arXiv Detail & Related papers (2020-02-21T04:06:03Z)
- Gradient-Based Adversarial Training on Transformer Networks for Detecting Check-Worthy Factual Claims [3.7543966923106438]
We introduce the first adversarially regularized, transformer-based claim-spotter model.
We propose a method to apply adversarial training to transformer models.
We obtain a 4.70-point F1-score improvement over current state-of-the-art models.
arXiv Detail & Related papers (2020-02-18T16:51:05Z)
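Because tokens are discrete, gradient-based adversarial training on a transformer is typically applied at the embedding layer. A hedged sketch of one FGSM-style step; the epsilon, the toy model, and the clean-plus-adversarial loss combination are illustrative assumptions, and the paper's exact regularization may differ:

```python
import torch

def adversarial_step(model, embeds, labels, loss_fn, eps=0.01):
    """Sketch: perturb input embeddings along the sign of the loss gradient,
    then combine the clean and adversarial losses for one training step."""
    embeds = embeds.detach().requires_grad_(True)
    clean_loss = loss_fn(model(embeds), labels)
    grad, = torch.autograd.grad(clean_loss, embeds, retain_graph=True)
    adv_embeds = (embeds + eps * grad.sign()).detach()  # FGSM-style perturbation
    adv_loss = loss_fn(model(adv_embeds), labels)
    return clean_loss + adv_loss

# Toy usage: a linear "model" over precomputed 768-d embeddings.
model = torch.nn.Linear(768, 2)
x, y = torch.randn(8, 768), torch.randint(0, 2, (8,))
loss = adversarial_step(model, x, y, torch.nn.functional.cross_entropy)
loss.backward()                                    # gradients flow to model parameters
```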