REACT 2024: the Second Multiple Appropriate Facial Reaction Generation Challenge
- URL: http://arxiv.org/abs/2401.05166v1
- Date: Wed, 10 Jan 2024 14:01:51 GMT
- Title: REACT 2024: the Second Multiple Appropriate Facial Reaction Generation Challenge
- Authors: Siyang Song, Micol Spitale, Cheng Luo, Cristina Palmero, German
Barquero, Hengde Zhu, Sergio Escalera, Michel Valstar, Tobias Baur, Fabien
Ringeval, Elisabeth Andre, Hatice Gunes
- Abstract summary: In dyadic interactions, humans communicate their intentions and state of mind using verbal and non-verbal cues.
Developing a machine learning (ML) model that can automatically generate multiple appropriate, diverse, realistic and synchronised human facial reactions is a challenging task.
This paper presents the guidelines of the REACT 2024 challenge and the dataset utilized in the challenge.
- Score: 36.84914349494818
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In dyadic interactions, humans communicate their intentions and state of mind
using verbal and non-verbal cues, where multiple different facial reactions
might be appropriate in response to a specific speaker behaviour. Developing a
machine learning (ML) model that can automatically generate multiple
appropriate, diverse, realistic and synchronised human facial reactions from a
previously unseen speaker behaviour is therefore a challenging task. Following the
successful organisation of the first REACT challenge (REACT 2023), this edition
of the challenge (REACT 2024) employs a subset used by the previous challenge,
which contains segmented 30-second dyadic interaction clips originally recorded
as part of the NOXI and RECOLA datasets, encouraging participants to develop
and benchmark ML models that can generate multiple
appropriate facial reactions (including facial image sequences and their
attributes) given an input conversational partner's stimulus under various
dyadic video conference scenarios. This paper presents: (i) the guidelines of
the REACT 2024 challenge; (ii) the dataset utilized in the challenge; and (iii)
the performance of the baseline systems on the two proposed sub-challenges:
Offline Multiple Appropriate Facial Reaction Generation and Online Multiple
Appropriate Facial Reaction Generation. The challenge baseline
code is publicly available at
https://github.com/reactmultimodalchallenge/baseline_react2024.
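As a rough illustration of the two sub-challenge settings, the sketch below contrasts an offline generator, which may condition on the whole 30-second speaker clip before producing any listener reaction frame, with an online generator, which must emit reaction frames causally from only past and current speaker frames. The frame rate, feature dimensions, 25-dimensional per-frame attribute vectors and the placeholder "models" are illustrative assumptions, not the official baseline interface; the actual implementation is the repository linked above.

```python
# Minimal sketch (not the official REACT 2024 baseline API) contrasting the
# offline and online sub-challenge settings. Assumed for illustration only:
# 30-second clips at 25 fps, speaker features of size D_IN, and listener
# reactions as per-frame facial-attribute vectors of size D_OUT.
import numpy as np

FPS, CLIP_SECONDS = 25, 30
N_FRAMES = FPS * CLIP_SECONDS        # 750 frames per segmented clip (assumed)
D_IN, D_OUT = 128, 25                # hypothetical feature / attribute sizes


def offline_generate(speaker_feats, n_reactions=10, rng=None):
    """Offline setting: the full (T, D_IN) speaker clip is available before
    any reaction frame is produced. Returns (n_reactions, T, D_OUT)."""
    if rng is None:
        rng = np.random.default_rng(0)
    T = speaker_feats.shape[0]
    # Placeholder "model": a global summary of the clip plus per-sample noise
    # stands in for a learned one-to-many generator.
    context = speaker_feats.mean(axis=0, keepdims=True)           # (1, D_IN)
    base = np.tanh(context @ rng.standard_normal((D_IN, D_OUT)))  # (1, D_OUT)
    noise = 0.1 * rng.standard_normal((n_reactions, T, D_OUT))
    return np.repeat(base[None], n_reactions, axis=0).repeat(T, axis=1) + noise


def online_generate(speaker_frames, n_reactions=10, rng=None):
    """Online setting: reaction frame t may only depend on speaker frames
    0..t, so frames are emitted causally. Yields (n_reactions, D_OUT)."""
    if rng is None:
        rng = np.random.default_rng(0)
    W = rng.standard_normal((D_IN, D_OUT))
    running_sum, t = np.zeros(D_IN), 0
    for frame in speaker_frames:            # one speaker frame at a time
        running_sum += frame
        t += 1
        causal_context = running_sum / t    # only past + current frames
        base = np.tanh(causal_context @ W)  # (D_OUT,)
        yield base + 0.1 * rng.standard_normal((n_reactions, D_OUT))


if __name__ == "__main__":
    speaker = np.random.default_rng(1).standard_normal((N_FRAMES, D_IN))
    offline = offline_generate(speaker, n_reactions=3)
    online = np.stack(list(online_generate(speaker, n_reactions=3)), axis=1)
    print(offline.shape, online.shape)  # (3, 750, 25) (3, 750, 25)
```

The only substantive difference in the sketch is the conditioning constraint: both settings produce several distinct reactions per speaker clip, but the online generator never looks ahead in the speaker stream.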
Related papers
- Overview of AI-Debater 2023: The Challenges of Argument Generation Tasks [62.443665295250035]
We present the results of the AI-Debater 2023 Challenge held by the Chinese Conference on Affective Computing (CCAC 2023).
In total, 32 competing teams registered for the challenge, from which we received 11 successful submissions.
arXiv Detail & Related papers (2024-07-20T10:13:54Z)
- The MuSe 2024 Multimodal Sentiment Analysis Challenge: Social Perception and Humor Recognition [64.5207572897806]
The Multimodal Sentiment Analysis Challenge (MuSe) 2024 addresses two contemporary multimodal affect and sentiment analysis problems.
In the Social Perception Sub-Challenge (MuSe-Perception), participants will predict 16 different social attributes of individuals.
The Cross-Cultural Humor Detection Sub-Challenge (MuSe-Humor) dataset expands upon the Passau Spontaneous Football Coach Humor dataset.
arXiv Detail & Related papers (2024-06-11T22:26:20Z)
- Second Edition FRCSyn Challenge at CVPR 2024: Face Recognition Challenge in the Era of Synthetic Data [104.45155847778584]
This paper presents an overview of the 2nd edition of the Face Recognition Challenge in the Era of Synthetic Data (FRCSyn).
FRCSyn aims to investigate the use of synthetic data in face recognition to address current technological limitations.
arXiv Detail & Related papers (2024-04-16T08:15:10Z)
- REACT2023: the first Multi-modal Multiple Appropriate Facial Reaction Generation Challenge [28.777465429875303]
The Multi-modal Multiple Appropriate Facial Reaction Generation Challenge (REACT2023) is the first competition event focused on evaluating multimedia processing and machine learning techniques for generating human-appropriate facial reactions in various dyadic interaction scenarios.
The goal of the challenge is to provide the first benchmark test set for multi-modal information processing and to foster collaboration among the audio, visual, and audio-visual affective computing communities.
arXiv Detail & Related papers (2023-06-11T04:15:56Z)
- ReactFace: Online Multiple Appropriate Facial Reaction Generation in Dyadic Interactions [46.66378299720377]
In dyadic interaction, predicting the listener's facial reactions is challenging as different reactions could be appropriate in response to the same speaker's behaviour.
This paper reformulates the task as an extrapolation or prediction problem, and proposes a novel framework (called ReactFace) to generate multiple different but appropriate facial reactions.
arXiv Detail & Related papers (2023-05-25T05:55:53Z)
- Reversible Graph Neural Network-based Reaction Distribution Learning for Multiple Appropriate Facial Reactions Generation [22.579200870471475]
This paper proposes the first multiple appropriate facial reaction generation framework.
It re-formulates the one-to-many mapping facial reaction generation problem as a one-to-one mapping problem.
Experimental results demonstrate that our approach outperforms existing models in generating more appropriate, realistic, and synchronized facial reactions.
arXiv Detail & Related papers (2023-05-24T15:56:26Z)
- Face-to-Face Contrastive Learning for Social Intelligence Question-Answering [55.90243361923828]
Multimodal methods have set the state of the art on many tasks but have difficulty modeling complex face-to-face conversational dynamics.
We propose Face-to-Face Contrastive Learning (F2F-CL), a graph neural network designed to model social interactions.
We experimentally evaluate our approach on the challenging Social-IQ dataset and show state-of-the-art results.
arXiv Detail & Related papers (2022-07-29T20:39:44Z)