MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised
Learning
- URL: http://arxiv.org/abs/2304.08981v2
- Date: Thu, 14 Sep 2023 04:03:28 GMT
- Title: MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised
Learning
- Authors: Zheng Lian, Haiyang Sun, Licai Sun, Kang Chen, Mingyu Xu, Kexin Wang,
Ke Xu, Yu He, Ying Li, Jinming Zhao, Ye Liu, Bin Liu, Jiangyan Yi, Meng Wang,
Erik Cambria, Guoying Zhao, Björn W. Schuller, Jianhua Tao
- Abstract summary: The first Multimodal Emotion Recognition Challenge (MER 2023) was successfully held at ACM Multimedia.
This paper introduces the motivation behind this challenge, describes the benchmark dataset, and provides some statistics about participants.
We believe this high-quality dataset can become a new benchmark in multimodal emotion recognition, especially for the Chinese research community.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The first Multimodal Emotion Recognition Challenge (MER 2023) was
successfully held at ACM Multimedia. The challenge focuses on system robustness
and consists of three distinct tracks: (1) MER-MULTI, where participants are
required to recognize both discrete and dimensional emotions; (2) MER-NOISE, in
which noise is added to test videos for modality robustness evaluation; (3)
MER-SEMI, which provides a large amount of unlabeled samples for
semi-supervised learning. In this paper, we introduce the motivation behind
this challenge, describe the benchmark dataset, and provide some statistics
about participants. To continue using this dataset after MER 2023, please sign
a new End User License Agreement and send it to our official email address
merchallenge.contact@gmail.com. We believe this high-quality dataset can become
a new benchmark in multimodal emotion recognition, especially for the Chinese
research community.
Related papers
- Leveraging Contrastive Learning and Self-Training for Multimodal Emotion Recognition with Limited Labeled Samples [18.29910296652917]
We present our submission solutions for the Semi-Supervised Learning Sub-Challenge (MER2024-SEMI).
This challenge tackles the issue of limited annotated data in emotion recognition.
Our proposed method is validated to be effective on the MER2024-SEMI Challenge, achieving a weighted average F-score of 88.25% and ranking 6th on the leaderboard.
arXiv Detail & Related papers (2024-08-23T11:33:54Z)
- SZTU-CMU at MER2024: Improving Emotion-LLaMA with Conv-Attention for Multimodal Emotion Recognition [65.19303535139453]
We present our winning approach for the MER-NOISE and MER-OV tracks of the MER2024 Challenge on multimodal emotion recognition.
Our system leverages the advanced emotional understanding capabilities of Emotion-LLaMA to generate high-quality annotations for unlabeled samples.
For the MER-OV track, our utilization of Emotion-LLaMA for open-vocabulary annotation yields an 8.52% improvement in average accuracy and recall compared to GPT-4V.
arXiv Detail & Related papers (2024-08-20T02:46:03Z)
- The MuSe 2024 Multimodal Sentiment Analysis Challenge: Social Perception and Humor Recognition [64.5207572897806]
The Multimodal Sentiment Analysis Challenge (MuSe) 2024 addresses two contemporary multimodal affect and sentiment analysis problems.
In the Social Perception Sub-Challenge (MuSe-Perception), participants will predict 16 different social attributes of individuals.
The Cross-Cultural Humor Detection Sub-Challenge (MuSe-Humor) dataset expands upon the Passau Spontaneous Football Coach Humor dataset.
arXiv Detail & Related papers (2024-06-11T22:26:20Z)
- MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition [102.76954967225231]
We organize the MER series of competitions to promote the development of this field.
Last year, we launched MER2023, focusing on three interesting topics: multi-label learning, noise robustness, and semi-supervised learning.
This year, besides expanding the dataset size, we introduce a new track around open-vocabulary emotion recognition.
arXiv Detail & Related papers (2024-04-26T02:05:20Z)
- The MuSe 2023 Multimodal Sentiment Analysis Challenge: Mimicked Emotions, Cross-Cultural Humour, and Personalisation [69.13075715686622]
MuSe 2023 is a set of shared tasks addressing three different contemporary multimodal affect and sentiment analysis problems.
MuSe 2023 seeks to bring together a broad audience from different research communities.
arXiv Detail & Related papers (2023-05-05T08:53:57Z)
- Multimodal Emotion Recognition with Modality-Pairwise Unsupervised Contrastive Loss [80.79641247882012]
We focus on unsupervised feature learning for Multimodal Emotion Recognition (MER).
We consider discrete emotions, and use text, audio, and vision as modalities.
Our method, based on a contrastive loss between pairwise modalities, is the first such attempt in the MER literature.
arXiv Detail & Related papers (2022-07-23T10:11:24Z)
- ChaLearn LAP Large Scale Signer Independent Isolated Sign Language Recognition Challenge: Design, Results and Future Research [28.949528008976493]
This work summarises the ChaLearn LAP Large Scale Signer Independent Isolated SLR Challenge, organised at CVPR 2021.
We discuss the challenge design, top winning solutions and suggestions for future research.
Winning teams achieved more than 96% recognition rate, and their approaches benefited from pose/hand/face estimation, transfer learning, external data, fusion/ensemble of modalities, and different strategies to model temporal information.
arXiv Detail & Related papers (2021-05-11T14:17:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.