Overview of AI-Debater 2023: The Challenges of Argument Generation Tasks
- URL: http://arxiv.org/abs/2407.14829v2
- Date: Wed, 24 Jul 2024 15:09:29 GMT
- Title: Overview of AI-Debater 2023: The Challenges of Argument Generation Tasks
- Authors: Jiayu Lin, Guanrong Chen, Bojun Jin, Chenyang Li, Shutong Jia, Wancong Lin, Yang Sun, Yuhang He, Caihua Yang, Jianzhu Bao, Jipeng Wu, Wen Su, Jinglu Chen, Xinyi Li, Tianyu Chen, Mingjie Han, Shuaiwen Du, Zijian Wang, Jiyin Li, Fuzhong Suo, Hao Wang, Nuanchen Lin, Xuanjing Huang, Changjian Jiang, RuiFeng Xu, Long Zhang, Jiuxin Cao, Ting Jin, Zhongyu Wei
- Abstract summary: We present the results of the AI-Debater 2023 Challenge held by the Chinese Conference on Affect Computing (CCAC 2023).
In total, 32 competing teams registered for the challenge, from which we received 11 successful submissions.
- Score: 62.443665295250035
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present the results of the AI-Debater 2023 Challenge held by the Chinese Conference on Affect Computing (CCAC 2023), and introduce the related datasets. We organized two tracks to handle argumentative generation tasks in different scenarios, namely, Counter-Argument Generation (Track 1) and Claim-based Argument Generation (Track 2). Each track is equipped with its own dataset and baseline model. In total, 32 competing teams registered for the challenge, from which we received 11 successful submissions. In this paper, we present the results of the challenge and a summary of the systems, highlighting commonalities and innovations among the participating systems. The datasets and baseline models of the AI-Debater 2023 Challenge have already been released and can be accessed through the official website of the challenge.
Related papers
- The ISCSLP 2024 Conversational Voice Clone (CoVoC) Challenge: Tasks, Results and Findings [18.994388357437924]
The ISCSLP 2024 Conversational Voice Clone (CoVoC) Challenge aims to benchmark and advance zero-shot spontaneous style voice cloning.
This paper details the data, tracks, submitted systems, evaluation results, and findings.
arXiv Detail & Related papers (2024-10-31T09:39:49Z)
- The VoxCeleb Speaker Recognition Challenge: A Retrospective [75.40776645175585]
The VoxCeleb Speaker Recognition Challenges (VoxSRC) were a series of challenges and workshops that ran annually from 2019 to 2023.
The challenges primarily evaluated the tasks of speaker recognition and diarisation under various settings.
We provide a review of these challenges covering what they explored, the methods developed by the challenge participants, and how these evolved.
arXiv Detail & Related papers (2024-08-27T08:57:31Z)
- The Second DISPLACE Challenge: DIarization of SPeaker and LAnguage in Conversational Environments [28.460119283649913]
The dataset contains 158 hours of speech, consisting of both supervised and unsupervised mono-channel far-field recordings.
12 hours of close-field mono-channel recordings were provided for the ASR track, which covered 5 Indian languages.
We compare our baseline models and the teams' performances on the DISPLACE-2023 evaluation data to emphasize the advancements made in this second version of the challenge.
arXiv Detail & Related papers (2024-06-13T17:32:32Z)
- The 2nd FutureDial Challenge: Dialog Systems with Retrieval Augmented Generation (FutureDial-RAG) [23.849336345191556]
The challenge builds upon the MobileCS2 dataset, a real-life customer service dataset with nearly 3000 high-quality dialogs.
We define two tasks, track 1 for knowledge retrieval and track 2 for response generation, which are core research questions in dialog systems with RAG.
We build baseline systems for the two tracks and design metrics to measure whether the systems can perform accurate retrieval and generate informative and coherent responses.
arXiv Detail & Related papers (2024-05-21T07:35:21Z)
- Human Understanding AI Paper Challenge 2024 -- Dataset Design [0.0]
In 2024, we will hold a research paper competition (the third Human Understanding AI Paper Challenge) for the research and development of artificial intelligence technologies to understand human daily life.
This document introduces the datasets that will be provided to participants in the competition, and summarizes the issues to consider in data processing and learning model development.
arXiv Detail & Related papers (2024-03-25T07:48:34Z)
- REACT 2024: the Second Multiple Appropriate Facial Reaction Generation Challenge [36.84914349494818]
In dyadic interactions, humans communicate their intentions and state of mind using verbal and non-verbal cues.
How to develop a machine learning (ML) model that can automatically generate multiple appropriate, diverse, realistic and synchronised human facial reactions is a challenging task.
This paper presents the guidelines of the REACT 2024 challenge and the dataset utilized in the challenge.
arXiv Detail & Related papers (2024-01-10T14:01:51Z) - ICML 2023 Topological Deep Learning Challenge : Design and Results [83.5003281210199]
The competition asked participants to provide open-source implementations of topological neural networks from the literature.
The challenge attracted twenty-eight qualifying submissions in its two-month duration.
This paper describes the design of the challenge and summarizes its main findings.
arXiv Detail & Related papers (2023-09-26T18:49:30Z) - NICE: CVPR 2023 Challenge on Zero-shot Image Captioning [149.28330263581012]
The NICE project is designed to challenge the computer vision community to develop robust image captioning models.
The report includes information on the newly proposed NICE dataset, evaluation methods, challenge results, and technical details of the top-ranking entries.
arXiv Detail & Related papers (2023-09-05T05:32:19Z)
- Retrospectives on the Embodied AI Workshop [238.302290980995]
We focus on 13 challenges presented at the Embodied AI Workshop at CVPR.
These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language.
We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models.
arXiv Detail & Related papers (2022-10-13T09:00:52Z)
- Analysing Affective Behavior in the First ABAW 2020 Competition [49.90617840789334]
The Affective Behavior Analysis in-the-wild (ABAW) 2020 Competition is the first competition aiming at the automatic analysis of the three main behavior tasks.
We describe this Competition, to be held in conjunction with the IEEE Conference on Face and Gesture Recognition, May 2020, in Buenos Aires, Argentina.
We outline the evaluation metrics, present both the baseline system and the methodologies of the top-3 performing teams per challenge, and finally present their obtained results.
arXiv Detail & Related papers (2020-01-30T15:41:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.