AP20-OLR Challenge: Three Tasks and Their Baselines
- URL: http://arxiv.org/abs/2006.03473v4
- Date: Fri, 9 Oct 2020 09:08:08 GMT
- Title: AP20-OLR Challenge: Three Tasks and Their Baselines
- Authors: Zheng Li, Miao Zhao, Qingyang Hong, Lin Li, Zhiyuan Tang, Dong Wang,
Liming Song and Cheng Yang
- Abstract summary: The data profile, three tasks, the corresponding baselines, and the evaluation principles are introduced in this paper.
The AP20-OLR challenge includes more languages, dialects and real-life data provided by Speechocean and the NSFC M2ASR project.
- Score: 29.652143329022817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces the fifth Oriental Language Recognition (OLR)
challenge, AP20-OLR, held in conjunction with the APSIPA Annual Summit and
Conference (APSIPA ASC) and intended to improve the performance of language
recognition systems. The data profile, the three tasks, the corresponding
baselines, and the evaluation principles are described. The AP20-OLR challenge
includes more languages, dialects and real-life data, provided by Speechocean
and the NSFC M2ASR project, and all of the data is free for participants. This
year's challenge again focuses on practical and challenging problems, with
three tasks: (1) cross-channel LID, (2) dialect identification and (3) noisy
LID. Recipes for i-vector and x-vector systems, based on Kaldi and PyTorch, are
provided as baselines for the three tasks. These recipes will be published
online and are available for participants to use when configuring their LID
systems. The baseline results on the three tasks demonstrate that the tasks in
this challenge are worth further effort to achieve better performance.
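The x-vector baseline mentioned in the abstract is a time-delay neural network (TDNN) that pools frame-level activations into a fixed-length utterance embedding, which is then classified into languages. As a minimal, hedged PyTorch sketch of that general architecture (the layer sizes, 30-dim input features and 10 target languages are illustrative assumptions, not the published challenge recipe):

```python
import torch
import torch.nn as nn


class XVectorLID(nn.Module):
    """x-vector-style network for language identification (illustrative sizes)."""

    def __init__(self, feat_dim: int = 30, num_langs: int = 10, emb_dim: int = 512):
        super().__init__()
        # Frame-level TDNN layers, realised as dilated 1-D convolutions.
        self.tdnn = nn.Sequential(
            nn.Conv1d(feat_dim, 512, kernel_size=5, dilation=1), nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=3, dilation=2), nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=3, dilation=3), nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=1), nn.ReLU(),
            nn.Conv1d(512, 1500, kernel_size=1), nn.ReLU(),
        )
        # Segment-level layers after statistics pooling (mean + std -> 3000 dims).
        self.embedding = nn.Linear(2 * 1500, emb_dim)
        self.classifier = nn.Sequential(
            nn.ReLU(), nn.Linear(emb_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, num_langs),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, feat_dim) acoustic features, e.g. MFCCs.
        h = self.tdnn(feats.transpose(1, 2))          # (batch, 1500, frames')
        stats = torch.cat([h.mean(dim=2), h.std(dim=2)], dim=1)
        xvec = self.embedding(stats)                  # the "x-vector" embedding
        return self.classifier(xvec)                  # per-language logits


if __name__ == "__main__":
    model = XVectorLID()
    utterances = torch.randn(4, 300, 30)  # 4 utterances, 300 frames, 30-dim feats
    print(model(utterances).shape)        # torch.Size([4, 10])
```

The i-vector baseline, by contrast, is a generative factor-analysis model built on a GMM-UBM front end and is typically trained with Kaldi's standard recipes rather than a neural network; the challenge's published recipes should be consulted for the exact configurations.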
Related papers
- GenAI Content Detection Task 2: AI vs. Human -- Academic Essay Authenticity Challenge [12.076440946525434]
The Academic Essay Authenticity Challenge was organized as part of the GenAI Content Detection shared tasks collocated with COLING 2025.
This challenge focuses on detecting machine-generated vs. human-authored essays for academic purposes.
The challenge involves two languages: English and Arabic.
This paper outlines the task formulation, details the dataset construction process, and explains the evaluation framework.
arXiv Detail & Related papers (2024-12-24T08:33:44Z)
- Speak & Improve Challenge 2025: Tasks and Baseline Systems [28.877872578497854]
"Speak & Improve Challenge 2025: Spoken Language Assessment and Feedback" is a challenge associated with the ISCA SLaTE 2025 Workshop.
The goal of the challenge is to advance research on spoken language assessment and feedback, with tasks associated with both the underlying technology and language learning feedback.
The challenge has four shared tasks: Automatic Speech Recognition (ASR), Spoken Language Assessment (SLA), Spoken Grammatical Error Correction (SGEC), and Spoken Grammatical Error Correction Feedback (SGECF).
arXiv Detail & Related papers (2024-12-16T17:05:18Z)
- The VoxCeleb Speaker Recognition Challenge: A Retrospective [75.40776645175585]
The VoxCeleb Speaker Recognition Challenges (VoxSRC) were a series of challenges and workshops that ran annually from 2019 to 2023.
The challenges primarily evaluated the tasks of speaker recognition and diarisation under various settings.
We provide a review of these challenges, covering what they explored, the methods developed by the participants, and how these evolved.
arXiv Detail & Related papers (2024-08-27T08:57:31Z)
- Overview of AI-Debater 2023: The Challenges of Argument Generation Tasks [62.443665295250035]
We present the results of the AI-Debater 2023 Challenge held by the Chinese Conference on Affect Computing (CCAC 2023).
In total, 32 competing teams registered for the challenge, from which we received 11 successful submissions.
arXiv Detail & Related papers (2024-07-20T10:13:54Z)
- Foundational Challenges in Assuring Alignment and Safety of Large Language Models [171.01569693871676]
This work identifies 18 foundational challenges in assuring the alignment and safety of large language models (LLMs).
Based on the identified challenges, we pose 200+ concrete research questions.
arXiv Detail & Related papers (2024-04-15T16:58:28Z)
- VoxSRC 2022: The Fourth VoxCeleb Speaker Recognition Challenge [95.6159736804855]
The VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC-22) was held in conjunction with INTERSPEECH 2022.
The goal of this challenge was to evaluate how well state-of-the-art speaker recognition systems can diarise and recognise speakers from speech obtained "in the wild".
arXiv Detail & Related papers (2023-02-20T19:27:14Z)
- L3DAS22 Challenge: Learning 3D Audio Sources in a Real Office Environment [12.480610577162478]
The L3DAS22 Challenge is aimed at encouraging the development of machine learning strategies for 3D speech enhancement and 3D sound localization and detection.
This challenge improves and extends the tasks of the L3DAS21 edition.
arXiv Detail & Related papers (2022-02-21T17:05:39Z)
- OLR 2021 Challenge: Datasets, Rules and Baselines [23.878103387338918]
The data profile, four tasks, two baselines, and the evaluation principles are introduced in this paper.
In addition to the Language Identification (LID) tasks, multilingual Automatic Speech Recognition (ASR) tasks are introduced to the OLR 2021 Challenge for the first time.
arXiv Detail & Related papers (2021-07-23T09:57:29Z)
- Oriental Language Recognition (OLR) 2020: Summary and Analysis [21.212345251874513]
The fifth Oriental Language Recognition (OLR) Challenge focuses on language recognition in a variety of complex environments.
This paper describes the three tasks, the database profile, and the final results of the challenge.
arXiv Detail & Related papers (2021-07-05T12:42:40Z)
- Analysing Affective Behavior in the First ABAW 2020 Competition [49.90617840789334]
The Affective Behavior Analysis in-the-wild (ABAW) 2020 Competition is the first competition aimed at the automatic analysis of the three main behavior tasks.
We describe this Competition, to be held in conjunction with the IEEE Conference on Face and Gesture Recognition, May 2020, in Buenos Aires, Argentina.
We outline the evaluation metrics, present both the baseline system and the top-3 performing teams' methodologies per Challenge and finally present their obtained results.
arXiv Detail & Related papers (2020-01-30T15:41:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.