AP20-OLR Challenge: Three Tasks and Their Baselines
- URL: http://arxiv.org/abs/2006.03473v4
- Date: Fri, 9 Oct 2020 09:08:08 GMT
- Title: AP20-OLR Challenge: Three Tasks and Their Baselines
- Authors: Zheng Li, Miao Zhao, Qingyang Hong, Lin Li, Zhiyuan Tang, Dong Wang,
Liming Song and Cheng Yang
- Abstract summary: The data profile, three tasks, the corresponding baselines, and the evaluation principles are introduced in this paper.
The AP20-OLR challenge includes more languages, dialects and real-life data provided by Speechocean and the NSFC M2ASR project.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces the fifth Oriental Language Recognition (OLR)
challenge, AP20-OLR, held in conjunction with the APSIPA Annual Summit and
Conference (APSIPA ASC) and intended to improve the performance of language
recognition systems. The data profile, the three tasks, the corresponding
baselines, and the evaluation principles are introduced in this paper. The
AP20-OLR challenge includes more languages, dialects, and real-life data,
provided by Speechocean and the NSFC M2ASR project, and all of the data is free
for participants. This year's challenge again focuses on practical and
challenging problems, with three tasks: (1) cross-channel LID, (2) dialect
identification, and (3) noisy LID. Recipes for i-vector and x-vector systems,
built on Kaldi and PyTorch, are provided as baselines for the three tasks.
These recipes will be published online and made available for participants to
configure their LID systems. The baseline results on the three tasks show that
these tasks are challenging and merit further effort toward better performance.
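The x-vector baselines are named above but not reproduced here. As a rough illustration only, the following is a minimal PyTorch sketch of an x-vector-style LID network (a TDNN with statistics pooling, in the spirit of the x-vector literature); it is not the official AP20-OLR recipe, and the layer sizes, the feature dimension feat_dim, and the language count num_langs are hypothetical choices.

```python
# Minimal x-vector-style LID sketch in PyTorch (hypothetical, NOT the
# official AP20-OLR baseline recipe). Frame-level TDNN layers are
# followed by statistics pooling and segment-level layers that emit
# per-language logits.
import torch
import torch.nn as nn


class XVectorLID(nn.Module):
    def __init__(self, feat_dim=30, num_langs=10):  # dims are assumptions
        super().__init__()
        # Frame-level TDNN: 1-D convolutions over the time axis.
        self.frame_layers = nn.Sequential(
            nn.Conv1d(feat_dim, 512, kernel_size=5, dilation=1), nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=3, dilation=2), nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=3, dilation=3), nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=1), nn.ReLU(),
            nn.Conv1d(512, 1500, kernel_size=1), nn.ReLU(),
        )
        # Segment-level layers on pooled statistics (mean + std = 3000 dims);
        # the first 512-d linear layer is the "x-vector" embedding layer.
        self.segment_layers = nn.Sequential(
            nn.Linear(3000, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, num_langs),
        )

    def forward(self, x):
        # x: (batch, time, feat_dim) acoustic features, e.g. MFCCs.
        h = self.frame_layers(x.transpose(1, 2))           # (batch, 1500, T')
        stats = torch.cat([h.mean(dim=2), h.std(dim=2)], dim=1)  # pooling
        return self.segment_layers(stats)                  # language logits


# Toy usage: 4 utterances of 200 frames with 30-dim features.
model = XVectorLID(feat_dim=30, num_langs=10)
logits = model(torch.randn(4, 200, 30))                    # shape (4, 10)
```

In Kaldi-style recipes, the embedding would typically be scored with a back-end such as LDA plus PLDA or logistic regression rather than the softmax head shown here.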
Related papers
- The VoxCeleb Speaker Recognition Challenge: A Retrospective (arXiv, 2024-08-27)
The VoxCeleb Speaker Recognition Challenges (VoxSRC) were a series of challenges and workshops that ran annually from 2019 to 2023.
The challenges primarily evaluated the tasks of speaker recognition and diarisation under various settings.
We provide a review of these challenges that covers what they explored, the methods developed by the challenge participants, and how those methods evolved.
- Overview of AI-Debater 2023: The Challenges of Argument Generation Tasks (arXiv, 2024-07-20)
We present the results of the AI-Debater 2023 Challenge held by the Chinese Conference on Affect Computing (CCAC 2023).
In total, 32 competing teams registered for the challenge, from which we received 11 successful submissions.
- The Second DISPLACE Challenge: DIarization of SPeaker and LAnguage in Conversational Environments (arXiv, 2024-06-13)
The dataset contains 158 hours of speech, consisting of both supervised and unsupervised mono-channel far-field recordings.
12 hours of close-field mono-channel recordings were provided for the ASR track conducted on 5 Indian languages.
We have compared our baseline models and the teams' performances on the DISPLACE-2023 evaluation data to emphasize the advancements made in this second version of the challenge.
- Foundational Challenges in Assuring Alignment and Safety of Large Language Models (arXiv, 2024-04-15)
This work identifies 18 foundational challenges in assuring the alignment and safety of large language models (LLMs).
Based on the identified challenges, we pose 200+ concrete research questions.
- VoxSRC 2022: The Fourth VoxCeleb Speaker Recognition Challenge (arXiv, 2023-02-20)
The VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC-22) was held in conjunction with INTERSPEECH 2022.
The goal of this challenge was to evaluate how well state-of-the-art speaker recognition systems can diarise and recognise speakers from speech obtained "in the wild".
- Summary on the ISCSLP 2022 Chinese-English Code-Switching ASR Challenge (arXiv, 2022-10-12)
The ISCSLP 2022 CSASR challenge provided two training sets, the TAL_CSASR corpus and the MagicData-RAMC corpus, along with a development set and a test set for participants.
More than 40 teams participated in this challenge, and the winning team achieved a 16.70% Mixture Error Rate (MER) on the test set.
In this paper, we describe the datasets, the associated baseline systems, and the requirements, and we summarize the CSASR challenge results and the major techniques and tricks used in the submitted systems.
- L3DAS22 Challenge: Learning 3D Audio Sources in a Real Office Environment (arXiv, 2022-02-21)
The L3DAS22 Challenge is aimed at encouraging the development of machine learning strategies for 3D speech enhancement and 3D sound localization and detection.
This challenge improves and extends the tasks of the L3DAS21 edition.
- OLR 2021 Challenge: Datasets, Rules and Baselines (arXiv, 2021-07-23)
The data profile, four tasks, two baselines, and the evaluation principles are introduced in this paper.
In addition to the Language Identification (LID) tasks, multilingual Automatic Speech Recognition (ASR) tasks are introduced to the OLR 2021 Challenge for the first time.
- Oriental Language Recognition (OLR) 2020: Summary and Analysis (arXiv, 2021-07-05)
The fifth Oriental Language Recognition (OLR) Challenge focuses on language recognition in a variety of complex environments.
This paper describes the three tasks, the database profile, and the final results of the challenge.
- Analysing Affective Behavior in the First ABAW 2020 Competition (arXiv, 2020-01-30)
The Affective Behavior Analysis in-the-wild (ABAW) 2020 Competition is the first competition aimed at automatic analysis of the three main behavior tasks.
We describe this competition, to be held in conjunction with the IEEE Conference on Face and Gesture Recognition, May 2020, in Buenos Aires, Argentina.
We outline the evaluation metrics, present both the baseline system and the top-3 performing teams' methodologies per challenge, and finally present their obtained results.