The VoicePrivacy 2020 Challenge Evaluation Plan
- URL: http://arxiv.org/abs/2205.07123v1
- Date: Sat, 14 May 2022 20:05:51 GMT
- Title: The VoicePrivacy 2020 Challenge Evaluation Plan
- Authors: Natalia Tomashenko, Brij Mohan Lal Srivastava, Xin Wang, Emmanuel
Vincent, Andreas Nautsch, Junichi Yamagishi, Nicholas Evans, Jose Patino,
Jean-François Bonastre, Paul-Gauthier Noé, Massimiliano Todisco
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The VoicePrivacy Challenge aims to promote the development of privacy
preservation tools for speech technology by gathering a new community to define
the tasks of interest and the evaluation methodology, and benchmarking
solutions through a series of challenges. In this document, we formulate the
voice anonymization task selected for the VoicePrivacy 2020 Challenge and
describe the datasets used for system development and evaluation. We also
present the attack models and the associated objective and subjective
evaluation metrics. We introduce two anonymization baselines and report
objective evaluation results.
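The objective privacy metric in VoicePrivacy-style evaluations is the equal error rate (EER) of an automatic speaker verification system run on anonymized trials (utility is measured separately, via ASR word error rate). As a rough illustration of how an EER is derived from verification scores, here is a minimal sketch; the function name and toy scores are hypothetical, and the challenge's actual recipe relies on pretrained ASV/ASR evaluation models rather than this simplified threshold sweep:

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """EER: the operating point where the false-acceptance rate
    (impostor scores at or above the threshold) equals the
    false-rejection rate (genuine scores below the threshold)."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    fars = np.array([(impostor_scores >= t).mean() for t in thresholds])
    frrs = np.array([(genuine_scores < t).mean() for t in thresholds])
    # Pick the threshold where the two error rates are closest.
    idx = np.argmin(np.abs(fars - frrs))
    return (fars[idx] + frrs[idx]) / 2.0

# Toy example: overlapping score distributions yield a nonzero EER.
genuine = np.array([0.7, 0.8, 0.9, 1.0])
impostor = np.array([0.1, 0.2, 0.3, 0.85])
print(equal_error_rate(genuine, impostor))  # 0.25
```

From the anonymization side, a higher EER against the attacker's verification system indicates better privacy, since the attacker can no longer reliably separate same-speaker from different-speaker trials.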
Related papers
- The First VoicePrivacy Attacker Challenge Evaluation Plan (2024-10-09)
  The First VoicePrivacy Attacker Challenge is a new kind of challenge organized as part of the VoicePrivacy initiative and supported by ICASSP 2025 as the SP Grand Challenge. It focuses on developing attacker systems against voice anonymization, which will be evaluated against a set of anonymization systems submitted to the VoicePrivacy 2024 Challenge.
- The VoicePrivacy 2024 Challenge Evaluation Plan (2024-04-03)
  The task is to develop a voice anonymization system which conceals the speaker's voice identity while protecting linguistic content and emotional states. Participants apply their anonymization systems, run the evaluation scripts, and submit evaluation results and anonymized speech data to the organizers. Results will be presented at a workshop held in conjunction with Interspeech 2024.
- The VoicePrivacy 2022 Challenge Evaluation Plan (2022-03-23)
  Training, development, and evaluation datasets are provided. Participants apply their developed anonymization systems. Results will be presented at a workshop held in conjunction with INTERSPEECH 2022.
- The VoicePrivacy 2020 Challenge: Results and findings (2021-09-01)
  The first VoicePrivacy 2020 Challenge focuses on developing anonymization solutions for speech technology. We provide a systematic overview of the challenge design with an analysis of submitted systems and evaluation results.
- ASVspoof 2021: Automatic Speaker Verification Spoofing and Countermeasures Challenge Evaluation Plan (2021-09-01)
  ASVspoof 2021 is the 4th in a series of biennial, competitive challenges. The goal is to develop countermeasures capable of discriminating between bona fide and spoofed or deepfake speech.
- Introducing the VoicePrivacy Initiative (2020-05-04)
  The VoicePrivacy initiative aims to promote the development of privacy preservation tools for speech technology. We formulate the voice anonymization task selected for the VoicePrivacy 2020 Challenge and describe the datasets used for system development and evaluation.
- A Revised Generative Evaluation of Visual Dialogue (2020-04-20)
  We propose a revised evaluation scheme for the VisDial dataset. We measure consensus between answers generated by the model and a set of relevant answers. We release these sets and code for the revised evaluation scheme as DenseVisDial.
This list is automatically generated from the titles and abstracts of the papers in this site.