The VoicePrivacy 2020 Challenge: Results and findings
- URL: http://arxiv.org/abs/2109.00648v1
- Date: Wed, 1 Sep 2021 23:40:38 GMT
- Title: The VoicePrivacy 2020 Challenge: Results and findings
- Authors: Natalia Tomashenko, Xin Wang, Emmanuel Vincent, Jose Patino, Brij
Mohan Lal Srivastava, Paul-Gauthier Noé, Andreas Nautsch, Nicholas Evans,
Junichi Yamagishi, Benjamin O'Brien, Anaïs Chanclu, Jean-François
Bonastre, Massimiliano Todisco, Mohamed Maouche
- Abstract summary: The first VoicePrivacy 2020 Challenge focuses on developing anonymization solutions for speech technology.
We provide a systematic overview of the challenge design with an analysis of submitted systems and evaluation results.
- Score: 60.13468541150838
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents the results and analyses stemming from the first
VoicePrivacy 2020 Challenge which focuses on developing anonymization solutions
for speech technology. We provide a systematic overview of the challenge design
with an analysis of submitted systems and evaluation results. In particular, we
describe the voice anonymization task and datasets used for system development
and evaluation. Also, we present different attack models and the associated
objective and subjective evaluation metrics. We introduce two anonymization
baselines and provide a summary description of the anonymization systems
developed by the challenge participants. We report objective and subjective
evaluation results for baseline and submitted systems. In addition, we present
experimental results for alternative privacy metrics and attack models
developed as a part of the post-evaluation analysis. Finally, we summarize our
insights and observations that will influence the design of the next
VoicePrivacy challenge edition and some directions for future voice
anonymization research.
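The objective privacy evaluation in VoicePrivacy 2020 centers on the equal error rate (EER) of an attacker's automatic speaker verification (ASV) system: the closer the attacker's EER on anonymized speech is to 50%, the better the privacy. As a minimal illustrative sketch (not the challenge's official evaluation pipeline; the function name and the toy scores are assumptions), EER can be estimated from ASV trial scores as follows:

```python
import numpy as np

def compute_eer(target_scores, nontarget_scores):
    """Approximate equal error rate (EER): the operating point where the
    false-rejection rate (FRR) equals the false-acceptance rate (FAR)."""
    scores = np.concatenate([target_scores, nontarget_scores])
    labels = np.concatenate([np.ones_like(target_scores),
                             np.zeros_like(nontarget_scores)])
    order = np.argsort(scores)          # sweep the decision threshold upward
    labels = labels[order]
    # FRR: fraction of target trials scored at or below the threshold
    frr = np.cumsum(labels) / len(target_scores)
    # FAR: fraction of non-target trials scored above the threshold
    far = 1.0 - np.cumsum(1.0 - labels) / len(nontarget_scores)
    i = np.argmin(np.abs(frr - far))    # closest FRR/FAR crossing point
    return (frr[i] + far[i]) / 2.0

# Toy check: well-separated scores give a low EER; effective anonymization
# pushes the attacker's EER toward 0.5 (chance-level verification).
rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 1000)   # same-speaker trial scores
impostor = rng.normal(0.0, 1.0, 1000)  # different-speaker trial scores
print(f"EER = {compute_eer(genuine, impostor):.3f}")
```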
Related papers
- Towards Personalized Evaluation of Large Language Models with An
Anonymous Crowd-Sourcing Platform [64.76104135495576]
We propose a novel anonymous crowd-sourcing evaluation platform, BingJian, for large language models.
Through this platform, users have the opportunity to submit their questions, testing the models on a personalized and potentially broader range of capabilities.
arXiv Detail & Related papers (2024-03-13T07:31:20Z)
- Voice Anonymization for All -- Bias Evaluation of the Voice Privacy Challenge Baseline System [0.48342038441006807]
This study investigates bias in voice anonymization systems within the context of the Voice Privacy Challenge.
We curate a novel benchmark dataset to assess performance disparities among speaker subgroups based on sex and dialect.
arXiv Detail & Related papers (2023-11-27T13:26:49Z)
- The VoicePrivacy 2020 Challenge Evaluation Plan [53.14981205333593]
The VoicePrivacy Challenge aims to promote the development of privacy preservation tools for speech technology.
We formulate the voice anonymization task selected for the VoicePrivacy 2020 Challenge and describe the datasets used for system development and evaluation.
arXiv Detail & Related papers (2022-05-14T20:05:51Z)
- The VoicePrivacy 2022 Challenge Evaluation Plan [46.807999940446294]
Training, development and evaluation datasets are provided.
Participants apply their anonymization systems to these datasets.
Results will be presented at a workshop held in conjunction with INTERSPEECH 2022.
arXiv Detail & Related papers (2022-03-23T15:05:18Z)
- Evaluation of Summarization Systems across Gender, Age, and Race [0.0]
We show that summary evaluation is sensitive to protected attributes.
This can severely bias system development and evaluation, leading us to build models that cater for some groups rather than others.
arXiv Detail & Related papers (2021-10-08T21:30:20Z)
- Introducing the VoicePrivacy Initiative [53.14981205333593]
The VoicePrivacy initiative aims to promote the development of privacy preservation tools for speech technology.
We formulate the voice anonymization task selected for the VoicePrivacy 2020 Challenge and describe the datasets used for system development and evaluation.
arXiv Detail & Related papers (2020-05-04T11:07:52Z)
- A Revised Generative Evaluation of Visual Dialogue [80.17353102854405]
We propose a revised evaluation scheme for the VisDial dataset.
We measure consensus between answers generated by the model and a set of relevant answers.
We release these sets and code for the revised evaluation scheme as DenseVisDial.
arXiv Detail & Related papers (2020-04-20T13:26:45Z)