Introducing the VoicePrivacy Initiative
- URL: http://arxiv.org/abs/2005.01387v3
- Date: Tue, 11 Aug 2020 22:02:45 GMT
- Title: Introducing the VoicePrivacy Initiative
- Authors: Natalia Tomashenko, Brij Mohan Lal Srivastava, Xin Wang, Emmanuel
Vincent, Andreas Nautsch, Junichi Yamagishi, Nicholas Evans, Jose Patino,
Jean-François Bonastre, Paul-Gauthier Noé, Massimiliano Todisco
- Abstract summary: The VoicePrivacy initiative aims to promote the development of privacy preservation tools for speech technology.
We formulate the voice anonymization task selected for the VoicePrivacy 2020 Challenge and describe the datasets used for system development and evaluation.
- Score: 53.14981205333593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The VoicePrivacy initiative aims to promote the development of privacy
preservation tools for speech technology by gathering a new community to define
the tasks of interest and the evaluation methodology, and benchmarking
solutions through a series of challenges. In this paper, we formulate the voice
anonymization task selected for the VoicePrivacy 2020 Challenge and describe
the datasets used for system development and evaluation. We also present the
attack models and the associated objective and subjective evaluation metrics.
We introduce two anonymization baselines and report objective evaluation
results.
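The abstract mentions objective evaluation metrics for anonymization. In speaker-anonymization evaluation, privacy is commonly measured with the equal error rate (EER) of a speaker verification system attacking the anonymized speech: a higher EER means the attacker distinguishes speakers less well. As a hedged illustration only (the function name and interface below are assumptions, not the challenge's official scripts), a minimal EER computation from verification scores might look like:

```python
import numpy as np

def compute_eer(target_scores, nontarget_scores):
    """Equal error rate: operating point where the false-rejection
    rate of target (same-speaker) trials equals the false-acceptance
    rate of non-target (different-speaker) trials."""
    scores = np.concatenate([target_scores, nontarget_scores])
    labels = np.concatenate([np.ones(len(target_scores)),
                             np.zeros(len(nontarget_scores))])
    order = np.argsort(scores)          # sweep threshold over sorted scores
    labels = labels[order]
    # Fraction of targets scoring at or below each candidate threshold.
    fnr = np.cumsum(labels) / labels.sum()
    # Fraction of non-targets scoring above each candidate threshold.
    fpr = 1.0 - np.cumsum(1.0 - labels) / (1.0 - labels).sum()
    idx = np.argmin(np.abs(fnr - fpr))  # closest crossing point
    return (fnr[idx] + fpr[idx]) / 2.0

# Perfectly separated scores give EER = 0 (attacker always succeeds);
# fully overlapping distributions push EER toward 0.5 (chance level,
# i.e. strong anonymization).
targets = np.array([2.0, 2.1, 2.2, 2.3])
nontargets = np.array([0.0, 0.1, 0.2, 0.3])
print(compute_eer(targets, nontargets))
```

This is only a sketch of the metric family; the challenge additionally uses utility metrics (e.g. ASR word error rate) and subjective listening tests, which this snippet does not cover.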
Related papers
- Recent Trends in Personalized Dialogue Generation: A Review of Datasets, Methodologies, and Evaluations [25.115319934091282]
This paper seeks to survey the recent landscape of personalized dialogue generation.
Covering 22 datasets, we highlight benchmark datasets and newer ones enriched with additional features.
We analyze 17 seminal works from top conferences between 2021 and 2023 and identify five distinct types of problems.
arXiv Detail & Related papers (2024-05-28T09:04:13Z)
- The VoicePrivacy 2024 Challenge Evaluation Plan [40.2768875178317]
The challenge is to develop a voice anonymization system that conceals the speaker's voice identity while preserving linguistic content and emotional states.
Participants apply their developed anonymization systems, run evaluation scripts and submit evaluation results and anonymized speech data to the organizers.
Results will be presented at a workshop held in conjunction with Interspeech 2024.
arXiv Detail & Related papers (2024-04-03T12:20:51Z)
- Towards Personalized Evaluation of Large Language Models with An Anonymous Crowd-Sourcing Platform [64.76104135495576]
We propose a novel anonymous crowd-sourcing evaluation platform, BingJian, for large language models.
Through this platform, users have the opportunity to submit their questions, testing the models on a personalized and potentially broader range of capabilities.
arXiv Detail & Related papers (2024-03-13T07:31:20Z)
- The VoicePrivacy 2020 Challenge Evaluation Plan [53.14981205333593]
The VoicePrivacy Challenge aims to promote the development of privacy preservation tools for speech technology.
We formulate the voice anonymization task selected for the VoicePrivacy 2020 Challenge and describe the datasets used for system development and evaluation.
arXiv Detail & Related papers (2022-05-14T20:05:51Z)
- The VoicePrivacy 2022 Challenge Evaluation Plan [46.807999940446294]
Training, development and evaluation datasets are provided.
Participants apply their developed anonymization systems.
Results will be presented at a workshop held in conjunction with INTERSPEECH 2022.
arXiv Detail & Related papers (2022-03-23T15:05:18Z)
- The VoicePrivacy 2020 Challenge: Results and findings [60.13468541150838]
The first VoicePrivacy challenge, held in 2020, focused on developing anonymization solutions for speech technology.
We provide a systematic overview of the challenge design with an analysis of submitted systems and evaluation results.
arXiv Detail & Related papers (2021-09-01T23:40:38Z)
- A Revised Generative Evaluation of Visual Dialogue [80.17353102854405]
We propose a revised evaluation scheme for the VisDial dataset.
We measure consensus between answers generated by the model and a set of relevant answers.
We release these sets and code for the revised evaluation scheme as DenseVisDial.
arXiv Detail & Related papers (2020-04-20T13:26:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.