Improving the Speaker Anonymization Evaluation's Robustness to Target Speakers with Adversarial Learning
- URL: http://arxiv.org/abs/2508.09803v1
- Date: Wed, 13 Aug 2025 13:38:09 GMT
- Title: Improving the Speaker Anonymization Evaluation's Robustness to Target Speakers with Adversarial Learning
- Authors: Carlos Franzreb, Arnab Das, Tim Polzehl, Sebastian Möller
- Abstract summary: We propose to add a target classifier that measures the influence of target speaker information in the evaluation. Experiments demonstrate that this approach is effective for multiple anonymizers.
- Score: 12.642704894600602
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The current privacy evaluation for speaker anonymization often overestimates privacy when a same-gender target selection algorithm (TSA) is used, although this TSA leaks the speaker's gender and should hence be more vulnerable. We hypothesize that this occurs because the evaluation does not account for the fact that anonymized speech contains information from both the source and target speakers. To address this, we propose to add a target classifier that measures the influence of target speaker information in the evaluation, which can also be removed with adversarial learning. Experiments demonstrate that this approach is effective for multiple anonymizers, particularly when using a same-gender TSA, leading to a more reliable assessment.
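The abstract's core idea (a target-speaker classifier whose influence is removed from the shared representation via adversarial learning) can be illustrated with a toy numpy sketch. This is not the paper's actual architecture: the embedding, classifier weights, labels, and the weight `lam` are all hypothetical placeholders, and the adversarial term is expressed as a simple gradient-reversal objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy(logits, label):
    # Softmax cross-entropy for a single example; returns loss and probabilities.
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label]), p

# Hypothetical toy setup: one shared speaker embedding feeds two linear
# classifiers -- the usual source-speaker classifier and the proposed
# target-speaker classifier.
emb = rng.normal(size=8)            # shared embedding (placeholder)
W_src = rng.normal(size=(4, 8))     # source-speaker classifier weights
W_tgt = rng.normal(size=(4, 8))     # target-speaker classifier weights

loss_src, p_src = cross_entropy(W_src @ emb, label=1)
loss_tgt, p_tgt = cross_entropy(W_tgt @ emb, label=2)

# Adversarial objective for the shared embedding: minimize the source loss
# while *maximizing* the target loss (weighted by lam), pushing the
# embedding to discard target-speaker information.
lam = 0.5
adv_loss = loss_src - lam * loss_tgt

def grad_embedding(W, p, label):
    # Gradient of the cross-entropy w.r.t. the embedding for logits = W @ emb.
    onehot = np.eye(len(p))[label]
    return W.T @ (p - onehot)

# Gradient reversal: the target branch's gradient enters the shared
# embedding scaled by -lam, i.e. with its sign flipped.
g = grad_embedding(W_src, p_src, 1) - lam * grad_embedding(W_tgt, p_tgt, 2)
```

In a real system both classifiers would be trained to convergence while the gradient-reversed term only shapes the shared representation; the sketch just makes the sign flip on the target branch explicit.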
Related papers
- Target speaker anonymization in multi-speaker recordings [35.23403922131853]
This study addresses the significant challenge of speaker anonymization within multi-speaker conversational audio. This scenario is highly relevant in contexts like call centers, where customer privacy necessitates anonymizing only the customer's voice. This work aims to bridge these gaps by exploring effective strategies for targeted speaker anonymization in conversational audio.
arXiv Detail & Related papers (2025-10-10T11:59:45Z)
- VoxGuard: Evaluating User and Attribute Privacy in Speech via Membership Inference Attacks [51.68795949691009]
We introduce VoxGuard, a framework grounded in differential privacy and membership inference. For attributes, we show that simple transparent attacks recover gender and accent with near-perfect accuracy even after anonymization. Our results demonstrate that EER substantially underestimates leakage, highlighting the need for low-FPR evaluation.
arXiv Detail & Related papers (2025-09-22T20:57:48Z)
- Multi-Target Backdoor Attacks Against Speaker Recognition [60.8399833165557]
We propose a multi-target backdoor attack against speaker identification using position-independent clicking sounds. Our method targets up to 50 speakers simultaneously, achieving success rates of up to 95.04%.
arXiv Detail & Related papers (2025-08-12T01:52:30Z)
- Anonymizing Speech: Evaluating and Designing Speaker Anonymization Techniques [1.2691047660244337]
The growing use of voice user interfaces has led to a surge in the collection and storage of speech data.
This thesis proposes solutions for anonymizing speech and evaluating the degree of the anonymization.
arXiv Detail & Related papers (2023-08-05T16:14:17Z)
- When the Majority is Wrong: Modeling Annotator Disagreement for Subjective Tasks [45.14664901245331]
A crucial problem in hate speech detection is determining whether a statement is offensive to a demographic group.
We construct a model that predicts individual annotator ratings on potentially offensive text.
We find that annotator ratings can be predicted using their demographic information and opinions on online content.
arXiv Detail & Related papers (2023-05-11T07:55:20Z)
- Evaluation of Speaker Anonymization on Emotional Speech [9.223908421919733]
Speech data carries a range of personal information, such as the speaker's identity and emotional state.
Current studies have addressed the topic of preserving speech privacy.
The VoicePrivacy 2020 Challenge (VPC) focuses on speaker anonymization.
arXiv Detail & Related papers (2023-04-15T20:50:29Z)
- Differentially Private Speaker Anonymization [44.90119821614047]
Sharing real-world speech utterances is key to the training and deployment of voice-based services.
Speaker anonymization aims to remove speaker information from a speech utterance while leaving its linguistic and prosodic attributes intact.
We show that disentanglement is indeed not perfect: linguistic and prosodic attributes still contain speaker information.
arXiv Detail & Related papers (2022-02-23T23:20:30Z)
- Membership Inference Attacks Against Self-supervised Speech Models [62.73937175625953]
Self-supervised learning (SSL) on continuous speech has started gaining attention.
We present the first privacy analysis on several SSL speech models using Membership Inference Attacks (MIA) under black-box access.
arXiv Detail & Related papers (2021-11-09T13:00:24Z)
- Evaluating X-vector-based Speaker Anonymization under White-box Assessment [0.0]
In the scenario of the Voice Privacy challenge, anonymization is achieved by converting all utterances from a source speaker to match the same target identity.
This article proposes to constrain the target selection to a specific identity to evaluate the extreme threat under a white-box assessment.
arXiv Detail & Related papers (2021-09-24T13:08:07Z)
- Speaker De-identification System using Autoencoders and Adversarial Training [58.720142291102135]
We propose a speaker de-identification system based on adversarial training and autoencoders.
Experimental results show that combining adversarial learning and autoencoders increases the equal error rate of a speaker verification system.
arXiv Detail & Related papers (2020-11-09T19:22:05Z)
- Design Choices for X-vector Based Speaker Anonymization [48.46018902334472]
We present a flexible pseudo-speaker selection technique as a baseline for the first VoicePrivacy Challenge.
Experiments are performed using datasets derived from LibriSpeech to find the optimal combination of design choices in terms of privacy and utility.
arXiv Detail & Related papers (2020-05-18T11:32:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.