SegReConcat: A Data Augmentation Method for Voice Anonymization Attack
- URL: http://arxiv.org/abs/2508.18907v1
- Date: Tue, 26 Aug 2025 10:26:36 GMT
- Title: SegReConcat: A Data Augmentation Method for Voice Anonymization Attack
- Authors: Ridwan Arefeen, Xiaoxiao Miao, Rong Tong, Aik Beng Ng, Simon See,
- Abstract summary: Anonymization of voice seeks to conceal the identity of the speaker while maintaining the utility of speech data. We propose SegReConcat, a data augmentation method for attacker-side enhancement of automatic speaker verification systems.
- Score: 20.139879210234533
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Anonymization of voice seeks to conceal the identity of the speaker while maintaining the utility of speech data. However, residual speaker cues often persist, posing privacy risks. We propose SegReConcat, a data augmentation method for attacker-side enhancement of automatic speaker verification systems. SegReConcat segments anonymized speech at the word level, rearranges the segments using random or similarity-based strategies to disrupt long-term contextual cues, and concatenates them with the original utterance, allowing an attacker to learn source-speaker traits from multiple perspectives. The proposed method has been evaluated in the VoicePrivacy Attacker Challenge 2024 framework across seven anonymization systems; SegReConcat improves de-anonymization on five of the seven.
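The segment-rearrange-concatenate step described in the abstract can be sketched as follows. This is a minimal illustration in which word tokens stand in for per-word audio segments; the function name, the seeded shuffle, and the length-based "similarity" ordering are illustrative assumptions, not the paper's exact pipeline (real word boundaries would come from forced alignment, and similarity would be computed on acoustic features).

```python
# Hypothetical sketch of the SegReConcat augmentation idea.
import random

def seg_re_concat(word_segments, strategy="random", rng=None):
    """word_segments: list of per-word audio chunks (placeholder strings here).
    Returns the original utterance followed by a rearranged copy of it."""
    rng = rng or random.Random(0)
    rearranged = list(word_segments)
    if strategy == "random":
        rng.shuffle(rearranged)        # disrupt long-term contextual cues
    elif strategy == "similarity":
        rearranged.sort(key=len)       # stand-in for similarity-based ordering
    return word_segments + rearranged  # concatenate with the original utterance

utterance = ["the", "quick", "brown", "fox"]
augmented = seg_re_concat(utterance)
print(augmented[:4])  # the original utterance is kept as the prefix
```

The concatenation keeps the unmodified utterance in every training sample, so the attacker-side speaker verification model sees both the natural context and the shuffled variant of the same (anonymized) voice.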
Related papers
- Content Anonymization for Privacy in Long-form Audio [9.679458545535388]
Long-form audio is commonplace in domains such as interviews, phone calls, and meetings. Given multiple utterances from the same speaker, an attacker could exploit an individual's vocabulary, syntax, and turns of phrase. We propose new content anonymization approaches to address this risk.
arXiv Detail & Related papers (2025-10-14T17:52:50Z)
- A Benchmark for Multi-speaker Anonymization [9.990701310620368]
We present an attempt to provide a multi-speaker anonymization benchmark. We also discuss the privacy leakage of overlapping conversations. Experiments conducted on both non-overlapping simulated and real-world datasets demonstrate the effectiveness of the multi-speaker anonymization system.
arXiv Detail & Related papers (2024-07-08T04:48:43Z)
- Towards Unsupervised Speech Recognition Without Pronunciation Models [57.222729245842054]
In this article, we tackle the challenge of developing ASR systems without paired speech and text corpora. We experimentally demonstrate that an unsupervised speech recognizer can emerge from joint speech-to-speech and text-to-text masked token-infilling. This model surpasses the performance of previous unsupervised ASR models in the lexicon-free setting.
arXiv Detail & Related papers (2024-06-12T16:30:58Z)
- Asynchronous Voice Anonymization Using Adversarial Perturbation On Speaker Embedding [46.25816642820348]
We focus on altering the voice attributes against machine recognition while retaining human perception.
A speech generation framework incorporating a speaker disentanglement mechanism is employed to generate the anonymized speech.
Experiments conducted on the LibriSpeech dataset showed that the speaker attributes were obscured, with human perception preserved, for 60.71% of the processed utterances.
arXiv Detail & Related papers (2024-06-12T13:33:24Z)
- Anonymizing Speech: Evaluating and Designing Speaker Anonymization Techniques [1.2691047660244337]
The growing use of voice user interfaces has led to a surge in the collection and storage of speech data.
This thesis proposes solutions for anonymizing speech and evaluating the degree of the anonymization.
arXiv Detail & Related papers (2023-08-05T16:14:17Z)
- Self-supervised Fine-tuning for Improved Content Representations by Speaker-invariant Clustering [78.2927924732142]
We propose speaker-invariant clustering (Spin) as a novel self-supervised learning method.
Spin disentangles speaker information and preserves content representations with just 45 minutes of fine-tuning on a single GPU.
arXiv Detail & Related papers (2023-05-18T15:59:36Z)
- Protecting gender and identity with disentangled speech representations [49.00162808063399]
We show that protecting gender information in speech is more effective than modelling speaker-identity information.
We present a novel way to encode gender information and disentangle two sensitive biometric identifiers.
arXiv Detail & Related papers (2021-04-22T13:31:41Z)
- Speaker De-identification System using Autoencoders and Adversarial Training [58.720142291102135]
We propose a speaker de-identification system based on adversarial training and autoencoders.
Experimental results show that combining adversarial learning and autoencoders increases the equal error rate of a speaker verification system.
arXiv Detail & Related papers (2020-11-09T19:22:05Z)
- Speaker anonymisation using the McAdams coefficient [19.168733328810962]
This paper reports an approach to anonymisation that, unlike other current approaches, requires no training data.
The proposed solution uses the McAdams coefficient to transform the spectral envelope of speech signals.
Results show that random, optimised transformations can outperform competing solutions in terms of anonymisation.
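The McAdams transformation can be illustrated on the poles of an LPC spectral-envelope model. The sketch below assumes an LPC representation; the function name `mcadams_warp` and the coefficient value `alpha=0.8` are illustrative choices, not the paper's exact configuration.

```python
# Sketch: warp LPC pole angles with a McAdams coefficient alpha.
# Raising each complex pole angle phi to phi**alpha shifts formant
# frequencies while keeping pole radii (and filter stability) unchanged.
import numpy as np

def mcadams_warp(lpc_coeffs, alpha=0.8):
    poles = np.roots(lpc_coeffs)
    warped = []
    for p in poles:
        phi = np.angle(p)
        if 1e-9 < abs(phi) < np.pi - 1e-9:   # complex pole: warp its angle
            phi_new = np.sign(phi) * abs(phi) ** alpha
            warped.append(abs(p) * np.exp(1j * phi_new))
        else:                                # real pole: leave untouched
            warped.append(p)
    return np.real(np.poly(warped))

# Toy 2nd-order envelope: conjugate pole pair at radius 0.9, angle 0.5 rad.
a = np.real(np.poly([0.9 * np.exp(0.5j), 0.9 * np.exp(-0.5j)]))
b = mcadams_warp(a, alpha=0.8)
print(np.abs(np.roots(b)), np.angle(np.roots(b)))
```

In a full anonymization pipeline this warp would be applied frame by frame to LPC coefficients estimated from the speech signal, followed by resynthesis; since the warp depends only on `alpha`, no training data is needed, matching the paper's premise.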
arXiv Detail & Related papers (2020-11-02T17:07:17Z)
- Design Choices for X-vector Based Speaker Anonymization [48.46018902334472]
We present a flexible pseudo-speaker selection technique as a baseline for the first VoicePrivacy Challenge.
Experiments are performed using datasets derived from LibriSpeech to find the optimal combination of design choices in terms of privacy and utility.
arXiv Detail & Related papers (2020-05-18T11:32:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.