Practical Attacks on Voice Spoofing Countermeasures
- URL: http://arxiv.org/abs/2107.14642v1
- Date: Fri, 30 Jul 2021 14:07:49 GMT
- Title: Practical Attacks on Voice Spoofing Countermeasures
- Authors: Andre Kassis and Urs Hengartner
- Abstract summary: We show how a malicious actor may efficiently craft audio samples to bypass voice authentication in its strictest form.
Our results call into question the security of modern voice authentication systems in light of the real threat of attackers bypassing these measures.
- Score: 3.388509725285237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Voice authentication has become an integral part of security-critical
operations, such as bank transactions and call center conversations. The
vulnerability of automatic speaker verification systems (ASVs) to spoofing
attacks instigated the development of countermeasures (CMs), whose task is to
tell apart bonafide and spoofed speech. Together, ASVs and CMs form today's
voice authentication platforms, advertised as an impregnable access control
mechanism. We develop the first practical attack on CMs, and show how a
malicious actor may efficiently craft audio samples to bypass voice
authentication in its strictest form. Previous works have primarily focused on
non-proactive attacks or adversarial strategies against ASVs that do not
produce speech in the victim's voice. The repercussions of our attacks are far
more severe, as the samples we generate sound like the victim, eliminating any
chance of plausible deniability. Moreover, the few existing adversarial attacks
against CMs mistakenly optimize spoofed speech in the feature space and do not
take into account the existence of ASVs, resulting in inferior synthetic audio
that fails in realistic settings. We eliminate these obstacles through our key
technical contribution: a novel joint loss function that enables mounting
advanced adversarial attacks against combined ASV/CM deployments directly in
the time domain. Our adversarial samples achieve concerning black-box success
rates against state-of-the-art authentication platforms (up to 93.57%). Finally, we
perform the first targeted, over-telephony-network attack on CMs, bypassing
several challenges and enabling various potential threats, given the increased
use of voice biometrics in call centers. Our results call into question the
security of modern voice authentication systems in light of the real threat of
attackers bypassing these measures to gain access to users' most valuable
resources.
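
The key technical contribution named in the abstract is a joint loss that optimizes an adversarial perturbation directly on the waveform against the combined ASV/CM deployment. The paper's own code is not reproduced here, so the following is only a minimal PyTorch sketch of what such a joint time-domain objective could look like; `asv_model`, `cm_model`, the "class index 1 = bonafide" label convention, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a PGD-style attack that
# perturbs the raw waveform against a combined ASV/CM pipeline via a
# joint loss. Assumes wav is [batch, samples], asv_model returns speaker
# embeddings [batch, dim], and cm_model returns logits [batch, 2].
import torch
import torch.nn.functional as F

def joint_loss(asv_model, cm_model, wav, enroll_emb, alpha=1.0, beta=1.0):
    """Combine an ASV similarity objective with a CM 'bonafide' objective."""
    test_emb = asv_model(wav)
    # Pull the sample's embedding toward the victim's enrollment embedding.
    asv_term = 1.0 - F.cosine_similarity(test_emb, enroll_emb, dim=-1)
    cm_logits = cm_model(wav)
    # Push the CM toward the 'bonafide' class (assumed to be index 1).
    bonafide = torch.ones(wav.size(0), dtype=torch.long, device=wav.device)
    cm_term = F.cross_entropy(cm_logits, bonafide)
    return alpha * asv_term.mean() + beta * cm_term

def pgd_attack(asv_model, cm_model, wav, enroll_emb,
               eps=0.002, steps=100, lr=1e-4):
    """Iteratively optimize a time-domain perturbation inside an L-inf ball."""
    delta = torch.zeros_like(wav, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = joint_loss(asv_model, cm_model,
                          (wav + delta).clamp(-1.0, 1.0), enroll_emb)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation imperceptible
    return (wav + delta).detach().clamp(-1.0, 1.0)
```

Because the loss is computed on the waveform itself rather than on intermediate features, the optimized audio can be played or transmitted as-is, which is what distinguishes a time-domain attack from the feature-space attacks the abstract criticizes.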
Related papers
- Can DeepFake Speech be Reliably Detected? [17.10792531439146] (2024-10-09T06:13:48Z)
  This work presents the first systematic study of active malicious attacks against state-of-the-art open-source speech detectors.
  The results highlight the urgent need for more robust detection methods in the face of evolving adversarial threats.
- Generalizing Speaker Verification for Spoof Awareness in the Embedding Space [30.094557217931563] (2024-01-20T07:30:22Z)
  ASV systems can be spoofed using various types of adversaries.
  We propose a novel yet simple backend classifier based on deep neural networks.
  Experiments are conducted on the ASVspoof 2019 logical access dataset.
- A Practical Survey on Emerging Threats from AI-driven Voice Attacks: How Vulnerable are Commercial Voice Control Systems? [13.115517847161428] (2023-12-10T21:51:13Z)
  AI-driven audio attacks have revealed new security vulnerabilities in voice control systems.
  Our study assesses the resilience of commercial voice control systems against a spectrum of malicious audio attacks.
  Our results suggest that commercial voice control systems exhibit enhanced resistance to existing threats.
- VOICE-ZEUS: Impersonating Zoom's E2EE-Protected Static Media and Textual Communications via Simple Voice Manipulations [1.7930036479971307] (2023-10-21T02:45:24Z)
  The current implementation of the authentication ceremony in the Zoom application introduces a potential vulnerability that makes it highly susceptible to impersonation attacks.
  This vulnerability may undermine the integrity of E2EE, posing a security risk once E2EE becomes a mandatory feature in the Zoom application.
  We show how an attacker can record and reorder snippets of digits to generate a new security code that compromises a future Zoom meeting.
- The defender's perspective on automatic speaker verification: An overview [87.83259209657292] (2023-05-22T08:01:59Z)
  The reliability of automatic speaker verification (ASV) has been undermined by the emergence of spoofing attacks.
  This paper provides a thorough and systematic overview of the defense methods used against these types of attacks.
- Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual Active Speaker Detection [88.74863771919445] (2022-10-03T08:10:12Z)
  We reveal the vulnerability of AVASD models under audio-only, visual-only, and audio-visual adversarial attacks.
  We also propose a novel audio-visual interaction loss (AVIL) that makes it difficult for attackers to find feasible adversarial examples.
- Voice Spoofing Countermeasures: Taxonomy, State-of-the-art, experimental analysis of generalizability, open challenges, and the way forward [2.393661358372807] (2022-10-02T03:53:37Z)
  We review the literature on spoofing detection using hand-crafted features, deep learning, end-to-end, and universal spoofing countermeasure solutions.
  We report the performance of these countermeasures on several datasets and evaluate them across corpora.
- Voting for the right answer: Adversarial defense for speaker verification [79.10523688806852] (2021-06-15T04:05:28Z)
  ASV is under threat from adversarial attacks, which are perceptually similar to their original counterparts.
  We propose the idea of "voting for the right answer" to prevent risky decisions by ASV in blind-spot areas.
  Experimental results show that our proposed method improves robustness against limited-knowledge attackers.
- Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083] (2021-06-01T07:10:54Z)
  This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms.
  We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
  Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753] (2020-06-05T03:03:06Z)
  We explore the robustness of high-level representations learned by self-supervised models by using them to defend against adversarial attacks.
  Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
- Defense against adversarial attacks on spoofing countermeasures of ASV [95.87555881176529] (2020-03-06T08:08:54Z)
  This paper introduces a passive defense method, spatial smoothing, and a proactive defense method, adversarial training, to mitigate the vulnerability of ASV spoofing countermeasure models (a minimal sketch of such smoothing follows this list).
  The experimental results show that these two defense methods help spoofing countermeasure models counter adversarial examples.
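
As referenced in the last entry above, "spatial smoothing" is a passive defense that is commonly realized as a local median filter over the countermeasure's input feature map, washing out small adversarial perturbations before scoring. The sketch below is illustrative only and is not the cited paper's implementation; the log-spectrogram input, the `extract_log_spec` helper named in the usage comment, and the kernel size are all assumptions.

```python
# Illustrative sketch of a spatial-smoothing defense: median-filter the
# 2D (time x frequency) feature map before it reaches the countermeasure.
import numpy as np
from scipy.ndimage import median_filter

def smooth_features(log_spec: np.ndarray, k: int = 3) -> np.ndarray:
    """Apply a k x k median filter across (time, frequency) to suppress
    small, high-frequency adversarial perturbations in the features."""
    return median_filter(log_spec, size=(k, k))

# Usage (hypothetical pipeline):
#   cm_score = cm_model(smooth_features(extract_log_spec(wav)))
```

The trade-off is the usual one for input-transformation defenses: a larger kernel removes more of the perturbation but also degrades the bonafide features the countermeasure relies on.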
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.