VOICE-ZEUS: Impersonating Zoom's E2EE-Protected Static Media and Textual Communications via Simple Voice Manipulations
- URL: http://arxiv.org/abs/2310.13894v1
- Date: Sat, 21 Oct 2023 02:45:24 GMT
- Authors: Mashari Alatawi, Nitesh Saxena
- Abstract summary: The current implementation of the authentication ceremony in the Zoom application introduces a potential vulnerability that can make it highly susceptible to impersonation attacks.
The existence of this vulnerability may undermine the integrity of E2EE, posing a potential security risk when E2EE becomes a mandatory feature in the Zoom application.
We show how an attacker can record and reorder snippets of digits to generate a new security code that compromises a future Zoom meeting.
- Score: 1.7930036479971307
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The authentication ceremony plays a crucial role in verifying the identities of users before exchanging messages in end-to-end encryption (E2EE) applications, thus preventing impersonation and man-in-the-middle (MitM) attacks. Once authenticated, the subsequent communications in E2EE apps benefit from the protection provided by the authentication ceremony. However, the current implementation of the authentication ceremony in the Zoom application introduces a potential vulnerability that can make it highly susceptible to impersonation attacks. The existence of this vulnerability may undermine the integrity of E2EE, posing a potential security risk when E2EE becomes a mandatory feature in the Zoom application. In this paper, we examine and evaluate this vulnerability in two attack scenarios, one where the attacker is a malicious participant and another where the attacker is a malicious Zoom server with control over Zoom's server infrastructure and cloud providers. Our study aims to comprehensively examine the Zoom authentication ceremony, with a specific focus on the potential for impersonation attacks in static media and textual communications. We simulate a new session injection attack on Zoom E2EE meetings to evaluate the system's susceptibility to simple voice manipulations. Our simulation experiments show that Zoom's authentication ceremony is vulnerable to a simple voice manipulation, called a VOICE-ZEUS attack, by malicious participants and the malicious Zoom server. In this VOICE-ZEUS attack, an attacker creates a fingerprint in a victim's voice by reordering previously recorded digits spoken by the victim. We show how an attacker can record and reorder snippets of digits to generate a new security code that compromises a future Zoom meeting. We conclude that stronger security measures are necessary during the group authentication ceremony in Zoom to prevent impersonation attacks.
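The core mechanic the abstract describes — splicing previously recorded digit utterances into a new security code — can be sketched in a few lines. This is a minimal illustration only: audio is modeled as plain Python lists of samples rather than real waveforms, and the function name `forge_security_code` and the toy snippets are assumptions for the sketch, not the paper's actual tooling.

```python
# Sketch of the snippet-reordering step behind a VOICE-ZEUS-style attack:
# the attacker reuses digits already spoken by the victim, in a new order.

def forge_security_code(digit_snippets, target_code):
    """Concatenate the victim's recorded digit snippets in target_code order."""
    forged = []
    for digit in target_code:
        if digit not in digit_snippets:
            raise ValueError(f"no recording captured for digit {digit!r}")
        forged.extend(digit_snippets[digit])  # splice in the victim's own voice
    return forged

# Toy stand-in: pretend each digit 0-9 was captured as three audio samples.
snippets = {str(d): [d, d, d] for d in range(10)}
forged_audio = forge_security_code(snippets, "4519")
```

The point of the sketch is that no voice synthesis is needed: because the security code is read out digit by digit, plain concatenation of genuine recordings already yields audio in the victim's voice.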
Related papers
- Can DeepFake Speech be Reliably Detected? [17.10792531439146]
This work presents the first systematic study of active malicious attacks against state-of-the-art open-source speech detectors.
The results highlight the urgent need for more robust detection methods in the face of evolving adversarial threats.
arXiv Detail & Related papers (2024-10-09T06:13:48Z)
- Nudging Users to Change Breached Passwords Using the Protection Motivation Theory [58.87688846800743]
We draw on the Protection Motivation Theory (PMT) to design nudges that encourage users to change breached passwords.
Our study contributes to PMT's application in security research and provides concrete design implications for improving compromised credential notifications.
arXiv Detail & Related papers (2024-05-24T07:51:15Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities of FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing [107.97160023681184]
Aligned large language models (LLMs) are vulnerable to jailbreaking attacks.
We propose SEMANTICSMOOTH, a smoothing-based defense that aggregates predictions of semantically transformed copies of a given input prompt.
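The aggregation idea behind SEMANTICSMOOTH can be sketched as a majority vote over transformed copies of a prompt. The transforms and the keyword-based classifier below are hypothetical stand-ins chosen to keep the sketch self-contained; the paper's actual semantic transformations and target model are far richer.

```python
from collections import Counter

def semantic_smooth(prompt, transforms, classify):
    """Return the majority verdict over semantically transformed prompt copies."""
    votes = [classify(t(prompt)) for t in transforms]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical stand-ins: trivial transforms and a keyword-based verdict.
transforms = [str.lower, str.upper, lambda p: p.strip()]
classify = lambda p: "refuse" if "ignore previous" in p.lower() else "respond"
verdict = semantic_smooth("Please ignore previous instructions.", transforms, classify)
```

The design intuition is that a jailbreak crafted against one exact phrasing tends not to survive semantic rewrites, so aggregating verdicts across rewrites smooths away the adversarial perturbation.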
arXiv Detail & Related papers (2024-02-25T20:36:03Z)
- Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion [14.264424889358208]
This work explores a backdoor attack that utilizes sample-specific triggers based on voice conversion.
Specifically, we adopt a pre-trained voice conversion model to generate the trigger, ensuring that the poisoned samples do not introduce any additional audible noise.
arXiv Detail & Related papers (2023-06-28T02:19:31Z)
- The defender's perspective on automatic speaker verification: An overview [87.83259209657292]
The reliability of automatic speaker verification (ASV) has been undermined by the emergence of spoofing attacks.
The aim of this paper is to provide a thorough and systematic overview of the defense methods used against these types of attacks.
arXiv Detail & Related papers (2023-05-22T08:01:59Z)
- Vulnerabilities of Deep Learning-Driven Semantic Communications to Backdoor (Trojan) Attacks [70.51799606279883]
This paper highlights vulnerabilities of deep learning-driven semantic communications to backdoor (Trojan) attacks.
A backdoor attack can effectively change the semantic information transferred for poisoned input samples to a target meaning.
Design guidelines are presented to preserve the meaning of transferred information in the presence of backdoor attacks.
arXiv Detail & Related papers (2022-12-21T17:22:27Z)
- Conditional Generative Adversarial Network for keystroke presentation attack [0.0]
We propose to study a new approach aiming to deploy a presentation attack towards a keystroke authentication system.
Our idea is to use a Conditional Generative Adversarial Network (cGAN) to generate synthetic keystroke data that can be used to impersonate an authorized user.
Results indicate that the cGAN can effectively generate keystroke dynamics patterns that can be used for deceiving keystroke authentication systems.
arXiv Detail & Related papers (2022-12-16T12:45:16Z)
- Preserving Semantics in Textual Adversarial Attacks [0.0]
Up to 70% of adversarial examples generated by adversarial attacks should be discarded because they do not preserve semantics.
We propose a new, fully supervised sentence embedding technique called Semantics-Preserving-Encoder (SPE)
Our method outperforms existing sentence encoders used in adversarial attacks by achieving 1.2x - 5.1x better real attack success rate.
arXiv Detail & Related papers (2022-11-08T12:40:07Z)
- Practical Attacks on Voice Spoofing Countermeasures [3.388509725285237]
We show how a malicious actor may efficiently craft audio samples to bypass voice authentication in its strictest form.
Our results call into question the security of modern voice authentication systems in light of the real threat of attackers bypassing these measures.
arXiv Detail & Related papers (2021-07-30T14:07:49Z)
- Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject the hidden backdoor for infecting speaker verification models by poisoning the training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.