Acoustic Cybersecurity: Exploiting Voice-Activated Systems
- URL: http://arxiv.org/abs/2312.00039v1
- Date: Thu, 23 Nov 2023 02:26:11 GMT
- Title: Acoustic Cybersecurity: Exploiting Voice-Activated Systems
- Authors: Forrest McKee and David Noever
- Abstract summary: Our research demonstrates the feasibility of these attacks across platforms including Amazon's Alexa, Android, iOS, and Cortana.
We quantitatively show that attack success rates hover around 60% and that devices can be activated remotely from over 100 feet away.
These attacks threaten critical infrastructure, emphasizing the need for multifaceted defensive strategies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, we investigate the emerging threat of inaudible acoustic
attacks targeting digital voice assistants, a critical concern given projections
that these devices will outnumber the global population by 2024. Our research
demonstrates the feasibility of these attacks across platforms including Amazon's
Alexa, Android, iOS, and Cortana, revealing significant vulnerabilities in
smart devices. The twelve attack vectors identified include successful
manipulation of smart home devices and automotive systems, potential breaches
in military communication, and challenges in critical infrastructure security.
We quantitatively show that attack success rates hover around 60%, and that
devices can be activated remotely from over 100 feet away. These attacks
threaten critical infrastructure, underscoring the need for multifaceted
defensive strategies combining acoustic shielding, advanced signal processing,
machine learning, and robust user authentication to mitigate these risks.
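The abstract does not spell out the attack mechanism, but inaudible-command attacks of this kind (e.g., DolphinAttack-style near-ultrasound injection) typically amplitude-modulate a recorded voice command onto a carrier above human hearing; nonlinearity in the microphone front end then demodulates the command back into the audible band where the assistant's recognizer can process it. Below is a minimal sketch of that modulation step, assuming a mono 16-bit WAV command, a 25 kHz carrier, and a 96 kHz output rate (all illustrative choices, not parameters reported in the paper):

```python
import numpy as np
from scipy.io import wavfile

def modulate_ultrasonic(command_wav, output_wav, carrier_hz=25_000, fs_out=96_000):
    """Amplitude-modulate a voice command onto a near-ultrasonic carrier.

    Illustrative sketch only: carrier_hz, fs_out, and the modulation depth
    are assumptions, not values from the paper, and playback requires a
    transducer that can reproduce frequencies above ~20 kHz.
    """
    rate, cmd = wavfile.read(command_wav)          # assumes a mono WAV file
    cmd = cmd.astype(np.float64)
    cmd /= np.max(np.abs(cmd)) + 1e-12             # normalize to [-1, 1]

    # Resample the command to the higher output rate (linear interpolation).
    t_in = np.arange(len(cmd)) / rate
    t_out = np.arange(0.0, t_in[-1], 1.0 / fs_out)
    cmd_hi = np.interp(t_out, t_in, cmd)

    # Standard AM: (1 + m * signal) * carrier. A nonlinear microphone front
    # end effectively squares the incoming pressure wave, recreating a
    # baseband copy of the command for the assistant's recognizer.
    m = 0.8                                        # modulation depth (assumed)
    carrier = np.sin(2.0 * np.pi * carrier_hz * t_out)
    am = (1.0 + m * cmd_hi) * carrier
    am /= np.max(np.abs(am))

    wavfile.write(output_wav, fs_out, (am * 32767).astype(np.int16))
```

On the defensive side, one concrete instance of the "advanced signal processing" the abstract calls for (again a sketch, not the authors' implementation) is to flag captures whose spectral energy above the voice band is disproportionately high, since legitimate speech carries little power there:

```python
import numpy as np

def flags_ultrasound(audio, fs, voice_cutoff_hz=8_000, ratio_threshold=0.1):
    """Return True if the fraction of spectral energy above the voice band
    exceeds ratio_threshold. Cutoff and threshold are illustrative values."""
    spectrum = np.abs(np.fft.rfft(audio.astype(np.float64))) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    high_band = spectrum[freqs >= voice_cutoff_hz].sum()
    return high_band / (spectrum.sum() + 1e-12) > ratio_threshold
```

Note that such a check only helps if the capture path samples fast enough to observe the carrier or its sidebands; typical 16 kHz assistant audio pipelines cannot, which is one reason hardware-level measures such as acoustic shielding also appear in the abstract's defense list.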
Related papers
- Exploring Vulnerabilities and Protections in Large Language Models: A Survey [1.6179784294541053]
This survey examines the security challenges of Large Language Models (LLMs).
It focuses on two main areas: Prompt Hacking and Adversarial Attacks.
By detailing these security issues, the survey contributes to the broader discussion on creating resilient AI systems.
arXiv Detail & Related papers (2024-06-01T00:11:09Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed FRS vulnerabilities to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z) - A Practical Survey on Emerging Threats from AI-driven Voice Attacks: How Vulnerable are Commercial Voice Control Systems? [13.115517847161428]
- A Practical Survey on Emerging Threats from AI-driven Voice Attacks: How Vulnerable are Commercial Voice Control Systems? [13.115517847161428]
AI-driven audio attacks have revealed new security vulnerabilities in voice control systems.
Our study endeavors to assess the resilience of commercial voice control systems against a spectrum of malicious audio attacks.
Our results suggest that commercial voice control systems exhibit enhanced resistance to existing threats.
arXiv Detail & Related papers (2023-12-10T21:51:13Z) - Adversarial Agents For Attacking Inaudible Voice Activated Devices [0.0]
The paper applies reinforcement learning to novel Internet of Things (IoT) configurations.
Our analysis of inaudible attacks on voice-activated devices confirms an alarming risk factor of 7.6 out of 10.
By 2024, this new attack surface might encompass more digital voice assistants than people on the planet.
arXiv Detail & Related papers (2023-07-23T02:18:30Z) - Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual
Active Speaker Detection [88.74863771919445]
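The "Adversarial Agents" entry above applies reinforcement learning to attack configurations. As a heavily simplified illustration (a multi-armed-bandit reduction of the idea; the environment, candidate carrier frequencies, and success probabilities are invented for this sketch, not taken from the paper), an agent can learn which configuration most reliably activates a simulated device:

```python
import numpy as np

# Hypothetical toy: the agent picks an ultrasonic carrier frequency and
# learns, from simulated activation outcomes, which one works best.
rng = np.random.default_rng(1)

carriers_khz = [21, 23, 25, 27, 29]          # candidate actions (assumed)
true_success = [0.2, 0.4, 0.6, 0.5, 0.1]     # unknown to the agent (invented)

q = np.zeros(len(carriers_khz))              # action-value estimates
counts = np.zeros(len(carriers_khz))
epsilon = 0.1                                # exploration rate

for episode in range(2000):
    if rng.random() < epsilon:
        a = int(rng.integers(len(carriers_khz)))   # explore
    else:
        a = int(np.argmax(q))                      # exploit best estimate
    reward = float(rng.random() < true_success[a]) # 1 if device activates
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]            # incremental mean update

best = int(np.argmax(q))
print(f"learned best carrier: {carriers_khz[best]} kHz, est. rate {q[best]:.2f}")
```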
- Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual Active Speaker Detection [88.74863771919445]
We reveal the vulnerability of AVASD models under audio-only, visual-only, and audio-visual adversarial attacks.
We also propose a novel audio-visual interaction loss (AVIL) that makes it difficult for attackers to find feasible adversarial examples.
arXiv Detail & Related papers (2022-10-03T08:10:12Z) - SoK: A Study of the Security on Voice Processing Systems [2.596028864336544]
We identify and classify a range of unique attacks on voice processing systems.
Modern voice processing systems are built around today's most frequently used machine learning systems and deep neural networks.
arXiv Detail & Related papers (2021-12-24T21:47:06Z) - Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the
Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z) - SHARKS: Smart Hacking Approaches for RisK Scanning in Internet-of-Things
and Cyber-Physical Systems based on Machine Learning [5.265938973293016]
Cyber-physical systems (CPS) and Internet-of-Things (IoT) devices are increasingly being deployed across multiple functionalities.
These devices are inherently insecure across their software, hardware, and network stacks.
We present an innovative technique for detecting unknown system vulnerabilities, managing these vulnerabilities, and improving incident response.
arXiv Detail & Related papers (2021-01-07T22:01:30Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber
Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z) - Challenges and Countermeasures for Adversarial Attacks on Deep
Reinforcement Learning [48.49658986576776]
Deep Reinforcement Learning (DRL) has numerous real-world applications thanks to its outstanding ability to adapt to its surrounding environment.
Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications.
This paper presents emerging attacks in DRL-based systems and the potential countermeasures to defend against these attacks.
arXiv Detail & Related papers (2020-01-27T10:53:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.