Adversarial Agents For Attacking Inaudible Voice Activated Devices
- URL: http://arxiv.org/abs/2307.12204v2
- Date: Tue, 25 Jul 2023 15:16:40 GMT
- Title: Adversarial Agents For Attacking Inaudible Voice Activated Devices
- Authors: Forrest McKee and David Noever
- Abstract summary: The paper applies reinforcement learning to novel Internet of Things configurations.
Our analysis of inaudible attacks on voice-activated devices confirms an alarming risk factor of 7.6 out of 10.
By 2024, this new attack surface might encompass more digital voice assistants than people on the planet.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The paper applies reinforcement learning to novel Internet of Things
configurations. Our analysis of inaudible attacks on voice-activated devices
confirms an alarming risk factor of 7.6 out of 10, highlighting significant
security vulnerabilities scored independently by the NIST National Vulnerability
Database (NVD). Our baseline network model showcases a scenario in which an
attacker uses inaudible voice commands to gain unauthorized access to
confidential information on a secured laptop. We simulated many attack
scenarios on this baseline network model, revealing the potential for mass
exploitation of interconnected devices to discover and own privileged
information through physical access without adding new hardware or amplifying
device skills. Using Microsoft's CyberBattleSim framework, we evaluated six
reinforcement learning algorithms and found that Deep-Q learning with
exploitation proved optimal, leading to rapid ownership of all nodes in the fewest
steps. Our findings underscore the critical need for understanding
non-conventional networks and new cybersecurity measures in an ever-expanding
digital landscape, particularly those characterized by mobile devices, voice
activation, and non-linear microphones susceptible to malicious actors
operating stealth attacks in the near-ultrasound or inaudible ranges. By 2024,
this new attack surface might encompass more digital voice assistants than
people on the planet, yet offer fewer remedies than conventional patching or
firmware fixes, since the inaudible attacks arise inherently from the microphone
design and digital signal processing.
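To make the reinforcement-learning loop concrete, here is a minimal sketch in the spirit of the paper's setup: an epsilon-greedy agent learns to take ownership of every node in a toy device graph. Everything in it is an illustrative assumption rather than the paper's actual code: the six-node topology, rewards, and hyperparameters are invented, and a tabular Q-learner stands in for CyberBattleSim's Deep-Q implementation.

```python
# Hypothetical sketch: epsilon-greedy Q-learning over a toy lateral-movement
# graph, standing in for the paper's CyberBattleSim Deep-Q experiments.
import random
from collections import defaultdict

# Toy network: node 0 is the attacker's foothold; edges are reachable hops.
EDGES = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
N_NODES = 6

def step(owned, target):
    """Attempt to own `target`; succeeds if reachable from an owned node."""
    reachable = any(target in EDGES[n] for n in owned)
    if reachable and target not in owned:
        return owned | {target}, 1.0      # reward for each newly owned node
    return owned, -0.1                    # small cost for a wasted step

Q = defaultdict(float)                    # Q[(frozenset(owned), action)]
epsilon, alpha, gamma = 1.0, 0.1, 0.95

for episode in range(2000):
    owned = {0}
    while len(owned) < N_NODES:           # episode ends when all nodes owned
        state = frozenset(owned)
        if random.random() < epsilon:     # explore
            action = random.randrange(N_NODES)
        else:                             # exploit the learned policy
            action = max(range(N_NODES), key=lambda a: Q[(state, a)])
        owned, reward = step(owned, action)
        nxt = frozenset(owned)
        best_next = max(Q[(nxt, a)] for a in range(N_NODES))
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
    epsilon = max(0.05, epsilon * 0.995)  # decay toward pure exploitation
```

Decaying epsilon toward exploitation mirrors the abstract's finding that an exploitation-leaning Deep-Q policy owned all nodes in the fewest steps.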
Related papers
- Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z)
- Toward Mixture-of-Experts Enabled Trustworthy Semantic Communication for 6G Networks [82.3753728955968]
We introduce a novel Mixture-of-Experts (MoE)-based SemCom system.
This system comprises a gating network and multiple experts, each specializing in different security challenges.
The gating network adaptively selects suitable experts to counter heterogeneous attacks based on user-defined security requirements.
A case study in vehicular networks demonstrates the efficacy of the MoE-based SemCom system.
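As a rough sketch of the architecture this summary describes (not the paper's code), the snippet below wires a gating network to a few experts and masks out experts the user's security policy has not requested; the feature dimension, expert MLPs, and masking scheme are all assumptions.

```python
# Hypothetical sketch of an MoE security pipeline: a gating network scores
# experts, and only policy-allowed experts contribute to the output.
import torch
import torch.nn as nn

class SecurityMoE(nn.Module):
    def __init__(self, feat_dim=64, n_experts=3):
        super().__init__()
        self.gate = nn.Linear(feat_dim, n_experts)   # gating network
        # Each expert is a small MLP that would specialize (after training)
        # in one threat class, e.g. jamming, eavesdropping, or spoofing.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                          nn.Linear(feat_dim, feat_dim))
            for _ in range(n_experts))

    def forward(self, x, allowed_mask):
        scores = self.gate(x)                        # (batch, n_experts)
        # Mask out experts the user's security requirements exclude.
        scores = scores.masked_fill(~allowed_mask, float('-inf'))
        weights = torch.softmax(scores, dim=-1)
        outs = torch.stack([e(x) for e in self.experts], dim=1)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)

x = torch.randn(4, 64)
mask = torch.tensor([[True, True, False]] * 4)       # user-defined requirements
print(SecurityMoE()(x, mask).shape)                  # torch.Size([4, 64])
```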
arXiv Detail & Related papers (2024-09-24T03:17:51Z)
- Enhancing Privacy and Security of Autonomous UAV Navigation [0.8512184778338805]
In critical scenarios such as border protection or disaster response, ensuring the secure navigation of autonomous UAVs is paramount.
We propose an innovative approach that combines Reinforcement Learning (RL) and Fully Homomorphic Encryption (FHE) for secure autonomous UAV navigation.
Our proposed approach ensures security and privacy in autonomous UAV navigation with negligible loss in performance.
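A minimal sketch of the encrypted-inference idea, under loud assumptions: a single linear policy layer stands in for the navigation model, and the `phe` Paillier library (additively homomorphic only) stands in for the paper's fully homomorphic encryption; the state values and weights are made up.

```python
# Hypothetical sketch: scoring candidate UAV actions on an encrypted state.
# Paillier supports ciphertext+ciphertext and ciphertext*plaintext, which is
# enough for a linear layer; the paper's FHE scheme would allow far more.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

state = [0.3, -1.2, 0.7]                   # plaintext UAV sensor readings
enc_state = [public_key.encrypt(s) for s in state]

# Server side: plaintext weights for two candidate actions.
weights = [[0.5, 0.1, -0.4],
           [-0.2, 0.9, 0.3]]
enc_scores = []
for row in weights:
    acc = enc_state[0] * row[0]            # ciphertext * plaintext scalar
    for w, e in zip(row[1:], enc_state[1:]):
        acc = acc + e * w                  # ciphertext + ciphertext
    enc_scores.append(acc)

# Client side: only the key holder can decrypt the action scores.
scores = [private_key.decrypt(c) for c in enc_scores]
print(scores, max(range(len(scores)), key=scores.__getitem__))
```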
arXiv Detail & Related papers (2024-04-26T07:54:04Z)
- Acoustic Cybersecurity: Exploiting Voice-Activated Systems [0.0]
Our research extends the feasibility of these inaudible attacks across platforms such as Amazon's Alexa, Android, iOS, and Cortana.
We quantitatively show that attack success rates hover around 60%, with the ability to activate devices remotely from over 100 feet away.
These attacks threaten critical infrastructure, emphasizing the need for multifaceted defensive strategies.
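The physical mechanism behind such inaudible commands, including the microphone non-linearity the parent paper highlights, can be sketched numerically: amplitude-modulate a command onto an ultrasonic carrier, then let a square-law term, standing in for the microphone's non-linear response, demodulate it back into the audible band. The carrier frequency, modulation depth, and non-linearity coefficient below are illustrative assumptions.

```python
# Hypothetical numpy sketch of inaudible command injection: the transmitted
# signal has no audible content, yet a square-law microphone term recovers
# a baseband copy of the command.
import numpy as np

fs = 192_000                                  # sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)
command = np.sin(2 * np.pi * 400 * t)         # stand-in for a voice command
carrier = np.sin(2 * np.pi * 30_000 * t)      # 30 kHz: inaudible to humans

transmitted = (1 + command) * carrier         # AM: no energy below ~29.6 kHz

# Second-order term of the microphone response demodulates the envelope:
# (1 + m)^2 * c^2 expands to a baseband copy of m plus high-frequency terms.
received = transmitted + 0.1 * transmitted ** 2

spectrum = np.abs(np.fft.rfft(received))
freqs = np.fft.rfftfreq(len(received), 1 / fs)
band = (freqs > 100) & (freqs < 20_000)       # audible band, excluding DC
print(freqs[band][np.argmax(spectrum[band])]) # peak lands near 400 Hz
```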
arXiv Detail & Related papers (2023-11-23T02:26:11Z)
- Is Semantic Communications Secure? A Tale of Multi-Domain Adversarial Attacks [70.51799606279883]
We introduce test-time adversarial attacks on deep neural networks (DNNs) for semantic communications.
We show that it is possible to change the semantics of the transferred information even when the reconstruction loss remains low.
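A toy version of that test-time attack, under heavy assumptions: random linear layers stand in for the semantic decoder and classifier, and a perturbation is optimized to flip the classifier's label while a penalty keeps the reconstruction drift small. Nothing here reproduces the paper's models or channel.

```python
# Hypothetical sketch: change the "semantics" (classifier label) of a signal
# while its reconstruction stays nearly unchanged.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
decoder = torch.nn.Linear(16, 16)      # stand-in semantic decoder
classifier = torch.nn.Linear(16, 2)    # stand-in semantic classifier

x = torch.randn(1, 16)
target = classifier(x).argmax(dim=1) ^ 1   # aim for the opposite label

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)
for _ in range(300):
    adv = x + delta
    cls_loss = F.cross_entropy(classifier(adv), target)  # push label flip...
    rec_loss = F.mse_loss(decoder(adv), decoder(x))      # ...hold reconstruction
    (cls_loss + 0.1 * rec_loss).backward()
    opt.step(); opt.zero_grad()

print(classifier(x + delta).argmax(dim=1).item() == target.item(),
      F.mse_loss(decoder(x + delta), decoder(x)).item())
```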
arXiv Detail & Related papers (2022-12-20T17:13:22Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation [0.0]
Universal Adversarial Perturbations can seriously jeopardize the security and integrity of practical Deep Learning applications.
We demonstrate an attack strategy that when activated by rogue means (e.g., malware, trojan) can bypass existing countermeasures.
arXiv Detail & Related papers (2021-11-18T02:54:10Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent can generate realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack [46.79557381882643]
We present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack.
Our evasion attack utilizes a long short-term memory (LSTM) deep neural network (DNN) trained to learn the time differences between the target network's benign packets.
TANTRA achieves an average success rate of 99.99% in network intrusion detection system evasion.
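A hypothetical sketch of the timing-reshaping idea: an LSTM is fit to synthetic benign inter-packet time deltas, and its predictions then dictate when malicious packets would be sent. The traffic statistics, network sizes, and training schedule below are invented for illustration, not taken from TANTRA.

```python
# Hypothetical sketch: learn benign inter-packet deltas with an LSTM, then
# reuse the predicted deltas to re-time malicious traffic so timing-based
# detectors see a familiar rhythm.
import torch
import torch.nn as nn

torch.manual_seed(0)
# Synthetic benign inter-arrival times (seconds), clustered around 10 ms.
benign = (0.01 + 0.002 * torch.randn(1, 200, 1)).clamp(min=1e-4)

class DeltaLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(1, 32, batch_first=True)
        self.head = nn.Linear(32, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out)          # next-delta prediction at each step

model = DeltaLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):                   # teach the model benign timing
    pred = model(benign[:, :-1])
    loss = nn.functional.mse_loss(pred, benign[:, 1:])
    opt.zero_grad(); loss.backward(); opt.step()

# Reshape: schedule each malicious packet after the model's predicted delta.
with torch.no_grad():
    deltas = model(benign[:, :50]).squeeze().clamp(min=1e-4)
print(deltas[:5])
```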
arXiv Detail & Related papers (2021-03-10T19:03:38Z)
- Paralinguistic Privacy Protection at the Edge [5.349852254138085]
We introduce EDGY, a representation learning framework that transforms and filters high-dimensional voice data to identify and contain sensitive attributes at the edge prior to offloading to the cloud.
Our results show that EDGY runs in tens of milliseconds, with a 0.2% relative improvement in ABX score or only minimal performance penalties in learning linguistic representations from raw voice signals.
arXiv Detail & Related papers (2020-11-04T14:11:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.