Robust Sensor Fusion Algorithms Against Voice Command Attacks in
Autonomous Vehicles
- URL: http://arxiv.org/abs/2104.09872v1
- Date: Tue, 20 Apr 2021 10:08:46 GMT
- Title: Robust Sensor Fusion Algorithms Against Voice Command Attacks in
Autonomous Vehicles
- Authors: Jiwei Guan, Xi Zheng, Chen Wang, Yipeng Zhou and Alireza Jolfaei
- Abstract summary: We propose a novel multimodal deep learning classification system to defend against inaudible command attacks.
Our experimental results confirm the feasibility of the proposed defense methods and the best classification accuracy reaches 89.2%.
- Score: 8.35945218644081
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With recent advances in autonomous driving, Voice Control Systems have become
increasingly adopted as human-vehicle interaction methods. This technology
enables drivers to use voice commands to control the vehicle and will soon be
available in Advanced Driver Assistance Systems (ADAS). Prior work has shown
that Siri, Alexa, and Cortana are highly vulnerable to inaudible command
attacks. These attacks could extend to ADAS in real-world applications, and
such inaudible command threats are difficult to detect due to microphone
nonlinearities. In this paper, we aim to develop a more practical solution by
using camera views to defend against inaudible command attacks, since ADAS
can perceive the vehicle's environment through multiple sensors. To this end, we
propose a novel multimodal deep learning classification system to defend
against inaudible command attacks. Our experimental results confirm the
feasibility of the proposed defense methods and the best classification
accuracy reaches 89.2%. Code is available at
https://github.com/ITSEG-MQ/Sensor-Fusion-Against-VoiceCommand-Attacks.
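The paper's defense fuses audio and camera evidence into a single classifier. The paper does not publish its architecture in this summary, so the following is only a minimal late-fusion sketch under assumed details: each modality's feature vector passes through its own toy linear layer, the per-class logits are summed, and a softmax yields class probabilities (legitimate command vs. inaudible-attack command). All feature values and weights below are hypothetical illustrations, not the authors' model.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def late_fusion_classify(audio_features, visual_features, w_audio, w_visual, bias):
    """Toy late-fusion classifier: each modality contributes per-class
    logits via its own linear layer; logits are summed, then softmaxed."""
    n_classes = len(bias)
    logits = list(bias)
    for c in range(n_classes):
        logits[c] += sum(a * w for a, w in zip(audio_features, w_audio[c]))
        logits[c] += sum(v * w for v, w in zip(visual_features, w_visual[c]))
    return softmax(logits)

# Hypothetical example: a 3-dim audio embedding (e.g. spectral features of
# the command) and a 2-dim visual embedding (e.g. a "driver is speaking"
# cue from the cabin camera). Class 0 = legitimate, class 1 = attack.
audio = [0.2, -1.3, 0.7]
visual = [0.0, 0.9]
w_a = [[0.1, 0.0, 0.2], [-0.3, 0.5, 0.1]]
w_v = [[0.4, 0.6], [0.2, -0.8]]
b = [0.0, 0.0]

probs = late_fusion_classify(audio, visual, w_a, w_v, b)
print(probs)  # two probabilities summing to 1
```

In a real system the linear layers would be learned jointly, and intermediate-fusion variants (concatenating modality embeddings before deeper layers) are equally plausible; the key idea the paper exploits is that a camera cue contradicting the audio channel exposes an inaudible command.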
Related papers
- VocalCrypt: Novel Active Defense Against Deepfake Voice Based on Masking Effect [2.417762825674103]
Rapid advancements in AI voice cloning, fueled by machine learning, have significantly impacted text-to-speech (TTS) and voice conversion (VC) fields.
We propose a novel active defense method, VocalCrypt, which embeds pseudo-timbre (jamming information) based on SFS into audio segments that are imperceptible to the human ear.
In comparison to existing methods, such as adversarial noise incorporation, VocalCrypt significantly enhances robustness and real-time performance.
arXiv Detail & Related papers (2025-02-14T17:43:01Z)
- Evaluating Synthetic Command Attacks on Smart Voice Assistants [2.91784559412979]
We show that even simple concatenative speech synthesis can be used by an attacker to command voice assistants to perform sensitive operations.
Our results demonstrate the need for better defenses against synthetic malicious commands that could target voice assistants.
arXiv Detail & Related papers (2024-11-13T03:51:58Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- NUANCE: Near Ultrasound Attack On Networked Communication Environments [0.0]
This study investigates a primary inaudible attack vector on Amazon Alexa voice services using near ultrasound trojans.
The research maps each attack vector to a tactic or technique from the MITRE ATT&CK matrix.
The experiment involved generating and surveying fifty near-ultrasonic audios to assess the attacks' effectiveness.
arXiv Detail & Related papers (2023-04-25T23:28:46Z)
- Can AI-Generated Text be Reliably Detected? [50.95804851595018]
Large Language Models (LLMs) perform impressively well in various applications.
The potential for misuse of these models in activities such as plagiarism, generating fake news, and spamming has raised concern about their responsible use.
We stress-test the robustness of these AI text detectors in the presence of an attacker.
arXiv Detail & Related papers (2023-03-17T17:53:19Z)
- Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual Active Speaker Detection [88.74863771919445]
We reveal the vulnerability of AVASD models under audio-only, visual-only, and audio-visual adversarial attacks.
We also propose a novel audio-visual interaction loss (AVIL) that makes it difficult for attackers to find feasible adversarial examples.
arXiv Detail & Related papers (2022-10-03T08:10:12Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Using Soft Actor-Critic for Low-Level UAV Control [0.0]
We present a framework that trains the Soft Actor-Critic (SAC) algorithm for low-level control of a quadrotor in a go-to-target task.
SAC can not only learn a robust policy, but it can also cope with unseen scenarios.
arXiv Detail & Related papers (2020-10-05T19:16:57Z)
- Unmanned Aerial Vehicle Control Through Domain-based Automatic Speech Recognition [0.0]
We present a domain-based speech recognition architecture to control an unmanned aerial vehicle such as a drone.
The drone control is performed using a more natural, human-like way to communicate the instructions.
We implement an algorithm for command interpretation using both Spanish and English languages.
arXiv Detail & Related papers (2020-09-09T11:17:45Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.