Emerging AI Security Threats for Autonomous Cars -- Case Studies
- URL: http://arxiv.org/abs/2109.04865v1
- Date: Fri, 10 Sep 2021 13:22:21 GMT
- Title: Emerging AI Security Threats for Autonomous Cars -- Case Studies
- Authors: Shanthi Lekkala, Tanya Motwani, Manojkumar Parmar, Amit Phadke
- Abstract summary: We discuss model extraction attacks and a generic kill-chain that can compromise autonomous cars.
It is essential to investigate strategies to manage and mitigate the risk of model theft.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial Intelligence has made a significant contribution to autonomous
vehicles, from object detection to path planning. However, AI models require a
large amount of sensitive training data and are usually computationally
intensive to build. The commercial value of such models motivates attackers to
mount various attacks. Adversaries can launch model extraction attacks for
monetization purposes or as a stepping stone towards other attacks such as model
evasion. In some cases, model theft even destroys brand reputation,
differentiation, and value proposition. In addition, IP laws and AI-related
legalities are still evolving and are not uniform across countries. We discuss
model extraction attacks in detail with two use-cases and a generic kill-chain
that can compromise autonomous cars. It is essential to investigate strategies
to manage and mitigate the risk of model theft.
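Mechanically, a model extraction attack reduces to a query-and-retrain loop. The following is a minimal sketch (not the paper's implementation), assuming only black-box query access to the victim; the stand-in victim model, the random query strategy, and the query budget are illustrative placeholders.
```python
# Minimal sketch of a model extraction attack (hypothetical, not the paper's code).
# The attacker only needs black-box query access to the victim model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in victim: in a real attack this is a remote API, not a local object.
X_secret = rng.normal(size=(1000, 10))
y_secret = (X_secret[:, 0] + X_secret[:, 1] > 0).astype(int)
victim = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_secret, y_secret)

def victim_predict(x):
    """Black-box oracle: the only thing the attacker can call."""
    return victim.predict(x)

# Step 1: attacker synthesizes queries (random here; real attacks craft them).
query_budget = 2000
X_query = rng.normal(size=(query_budget, 10))

# Step 2: harvest labels from the oracle.
y_stolen = victim_predict(X_query)

# Step 3: train a surrogate ("stolen") model on the query/label pairs.
surrogate = LogisticRegression().fit(X_query, y_stolen)

# Agreement between surrogate and victim on fresh inputs approximates fidelity.
X_test = rng.normal(size=(500, 10))
fidelity = (surrogate.predict(X_test) == victim_predict(X_test)).mean()
print(f"surrogate/victim agreement: {fidelity:.2%}")
```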
Related papers
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- L-AutoDA: Leveraging Large Language Models for Automated Decision-based Adversarial Attacks [16.457528502745415]
This work introduces L-AutoDA, a novel approach leveraging the generative capabilities of Large Language Models (LLMs) to automate the design of adversarial attacks.
By iteratively interacting with LLMs in an evolutionary framework, L-AutoDA automatically designs competitive attack algorithms efficiently without much human effort.
We demonstrate the efficacy of L-AutoDA on the CIFAR-10 dataset, showing significant improvements over baseline methods in both success rate and computational efficiency.
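The evolutionary loop described here can be sketched generically. In the stub below, llm_mutate stands in for the actual LLM call and evaluate for the attack's measured success against a target model; both are invented placeholders, not L-AutoDA's code.
```python
# Schematic of an LLM-driven evolutionary search (stubbed; the real L-AutoDA
# asks an LLM to rewrite candidate attack code and evaluates it on a target model).
import random

random.seed(0)

def llm_mutate(candidate):
    """Placeholder for an LLM call that rewrites an attack algorithm.
    Here it merely perturbs a numeric 'step size' parameter."""
    return {"step": max(1e-4, candidate["step"] * random.uniform(0.5, 2.0))}

def evaluate(candidate):
    """Placeholder fitness: in the paper this would be the attack's success
    rate and query efficiency against the target classifier."""
    return -abs(candidate["step"] - 0.05)   # pretend 0.05 is the optimum

population = [{"step": random.uniform(0.001, 1.0)} for _ in range(8)]
for generation in range(20):
    population.sort(key=evaluate, reverse=True)
    parents = population[:4]                        # keep the fittest
    children = [llm_mutate(random.choice(parents)) for _ in range(4)]
    population = parents + children

best = max(population, key=evaluate)
print("best candidate after evolution:", best)
```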
arXiv Detail & Related papers (2024-01-27T07:57:20Z)
- Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z)
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity enlarges the ATSC's cyber-attack surface and increases its vulnerability to various cyber-attack modes.
One such mode is a 'Sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of Sybil vehicle injection that creates congestion on one or more approaches.
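As a rough, hypothetical illustration of that setup, the toy Q-learning loop below searches over discrete injection rates against a fabricated stub environment; the paper itself uses a traffic simulator and a richer state and reward design.
```python
# Toy illustration (not the paper's setup): Q-learning over discrete
# Sybil-vehicle injection rates. The "simulator" below is a fabricated stub.
import numpy as np

rng = np.random.default_rng(1)
rates = [0.0, 0.1, 0.2, 0.4, 0.8]        # fraction of fake vehicles injected
n_states = 3                              # stub: low / medium / high traffic

def simulate(state, rate):
    """Stub environment: returns (next_state, reward). Reward grows with the
    congestion the fake vehicles cause, minus a detection-risk penalty."""
    congestion = rate * (state + 1) + rng.normal(scale=0.05)
    detection_penalty = 2.0 * max(0.0, rate - 0.5)
    next_state = rng.integers(n_states)
    return next_state, congestion - detection_penalty

Q = np.zeros((n_states, len(rates)))
alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for _ in range(5000):
    a = rng.integers(len(rates)) if rng.random() < eps else int(Q[state].argmax())
    nxt, r = simulate(state, rates[a])
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state = nxt

print("learned injection rate per traffic state:",
      [rates[int(a)] for a in Q.argmax(axis=1)])
```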
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences [0.1031296820074812]
We study model stealing attacks, assessing their performance and exploring corresponding defence techniques in different settings.
We propose a taxonomy for attack and defence approaches, and provide guidelines on how to select the right attack or defence based on the goal and available resources.
arXiv Detail & Related papers (2022-06-16T21:16:41Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent can be used to generate realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Efficacy of Statistical and Artificial Intelligence-based False Information Cyberattack Detection Models for Connected Vehicles [4.058429227214047]
Connected vehicles (CVs) are vulnerable to cyberattacks that can instantly compromise the safety of the vehicle itself and other connected vehicles and roadway infrastructure.
In this paper, we evaluate three change point-based statistical models for cyberattack detection in CV data.
We also use six AI models to detect false information attacks and compare their detection performance with that of the change point models.
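Change point models flag the instant a data stream's statistics shift. As a generic illustration (not the paper's specific models), a one-sided CUSUM detector over a stream of reported vehicle speeds might look like:
```python
# Generic one-sided CUSUM change-point detector (illustrative only; the paper
# evaluates its own change point models on connected-vehicle data).
import numpy as np

def cusum_alarm(stream, mu0, drift=0.5, threshold=5.0):
    """Return the index where an upward shift from mean mu0 is flagged,
    or None if the cumulative evidence never crosses the threshold."""
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + (x - mu0 - drift))
        if s > threshold:
            return i
    return None

rng = np.random.default_rng(2)
# Honest speed reports around 30 m/s, then falsified reports around 33 m/s.
speeds = np.concatenate([rng.normal(30, 1, 200), rng.normal(33, 1, 100)])
print("attack flagged at sample:", cusum_alarm(speeds, mu0=30.0))
```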
arXiv Detail & Related papers (2021-08-02T18:50:12Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
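A cartoon version of such an alteration, assuming a toy linear classifier rather than anything from the paper: plant a test that fires only on one attacker-chosen trigger input and overrides the output, leaving all other behaviour untouched.
```python
# Cartoon of a "stealth" edit to the model itself (not the paper's exact
# construction): a planted unit fires only on one trigger input and flips
# the network's output, leaving behaviour elsewhere unchanged.
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(4, 2))            # toy linear classifier: scores = x @ W
trigger = rng.normal(size=4)           # the attacker's chosen trigger input

def backdoored_scores(x, boost=100.0, tol=1e-6):
    scores = x @ W
    # Planted test: fires only when x is (numerically) the trigger.
    if np.linalg.norm(x - trigger) < tol:
        scores = scores + np.array([0.0, boost])   # force class 1
    return scores

x_normal = rng.normal(size=4)
print("normal input class: ", int(backdoored_scores(x_normal).argmax()))
print("trigger input class:", int(backdoored_scores(trigger).argmax()))
```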
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific class of attacks, called adversarial attacks.
A hacker can, even with minimal computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine the basis of such algorithms.
We present two effective defence techniques, namely Dropout and Denoising Autoencoders, and demonstrate their success in preventing such attacks from fooling the model.
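To make the "minimal computation" point concrete, here is a generic FGSM-style sketch against a toy logistic model (an illustration of adversarial example generation in general, not this paper's code):
```python
# Minimal FGSM-style adversarial example on a toy logistic model (generic
# illustration of how cheap adversarial examples are to generate).
import numpy as np

rng = np.random.default_rng(4)
w, b = rng.normal(size=16), 0.0                  # a "trained" linear model

def predict(x):
    return 1 / (1 + np.exp(-(x @ w + b)))        # P(class 1)

x = rng.normal(size=16)
y = 1.0 if predict(x) > 0.5 else 0.0             # model's clean label

# For a linear model the loss gradient w.r.t. the input is proportional to
# +/- w; one signed step is typically enough to flip the prediction.
grad_sign = np.sign(w) * (1.0 if y == 0.0 else -1.0)
x_adv = x + 0.5 * grad_sign

print("clean prediction:  ", round(float(predict(x)), 3))
print("adversarial output:", round(float(predict(x_adv)), 3))
```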
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
- Adversarial Imitation Attack [63.76805962712481]
A practical adversarial attack should require as little knowledge of the attacked model as possible.
Current substitute attacks need pre-trained models to generate adversarial examples.
In this study, we propose a novel adversarial imitation attack.
arXiv Detail & Related papers (2020-03-28T10:02:49Z)
- An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models [15.007794089091616]
The convolutional neural network (CNN) is a key component in autonomous driving.
Previous work shows CNN-based classification models are vulnerable to adversarial attacks.
This paper presents an in-depth analysis of five adversarial attacks and four defense methods on three driving models.
arXiv Detail & Related papers (2020-02-06T09:49:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.