Model-Agnostic Defense for Lane Detection against Adversarial Attack
- URL: http://arxiv.org/abs/2103.00663v1
- Date: Mon, 1 Mar 2021 00:05:50 GMT
- Title: Model-Agnostic Defense for Lane Detection against Adversarial Attack
- Authors: Henry Xu, An Ju, David Wagner
- Abstract summary: Recent work on adversarial road patches has successfully induced perception of lane lines of arbitrary form.
We propose a modular lane verification system that can catch such threats before the autonomous driving system is misled.
Our experiments show that implementing the system with a simple convolutional neural network (CNN) can defend against a wide gamut of attacks on lane detection models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Susceptibility of neural networks to adversarial attack prompts serious
safety concerns for lane detection efforts, a domain where such models have
been widely applied. Recent work on adversarial road patches has successfully
induced perception of lane lines of arbitrary form, presenting an avenue for
rogue control of vehicle behavior. In this paper, we propose a modular lane
verification system that can catch such threats before the autonomous driving
system is misled while remaining agnostic to the particular lane detection
model. Our experiments show that implementing the system with a simple
convolutional neural network (CNN) can defend against a wide gamut of attacks
on lane detection models. With a 10% impact on inference time, we can detect
96% of bounded non-adaptive attacks, 90% of bounded adaptive attacks, and 98%
of patch attacks while preserving accurate identification of at least 95% of true
lanes, indicating that our proposed verification system is effective at
mitigating lane detection security risks with minimal overhead.
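The abstract describes the verifier only as a simple CNN, so the following minimal PyTorch sketch merely illustrates the modular idea; all layer sizes, crop dimensions, and the decision threshold are assumptions, not the authors' architecture:

```python
# Minimal sketch of a model-agnostic lane verifier: a small binary CNN
# that scores a crop around each detected lane as "real" vs. "induced".
# Everything here (shapes, layers, threshold) is an illustrative assumption.
import torch
import torch.nn as nn

class LaneVerifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # logit: genuine lane marking or not

    def forward(self, crop):          # crop: (B, 3, H, W) region around a lane
        return self.head(self.features(crop).flatten(1))

# Usage: run after any lane detector and reject low-scoring detections.
verifier = LaneVerifier()
crop = torch.randn(1, 3, 64, 64)                   # hypothetical lane crop
accept = torch.sigmoid(verifier(crop)).item() > 0.5
```

Because such a verifier consumes only image crops around detected lanes, it can sit behind any lane detection model, which is the model-agnostic property the abstract claims.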
Related papers
- LoRA as a Flexible Framework for Securing Large Vision Systems [1.9035583634286277]
Adversarial attacks have emerged as a critical threat to autonomous driving systems.
We propose to take insights from parameter-efficient fine-tuning and use low-rank adaptation (LoRA) to train a lightweight security patch.
We demonstrate that our framework can patch a pre-trained model to improve classification accuracy by up to 78.01% in the presence of adversarial examples.
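As a rough sketch of the LoRA idea referenced here (rank, scaling, and layer choice are assumptions, not the paper's configuration):

```python
# Minimal LoRA adapter over a frozen linear layer: only the low-rank
# factors A and B are trained, acting as a lightweight "security patch".
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # pre-trained weights frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen forward pass plus the trainable low-rank update B @ A.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```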
arXiv Detail & Related papers (2025-05-31T18:16:21Z)
- CANTXSec: A Deterministic Intrusion Detection and Prevention System for CAN Bus Monitoring ECU Activations [53.036288487863786]
We propose CANTXSec, the first deterministic Intrusion Detection and Prevention system based on physical ECU activations.
It detects and prevents classical attacks on the CAN bus, while detecting advanced attacks that have been less investigated in the literature.
We prove the effectiveness of our solution on a physical testbed, where we achieve 100% detection accuracy in both classes of attacks while preventing 100% of FIAs.
arXiv Detail & Related papers (2025-05-14T13:37:07Z)
- Discovering New Shadow Patterns for Black-Box Attacks on Lane Detection of Autonomous Vehicles [2.360491397983081]
We present a novel physical-world attack on autonomous vehicle lane detection systems.
Our method is entirely passive, power-free, and inconspicuous to human observers.
A user study confirms the attack's stealthiness, with 83.6% of participants failing to detect it during driving tasks.
arXiv Detail & Related papers (2024-09-26T19:43:52Z)
- A Robust Multi-Stage Intrusion Detection System for In-Vehicle Network Security using Hierarchical Federated Learning [0.0]
In-vehicle intrusion detection systems (IDSs) must detect seen attacks and provide a robust defense against new, unseen attacks.
Previous work has relied solely on the CAN ID feature or has used traditional machine learning (ML) approaches with manual feature extraction.
This paper introduces a novel, lightweight in-vehicle IDS leveraging a deep learning (DL) algorithm to address these limitations.
arXiv Detail & Related papers (2024-08-15T21:51:56Z)
- Detecting stealthy cyberattacks on adaptive cruise control vehicles: A machine learning approach [5.036807309572884]
More insidious attacks, which only slightly alter driving behavior, can result in network-wide increases in congestion, fuel consumption, and even crash risk without being easily detected.
We present a traffic model framework for three types of potential cyberattacks: malicious manipulation of vehicle control commands, false data injection attacks on sensor measurements, and denial-of-service (DoS) attacks.
A novel generative adversarial network (GAN)-based anomaly detection model is proposed for real-time identification of such attacks using vehicle trajectory data.
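A minimal sketch of the discriminator-as-anomaly-scorer idea follows; the window shape, feature set, and threshold are assumptions, since the paper's exact GAN architecture is not given here:

```python
# Sketch: score trajectory windows with a discriminator trained on benign
# driving; low "realness" suggests manipulated commands, injected sensor
# data, or DoS effects. Window shape and threshold are assumptions.
import torch
import torch.nn as nn

T, F = 20, 4                          # hypothetical window: 20 steps x 4 features

disc = nn.Sequential(
    nn.Flatten(),
    nn.Linear(T * F, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),                 # high logit = looks benign
)

window = torch.randn(1, T, F)         # e.g. (position, speed, accel, gap) per step
anomaly_score = -disc(window).item()  # negate the realness logit
flag_attack = anomaly_score > 0.0     # hypothetical decision threshold
```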
arXiv Detail & Related papers (2023-10-26T01:22:10Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
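For the Random Forest variant, a minimal scikit-learn sketch; the feature set and synthetic data below are placeholders, not the paper's dataset:

```python
# Sketch: authenticate a driver from windows of CAN-derived driving
# features using a Random Forest. Features and data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))      # e.g. mean/std of speed, throttle,
y_train = rng.integers(0, 3, size=500)   # brake, steering; 3 enrolled drivers

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

window = rng.normal(size=(1, 8))
claimed, predicted = 0, clf.predict(window)[0]
authenticated = predicted == claimed     # mismatch may indicate a thief
```

Evasion attacks like the paper's SMARTCAN and GANCAN presumably target exactly this kind of classifier by crafting feature windows that imitate the enrolled driver.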
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Infrastructure-based End-to-End Learning and Prevention of Driver Failure [68.0478623315416]
FailureNet is a recurrent neural network trained end-to-end on trajectories of both nominal and reckless drivers in a scaled miniature city.
It can accurately identify control failures, upstream perception errors, and speeding drivers, distinguishing them from nominal driving.
Compared to speed or frequency-based predictors, FailureNet's recurrent neural network structure provides improved predictive power, yielding upwards of 84% accuracy when deployed on hardware.
arXiv Detail & Related papers (2023-03-21T22:55:51Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
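A poison-only backdoor of this kind can be illustrated by the sketch below: stamp a trigger onto training objects and drop their annotations, so the trained detector learns to miss triggered objects (trigger shape, placement, and poisoning rate are assumptions):

```python
# Sketch of poison-only, untargeted backdoor poisoning for detection:
# triggered objects lose their box labels, teaching the model to
# "not see" anything carrying the trigger at test time.
import numpy as np

def poison_sample(image, boxes, trigger):
    """image: HxWx3 uint8; boxes: list of (x, y, w, h, label); trigger: hxwx3."""
    poisoned = image.copy()
    H, W, _ = image.shape
    th, tw, _ = trigger.shape
    for (x, y, w, h, label) in boxes:
        y2, x2 = min(y + th, H), min(x + tw, W)  # clip trigger at image edges
        poisoned[y:y2, x:x2] = trigger[: y2 - y, : x2 - x]
    return poisoned, []                          # empty annotations: objects vanish
```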
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system to prevent network attacks before they can cause further damage to the system under attack.
arXiv Detail & Related papers (2022-01-27T16:35:37Z)
- TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack [46.79557381882643]
We present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack.
Our evasion attack utilizes a long short-term memory (LSTM) deep neural network (DNN) which is trained to learn the time differences between the target network's benign packets.
TANTRA achieves an average success rate of 99.99% in network intrusion detection system evasion.
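The timing idea can be sketched as follows: an LSTM learns benign inter-packet gaps, and malicious packets are then scheduled at the predicted gaps so timing features look benign (layer sizes and the time scale are assumptions):

```python
# Sketch of a TANTRA-style timing model: predict the next benign
# inter-packet gap with an LSTM, then re-time attack traffic to match.
import torch
import torch.nn as nn

class DeltaLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, deltas):              # deltas: (B, T, 1) benign gaps
        h, _ = self.lstm(deltas)
        return self.out(h[:, -1])           # predicted next gap (seconds)

model = DeltaLSTM()
benign_gaps = torch.rand(1, 50, 1) * 0.01   # hypothetical ~10 ms gaps
next_gap = model(benign_gaps)               # delay the next attack packet by this
```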
arXiv Detail & Related papers (2021-03-10T19:03:38Z)
- Graph-Based Intrusion Detection System for Controller Area Networks [1.697297400355883]
The controller area network (CAN) is the most widely used intra-vehicular communication network in the automotive industry.
We propose a four-stage intrusion detection system that uses the chi-squared method and can detect both strong and weak cyberattacks on a CAN.
Our experimental results show a low 5.26% misclassification rate for denial-of-service (DoS) attacks, 10% for fuzzy attacks, 4.76% for replay attacks, and no misclassification for spoofing attacks.
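The chi-squared stage can be illustrated by comparing a window's CAN-ID histogram against a benign baseline; the paper's full four-stage pipeline is richer than this, and the window handling and significance level here are assumptions:

```python
# Sketch of a chi-squared check on CAN traffic: flag a window whose
# CAN-ID frequency distribution deviates from the benign baseline.
from collections import Counter
from scipy.stats import chisquare

def window_is_anomalous(window_ids, baseline_freq, alpha=0.01):
    """window_ids: CAN IDs in the window; baseline_freq: benign id -> probability."""
    counts = Counter(window_ids)
    ids = sorted(baseline_freq)                    # assume all baseline probs > 0
    observed = [counts.get(i, 0) for i in ids]
    n = sum(observed)
    expected = [baseline_freq[i] * n for i in ids]
    stat, p = chisquare(observed, expected)
    return p < alpha                               # reject "benign" => flag intrusion
```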
arXiv Detail & Related papers (2020-09-24T01:33:58Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in deep neural network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
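One way to realize an input-space, self-supervised defense is a small purifier trained to remove perturbations by matching clean features in a frozen encoder; the sketch below is a generic illustration under that assumption, not the paper's exact objective:

```python
# Generic sketch: train a purifier so features of purified adversarial
# inputs match features of clean inputs (no labels needed).
import torch
import torch.nn as nn

encoder = nn.Sequential(                  # stand-in frozen feature extractor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)
for p in encoder.parameters():
    p.requires_grad = False

purifier = nn.Sequential(                 # small input-space denoiser
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
opt = torch.optim.Adam(purifier.parameters(), lr=1e-3)

x = torch.rand(8, 3, 32, 32)                          # placeholder clean batch
x_adv = (x + 0.03 * torch.randn_like(x)).clamp(0, 1)  # stand-in perturbation

loss = nn.functional.mse_loss(encoder(purifier(x_adv)), encoder(x))
opt.zero_grad(); loss.backward(); opt.step()          # one training step
```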
arXiv Detail & Related papers (2020-06-08T20:42:39Z)