Adversary ML Resilience in Autonomous Driving Through Human Centered
Perception Mechanisms
- URL: http://arxiv.org/abs/2311.01478v1
- Date: Thu, 2 Nov 2023 04:11:45 GMT
- Title: Adversary ML Resilience in Autonomous Driving Through Human Centered
Perception Mechanisms
- Authors: Aakriti Shah
- Abstract summary: This paper explores the resilience of autonomous driving systems against three main physical adversarial attacks (tape, graffiti, illumination).
To build robustness against attacks, defense techniques like adversarial training and transfer learning were implemented.
Results demonstrated that transfer learning models played a crucial role in performance by allowing knowledge gained from shape training to improve the generalizability of road sign classification.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physical adversarial attacks on road signs are continuously exploiting
vulnerabilities in modern-day autonomous vehicles (AVs) and impeding their
ability to correctly classify what type of road sign they encounter. Current
models cannot generalize input data well, resulting in overfitting or
underfitting. In overfitting, the model memorizes the input data but cannot
generalize to new scenarios. In underfitting, the model does not learn enough
of the input data to accurately classify these road signs. This paper explores
the resilience of autonomous driving systems against three main physical
adversarial attacks (tape, graffiti, illumination), specifically targeting
object classifiers. Several machine learning models were developed and
evaluated on two distinct datasets: road signs (stop signs, speed limit signs,
traffic lights, and pedestrian crosswalk signs) and geometric shapes (octagons,
circles, squares, and triangles). The study compared algorithm performance
under different conditions, including clean and adversarial training and
testing on these datasets. To build robustness against attacks, defense
techniques like adversarial training and transfer learning were implemented.
Results demonstrated that transfer learning models played a crucial role in
performance by allowing knowledge gained from shape training to improve the
generalizability of road sign classification, despite the datasets being
completely different. The paper suggests future research directions, including
human-in-the-loop validation, security analysis, real-world testing, and
explainable AI for transparency. This study aims to contribute to improving
security and robustness of object classifiers in autonomous vehicles and
mitigating adversarial example impacts on driving systems.
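The abstract names adversarial training and transfer learning as its defenses but gives no implementation details. The following is a minimal PyTorch sketch of how shape-to-sign transfer combined with perturbation-augmented training could be wired up; the CNN architecture, the `occlude` patch perturbation (a crude stand-in for the tape/graffiti attacks), and the `shape_loader`/`sign_loader` data loaders are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's code): transfer learning from a geometric
# shape classifier to a road-sign classifier, plus adversarial training with
# random occlusions standing in for physical tape/graffiti attacks.
import torch
import torch.nn as nn


class SmallCNN(nn.Module):
    """Small image classifier; the architecture is an assumption."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64 * 16 * 16, num_classes)  # assumes 64x64 inputs

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def occlude(images: torch.Tensor, patch: int = 12) -> torch.Tensor:
    """Zero out a random square patch -- a crude stand-in for tape/graffiti."""
    out = images.clone()
    _, _, h, w = out.shape
    y = torch.randint(0, h - patch, (1,)).item()
    x = torch.randint(0, w - patch, (1,)).item()
    out[:, :, y:y + patch, x:x + patch] = 0.0
    return out


def train(model, loader, epochs=5, adversarial=False, lr=1e-3):
    """Standard training loop; if adversarial=True, half of every batch is
    replaced by occluded copies (labels are unchanged, since a taped stop
    sign is still a stop sign)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            if adversarial:
                half = images.size(0) // 2
                images = torch.cat([images[:half], occlude(images[half:])])
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
    return model


# Transfer learning: pretrain on the 4 shape classes, then reuse the learned
# convolutional features and fit a new head on the 4 road-sign classes.
# `shape_loader` and `sign_loader` are hypothetical DataLoaders.
shape_model = SmallCNN(num_classes=4)
# shape_model = train(shape_model, shape_loader)

sign_model = SmallCNN(num_classes=4)
# sign_model.features.load_state_dict(shape_model.features.state_dict())
# for p in sign_model.features.parameters():
#     p.requires_grad = False  # freeze the transferred features
# sign_model = train(sign_model, sign_loader, adversarial=True)
```

Freezing the transferred convolutional features and retraining only the head is one common choice; full fine-tuning of all layers is an equally plausible reading of the abstract.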
Related papers
- Mitigating Covariate Shift in Imitation Learning for Autonomous Vehicles Using Latent Space Generative World Models [60.87795376541144]
A world model is a neural network capable of predicting an agent's next state given past states and actions.
During end-to-end training, our policy learns how to recover from errors by aligning with states observed in human demonstrations.
We present qualitative and quantitative results, demonstrating significant improvements upon prior state of the art in closed-loop testing.
arXiv Detail & Related papers (2024-09-25T06:48:25Z) - Evaluating the Robustness of Off-Road Autonomous Driving Segmentation
against Adversarial Attacks: A Dataset-Centric analysis [1.6538732383658392]
This study investigates the vulnerability of semantic segmentation models to adversarial input perturbations.
We compare the effects of adversarial attacks on different segmentation network architectures.
This work contributes to the safe navigation of the autonomous robot Unimog U5023 in rough, unstructured off-road environments.
arXiv Detail & Related papers (2024-02-03T13:48:57Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Robustness Benchmark of Road User Trajectory Prediction Models for
Automated Driving [0.0]
We benchmark machine learning models against perturbations that simulate functional insufficiencies observed during model deployment in a vehicle.
Training the models with similar perturbations effectively reduces performance degradation, with error increases of up to +87.5%.
We argue that despite being an effective mitigation strategy, data augmentation through perturbations during training does not guarantee robustness towards unforeseen perturbations.
arXiv Detail & Related papers (2023-04-04T15:47:42Z) - Certified Interpretability Robustness for Class Activation Mapping [77.58769591550225]
We present CORGI, short for Certifiably prOvable Robustness Guarantees for Interpretability mapping.
CORGI is an algorithm that takes in an input image and gives a certifiable lower bound for the robustness of its CAM interpretability map.
We show the effectiveness of CORGI via a case study on traffic sign data, certifying lower bounds on the minimum adversarial perturbation.
arXiv Detail & Related papers (2023-01-26T18:58:11Z) - Evaluating Adversarial Attacks on Driving Safety in Vision-Based
Autonomous Vehicles [21.894836150974093]
In recent years, many deep learning models have been adopted in autonomous driving.
Recent studies have demonstrated that adversarial attacks can cause a significant decline in detection precision of deep learning-based 3D object detection models.
arXiv Detail & Related papers (2021-08-06T04:52:09Z) - An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific type of attack, called adversarial attacks.
A hacker can, even with minimal computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine the basis of such algorithms.
We present two effective techniques, namely Dropout and Denoising Autoencoders, and show their success in preventing such attacks from fooling the model; a rough sketch of an autoencoder-style defense appears after this list.
arXiv Detail & Related papers (2020-12-10T09:34:41Z) - Detecting 32 Pedestrian Attributes for Autonomous Vehicles [103.87351701138554]
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
We show competitive detection and attribute recognition results, as well as a more stable MTL training.
arXiv Detail & Related papers (2020-12-04T15:10:12Z) - Targeted Physical-World Attention Attack on Deep Learning Models in Road
Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real world road sign attack.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z) - Learning predictive representations in autonomous driving to improve
deep reinforcement learning [9.919972770800822]
Reinforcement learning using a novel predictive representation is applied to autonomous driving.
The novel predictive representation is learned by general value functions (GVFs) to provide out-of-policy, or counter-factual, predictions of future lane centeredness and road angle.
Experiments in both simulation and the real-world demonstrate that predictive representations in reinforcement learning improve learning efficiency, smoothness of control and generalization to roads that the agent was never shown during training.
arXiv Detail & Related papers (2020-06-26T17:17:47Z) - Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
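The adversarial-defenses entry above names Dropout and Denoising Autoencoders without further detail. As a rough illustration only (the architecture, noise level, and training setup are assumptions, not taken from that review), a denoising autoencoder used to purify inputs before they reach a classifier could look like this:

```python
# Rough illustration (assumptions, not the cited review's code): a denoising
# autoencoder trained to reconstruct clean images from perturbed ones, used
# as a purification step in front of a classifier.
import torch
import torch.nn as nn


class DenoisingAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train_step(dae, clean, optimizer, noise_std=0.1):
    """One step: corrupt the batch with Gaussian noise and train the DAE to
    map the corrupted images back to the clean originals."""
    noisy = (clean + noise_std * torch.randn_like(clean)).clamp(0.0, 1.0)
    loss = nn.functional.mse_loss(dae(noisy), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# At inference the trained DAE filters inputs before the (hypothetical)
# classifier: logits = classifier(dae(perturbed_image))
```

Dropout, the other named technique, is a standard regularization layer inside the classifier itself and is not shown here; whether purification of this kind holds up against adaptive attacks is outside the scope of the sketch.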