Towards Safer Self-Driving Through Great PAIN (Physically Adversarial
Intelligent Networks)
- URL: http://arxiv.org/abs/2003.10662v1
- Date: Tue, 24 Mar 2020 05:04:13 GMT
- Title: Towards Safer Self-Driving Through Great PAIN (Physically Adversarial
Intelligent Networks)
- Authors: Piyush Gupta, Demetris Coleman, Joshua E. Siegel
- Abstract summary: We introduce a "Physically Adversarial Intelligent Network" (PAIN) wherein self-driving vehicles interact aggressively.
We train two agents, a protagonist and an adversary, using dueling double deep Q networks (DDDQNs) with prioritized experience replay.
The trained protagonist becomes more resilient to environmental uncertainty and less prone to corner case failures.
- Score: 3.136861161060885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated vehicles' neural networks suffer from overfitting, poor
generalizability, and untrained edge cases due to limited data availability.
Researchers synthesize randomized edge-case scenarios to assist in the training
process, though simulation introduces potential for overfit to latent rules and
features. Automating worst-case scenario generation could yield informative
data for improving self-driving. To this end, we introduce a "Physically
Adversarial Intelligent Network" (PAIN), wherein self-driving vehicles interact
aggressively in the CARLA simulation environment. We train two agents, a
protagonist and an adversary, using dueling double deep Q networks (DDDQNs)
with prioritized experience replay. The coupled networks alternately
seek-to-collide and to avoid collisions such that the "defensive" avoidance
algorithm increases the mean-time-to-failure and distance traveled under
non-hostile operating conditions. The trained protagonist becomes more
resilient to environmental uncertainty and less prone to corner case failures
resulting in collisions than the agent trained without an adversary.
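The abstract names three training components: dueling Q-heads, double-DQN targets, and proportional prioritized experience replay. A minimal, framework-free sketch of how those pieces fit together is below; all names, sizes, and hyperparameters here are illustrative assumptions, not details taken from the paper.

```python
import random

def dueling_q(value, advantages):
    """Dueling head: combine state value V(s) and advantages A(s,a) into
    Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a')."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_dqn_target(reward, done, gamma, q_online_next, q_target_next):
    """Double DQN: the online network selects the next action,
    the target network evaluates it."""
    if done:
        return reward
    best = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best]

class PrioritizedReplay:
    """Proportional prioritized experience replay (flat list, O(n) sampling
    for clarity; real implementations use a sum-tree)."""
    def __init__(self, alpha=0.6, beta=0.4, eps=1e-3):
        self.alpha, self.beta, self.eps = alpha, beta, eps
        self.data, self.priorities = [], []

    def add(self, transition):
        # New transitions get max priority so they are replayed at least once.
        self.data.append(transition)
        self.priorities.append(max(self.priorities, default=1.0))

    def sample(self, k):
        scaled = [p ** self.alpha for p in self.priorities]
        total = sum(scaled)
        probs = [s / total for s in scaled]
        idxs = random.choices(range(len(self.data)), weights=probs, k=k)
        n = len(self.data)
        # Importance-sampling weights correct the non-uniform sampling bias.
        weights = [(n * probs[i]) ** (-self.beta) for i in idxs]
        w_max = max(weights)
        return idxs, [self.data[i] for i in idxs], [w / w_max for w in weights]

    def update(self, idxs, td_errors):
        # After a learning step, priorities track the absolute TD error.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + self.eps
```

In a PAIN-style setup, the protagonist and adversary would each own a network with this head, compute their own double-DQN targets, and draw mini-batches from their own prioritized buffers.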
Related papers
- Mitigating Covariate Shift in Imitation Learning for Autonomous Vehicles Using Latent Space Generative World Models [60.87795376541144]
A world model is a neural network capable of predicting an agent's next state given past states and actions.
During end-to-end training, our policy learns how to recover from errors by aligning with states observed in human demonstrations.
We present qualitative and quantitative results, demonstrating significant improvements upon prior state of the art in closed-loop testing.
arXiv Detail & Related papers (2024-09-25T06:48:25Z)
- Passenger hazard perception based on EEG signals for highly automated driving vehicles [23.322910031715583]
This study explores neural mechanisms in passenger-vehicle interactions, leading to the development of a Passenger Cognitive Model (PCM) and the Passenger EEG Decoding Strategy (PEDS)
Central to PEDS is a novel Convolutional Recurrent Neural Network (CRNN) that captures spatial and temporal EEG data patterns.
Our findings highlight the predictive power of pre-event EEG data, enhancing the detection of hazardous scenarios and offering a network-driven framework for safer autonomous vehicles.
arXiv Detail & Related papers (2024-08-29T07:32:30Z) - Interactive Autonomous Navigation with Internal State Inference and
Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z) - Infrastructure-based End-to-End Learning and Prevention of Driver
Failure [68.0478623315416]
FailureNet is a recurrent neural network trained end-to-end on trajectories of both nominal and reckless drivers in a scaled miniature city.
It can accurately identify control failures, upstream perception errors, and speeding drivers, distinguishing them from nominal driving.
Compared to speed or frequency-based predictors, FailureNet's recurrent neural network structure provides improved predictive power, yielding upwards of 84% accuracy when deployed on hardware.
arXiv Detail & Related papers (2023-03-21T22:55:51Z) - Adversarial training with informed data selection [53.19381941131439]
Adversarial training is the most efficient solution to defend the network against these malicious attacks.
This work proposes a data selection strategy to be applied in the mini-batch training.
The simulation results show that a good compromise can be obtained regarding robustness and standard accuracy.
arXiv Detail & Related papers (2023-01-07T12:09:50Z) - Reinforcement Learning based Cyberattack Model for Adaptive Traffic
Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wirelessly connected ATSC increases cyber-attack surfaces and increases their vulnerability to various cyber-attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil vehicle injection to create congestion for one or more approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - A-Eye: Driving with the Eyes of AI for Corner Case Generation [0.6445605125467573]
The overall goal of this work is to enrich training data for automated driving with so-called corner cases.
We present the design of a test rig to generate synthetic corner cases using a human-in-the-loop approach.
arXiv Detail & Related papers (2022-02-22T10:42:23Z)
- Bait and Switch: Online Training Data Poisoning of Autonomous Driving Systems [12.867129768057175]
We show that by controlling parts of a physical environment in which a pre-trained deep neural network (DNN) is being fine-tuned online, an adversary can launch subtle data poisoning attacks.
arXiv Detail & Related papers (2020-11-08T20:04:43Z)
- Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing [33.466413757630846]
We propose a scalable approach for finding adversarial modifications of a simulated autonomous driving environment.
Our approach is significantly more scalable and far more effective than a state-of-the-art approach based on Bayesian Optimization.
arXiv Detail & Related papers (2020-10-17T18:35:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.