Evaluating Automated Driving Planner Robustness against Adversarial
Influence
- URL: http://arxiv.org/abs/2205.14697v1
- Date: Sun, 29 May 2022 15:39:26 GMT
- Title: Evaluating Automated Driving Planner Robustness against Adversarial
Influence
- Authors: Andres Molina-Markham, Silvia G. Ionescu, Erin Lanus, Derek Ng, Sam
Sommerer, Joseph J. Rushanan
- Abstract summary: This paper aims to help researchers assess the robustness of protections for machine learning-enabled planners against adversarial influence.
We argue that adversarial evaluation fundamentally requires a process that seeks to defeat a specific protection.
This type of inference requires precise statements about threats, protections, and aspects of planning decisions to be guarded.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evaluating the robustness of automated driving planners is a critical and
challenging task. Although methodologies to evaluate vehicles are well
established, they do not yet account for a reality in which vehicles with
autonomous components share the road with adversarial agents. Our approach,
based on probabilistic trust models, aims to help researchers assess the
robustness of protections for machine learning-enabled planners against
adversarial influence. In contrast with established practices that evaluate
safety using the same evaluation dataset for all vehicles, we argue that
adversarial evaluation fundamentally requires a process that seeks to defeat a
specific protection. Hence, we propose that evaluations be based on estimating
the difficulty for an adversary to determine conditions that effectively induce
unsafe behavior. This type of inference requires precise statements about
threats, protections, and aspects of planning decisions to be guarded. We
demonstrate our approach by evaluating protections for planners relying on
camera-based object detectors.
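To make the proposed evaluation concrete, here is a minimal Python sketch, not the authors' implementation: it estimates attack difficulty as the expected number of queries an adversary needs to find conditions that induce unsafe behavior in a protected planner. The planner, protection model, and all names (toy_planner, protected_detector, trials_until_unsafe) are invented for illustration.

```python
# Hypothetical sketch of difficulty-based adversarial evaluation:
# estimate how many attack conditions an adversary must try before
# a protected, camera-based planner behaves unsafely.
import random

def toy_planner(obstacle_detected: bool) -> str:
    """Toy planner: brake if the detector reports an obstacle."""
    return "brake" if obstacle_detected else "proceed"

def protected_detector(true_obstacle: bool, attack_strength: float) -> bool:
    """Detector with an assumed protection: attacks only succeed past a
    threshold, and even then only probabilistically (e.g., input filtering)."""
    if true_obstacle and attack_strength > 0.8:
        return random.random() > 0.5  # the attack may hide the obstacle
    return true_obstacle

def trials_until_unsafe(max_trials: int = 10_000) -> int:
    """Count adversary queries until the planner proceeds despite an obstacle."""
    for trial in range(1, max_trials + 1):
        strength = random.random()  # adversary samples an attack condition
        if toy_planner(protected_detector(True, strength)) == "proceed":
            return trial
    return max_trials

# Monte Carlo estimate of attack difficulty: a higher mean suggests
# the protection is harder to defeat.
samples = [trials_until_unsafe() for _ in range(200)]
print(f"mean queries to induce unsafe behavior: {sum(samples)/len(samples):.1f}")
```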
Related papers
- Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
arXiv Detail & Related papers (2024-09-26T21:00:45Z)
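A hedged sketch of the true-criticality definition above: the expected reward drop when an agent takes n consecutive random actions instead of following its policy. The one-dimensional environment and policy are invented stand-ins; only the metric's structure follows the summary.

```python
# Toy estimate of true criticality via Monte Carlo rollouts.
import random

def rollout(env_step, policy, state, horizon, deviate_at=None, n_random=0):
    """Total reward; optionally replace the policy with random actions
    for n_random steps starting at step deviate_at."""
    total = 0.0
    for t in range(horizon):
        if deviate_at is not None and deviate_at <= t < deviate_at + n_random:
            action = random.choice([-1, 0, 1])
        else:
            action = policy(state)
        state, reward = env_step(state, action)
        total += reward
    return total

# Toy 1-D environment: reward for staying near the origin.
def env_step(state, action):
    state = state + action
    return state, -abs(state)

policy = lambda s: -1 if s > 0 else (1 if s < 0 else 0)

def true_criticality(state, t, n, horizon=20, mc=500):
    on_policy = sum(rollout(env_step, policy, state, horizon) for _ in range(mc)) / mc
    deviated = sum(rollout(env_step, policy, state, horizon, t, n) for _ in range(mc)) / mc
    return on_policy - deviated  # expected drop in reward

print(f"criticality at state 0, step 0, n=3: {true_criticality(0, 0, 3):.2f}")
```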
- A Real-time Evaluation Framework for Pedestrian's Potential Risk at Non-Signalized Intersections Based on Predicted Post-Encroachment Time [1.0124625066746595]
In this study, a framework combining computer vision technologies and predictive models is developed to evaluate the potential risk of pedestrians in real time.
Central to the framework is the Predicted Post-Encroachment Time (P-PET), derived from deep learning models capable of predicting the arrival times of pedestrians and vehicles at intersections.
arXiv Detail & Related papers (2024-04-24T04:10:05Z)
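The P-PET idea above reduces to a simple gap computation once arrival times are predicted. The sketch below uses placeholder arrival times rather than the paper's deep learning predictors, and the risk threshold is an assumption.

```python
# Minimal P-PET sketch: the predicted time gap between a pedestrian
# and a vehicle at a conflict point; small gaps indicate high risk.

def predicted_ppet(t_pedestrian_arrival: float, t_vehicle_arrival: float) -> float:
    """P-PET: predicted post-encroachment time at the conflict point."""
    return abs(t_vehicle_arrival - t_pedestrian_arrival)

def risk_level(ppet: float, threshold: float = 2.0) -> str:
    """Hypothetical thresholding; the paper's actual criteria may differ."""
    return "high" if ppet < threshold else "low"

# Example: the vehicle is predicted to reach the crossing 1.2 s after
# the pedestrian.
ppet = predicted_ppet(t_pedestrian_arrival=3.0, t_vehicle_arrival=4.2)
print(f"P-PET = {ppet:.1f} s -> {risk_level(ppet)} risk")
```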
- Interaction-Aware Decision-Making for Autonomous Vehicles in Forced Merging Scenario Leveraging Social Psychology Factors [7.812717451846781]
We consider a behavioral model that incorporates both social behaviors and personal objectives of the interacting drivers.
We develop a receding-horizon control-based decision-making strategy that estimates the other drivers' intentions online.
arXiv Detail & Related papers (2023-09-25T19:49:14Z)
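A rough sketch of online intention estimation in the receding-horizon spirit of the entry above: the merge decision is re-evaluated at every step as the belief over the other driver's intention is updated. The likelihood model and threshold are invented for illustration, not taken from the paper.

```python
# Toy Bayesian intention estimation for a forced-merging decision.

def update_intention_belief(belief_yield: float, observed_decel: float) -> float:
    """Bayesian update: strong deceleration is more likely if the driver yields."""
    p_obs_given_yield = 0.8 if observed_decel > 0.5 else 0.2
    p_obs_given_not = 0.3 if observed_decel > 0.5 else 0.7
    num = p_obs_given_yield * belief_yield
    return num / (num + p_obs_given_not * (1.0 - belief_yield))

def merge_decision(belief_yield: float, threshold: float = 0.7) -> str:
    """Commit to merging only once we believe the other driver will yield."""
    return "merge" if belief_yield > threshold else "wait"

belief = 0.5  # uninformative prior over the other driver's intention
for decel in [0.1, 0.6, 0.8, 0.9]:  # observed decelerations per step
    belief = update_intention_belief(belief, decel)
    print(f"belief(yield)={belief:.2f} -> {merge_decision(belief)}")
```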
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
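The counterfactual safety margin above can be illustrated as a search for the smallest behavior deviation that produces a collision. The simulator and deviation model below are toy stand-ins, not the paper's framework.

```python
# Toy counterfactual safety margin: smallest deviation from nominal
# behavior that causes a collision, found by grid search.

def simulate_collision(lateral_deviation: float) -> bool:
    """Toy check: collision if the AV drifts beyond the free corridor."""
    corridor_half_width = 1.5  # meters, hypothetical
    return abs(lateral_deviation) > corridor_half_width

def counterfactual_safety_margin(step: float = 0.01, max_dev: float = 10.0) -> float:
    """Smallest deviation magnitude that leads to a collision."""
    dev = 0.0
    while dev <= max_dev:
        if simulate_collision(dev):
            return dev
        dev += step
    return float("inf")  # no collision found within the search range

margin = counterfactual_safety_margin()
print(f"counterfactual safety margin ~= {margin:.2f} m")
```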
- From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework [91.94389491920309]
Textual adversarial attacks can discover models' weaknesses by adding semantics-preserving but misleading perturbations to the inputs.
Existing robustness evaluation practice may suffer from incomplete evaluation, impractical evaluation protocols, and invalid adversarial samples.
We set up a unified automatic robustness evaluation framework, shifting towards model-centric evaluation to exploit the advantages of adversarial attacks.
arXiv Detail & Related papers (2023-05-29T14:55:20Z)
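A minimal sketch of model-centric robustness evaluation as summarized above: perturb inputs while preserving semantics, keep only valid adversarial samples, and report the accuracy drop. The classifier and synonym table are toys, not the paper's framework.

```python
# Toy robustness evaluation with semantics-preserving perturbations.

def toy_classifier(text: str) -> str:
    """Stand-in sentiment model keyed on a single word."""
    return "pos" if "good" in text else "neg"

SYNONYMS = {"good": "fine", "bad": "poor"}  # semantics-preserving swaps

def perturb(text: str) -> str:
    """Valid adversarial sample: meaning preserved, surface form changed."""
    return " ".join(SYNONYMS.get(w, w) for w in text.split())

dataset = [("a good movie", "pos"), ("a bad movie", "neg")]
clean_acc = sum(toy_classifier(x) == y for x, y in dataset) / len(dataset)
adv_acc = sum(toy_classifier(perturb(x)) == y for x, y in dataset) / len(dataset)
print(f"clean accuracy {clean_acc:.2f}, adversarial accuracy {adv_acc:.2f}")
```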
- AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off road or collide with other vehicles in simulation.
arXiv Detail & Related papers (2022-09-19T03:34:59Z)
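A simplified sketch in the spirit of the optimization-based attack above (not AdvDO itself): nudge an observed trajectory history, within a realism bound, to maximize the error of a toy constant-velocity predictor. Random search stands in for the paper's optimizer.

```python
# Toy optimization-based adversarial trajectory attack.
import numpy as np

def predict_next(history: np.ndarray) -> np.ndarray:
    """Toy constant-velocity predictor over 2-D positions."""
    return history[-1] + (history[-1] - history[-2])

def attack(history, true_next, eps=0.2, iters=200, seed=0):
    """Random-search optimization of a bounded (realistic) perturbation."""
    rng = np.random.default_rng(seed)
    best, best_err = np.zeros_like(history), 0.0
    for _ in range(iters):
        delta = rng.uniform(-eps, eps, size=history.shape)  # realism bound
        err = np.linalg.norm(predict_next(history + delta) - true_next)
        if err > best_err:
            best, best_err = delta, err
    return best, best_err

history = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # straight motion
delta, err = attack(history, true_next=np.array([3.0, 0.0]))
print(f"worst-case prediction error after bounded perturbation: {err:.2f}")
```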
- Contextual Intelligent Decisions: Expert Moderation of Machine Outputs for Fair Assessment of Commercial Driving [2.7323386266136125]
We introduce a methodology towards a fairer automatic road safety assessment of drivers' behaviours.
The contextual moderation embedded within the intelligent decision-making process is underpinned by expert input.
We develop an interval-valued response-format questionnaire to capture the uncertainty of the influence of factors and variance amongst experts' views.
arXiv Detail & Related papers (2022-02-20T13:48:41Z)
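The interval-valued questionnaire above suggests a simple aggregation: each expert gives a [low, high] interval for a factor's influence, with the midpoint as the rating and the width as uncertainty. The averaging rule below is an assumption for illustration, not the paper's method.

```python
# Toy aggregation of interval-valued expert responses.

def aggregate_intervals(intervals):
    """Average interval endpoints across experts for one factor."""
    lows = [lo for lo, _ in intervals]
    highs = [hi for _, hi in intervals]
    low, high = sum(lows) / len(lows), sum(highs) / len(highs)
    return low, high, (low + high) / 2, high - low  # bounds, rating, uncertainty

# Three experts rate how strongly "heavy rain" should moderate a
# harsh-braking score (hypothetical factor and values).
responses = [(0.6, 0.9), (0.5, 0.7), (0.7, 1.0)]
low, high, rating, uncertainty = aggregate_intervals(responses)
print(f"influence ~ [{low:.2f}, {high:.2f}], rating {rating:.2f}, "
      f"uncertainty {uncertainty:.2f}")
```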
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety-critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
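A minimal sketch of the disagreement-based uncertainty measure described above: score per-pixel uncertainty with a dissimilarity function applied to two prediction heads' outputs. The L1 dissimilarity and the stand-in arrays are assumptions for illustration.

```python
# Toy disagreement-based uncertainty for segmentation.
import numpy as np

def dissimilarity(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    """Per-pixel L1 distance between two class-probability maps."""
    return np.abs(p - q).sum(axis=-1)

# Two heads' softmax outputs for a 2x2 image with 3 classes.
head_a = np.array([[[0.9, 0.05, 0.05], [0.4, 0.4, 0.2]],
                   [[0.3, 0.3, 0.4], [0.8, 0.1, 0.1]]])
head_b = np.array([[[0.85, 0.1, 0.05], [0.1, 0.7, 0.2]],
                   [[0.3, 0.4, 0.3], [0.75, 0.15, 0.1]]])

uncertainty_map = dissimilarity(head_a, head_b)  # high where heads disagree
print(uncertainty_map)  # one cheap forward pass per head; no MC sampling
```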
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
For autonomous vehicles, it is of primary importance that the resulting driving decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
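To illustrate the purposefully crafted perturbations this last entry describes, here is a toy FGSM-style example: a small, targeted change to a sensor reading flips a linear detector's decision. The weights and measurement are invented.

```python
# Toy adversarial perturbation of a sensory measurement.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # toy linear "detector" weights
x = np.array([0.2, -0.1, 0.4])   # clean sensor measurement
score = w @ x                     # positive score => "obstacle present"

eps = 0.3
x_adv = x - eps * np.sign(w)      # step against the gradient of the score

print(f"clean score: {score:+.2f}, adversarial score: {w @ x_adv:+.2f}")
# A robust sensing stack would bound or detect such worst-case changes.
```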
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.