Watch Out for the Safety-Threatening Actors: Proactively Mitigating
Safety Hazards
- URL: http://arxiv.org/abs/2206.00886v1
- Date: Thu, 2 Jun 2022 05:56:25 GMT
- Title: Watch Out for the Safety-Threatening Actors: Proactively Mitigating
Safety Hazards
- Authors: Saurabh Jha and Shengkun Cui and Zbigniew Kalbarczyk and Ravishankar
K. Iyer
- Abstract summary: We propose a safety threat indicator (STI) using counterfactual reasoning to estimate the importance of each actor on the road with respect to its influence on the AV's safety.
Our approach reduces the accident rate for the state-of-the-art AV agent(s) in rare hazardous scenarios by more than 70%.
- Score: 5.898210877584262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the successful demonstration of autonomous vehicles (AVs), such as
self-driving cars, ensuring AV safety remains a challenging task. Although some
actors influence an AV's driving decisions more than others, current approaches
pay equal attention to each actor on the road. An actor's influence on the AV's
decision can be characterized in terms of its ability to decrease the number of
safe navigational choices for the AV. In this work, we propose a safety threat
indicator (STI) using counterfactual reasoning to estimate the importance of
each actor on the road with respect to its influence on the AV's safety. We use
this indicator to (i) characterize the existing real-world datasets to identify
rare hazardous scenarios as well as the poor performance of existing
controllers in such scenarios; and (ii) design an RL based safety mitigation
controller to proactively mitigate the safety hazards those actors pose to the
AV. Our approach reduces the accident rate for the state-of-the-art AV agent(s)
in rare hazardous scenarios by more than 70%.
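The abstract describes the STI only at a high level: an actor's importance is estimated by counterfactual reasoning about how much that actor reduces the AV's safe navigational choices. The paper gives no pseudocode here, so the following is a minimal illustrative sketch of that idea under stated assumptions; the names (`Scene`, `count_safe_choices`, `safety_threat_scores`, the `is_safe` predicate) are hypothetical and not taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): score each actor by how
# many of the AV's safe navigational choices disappear because that actor is present.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Scene:
    """Hypothetical scene container: the AV's candidate plans plus the other actors."""
    candidate_trajectories: List[object]   # navigational choices available to the AV
    actors: Dict[str, object]              # actor_id -> predicted actor trajectory


def count_safe_choices(scene: Scene,
                       is_safe: Callable[[object, Dict[str, object]], bool]) -> int:
    """Number of AV candidate trajectories that remain collision-free given the actors."""
    return sum(1 for traj in scene.candidate_trajectories if is_safe(traj, scene.actors))


def safety_threat_scores(scene: Scene,
                         is_safe: Callable[[object, Dict[str, object]], bool]) -> Dict[str, float]:
    """Counterfactual importance: how many safe choices reappear when an actor is removed."""
    baseline = count_safe_choices(scene, is_safe)
    total = max(len(scene.candidate_trajectories), 1)
    scores = {}
    for actor_id in scene.actors:
        # Counterfactual world with this actor removed.
        others = {a: t for a, t in scene.actors.items() if a != actor_id}
        counterfactual = count_safe_choices(Scene(scene.candidate_trajectories, others), is_safe)
        # Normalized reduction in safe choices attributable to this actor.
        scores[actor_id] = (counterfactual - baseline) / total
    return scores
```

In this sketch, a high score marks an actor whose presence removes many otherwise-safe choices; such scores could, for example, be used to flag rare hazardous scenarios or to prioritize actors for a mitigation controller, which is the role the STI plays in the paper.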
Related papers
- Automated and Complete Generation of Traffic Scenarios at Road Junctions Using a Multi-level Danger Definition [2.5608506499175094]
We propose an approach to derive a complete set of (potentially dangerous) abstract scenarios at any given road junction.
From these abstract scenarios, we derive exact paths that actors must follow to guide simulation-based testing.
Results show that the AV-under-test is involved in an increasing percentage of unsafe behaviors in simulation.
arXiv Detail & Related papers (2024-10-09T17:23:51Z)
- Nothing in Excess: Mitigating the Exaggerated Safety for LLMs via Safety-Conscious Activation Steering [56.92068213969036]
Safety alignment is indispensable for large language models (LLMs) to defend against threats from malicious instructions.
Recent research reveals that safety-aligned LLMs are prone to rejecting benign queries due to the exaggerated safety issue.
We propose a Safety-Conscious Activation Steering (SCANS) method to mitigate the exaggerated safety concerns.
arXiv Detail & Related papers (2024-08-21T10:01:34Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
- Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models [53.701148276912406]
Vision large language models (VLMs) have great application prospects in autonomous driving.
BadVLMDriver is the first backdoor attack against VLMs for autonomous driving that can be launched in practice using physical objects.
BadVLMDriver achieves a 92% attack success rate in inducing a sudden acceleration when coming across a pedestrian holding a red balloon.
arXiv Detail & Related papers (2024-04-19T14:40:38Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Safety Margins for Reinforcement Learning [53.10194953873209]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z)
- Fast or Accurate? Governing Conflicting Goals in Highly Autonomous Vehicles [3.3605894204326994]
We argue that understanding the fundamental engineering trade-off between accuracy and speed in AVs is critical for policymakers to regulate the uncertainty and risk inherent in AV systems.
This will shift the balance of power from manufacturers to the public by facilitating effective regulation, reducing barriers to tort recovery, and ensuring that public values like safety and accountability are appropriately balanced.
arXiv Detail & Related papers (2022-08-03T13:24:25Z)
- Watch out for the risky actors: Assessing risk in dynamic environments for safe driving [7.13056075998264]
The risk encountered by the actor Ego depends on the driving scenario and on the uncertainty associated with predicting the future trajectories of the other actors in that scenario.
We propose a novel risk metric to calculate the importance of each actor in the world and demonstrate its usefulness through a case study.
arXiv Detail & Related papers (2021-10-19T14:10:26Z)
- ML-driven Malware that Targets AV Safety [5.675697880379048]
We introduce an attack model, a method to deploy the attack in the form of smart malware, and an experimental evaluation of its impact on production-grade autonomous driving software.
We find that determining the time interval during which to launch the attack is critically important for causing safety hazards with a high degree of success.
For example, the smart malware caused 33X more forced emergency braking than random attacks did and caused accidents in 52.6% of the driving simulations.
arXiv Detail & Related papers (2020-04-24T22:29:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.