Fast or Accurate? Governing Conflicting Goals in Highly Autonomous
Vehicles
- URL: http://arxiv.org/abs/2208.02056v1
- Date: Wed, 3 Aug 2022 13:24:25 GMT
- Title: Fast or Accurate? Governing Conflicting Goals in Highly Autonomous
Vehicles
- Authors: A. Feder Cooper and Karen Levy
- Abstract summary: We argue that understanding the fundamental engineering trade-off between accuracy and speed in AVs is critical for policymakers to regulate the uncertainty and risk inherent in AV systems.
This will shift the balance of power from manufacturers to the public by facilitating effective regulation, reducing barriers to tort recovery, and ensuring that public values like safety and accountability are appropriately balanced.
- Score: 3.3605894204326994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The tremendous excitement around the deployment of autonomous vehicles (AVs)
comes from their purported promise. In addition to decreasing accidents, AVs
are projected to usher in a new era of equity in human autonomy by providing
affordable, accessible, and widespread mobility for disabled, elderly, and
low-income populations. However, to realize this promise, it is necessary to
ensure that AVs are safe for deployment, and to contend with the risks AV
technology poses, which threaten to eclipse its benefits. In this Article, we
focus on an aspect of AV engineering currently unexamined in the legal
literature, but with critical implications for safety, accountability,
liability, and power. Specifically, we explain how understanding the
fundamental engineering trade-off between accuracy and speed in AVs is critical
for policymakers to regulate the uncertainty and risk inherent in AV systems.
We discuss how understanding the trade-off will help create tools that will
enable policymakers to assess how the trade-off is being implemented. Such
tools will facilitate opportunities for developing concrete, ex ante AV safety
standards and conclusive mechanisms for ex post determination of accountability
after accidents occur. This will shift the balance of power from manufacturers
to the public by facilitating effective regulation, reducing barriers to tort
recovery, and ensuring that public values like safety and accountability are
appropriately balanced.
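The accuracy–speed trade-off the Article centers on can be illustrated with what computer scientists call an anytime computation: one that can be cut off at any point and returns an answer whose quality improves with the time allowed. The following is a minimal, hypothetical sketch (the series and budget values are illustrative and not drawn from the Article):

```python
import math

def estimate_pi(budget: int) -> float:
    """Anytime-style estimator: partial sums of the Leibniz series.

    A larger iteration budget costs more compute time but yields a
    more accurate estimate, mirroring the accuracy/speed trade-off.
    """
    total = 0.0
    for k in range(budget):
        total += (-1) ** k / (2 * k + 1)
    return 4.0 * total

# A fast, rough answer versus a slower, sharper one.
fast = estimate_pi(10)       # cheap: error roughly 0.05
slow = estimate_pi(100_000)  # costly: error roughly 5e-6
assert abs(slow - math.pi) < abs(fast - math.pi)
```

Framed this way, a regulator can ask a concrete question of manufacturers: at what computation budget (i.e., at what latency) does the deployed system cut off its estimate, and what residual error does that choice imply?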
Related papers
- Auction-Based Regulation for Artificial Intelligence [28.86995747151915]
We propose an auction-based regulatory mechanism to regulate AI safety.
We provably guarantee that each participating agent's best strategy is to submit a model safer than a prescribed minimum-safety threshold.
Empirical results show that our regulatory auction boosts safety and participation rates by 20% and 15% respectively.
arXiv Detail & Related papers (2024-10-02T17:57:02Z)
- Liability and Insurance for Catastrophic Losses: the Nuclear Power Precedent and Lessons for AI [0.0]
This paper argues that developers of frontier AI models should be assigned limited, strict, and exclusive third-party liability for harms resulting from Critical AI Occurrences (CAIOs).
Mandatory insurance for CAIO liability is recommended to overcome developers' judgment-proofness and winner's-curse dynamics, and to leverage insurers' quasi-regulatory abilities.
arXiv Detail & Related papers (2024-09-10T17:41:31Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We introduce and define a family of approaches to AI safety, which we refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models [53.701148276912406]
Vision large language models (VLMs) have great application prospects in autonomous driving.
BadVLMDriver is the first backdoor attack against VLMs for autonomous driving that can be launched in practice using physical objects.
BadVLMDriver achieves a 92% attack success rate in inducing a sudden acceleration when the vehicle encounters a pedestrian holding a red balloon.
arXiv Detail & Related papers (2024-04-19T14:40:38Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Autonomous Vehicles for All? [4.67081468243647]
We argue that academic institutions, industry, and government agencies overseeing Autonomous Vehicles (AVs) must act proactively to ensure that AVs serve all.
AVs have considerable potential to increase the carrying capacity of roads, ameliorate the chore of driving, improve safety, provide mobility for those who cannot drive, and help the environment.
However, they also raise concerns over whether they are socially responsible, accounting for issues such as fairness, equity, and transparency.
arXiv Detail & Related papers (2023-07-03T19:33:07Z)
- Watch Out for the Safety-Threatening Actors: Proactively Mitigating Safety Hazards [5.898210877584262]
We propose a safety threat indicator (STI) using counterfactual reasoning to estimate the importance of each actor on the road with respect to its influence on the AV's safety.
Our approach reduces the accident rate for the state-of-the-art AV agent(s) in rare hazardous scenarios by more than 70%.
arXiv Detail & Related papers (2022-06-02T05:56:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.