Automated Driving Without Ethics: Meaning, Design and Real-World
Implementation
- URL: http://arxiv.org/abs/2308.04760v1
- Date: Wed, 9 Aug 2023 07:49:24 GMT
- Title: Automated Driving Without Ethics: Meaning, Design and Real-World
Implementation
- Authors: Katherine Evans (IRCAI), Nelson de Moura (ASTRA), Raja Chatila (ISIR),
Stéphane Chauvier (SND)
- Abstract summary: The goal of this approach is not to define how moral theory requires vehicles to behave, but rather to provide a computational approach that is flexible enough to accommodate a number of human 'moral positions' concerning what morality demands and what road users may expect, offering an evaluation tool for the social acceptability of an automated vehicle's decision making.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ethics of automated vehicles (AVs) has received a great deal of
attention in recent years, particularly in regard to their decisional policies
in accident situations in which human harm is a likely consequence. After a
discussion of the pertinence and cogency of the term 'artificial moral agent'
to describe AVs that would make these sorts of decisions, and starting from the
assumption that human harm is unavoidable in some situations, a strategy for AV
decision making is proposed that uses only pre-defined parameters to
characterize the risk of possible accidents. The strategy integrates the
Ethical Valence Theory, which casts AV decision-making as a type of claim
mitigation, into multiple possible decision rules to determine the most
suitable action given the specific environment and decision context. The goal
of this approach is not to define how moral theory requires vehicles to
behave, but rather to provide a computational approach flexible enough to
accommodate a number of human 'moral positions' concerning what morality
demands and what road users may expect, offering an evaluation tool for the
social acceptability of an automated vehicle's decision making.
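As a rough illustration, claim mitigation of this kind can be sketched as a decision rule that weights each road user's expected harm by an assumed ethical valence and then picks the maneuver minimising the worst weighted claim. The valence values, user types, and `min_max_claim` rule below are illustrative placeholders, not the paper's actual parameters.

```python
from dataclasses import dataclass

# Hypothetical ethical valences: higher values mark road users with a
# stronger claim to protection (an assumed ordering, not the paper's).
VALENCE = {"pedestrian": 3, "cyclist": 2, "passenger": 1}

@dataclass
class Maneuver:
    name: str
    harm: dict  # expected harm per road-user type, each in [0, 1]

def choose_maneuver(maneuvers, rule="min_max_claim"):
    """Pick a maneuver under a simple claim-mitigation decision rule.

    'min_max_claim' minimises the worst valence-weighted harm imposed on
    any single road user; other rules could aggregate claims differently.
    """
    def worst_claim(m):
        return max(VALENCE[user] * h for user, h in m.harm.items())
    if rule == "min_max_claim":
        return min(maneuvers, key=worst_claim)
    raise ValueError(f"unknown rule: {rule}")

swerve = Maneuver("swerve", {"passenger": 0.4, "cyclist": 0.1})
brake = Maneuver("brake", {"pedestrian": 0.3, "passenger": 0.1})
best = choose_maneuver([swerve, brake])  # swerve: worst claim 0.4 vs. 0.9
```

Swapping in a different decision rule (e.g. minimising total rather than worst-case weighted harm) changes which maneuver wins, which is exactly the flexibility across 'moral positions' the abstract describes.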
Related papers
- Conformal Decision Theory: Safe Autonomous Decisions from Imperfect Predictions [80.34972679938483]
We introduce Conformal Decision Theory, a framework for producing safe autonomous decisions despite imperfect machine learning predictions.
Decisions produced by our algorithms are safe in the sense that they come with provable statistical guarantees of having low risk.
Experiments demonstrate the utility of our approach in robot motion planning around humans, automated stock trading, and robot manufacturing.
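The statistical guarantee can be pictured as an online controller that nudges a conservativeness parameter toward a target risk level based on observed losses. The update rule and constants below are a schematic sketch under that reading, not the paper's algorithm.

```python
def conformal_controller(losses, epsilon=0.1, eta=0.05, lam0=1.0):
    """Online update of a conservativeness parameter `lam` so that the
    long-run average loss tracks the target risk level `epsilon`.

    Schematic: raise caution after a loss above target, relax after one
    below; returns the parameter value used at each step.
    """
    lam = lam0
    trajectory = []
    for loss in losses:
        trajectory.append(lam)
        lam += eta * (loss - epsilon)
    return trajectory

# Toy run: a failure (loss 1.0) raises lam; a success (loss 0.0) lowers it.
traj = conformal_controller([1.0, 0.0])
```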
arXiv Detail & Related papers (2023-10-09T17:59:30Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
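In a toy one-dimensional setting, this margin can be sketched as the smallest tested deviation from the nominal trajectory that triggers a collision predicate. The `collides` check and the offsets below are illustrative assumptions standing in for a simulator.

```python
def counterfactual_safety_margin(nominal, collides, deltas):
    """Smallest tested deviation from the nominal behaviour that leads to
    a collision; infinity if none of the tested deviations do.

    `collides(trajectory)` is an assumed simulator-backed predicate;
    the perturbation model (a uniform lateral offset) is a toy choice.
    """
    for d in sorted(deltas):
        perturbed = [x + d for x in nominal]
        if collides(perturbed):
            return d
    return float("inf")

# Toy example: lateral positions, collision beyond a 2.0 m boundary.
nominal = [0.0, 0.5, 1.0]
collides = lambda traj: any(abs(x) > 2.0 for x in traj)
margin = counterfactual_safety_margin(nominal, collides, [0.5, 1.0, 1.5])
```

A small margin flags behaviour that is only barely safe; a large (or infinite) margin over the tested deviations indicates more robust behaviour.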
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Safe Explicable Planning [3.3869539907606603]
We propose Safe Explicable Planning (SEP) to support the specification of a safety bound.
Our approach generalizes the consideration of multiple objectives stemming from multiple models.
We provide formal proofs that validate the desired theoretical properties of these methods.
arXiv Detail & Related papers (2023-04-04T21:49:02Z)
- Intention-Aware Decision-Making for Mixed Intersection Scenarios [1.2891210250935146]
This paper presents a white-box, intention-aware decision-making approach for handling interactions between a pedestrian and an automated vehicle.
A design framework has been developed, which enables automated parameterization of the decision-making.
arXiv Detail & Related papers (2023-03-29T13:23:51Z)
- Predicting Autonomous Vehicle Collision Injury Severity Levels for Ethical Decision Making and Path Planning [1.713291434132985]
Developments in autonomous vehicles (AVs) are advancing rapidly, and AVs will become a central part of our society within the next 20 years.
In the event of AV incidents, decisions will need to be made that carry ethical weight, e.g., deciding between colliding with a group of pedestrians or a rigid barrier.
For an AV to undertake such ethical decision making and path planning, simulation models of the situation will be required that are used in real-time on-board the AV.
These models will enable path planning and ethical decision making to be undertaken based on predetermined collision injury severity levels.
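In the simplest case, such predetermined severity levels could be a lookup table consulted during path selection. The severity values and outcome labels below are placeholders for illustration, not the paper's predictions.

```python
# Illustrative predetermined injury severity levels (MAIS-like 0-6 scale)
# per predicted collision outcome; the values are placeholders only.
SEVERITY = {"pedestrian_group": 6, "rigid_barrier": 3, "none": 0}

def plan_path(candidate_paths):
    """Choose the candidate path whose predicted collision outcome has
    the lowest predetermined injury severity level."""
    return min(candidate_paths, key=lambda p: SEVERITY[p["outcome"]])

paths = [
    {"name": "straight", "outcome": "pedestrian_group"},
    {"name": "swerve", "outcome": "rigid_barrier"},
]
chosen = plan_path(paths)  # prefers the barrier over the pedestrian group
```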
arXiv Detail & Related papers (2022-12-16T15:39:44Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Randomized Classifiers vs Human Decision-Makers: Trustworthy AI May Have to Act Randomly and Society Seems to Accept This [0.8889304968879161]
We argue that, akin to human decisions, the judgments of artificial agents should be grounded in moral principles.
Yet a decision-maker can only make truly ethical (based on any ethical theory) and fair (according to any notion of fairness) decisions if full information on all the relevant factors on which the decision is based is available at the time of decision-making.
arXiv Detail & Related papers (2021-11-15T05:39:02Z)
- Generalizing Decision Making for Automated Driving with an Invariant Environment Representation using Deep Reinforcement Learning [55.41644538483948]
Current approaches either do not generalize well beyond the training data or are not capable of considering a variable number of traffic participants.
We propose an invariant environment representation from the perspective of the ego vehicle.
We show that the agents are capable of generalizing successfully to unseen scenarios, due to the abstraction.
arXiv Detail & Related papers (2021-02-12T20:37:29Z)
- Machine Ethics and Automated Vehicles [0.0]
A fully-automated vehicle must continuously decide how to allocate this risk without a human driver's oversight.
I introduce the concept of moral behavior for an automated vehicle, argue the need for research in this area through responses to anticipated critiques, and discuss relevant applications from machine ethics and moral modeling research.
arXiv Detail & Related papers (2020-10-29T15:14:47Z)
- Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? [104.04999499189402]
Out-of-training-distribution (OOD) scenarios are a common challenge for learning agents at deployment.
We propose an uncertainty-aware planning method called Robust Imitative Planning (RIP).
Our method can detect and recover from some distribution shifts, reducing the overconfident and catastrophic extrapolations in OOD scenes.
We introduce an autonomous-car novel-scene benchmark, CARNOVEL, to evaluate the robustness of driving agents to a suite of tasks with distribution shifts.
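A minimal sketch of the worst-case ensemble aggregation behind this style of robust planning: score a candidate plan under every model in an ensemble and keep the most pessimistic value, so plans the models disagree on are penalised. The log-likelihood models and plans below are toy stand-ins.

```python
def rip_score(plan, ensemble, aggregate=min):
    """Worst-case (min over ensemble) score for a candidate plan.

    `ensemble` is a list of functions mapping a plan to a scalar
    log-likelihood-style score (an assumed interface for illustration).
    """
    return aggregate(model(plan) for model in ensemble)

# Two toy models that penalise deviation from the origin, one of them
# more pessimistically; the robust score keeps the worse of the two.
plan = [0.0, 1.0]
ensemble = [
    lambda p: -sum(x * x for x in p),
    lambda p: -2 * sum(x * x for x in p),
]
score = rip_score(plan, ensemble)
```

In an OOD scene the models tend to disagree, driving the worst-case score down and steering the planner away from overconfident extrapolation.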
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.