Analyzing Intentional Behavior in Autonomous Agents under Uncertainty
- URL: http://arxiv.org/abs/2307.01532v1
- Date: Tue, 4 Jul 2023 07:36:11 GMT
- Title: Analyzing Intentional Behavior in Autonomous Agents under Uncertainty
- Authors: Filip Cano Córdoba, Samuel Judson, Timos Antonopoulos, Katrine Bjørner, Nicholas Shoemaker, Scott J. Shapiro, Ruzica Piskac and Bettina Könighofer
- Abstract summary: Principled accountability for autonomous decision-making in uncertain environments requires distinguishing intentional outcomes from negligent designs and from actual accidents.
We propose analyzing the behavior of autonomous agents through a quantitative measure of the evidence of intentional behavior.
In a case study, we show how our method can distinguish between 'intentional' and 'accidental' traffic collisions.
- Score: 3.0099979365586265
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Principled accountability for autonomous decision-making in uncertain
environments requires distinguishing intentional outcomes from negligent
designs and from actual accidents. We propose analyzing the behavior of autonomous
agents through a quantitative measure of the evidence of intentional behavior.
We model an uncertain environment as a Markov Decision Process (MDP). For a
given scenario, we rely on probabilistic model checking to compute the ability
of the agent to influence reaching a certain event. We call this the scope of
agency. We say that there is evidence of intentional behavior if the scope of
agency is high and the decisions of the agent are close to being optimal for
reaching the event. Our method applies counterfactual reasoning to
automatically generate relevant scenarios that can be analyzed to increase the
confidence of our assessment. In a case study, we show how our method can
distinguish between 'intentional' and 'accidental' traffic collisions.
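To make the measure concrete, here is a minimal sketch of how the scope of agency could be computed on a toy MDP, reading it as the gap between the maximum and minimum probability, over the agent's possible policies, of reaching the event. The paper relies on a probabilistic model checker for this step; the toy MDP, state names, and value-iteration routine below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: scope of agency on a toy MDP, read as the gap between the
# maximum and minimum probability (over policies) of reaching an event.
# transitions[state][action] = list of (next_state, probability) pairs.
transitions = {
    "s0": {"a": [("crash", 0.9), ("safe", 0.1)],
           "b": [("crash", 0.1), ("safe", 0.9)]},
    "crash": {},  # absorbing
    "safe": {},   # absorbing
}
event = {"crash"}

def reach_prob(optimize, iters=100):
    """Value iteration for the max/min probability of reaching `event`."""
    p = {s: (1.0 if s in event else 0.0) for s in transitions}
    for _ in range(iters):
        for s, acts in transitions.items():
            if s in event or not acts:
                continue
            vals = [sum(pr * p[t] for t, pr in outs) for outs in acts.values()]
            p[s] = optimize(vals)
    return p

p_max = reach_prob(max)["s0"]  # best the agent could do to cause the event
p_min = reach_prob(min)["s0"]  # best it could do to avoid the event
scope_of_agency = p_max - p_min
print(f"max={p_max:.2f} min={p_min:.2f} scope={scope_of_agency:.2f}")
```

Comparing the reachability probability under the agent's actual policy against p_max would then indicate how close its decisions are to optimal for causing the event, the second ingredient of the evidence measure described above.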
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
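As a rough illustration, true criticality at a state can be estimated by Monte Carlo: roll the policy out unchanged, roll it out again with n initial random actions, and compare average returns. The environment interface (env_reset, env_step) below is a hypothetical stand-in, not an API from the paper.

```python
import random

def true_criticality(env_reset, env_step, policy, actions, state, n, rollouts=200):
    """Monte Carlo estimate of true criticality at `state`: the expected
    drop in return when the agent takes n consecutive random actions before
    resuming its policy. Assumes hypothetical hooks env_reset(state) -> obs
    and env_step(action) -> (obs, reward, done)."""
    def rollout(deviate):
        obs, total, done, t = env_reset(state), 0.0, False, 0
        while not done:
            act = random.choice(actions) if (deviate and t < n) else policy(obs)
            obs, reward, done = env_step(act)
            total += reward
            t += 1
        return total
    on_policy = sum(rollout(False) for _ in range(rollouts)) / rollouts
    deviated = sum(rollout(True) for _ in range(rollouts)) / rollouts
    return on_policy - deviated
```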
arXiv Detail & Related papers (2024-09-26T21:00:45Z)
- Safety Margins for Reinforcement Learning [53.10194953873209]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z)
- Reinforcement Learning with a Terminator [80.34572413850186]
We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds.
We use these to construct a provably-efficient algorithm, which accounts for termination, and bound its regret.
arXiv Detail & Related papers (2022-05-30T18:40:28Z)
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z)
- Deceptive Decision-Making Under Uncertainty [25.197098169762356]
We study the design of autonomous agents that are capable of deceiving outside observers about their intentions while carrying out tasks.
By modeling the agent's behavior as a Markov decision process, we consider a setting where the agent aims to reach one of multiple potential goals.
We propose a novel approach to model observer predictions based on the principle of maximum entropy and to efficiently generate deceptive strategies.
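One plausible reading of the maximum-entropy observer model is that the probability assigned to each candidate goal decays exponentially with how suboptimal the observed trajectory prefix is for that goal. The sketch below is an assumption-laden reconstruction for intuition, not the paper's exact formulation.

```python
import numpy as np

def observer_goal_belief(prefix_cost, cost_to_go, beta=1.0):
    """Maximum-entropy-style observer: belief over goals decays
    exponentially in the total cost of reaching each goal via the
    observed prefix. `prefix_cost[g]` and `cost_to_go[g]` are the cost
    of the observed prefix and the optimal remaining cost for goal g;
    both inputs and the temperature beta are illustrative assumptions."""
    goals = list(prefix_cost)
    total = np.array([prefix_cost[g] + cost_to_go[g] for g in goals])
    logits = -beta * total
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return dict(zip(goals, probs))

# The observed prefix is equally costly for both goals, but goal A is
# much closer, so the observer leans toward A.
print(observer_goal_belief({"A": 4.0, "B": 4.0}, {"A": 1.0, "B": 5.0}))
```

A deceptive agent would then pick actions that keep this belief away from its true goal while still eventually completing the task.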
arXiv Detail & Related papers (2021-09-14T14:56:23Z)
- Ensemble Quantile Networks: Uncertainty-Aware Reinforcement Learning with Applications in Autonomous Driving [1.6758573326215689]
Reinforcement learning can be used to create a decision-making agent for autonomous driving.
Previous approaches provide only black-box solutions, which do not offer information on how confident the agent is about its decisions.
This paper introduces the Ensemble Quantile Networks (EQN) method, which combines distributional RL with an ensemble approach to obtain a complete uncertainty estimate.
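Schematically, given quantile estimates of the return from each ensemble member, the aleatoric uncertainty appears as the spread of the averaged return distribution and the epistemic uncertainty as disagreement between members. The estimator below is a simplified sketch of that decomposition, not the paper's exact method.

```python
import numpy as np

def eqn_uncertainty(quantile_values):
    """Split uncertainty for one state-action pair, given return-quantile
    estimates from an ensemble (shape: n_ensemble x n_quantiles).
    Aleatoric: variance of the ensemble-averaged return distribution.
    Epistemic: variance of the members' expected returns.
    A schematic reading of the EQN idea, with assumed inputs."""
    q = np.asarray(quantile_values, dtype=float)
    member_means = q.mean(axis=1)      # expected return per ensemble member
    aleatoric = q.mean(axis=0).var()   # spread of the averaged distribution
    epistemic = member_means.var()     # disagreement between members
    return member_means.mean(), aleatoric, epistemic

# Example: 3 ensemble members, 5 quantile estimates each
q_vals = [[0.0, 0.5, 1.0, 1.5, 2.0],
          [0.1, 0.6, 1.1, 1.6, 2.1],
          [0.2, 0.4, 0.9, 1.4, 1.9]]
print(eqn_uncertainty(q_vals))
```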
arXiv Detail & Related papers (2021-05-21T10:36:16Z)
- Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
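In code, the core idea reduces to measuring a model's disagreement with a domain-invariant proxy predictor on unlabeled target inputs. The helper below is a hypothetical sketch of that estimator; the function names are placeholders, not the paper's API.

```python
def proxy_risk_estimate(model, proxy, target_inputs):
    """Estimate the model's target-domain 0-1 risk as its rate of
    disagreement with a domain-invariant proxy predictor on unlabeled
    target inputs. `model` and `proxy` each map an input to a label."""
    disagreements = sum(model(x) != proxy(x) for x in target_inputs)
    return disagreements / len(target_inputs)
```

As the summary notes, such an estimate is only as good as the proxy itself: its error is bounded by the proxy model's own risk on the target domain.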
arXiv Detail & Related papers (2020-07-06T17:21:24Z)
- Identifying Causal-Effect Inference Failure with Uncertainty-Aware Models [41.53326337725239]
We introduce a practical approach for integrating uncertainty estimation into a class of state-of-the-art neural network methods.
We show that our methods enable us to deal gracefully with situations of "no-overlap", common in high-dimensional data.
We show that correctly modeling uncertainty can keep us from giving overconfident and potentially harmful recommendations.
arXiv Detail & Related papers (2020-07-01T00:37:41Z)
- Tactical Decision-Making in Autonomous Driving by Reinforcement Learning with Uncertainty Estimation [0.9883261192383611]
Reinforcement learning can be used to create a tactical decision-making agent for autonomous driving.
This paper investigates how a Bayesian RL technique can be used to estimate the uncertainty of decisions in autonomous driving.
arXiv Detail & Related papers (2020-04-22T08:22:28Z)
- Causal Strategic Linear Regression [5.672132510411465]
In many predictive decision-making scenarios, such as credit scoring and academic testing, a decision-maker must construct a model that accounts for agents' propensity to "game" the decision rule.
We join concurrent work in modeling agents' outcomes as a function of their changeable attributes.
We provide efficient algorithms for learning decision rules that optimize three distinct decision-maker objectives.
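For intuition, consider a minimal strategic-response model: if agents face a published linear rule with weights w and pay a quadratic effort cost, their optimal shift on each gameable feature is w_i / cost. The cost model and variable names below are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def best_response(x, w, changeable, cost=1.0):
    """Agent's best response to a published linear scoring rule.
    Maximizing w.(x + dx) - (cost / 2) * ||dx||^2 over shifts dx that
    touch only changeable features gives dx_i = w_i / cost there.
    Illustrative model in the spirit of strategic regression."""
    dx = np.where(changeable, w / cost, 0.0)
    return x + dx

w = np.array([0.8, 0.3])              # decision-rule weights
x = np.array([1.0, 2.0])              # agent's original features
changeable = np.array([True, False])  # only the first feature is gameable
print(best_response(x, w, changeable))  # -> [1.8, 2.0]
```

A decision-maker who anticipates this response can then choose w with the induced shift in mind, which is what the learned decision rules above optimize over.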
arXiv Detail & Related papers (2020-02-24T03:57:22Z)