A Measure for Level of Autonomy Based on Observable System Behavior
- URL: http://arxiv.org/abs/2407.14975v1
- Date: Sat, 20 Jul 2024 20:34:20 GMT
- Title: A Measure for Level of Autonomy Based on Observable System Behavior
- Authors: Jason M. Pittman
- Abstract summary: We present a potential measure for predicting level of autonomy using observable actions.
We also present an algorithm incorporating the proposed measure.
The measure and algorithm have significance to researchers and practitioners interested in a method to blind-compare autonomous systems at runtime.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contemporary artificial intelligence systems are pivotal in enhancing human efficiency and safety across various domains. One such domain is autonomous systems, especially in automotive and defense use cases. Artificial intelligence brings learning and enhanced decision-making to an autonomous system's goal-oriented behaviors and independence from humans. However, the lack of a clear understanding of an autonomous system's capabilities hampers human-machine or machine-machine interaction and interdiction. This necessitates varying degrees of human involvement for safety, accountability, and explainability purposes. Yet, measuring the level of autonomous capability in an autonomous system presents a challenge. Two scales of measurement exist, yet both presuppose a variety of elements not available in the wild. This is why existing measures for level of autonomy are operationalized only during the design or the test and evaluation phases. No measure for level of autonomy based on observed system behavior exists at this time. To address this, we outline a potential measure for predicting level of autonomy using observable actions. We also present an algorithm incorporating the proposed measure. The measure and algorithm have significance to researchers and practitioners interested in a method to blind-compare autonomous systems at runtime. Defense-based implementations are likewise possible because counter-autonomy depends on robust identification of autonomous systems.
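The abstract does not specify the measure or the algorithm, but the core idea of estimating a level of autonomy purely from observed actions can be illustrated with a hypothetical sketch. Everything below is an assumption for illustration: the observable features (`human_command`, `novel_action`), the weights, and the mapping onto five discrete levels are not the authors' method.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Observation:
    """One externally observable action of the system under evaluation."""
    timestamp: float     # seconds since observation began
    human_command: bool  # was the action immediately preceded by a human command?
    novel_action: bool   # did the action deviate from previously observed patterns?


def autonomy_score(observations: List[Observation]) -> float:
    """Score in [0, 1]: the fraction of actions taken without human
    commands, blended with the fraction of novel (self-directed) actions.
    The 0.7/0.3 weighting is purely illustrative."""
    if not observations:
        return 0.0
    n = len(observations)
    independent = sum(1 for o in observations if not o.human_command)
    novel = sum(1 for o in observations if o.novel_action)
    return 0.7 * independent / n + 0.3 * novel / n


def autonomy_level(score: float, n_levels: int = 5) -> int:
    """Map a [0, 1] score onto a discrete level 1..n_levels."""
    return min(n_levels, int(score * n_levels) + 1)
```

A runtime observer could accumulate `Observation` records for each system under test and compare the resulting levels without any access to design documentation, which is the blind, behavior-only setting the abstract describes.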
Related papers
- Exploring the Causality of End-to-End Autonomous Driving
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to unveil the mystery of end-to-end autonomous driving and turn the black box into a white one.
arXiv Detail & Related papers (2024-07-09T04:56:11Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers?
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- A Quantitative Autonomy Quantification Framework for Fully Autonomous Robotic Systems
This paper focuses on the full autonomous mode and proposes a quantitative autonomy assessment framework based on task requirements.
The framework provides not only a tool for quantifying autonomy, but also a regulatory interface and common language for autonomous systems developers and users.
arXiv Detail & Related papers (2023-11-03T14:26:53Z)
- LLM4Drive: A Survey of Large Language Models for Autonomous Driving
Large language models (LLMs) have demonstrated abilities including understanding context, logical reasoning, and generating answers.
In this paper, we systematically review a line of research on Large Language Models for Autonomous Driving (LLM4AD).
arXiv Detail & Related papers (2023-11-02T07:23:33Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z)
- Adaptive Autonomy in Human-on-the-Loop Vision-Based Robotics Systems
Computer vision approaches are widely used by autonomous robotic systems to guide their decision making.
High accuracy is critical, particularly for Human-on-the-loop (HoTL) systems where humans play only a supervisory role.
We propose a solution based upon adaptive autonomy levels, whereby the system detects loss of reliability of these models.
arXiv Detail & Related papers (2021-03-28T05:43:10Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective
It is of primary importance that an autonomous vehicle's driving decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of a vehicle's sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
- Learning to Optimize Autonomy in Competence-Aware Systems
We propose an introspective model of autonomy that is learned and updated online through experience.
We define a competence-aware system (CAS) that explicitly models its own proficiency at different levels of autonomy and the available human feedback.
We analyze the convergence properties of CAS and provide experimental results for robot delivery and autonomous driving domains.
arXiv Detail & Related papers (2020-03-17T14:31:45Z)
- Modeling Perception Errors towards Robust Decision Making in Autonomous Vehicles
We propose a simulation-based methodology towards answering the question: is a perception subsystem sufficient for the decision-making subsystem to make robust, safe decisions?
We show how to analyze the impact of different kinds of sensing and perception errors on the behavior of the autonomous system.
arXiv Detail & Related papers (2020-01-31T08:02:14Z)
- Towards a Framework for Certification of Reliable Autonomous Systems
A computational system is autonomous if it is able to make its own decisions, or take its own actions, without human supervision or control.
Regulators grapple with how to deal with autonomous systems; for example, how could we certify an Unmanned Aerial System for autonomous use in civilian airspace?
Here we analyse what is needed to provide verified, reliable behaviour of an autonomous system.
We propose a roadmap towards developing regulatory guidelines, including articulating challenges to researchers, to engineers, and to regulators.
arXiv Detail & Related papers (2020-01-24T18:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.