Towards a Framework for Certification of Reliable Autonomous Systems
- URL: http://arxiv.org/abs/2001.09124v1
- Date: Fri, 24 Jan 2020 18:18:35 GMT
- Title: Towards a Framework for Certification of Reliable Autonomous Systems
- Authors: Michael Fisher, Viviana Mascardi, Kristin Yvonne Rozier, Bernd-Holger
Schlingloff, Michael Winikoff and Neil Yorke-Smith
- Abstract summary: A computational system is autonomous if it is able to make its own decisions, or take its own actions, without human supervision or control.
Regulators grapple with how to deal with autonomous systems; for example, how could we certify an Unmanned Aerial System for autonomous use in civilian airspace?
We here analyse what is needed in order to provide verified reliable behaviour of an autonomous system.
We propose a roadmap towards developing regulatory guidelines, including articulating challenges to researchers, to engineers, and to regulators.
- Score: 3.3861948721202233
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A computational system is called autonomous if it is able to make its own
decisions, or take its own actions, without human supervision or control. The
capability and spread of such systems have reached the point where they are
beginning to touch much of everyday life. However, regulators grapple with how
to deal with autonomous systems, for example how could we certify an Unmanned
Aerial System for autonomous use in civilian airspace? We here analyse what is
needed in order to provide verified reliable behaviour of an autonomous system,
analyse what can be done as the state-of-the-art in automated verification, and
propose a roadmap towards developing regulatory guidelines, including
articulating challenges to researchers, to engineers, and to regulators. Case
studies in seven distinct domains illustrate the article.
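As a concrete illustration of the kind of "verified reliable behaviour" the abstract refers to, the sketch below shows a minimal runtime monitor for a bounded-response safety requirement ("whenever a hazard is detected, an avoidance action must be issued within k steps"). It is not taken from the paper: the trace format, the event names hazard and avoid, and the bound k are hypothetical choices made purely for illustration.

```python
# Minimal sketch (hypothetical, not from the paper): monitoring a bounded-response
# safety requirement over a recorded execution trace of an autonomous system.
# Property: whenever `trigger` is observed, `response` must follow within k steps.

from typing import Dict, Sequence, Tuple

def check_bounded_response(trace: Sequence[Dict[str, bool]],
                           trigger: str, response: str, k: int) -> Tuple[bool, int]:
    """Return (holds, index): holds is False if the trigger at step `index`
    was not answered within k steps; otherwise (True, -1)."""
    oldest = None  # step index of the oldest trigger still awaiting a response
    for i, obs in enumerate(trace):
        if oldest is not None and i > oldest + k:
            return False, oldest        # deadline passed with no response
        if obs.get(response, False):
            oldest = None               # all outstanding triggers are answered
        if obs.get(trigger, False) and oldest is None:
            oldest = i
    return True, -1                     # no violation observed on this finite trace

# Example: a hazard at step 1 is answered by an avoidance action at step 3 (k = 3).
trace = [{}, {"hazard": True}, {}, {"avoid": True}]
print(check_bounded_response(trace, "hazard", "avoid", k=3))  # (True, -1)
```

Obligations still open when the trace ends are treated here as "not yet violated", one of several possible finite-trace interpretations; a certification argument would have to state such choices explicitly.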
Related papers
- A Measure for Level of Autonomy Based on Observable System Behavior [0.0]
We present a potential measure for predicting level of autonomy using observable actions.
We also present an algorithm incorporating the proposed measure.
The measure and algorithm are of significance to researchers and practitioners who need a method for blind comparison of autonomous systems at runtime.
arXiv Detail & Related papers (2024-07-20T20:34:20Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- A Roadmap for Simulation-Based Testing of Autonomous Cyber-Physical Systems: Challenges and Future Direction [5.742965094549775]
This paper pioneers a strategic roadmap for simulation-based testing of autonomous systems.
Our paper discusses the relevant challenges and obstacles of ACPSs, focusing on test automation and quality assurance.
arXiv Detail & Related papers (2024-05-02T07:42:33Z)
- A Quantitative Autonomy Quantification Framework for Fully Autonomous Robotic Systems [0.0]
This paper focuses on the full autonomous mode and proposes a quantitative autonomy assessment framework based on task requirements.
The framework provides not only a tool for quantifying autonomy, but also a regulatory interface and common language for autonomous systems developers and users.
arXiv Detail & Related papers (2023-11-03T14:26:53Z)
- LLM4Drive: A Survey of Large Language Models for Autonomous Driving [62.10344445241105]
Large language models (LLMs) have demonstrated abilities including understanding context, logical reasoning, and generating answers.
In this paper, we systematically review a research line on Large Language Models for Autonomous Driving (LLM4AD).
arXiv Detail & Related papers (2023-11-02T07:23:33Z)
- Assurance for Autonomy -- JPL's past research, lessons learned, and future directions [56.32768279109502]
Autonomy is required when a wide variation in circumstances precludes responses being pre-planned.
Mission assurance is a key contributor to providing confidence, yet assurance practices honed over decades of spaceflight have relatively little experience with autonomy.
Researchers in JPL's software assurance group have been involved in the development of techniques specific to the assurance of autonomy.
arXiv Detail & Related papers (2023-05-16T18:24:12Z)
- Rethinking Certification for Higher Trust and Ethical Safeguarding of Autonomous Systems [6.24907186790431]
We discuss the motivation for the need to modify the current certification processes for autonomous driving systems.
We identify a number of issues with the proposed certification strategies, which may impact the systems substantially.
arXiv Detail & Related papers (2023-03-16T15:19:25Z)
- Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z)
- AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles [76.46575807165729]
We propose AdvSim, an adversarial framework to generate safety-critical scenarios for any LiDAR-based autonomy system.
By simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack.
arXiv Detail & Related papers (2021-01-16T23:23:12Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
- Regulating Safety and Security in Autonomous Robotic Systems [0.0]
Rules for autonomous systems are often difficult to formalise.
In the space and nuclear sectors, applications are more likely to differ from one another, so a set of general safety principles has developed.
These principles allow novel applications to be assessed for their safety, but they are themselves difficult to formalise.
We are collaborating with regulators and the community in the space and nuclear sectors to develop guidelines for autonomous and robotic systems.
arXiv Detail & Related papers (2020-07-09T16:33:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.