Regulating Safety and Security in Autonomous Robotic Systems
- URL: http://arxiv.org/abs/2007.08006v1
- Date: Thu, 9 Jul 2020 16:33:14 GMT
- Title: Regulating Safety and Security in Autonomous Robotic Systems
- Authors: Matt Luckcuck and Marie Farrell
- Abstract summary: Rules for autonomous systems are often difficult to formalise.
In the space and nuclear sectors, applications are more likely to differ, so a set of general safety principles has developed.
This allows novel applications to be assessed for their safety, but such principles are difficult to formalise.
We are collaborating with regulators and the community in the space and nuclear sectors to develop guidelines for autonomous and robotic systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous Robotic Systems are inherently safety-critical and have complex
safety issues to consider (for example, a security failure can lead to a safety
failure). Before they are deployed, these systems have to show evidence that
they adhere to a set of regulator-defined rules for safety and security. Formal
methods provide robust approaches to proving a system obeys given rules, but
formalising (usually natural language) rules can prove difficult. Regulations
specifically for autonomous systems are still being developed, but the safety
rules for a human operator are a good starting point when trying to show that
an autonomous system is safe. For applications of autonomous systems like
driverless cars and pilotless aircraft, there are clear rules for human
operators, which have been formalised and used to prove that an autonomous
system obeys some or all of these rules. However, in the space and nuclear
sectors, applications are more likely to differ, so a set of general safety
principles has developed. This allows novel applications to be assessed for
their safety, but such principles are difficult to formalise. To improve this situation, we are
collaborating with regulators and the community in the space and nuclear
sectors to develop guidelines for autonomous and robotic systems that are
amenable to robust (formal) verification. These activities also have the
benefit of bridging the gaps in knowledge within both the space and nuclear
communities and academia.
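To make the core idea concrete, the sketch below is purely illustrative and not taken from the paper: it shows one way a natural-language operator rule might be turned into a machine-checkable property and monitored over a recorded trace. The rule wording, the 50 m threshold, the data fields, and all function names are assumptions introduced here for illustration only.

```python
# Illustrative sketch (not from the paper): encoding a hypothetical
# natural-language operator rule as a machine-checkable safety property.
# The rule text, the 50 m threshold, and all names here are assumptions.
from dataclasses import dataclass


@dataclass
class State:
    """A single snapshot of the robot's situation."""
    distance_to_nearest_person_m: float
    emergency_stop_engaged: bool


MIN_SEPARATION_M = 50.0  # hypothetical regulator-defined threshold


def rule_separation(state: State) -> bool:
    """Formalisation of: 'keep at least 50 m from any person,
    unless the emergency stop is engaged'."""
    return (state.distance_to_nearest_person_m >= MIN_SEPARATION_M
            or state.emergency_stop_engaged)


def monitor(trace: list[State]) -> list[int]:
    """Return the indices of trace states that violate the rule,
    providing evidence for (or against) compliance with it."""
    return [i for i, s in enumerate(trace) if not rule_separation(s)]


if __name__ == "__main__":
    trace = [State(120.0, False), State(30.0, False), State(30.0, True)]
    print(monitor(trace))  # -> [1]: only the unflagged close approach violates
```

A property in this shape could equally be expressed in a temporal logic and handed to a model checker or runtime-verification tool, which is closer to the robust (formal) verification the abstract argues regulators' guidelines should support.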
Related papers
- Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones.
However, their risks of causing physical threats and harm in real-world applications remain unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
arXiv Detail & Related papers (2024-11-04T17:41:25Z)
- Safety cases for frontier AI [0.8987776881291144]
Safety cases are reports that make a structured argument, supported by evidence, that a system is safe enough in a given operational context.
Safety cases are already common in other safety-critical industries such as aviation and nuclear power.
We explain why they may also be a useful tool in frontier AI governance, both in industry self-regulation and government regulation.
arXiv Detail & Related papers (2024-10-28T22:08:28Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Redefining Safety for Autonomous Vehicles [0.9208007322096532]
Existing definitions and associated conceptual frameworks for computer-based system safety should be revisited.
Operation without a human driver dramatically increases the scope of safety concerns.
We propose updated definitions for core system safety concepts.
arXiv Detail & Related papers (2024-04-25T17:22:43Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- No Trust without regulation! [0.0]
The explosion in performance of Machine Learning (ML) and the potential of its applications are encouraging us to consider its use in industrial systems.
However, the issue of safety, and its corollary of regulation and standards, is still too often left to one side.
The European Commission has laid the foundations for moving forward and building solid approaches to the integration of AI-based applications that are safe, trustworthy and respect European ethical values.
arXiv Detail & Related papers (2023-09-27T09:08:41Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Negative Human Rights as a Basis for Long-term AI Safety and Regulation [1.5229257192293197]
General principles guiding autonomous AI systems to recognize and avoid harmful behaviours may need to be supported by a binding system of regulation.
They should also be specific enough for technical implementation.
This article draws inspiration from law to explain how negative human rights could fulfil the role of such principles.
arXiv Detail & Related papers (2022-08-31T11:57:13Z)
- AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles [76.46575807165729]
We propose AdvSim, an adversarial framework to generate safety-critical scenarios for any LiDAR-based autonomy system.
By simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack.
arXiv Detail & Related papers (2021-01-16T23:23:12Z)
- Safe Reinforcement Learning via Curriculum Induction [94.67835258431202]
In safety-critical applications, autonomous agents may need to learn in an environment where mistakes can be very costly.
Existing safe reinforcement learning methods make an agent rely on priors that let it avoid dangerous situations.
This paper presents an alternative approach inspired by human teaching, where an agent learns under the supervision of an automatic instructor.
arXiv Detail & Related papers (2020-06-22T10:48:17Z)
- Towards a Framework for Certification of Reliable Autonomous Systems [3.3861948721202233]
A computational system is autonomous if it is able to make its own decisions, or take its own actions, without human supervision or control.
Regulators grapple with how to deal with autonomous systems: for example, how could an Unmanned Aerial System be certified for autonomous use in civilian airspace?
We here analyse what is needed in order to provide verified reliable behaviour of an autonomous system.
We propose a roadmap towards developing regulatory guidelines, including articulating challenges to researchers, to engineers, and to regulators.
arXiv Detail & Related papers (2020-01-24T18:18:35Z)