Autonomy and Safety Assurance in the Early Development of Robotics and Autonomous Systems
- URL: http://arxiv.org/abs/2501.18448v1
- Date: Thu, 30 Jan 2025 16:00:26 GMT
- Title: Autonomy and Safety Assurance in the Early Development of Robotics and Autonomous Systems
- Authors: Dhaminda B. Abeywickrama, Michael Fisher, Frederic Wheeler, Louise Dennis,
- Abstract summary: CRADLE aims to make assurance an integral part of engineering reliable, transparent, and trustworthy autonomous systems.
The workshop brought together representatives from six regulatory and assurance bodies across diverse sectors.
Key discussions revolved around three research questions: (i) challenges in assuring safety for AIR; (ii) evidence for safety assurance; and (iii) how assurance cases need to differ for autonomous systems.
- Score: 0.8999666725996975
- Abstract: This report provides an overview of the workshop titled Autonomy and Safety Assurance in the Early Development of Robotics and Autonomous Systems, hosted by the Centre for Robotic Autonomy in Demanding and Long-Lasting Environments (CRADLE) on September 2, 2024, at The University of Manchester, UK. The event brought together representatives from six regulatory and assurance bodies across diverse sectors to discuss challenges and evidence for ensuring the safety of autonomous and robotic systems, particularly autonomous inspection robots (AIR). The workshop featured six invited talks by the regulatory and assurance bodies. CRADLE aims to make assurance an integral part of engineering reliable, transparent, and trustworthy autonomous systems. Key discussions revolved around three research questions: (i) challenges in assuring safety for AIR; (ii) evidence for safety assurance; and (iii) how assurance cases need to differ for autonomous systems. Following the invited talks, the breakout groups further discussed the research questions using case studies from ground (rail), nuclear, underwater, and drone-based AIR. This workshop offered a valuable opportunity for representatives from industry, academia, and regulatory bodies to discuss challenges related to assured autonomy. Feedback from participants indicated a strong willingness to adopt a design-for-assurance process to ensure that robots are developed and verified to meet regulatory expectations.
Related papers
- Generative AI Agents in Autonomous Machines: A Safety Perspective [9.02400798202199]
Generative AI agents provide unparalleled capabilities, but they also raise unique safety concerns.
This work investigates the evolving safety requirements when generative models are integrated as agents into physical autonomous machines.
We recommend the development and implementation of comprehensive safety scorecards for the use of generative AI technologies in autonomous machines.
arXiv Detail & Related papers (2024-10-20T20:07:08Z) - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
These approaches rest on three core components: a world model, a safety specification, and a verifier. We outline a number of approaches for creating each of these components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z) - A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such evaluation and red-teaming research, or releasing their findings, will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - Security Challenges in Autonomous Systems Design [1.864621482724548]
With the independence from human control, cybersecurity of such systems becomes even more critical.
This paper thoroughly discusses the state of the art, identifies emerging security challenges and proposes research directions.
arXiv Detail & Related papers (2023-11-05T09:17:39Z) - Autonomous Vehicles an overview on system, cyber security, risks, issues, and a way forward [0.0]
This chapter explores the complex domain of autonomous cars, analyzing their fundamental components and operational characteristics.
The primary focus of the investigation is cybersecurity, specifically in the context of autonomous vehicles.
It provides a comprehensive analysis of risk management solutions aimed at protecting these vehicles from potential threats.
arXiv Detail & Related papers (2023-09-25T15:19:09Z) - Assurance for Autonomy -- JPL's past research, lessons learned, and future directions [56.32768279109502]
Autonomy is required when a wide variation in circumstances precludes responses from being pre-planned.
Mission assurance is a key contributor to providing confidence, yet assurance practices honed over decades of spaceflight have relatively little experience with autonomy.
Researchers in JPL's software assurance group have been involved in the development of techniques specific to the assurance of autonomy.
arXiv Detail & Related papers (2023-05-16T18:24:12Z) - Artificial Intelligence Security Competition (AISC) [52.20676747225118]
The Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, China Industrial Control Systems Cyber Emergency Response Team, Institute for Artificial Intelligence, Tsinghua University, and RealAI.
The competition consists of three tracks: the Deepfake Security Competition, the Autonomous Driving Security Competition, and the Face Recognition Security Competition.
This report will introduce the competition rules of these three tracks and the solutions of top-ranking teams in each track.
arXiv Detail & Related papers (2022-12-07T02:45:27Z) - Assured Autonomy: Path Toward Living With Autonomous Systems We Can Trust [17.71048945905425]
Autonomy is a broad and expansive capability that enables systems to behave without direct control by a human operator.
The first workshop, held in October 2019, focused on current and anticipated challenges and problems in assuring autonomous systems.
The second workshop, held in February 2020, focused on existing capabilities, current research, and research trends that could address the challenges and problems identified in the first workshop.
arXiv Detail & Related papers (2020-10-27T17:00:01Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z) - Towards a Framework for Certification of Reliable Autonomous Systems [3.3861948721202233]
A computational system is autonomous if it is able to make its own decisions, or take its own actions, without human supervision or control.
Regulators grapple with how to deal with autonomous systems; for example, how could we certify an Unmanned Aerial System for autonomous use in civilian airspace?
We analyse here what is needed in order to provide verified, reliable behaviour of an autonomous system.
We propose a roadmap towards developing regulatory guidelines, including articulating challenges to researchers, to engineers, and to regulators.
arXiv Detail & Related papers (2020-01-24T18:18:35Z)