Systems Challenges for Trustworthy Embodied Systems
- URL: http://arxiv.org/abs/2201.03413v1
- Date: Mon, 10 Jan 2022 15:52:17 GMT
- Title: Systems Challenges for Trustworthy Embodied Systems
- Authors: Harald Ruess
- Abstract summary: A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed.
It is crucial to coordinate the behavior of embodied systems in a beneficial manner, ensure their compatibility with our human-centered social values, and design verifiably safe and reliable human-machine interaction.
We argue that traditional systems engineering is coming to a climacteric from embedded to embodied systems, and with it the challenge of assuring the trustworthiness of dynamic federations of situationally aware, intent-driven, explorative, ever-evolving, largely non-predictable, and increasingly autonomous embodied systems in uncertain, complex, and unpredictable real-world contexts.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A new generation of increasingly autonomous and self-learning systems, which
we call embodied systems, is about to be developed. When deploying these
systems into a real-life context we face various engineering challenges, as it
is crucial to coordinate the behavior of embodied systems in a beneficial
manner, ensure their compatibility with our human-centered social values, and
design verifiably safe and reliable human-machine interaction. We argue
that traditional systems engineering is coming to a climacteric from embedded to
embodied systems, and with it the challenge of assuring the trustworthiness of
dynamic federations of situationally aware, intent-driven, explorative,
ever-evolving, largely non-predictable, and increasingly autonomous embodied
systems in uncertain, complex, and unpredictable real-world contexts. We also identify a
number of urgent systems challenges for trustworthy embodied systems, including
robust and human-centric AI, cognitive architectures, uncertainty
quantification, trustworthy self-integration, and continual analysis and
assurance.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that many shortcomings of current AI systems stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z) - Security Challenges in Autonomous Systems Design [1.864621482724548]
With their independence from human control, the cybersecurity of such systems becomes even more critical.
This paper thoroughly discusses the state of the art, identifies emerging security challenges and proposes research directions.
arXiv Detail & Related papers (2023-11-05T09:17:39Z) - Future Vision of Dynamic Certification Schemes for Autonomous Systems [3.151005833357807]
We identify several issues with the current certification strategies that could pose serious safety risks.
We highlight the inadequate reflection of software changes in constantly evolving systems and the lack of support for systems' cooperation.
Other shortcomings include the narrow focus of awarded certification, neglecting aspects such as the ethical behavior of autonomous software systems.
arXiv Detail & Related papers (2023-08-20T19:06:57Z) - Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z) - Collective Reasoning for Safe Autonomous Systems [0.0]
We introduce the idea of increasing the reliability of autonomous systems by relying on collective intelligence.
We define and formalize design rules for collective reasoning to achieve collaboratively increased safety, trustworthiness, and good decision making.
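As a hedged illustration of one collective-reasoning pattern (a minimal sketch, not the paper's formalization), redundant agents can vote on an action and the federation only commits when a quorum agrees, otherwise falling back to a safe default:

```python
from collections import Counter

def collective_decision(votes, quorum=0.5):
    """Majority vote among redundant agents; abstain if no quorum is reached.

    votes: list of proposed actions, one per agent.
    quorum: fraction of agents that must agree before committing.
    Returns the agreed action, or None (caller applies a safe fallback).
    """
    if not votes:
        return None
    action, count = Counter(votes).most_common(1)[0]
    return action if count / len(votes) > quorum else None

# Three agents agree, one dissents: the collective proceeds.
print(collective_decision(["brake", "brake", "brake", "accelerate"]))  # brake
# A split vote yields no decision, forcing the safe fallback.
print(collective_decision(["brake", "accelerate"]))  # None
```

The abstention case is what makes the scheme a safety mechanism rather than plain voting: disagreement is surfaced as a signal instead of being silently resolved.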
arXiv Detail & Related papers (2023-05-18T20:37:32Z) - AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z) - Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
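One common way to expose such uncertainty, sketched here as a hedged illustration rather than the paper's own method, is to use disagreement across an ensemble of models as a proxy for epistemic uncertainty (the `ensemble_predict` helper and the toy regressors below are hypothetical):

```python
import statistics

def ensemble_predict(models, x):
    """Return the predictive mean and an epistemic-uncertainty proxy.

    The population variance of the ensemble members' predictions serves
    as the uncertainty estimate: high variance means the models disagree.
    """
    preds = [m(x) for m in models]
    return statistics.mean(preds), statistics.pvariance(preds)

# Hypothetical ensemble: three regressors that disagree slightly.
models = [lambda x: 2.0 * x, lambda x: 2.1 * x, lambda x: 1.9 * x]
mean, var = ensemble_predict(models, 10.0)
# Low variance: the ensemble agrees. High variance would flag inputs
# the system should treat cautiously, e.g. defer to a human operator.
print(mean, var)
```

A downstream controller can then act on the variance, for example by rejecting predictions whose uncertainty exceeds a calibrated threshold.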
arXiv Detail & Related papers (2021-07-28T10:28:05Z) - Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z) - A Structured Approach to Trustworthy Autonomous/Cognitive Systems [4.56877715768796]
There is no generally accepted approach to ensuring trustworthiness.
This paper presents a framework to fill exactly this gap.
It proposes a reference lifecycle as a structured approach that is based on current safety standards.
arXiv Detail & Related papers (2020-02-19T14:36:27Z) - AAAI FSS-19: Human-Centered AI: Trustworthiness of AI Models and Data Proceedings [8.445274192818825]
It is crucial for predictive models to be uncertainty-aware and yield trustworthy predictions.
The focus of this symposium was on AI systems that improve data quality, technical robustness, and safety.
Submissions from broadly defined areas also discussed approaches addressing requirements such as explainable models, human trust, and ethical aspects of AI.
arXiv Detail & Related papers (2020-01-15T15:30:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.