The Ecosystem of Trust (EoT): Enabling effective deployment of
autonomous systems through collaborative and trusted ecosystems
- URL: http://arxiv.org/abs/2312.00629v1
- Date: Fri, 1 Dec 2023 14:47:36 GMT
- Title: The Ecosystem of Trust (EoT): Enabling effective deployment of
autonomous systems through collaborative and trusted ecosystems
- Authors: Jon Arne Glomsrud and Tita Alissa Bach (Group Research and
Development, DNV, Høvik, Norway)
- Abstract summary: We propose an ecosystem of trust approach to support deployment of technology.
We argue that assurance, defined as grounds for justified confidence, is a prerequisite to enable the approach.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ecosystems are ubiquitous but trust within them is not guaranteed. Trust is
paramount because stakeholders within an ecosystem must collaborate to achieve
their objectives. With the twin transitions, the digital transformation
proceeding in parallel with the green transition and accelerating the
deployment of autonomous systems, trust has become even more critical to
ensure that the deployed
technology creates value. To address this need, we propose an ecosystem of
trust approach to support deployment of technology by enabling trust among and
between stakeholders, technologies and infrastructures, institutions and
governance, and the artificial and natural environments in an ecosystem. The
approach can help the stakeholders in the ecosystem to create, deliver, and
receive value by addressing their concerns and aligning their objectives. We
present an autonomous, zero-emission ferry as a real-world use case to
demonstrate the approach from a stakeholder perspective. We argue that
assurance, defined as grounds for justified confidence originating from evidence
and knowledge, is a prerequisite to enable the approach. Assurance provides
evidence and knowledge that are collected, analysed, and communicated in a
systematic, targeted, and meaningful way. Assurance can enable the approach to
help successfully deploy technology by ensuring that risk is managed, trust is
shared, and value is created.
Related papers
- Reliability, Resilience and Human Factors Engineering for Trustworthy AI Systems [6.120331132009475]
We offer a framework that integrates established reliability and resilience engineering principles into AI systems.
We propose an integrated framework to manage AI system performance and prevent or efficiently recover from failures.
We apply our framework to a real-world AI system, using system status data from platforms such as OpenAI, to demonstrate its practical applicability.
arXiv Detail & Related papers (2024-11-13T19:16:44Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Distributed Trust Through the Lens of Software Architecture [13.732161898452377]
This paper will survey the concept of distributed trust in multiple disciplines.
It will take a system/software architecture point of view to look at trust redistribution/shift and the associated tradeoffs in systems and applications enabled by distributed trust technologies.
arXiv Detail & Related papers (2023-05-25T06:53:18Z)
- Collective Reasoning for Safe Autonomous Systems [0.0]
We introduce the idea of increasing the reliability of autonomous systems by relying on collective intelligence.
We define and formalize design rules for collective reasoning to achieve collaboratively increased safety, trustworthiness and good decision making.
arXiv Detail & Related papers (2023-05-18T20:37:32Z)
- Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Systems Challenges for Trustworthy Embodied Systems [0.0]
A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed.
It is crucial to coordinate the behavior of embodied systems in a beneficial manner, ensure their compatibility with our human-centered social values, and design verifiably safe and reliable human-machine interaction.
We argue that traditional systems engineering is coming to a climacteric in the shift from embedded to embodied systems, and with assuring the trustworthiness of dynamic federations of situationally aware, intent-driven, explorative, ever-evolving, largely non-predictable, and increasingly autonomous embodied systems
arXiv Detail & Related papers (2022-01-10T15:52:17Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Towards a Policy-as-a-Service Framework to Enable Compliant, Trustworthy AI and HRI Systems in the Wild [7.225523345649149]
Building trustworthy autonomous systems is challenging for many reasons beyond simply trying to engineer agents that 'always do the right thing'.
There is a broader context that is often not considered within AI and HRI: that the problem of trustworthiness is inherently socio-technical.
This paper emphasizes the "fuzzy" socio-technical aspects of trustworthiness and the need for their careful consideration during both design and deployment.
arXiv Detail & Related papers (2020-10-06T18:32:31Z)
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.