Towards a Policy-as-a-Service Framework to Enable Compliant, Trustworthy AI and HRI Systems in the Wild
- URL: http://arxiv.org/abs/2010.07022v1
- Date: Tue, 6 Oct 2020 18:32:31 GMT
- Title: Towards a Policy-as-a-Service Framework to Enable Compliant, Trustworthy AI and HRI Systems in the Wild
- Authors: Alexis Morris and Hallie Siegel and Jonathan Kelly
- Abstract summary: Building trustworthy autonomous systems is challenging for many reasons beyond simply trying to engineer agents that 'always do the right thing.'
There is a broader context that is often not considered within AI and HRI: that the problem of trustworthiness is inherently socio-technical.
This paper emphasizes the "fuzzy" socio-technical aspects of trustworthiness and the need for their careful consideration during both design and deployment.
- Score: 7.225523345649149
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building trustworthy autonomous systems is challenging for many reasons
beyond simply trying to engineer agents that 'always do the right thing.' There
is a broader context that is often not considered within AI and HRI: that the
problem of trustworthiness is inherently socio-technical and ultimately
involves a broad set of complex human factors and multidimensional
relationships that can arise between agents, humans, organizations, and even
governments and legal institutions, each with their own understanding and
definitions of trust. This complexity presents a significant barrier to the
development of trustworthy AI and HRI systems---while systems developers may
desire to have their systems 'always do the right thing,' they generally lack
the practical tools and expertise in law, regulation, policy and ethics to
ensure this outcome. In this paper, we emphasize the "fuzzy" socio-technical
aspects of trustworthiness and the need for their careful consideration during
both design and deployment. We hope to contribute to the discussion of
trustworthy engineering in AI and HRI by i) describing the policy landscape
that must be considered when addressing trustworthy computing and the need for
usable trust models, ii) highlighting an opportunity for trustworthy-by-design
intervention within the systems engineering process, and iii) introducing the
concept of a "policy-as-a-service" (PaaS) framework that can be readily applied
by AI systems engineers to address the fuzzy problem of trust during the
development and (eventually) runtime process. We envision that the PaaS
approach, which offloads the development of policy design parameters and
maintenance of policy standards to policy experts, will enable runtime trust
capabilities for intelligent systems in the wild.
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Towards Trustworthy AI: A Review of Ethical and Robust Large Language Models [1.7466076090043157]
Large Language Models (LLMs) could transform many fields, but their fast development creates significant challenges for oversight, ethical creation, and building user trust.
This comprehensive review looks at key trust issues in LLMs, such as unintended harms, lack of transparency, vulnerability to attacks, alignment with human values, and environmental impact.
To tackle these issues, we suggest combining ethical oversight, industry accountability, regulation, and public involvement.
arXiv Detail & Related papers (2024-06-01T14:47:58Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- No Trust without regulation! [0.0]
The explosion in performance of Machine Learning (ML) and the potential of its applications are encouraging us to consider its use in industrial systems.
However, the issue of safety and its corollary, regulation and standards, is still left too much to one side.
The European Commission has laid the foundations for moving forward and building solid approaches to the integration of AI-based applications that are safe, trustworthy and respect European ethical values.
arXiv Detail & Related papers (2023-09-27T09:08:41Z)
- Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust [0.0]
We argue for the need to distinguish these concepts more clearly.
We discuss that trust in AI involves not only reliance on the system itself, but also trust in the developers of the AI system.
arXiv Detail & Related papers (2023-09-19T05:00:34Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems were found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, etc.
In this review, we strive to provide AI practitioners a comprehensive guide towards building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z)
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.