The Sanction of Authority: Promoting Public Trust in AI
- URL: http://arxiv.org/abs/2102.04221v1
- Date: Fri, 22 Jan 2021 22:01:30 GMT
- Title: The Sanction of Authority: Promoting Public Trust in AI
- Authors: Bran Knowles and John T. Richards
- Abstract summary: We argue that public distrust of AI originates from the under-development of a regulatory ecosystem that would guarantee the trustworthiness of the AIs that pervade society.
We elaborate the pivotal role of externally auditable AI documentation within this model and the work to be done to ensure it is effective.
- Score: 4.729969944853141
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trusted AI literature to date has focused on the trust needs of users who
knowingly interact with discrete AIs. Conspicuously absent from the literature
is a rigorous treatment of public trust in AI. We argue that public distrust of
AI originates from the under-development of a regulatory ecosystem that would
guarantee the trustworthiness of the AIs that pervade society. Drawing from
structuration theory and literature on institutional trust, we offer a model of
public trust in AI that differs starkly from models driving Trusted AI efforts.
This model provides a theoretical scaffolding for Trusted AI research which
underscores the need to develop nothing less than a comprehensive and visibly
functioning regulatory ecosystem. We elaborate the pivotal role of externally
auditable AI documentation within this model and the work to be done to ensure
it is effective, and outline a number of actions that would promote public
trust in AI. We discuss how existing efforts to develop AI documentation within
organizations -- both to inform potential adopters of AI components and support
the deliberations of risk and ethics review boards -- are necessary but
insufficient assurance of the trustworthiness of AI. We argue that being
accountable to the public in ways that earn their trust, through elaborating
rules for AI and developing resources for enforcing these rules, is what will
ultimately make AI trustworthy enough to be woven into the fabric of our
society.
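The paper's central mechanism, externally auditable AI documentation, can be made concrete as a machine-readable record that outside auditors can verify. The sketch below is illustrative only: the field names and the hashing step are assumptions of this summary, not a schema proposed by the authors.

```python
# A minimal sketch of an externally auditable AI documentation record.
# All field names are illustrative assumptions, not a schema from the paper.
import hashlib
import json
from dataclasses import dataclass, asdict, field

@dataclass
class AIDocumentationRecord:
    system_name: str
    version: str
    intended_use: str
    training_data_summary: str
    known_limitations: list = field(default_factory=list)
    review_board_findings: list = field(default_factory=list)

    def to_json(self) -> str:
        # Canonical serialization so independent auditors compute the same bytes.
        return json.dumps(asdict(self), sort_keys=True)

    def fingerprint(self) -> str:
        # A content hash lets an external auditor confirm that the published
        # documentation matches what the organization attested to.
        return hashlib.sha256(self.to_json().encode("utf-8")).hexdigest()

record = AIDocumentationRecord(
    system_name="loan-screening-model",
    version="1.2.0",
    intended_use="Pre-screening of consumer loan applications",
    training_data_summary="2015-2020 anonymized application data",
    known_limitations=["Not validated for small-business loans"],
)
print(record.fingerprint())
```

Under this reading, a regulator could confirm that the documentation an organization publishes is the same documentation its review board approved.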
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
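The guide above argues that trustworthiness requirements translate into design choices among the components of empirical risk minimization. Below is a minimal numpy sketch of that framing, minimizing mean loss plus an L2 penalty; the specific loss, regularizer, and their mapping to requirements are illustrative assumptions, not the guide's prescriptions.

```python
# A sketch of empirical risk minimization where trustworthiness-relevant
# design choices appear as explicit components: the loss (what errors cost),
# the regularizer (a handle on simplicity/robustness), and its weight lam.
# The mapping is illustrative, not the cited guide's actual recipe.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))           # toy features
y = (X @ rng.normal(size=5) > 0) * 1.0  # toy binary labels in {0, 1}

def logistic_loss(w, X, y):
    z = X @ w
    # Numerically stable mean logistic loss: log(1 + e^z) - y * z.
    return np.mean(np.logaddexp(0.0, z) - y * z)

def erm_objective(w, X, y, lam):
    # Empirical risk + L2 penalty: lam trades accuracy against model norm,
    # one of the design choices framed here as a trustworthiness lever.
    return logistic_loss(w, X, y) + lam * np.dot(w, w)

def fit(X, y, lam=0.1, lr=0.5, steps=500):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))        # predicted probabilities
        grad = X.T @ (p - y) / len(y) + 2 * lam * w
        w -= lr * grad
    return w

w = fit(X, y)
print("objective:", erm_objective(w, X, y, lam=0.1))
```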
- Responsible Artificial Intelligence: A Structured Literature Review [0.0]
The EU has recently issued several publications emphasizing the necessity of trust in AI.
This highlights the urgent need for international regulation.
This paper introduces a comprehensive and, to our knowledge, the first unified definition of responsible AI.
arXiv Detail & Related papers (2024-03-11T17:01:13Z)
- Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust [0.0]
We argue for the need to distinguish these concepts more clearly.
We discuss that trust in AI involves not only reliance on the system itself, but also trust in the developers of the AI system.
arXiv Detail & Related papers (2023-09-19T05:00:34Z)
- Never trust, always verify : a roadmap for Trustworthy AI? [12.031113181911627]
We examine trust in the context of AI-based systems to understand what it means for an AI system to be trustworthy.
We suggest a trust (resp. zero-trust) model for AI and propose a set of properties that should be satisfied to ensure the trustworthiness of AI systems.
arXiv Detail & Related papers (2022-06-23T21:13:10Z)
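The "never trust, always verify" roadmap above proposes gating reliance on an AI system behind a set of properties that must be verified. A minimal sketch of such a zero-trust gate follows; the property names and checks are hypothetical illustrations, not the paper's proposed property set.

```python
# A sketch of a zero-trust gate for AI outputs: nothing is relied on until
# every declared property check passes. Property names and checks here are
# hypothetical illustrations, not the paper's proposed property set.
from typing import Callable

def within_training_distribution(request: dict) -> bool:
    return request["amount"] <= 50_000  # stand-in for a real OOD check

def confidence_is_sufficient(request: dict) -> bool:
    return request["model_confidence"] >= 0.9

PROPERTY_CHECKS: dict[str, Callable[[dict], bool]] = {
    "in_distribution": within_training_distribution,
    "high_confidence": confidence_is_sufficient,
}

def verified_decision(request: dict) -> str:
    failures = [name for name, check in PROPERTY_CHECKS.items()
                if not check(request)]
    if failures:
        # Zero trust: any unverified property routes to a human instead.
        return f"escalate_to_human (failed: {', '.join(failures)})"
    return "accept_model_output"

print(verified_decision({"amount": 12_000, "model_confidence": 0.95}))
print(verified_decision({"amount": 90_000, "model_confidence": 0.95}))
```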
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
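The AI2 entry above calls for explicit quantification of user confidence in AI recommendations. One common, simple quantification is the margin between the top two softmax probabilities; the sketch below uses it purely as an illustration, not as the paper's method.

```python
# A sketch of one way to quantify confidence in a recommendation: the margin
# between the top two softmax probabilities. This is a common heuristic,
# not the specific quantification proposed in the AI2 paper.
import numpy as np

def softmax(logits):
    z = logits - logits.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def recommend_with_confidence(logits, labels):
    probs = softmax(np.asarray(logits, dtype=float))
    order = np.argsort(probs)[::-1]
    margin = probs[order[0]] - probs[order[1]]  # top-1 vs. top-2 gap
    return labels[order[0]], float(probs[order[0]]), float(margin)

label, prob, margin = recommend_with_confidence(
    [2.1, 1.9, -0.5], ["approve", "review", "reject"]
)
print(f"recommendation={label} p={prob:.2f} margin={margin:.2f}")
```

A small margin signals a near-tie between options, which is exactly the situation where surfacing confidence to the user matters most.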
- Filling gaps in trustworthy development of AI [20.354549569362035]
Growing awareness of potential risks from AI systems has spurred action to address those risks.
But the principles often leave a gap between the "what" and the "how" of trustworthy AI development.
There is thus an urgent need for concrete methods that both enable AI developers to prevent harm and allow them to demonstrate their trustworthiness.
arXiv Detail & Related papers (2021-12-14T22:45:28Z)
- Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems were found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, etc.
In this review, we strive to provide AI practitioners a comprehensive guide towards building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
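"Contractual trust" in the entry above -- trust that some implicit or explicit contract between a user and an AI will hold -- invites a small formalization. The sketch below is one possible reading, with invented contract terms; it is not the authors' formalism.

```python
# A sketch of 'contractual trust': trust is warranted to the extent that the
# contract's terms actually hold in observed behavior. The terms below are
# invented for illustration; the paper's formalism is more general.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ContractTerm:
    description: str
    explicit: bool                 # stated in docs vs. implicitly assumed
    holds: Callable[[dict], bool]  # evaluated against observed behavior

contract = [
    ContractTerm("Accuracy at least 90% on audit set", True,
                 lambda obs: obs["audit_accuracy"] >= 0.90),
    ContractTerm("No use of protected attributes", False,
                 lambda obs: not obs["uses_protected_attributes"]),
]

observed = {"audit_accuracy": 0.93, "uses_protected_attributes": False}

# Trust is 'warranted' here only if every contractual term holds.
broken = [t.description for t in contract if not t.holds(observed)]
print("trust warranted" if not broken else f"contract broken: {broken}")
```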
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose -- spanning institutions, software, and hardware -- and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
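One of the software mechanisms the report above analyzes is audit trails. The sketch below shows a minimal hash-chained claim log in that spirit; the record format and functions are this summary's assumptions, not the report's specification.

```python
# A sketch of a hash-chained audit trail: each recorded claim commits to the
# previous entry, so later tampering is detectable by re-walking the chain.
# The record format is an illustration, not the report's specification.
import hashlib
import json

def append_claim(log, claim: dict) -> None:
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = json.dumps({"claim": claim, "prev": prev_hash}, sort_keys=True)
    log.append({"claim": claim, "prev": prev_hash,
                "entry_hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"claim": entry["claim"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["entry_hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["entry_hash"]
    return True

log: list[dict] = []
append_claim(log, {"system": "model-x", "claim": "evaluated for bias"})
append_claim(log, {"system": "model-x", "claim": "red-teamed 2020-03"})
print(verify_chain(log))  # True; altering any entry breaks verification
```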
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.