Designing Fiduciary Artificial Intelligence
- URL: http://arxiv.org/abs/2308.02435v1
- Date: Thu, 27 Jul 2023 15:35:32 GMT
- Title: Designing Fiduciary Artificial Intelligence
- Authors: Sebastian Benthall and David Shekman
- Abstract summary: This article synthesizes recent work in computer science and law to develop a procedure for designing and auditing Fiduciary AI.
The designer of a Fiduciary AI should understand the context of the system, identify its principals, and assess the best interests of those principals.
We connect the steps in this procedure to dimensions of Trustworthy AI, such as privacy and alignment.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A fiduciary is a trusted agent that has the legal duty to act with loyalty
and care towards a principal that employs them. When fiduciary organizations
interact with users through a digital interface, or otherwise automate their
operations with artificial intelligence, they will need to design these AI
systems to be compliant with their duties. This article synthesizes recent work
in computer science and law to develop a procedure for designing and auditing
Fiduciary AI. The designer of a Fiduciary AI should understand the context of
the system, identify its principals, and assess the best interests of those
principals. Then the designer must be loyal with respect to those interests,
and careful in a contextually appropriate way. We connect the steps in this
procedure to dimensions of Trustworthy AI, such as privacy and alignment.
Fiduciary AI is a promising means to address the incompleteness of data
subjects' consent when interacting with complex technical systems.
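
The five-step design-and-audit procedure described in the abstract (understand context, identify principals, assess best interests, act loyally, act carefully) can be sketched as a simple checklist structure. This is a minimal illustrative sketch only: the `FiduciaryAudit` class, its field names, and the example values are assumptions for illustration, not artifacts from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical checklist mirroring the five-step procedure from the
# abstract. Step names paraphrase the abstract; the class itself and
# all field names are illustrative assumptions.
@dataclass
class FiduciaryAudit:
    context: str                       # 1. understand the context of the system
    principals: list[str]              # 2. identify its principals
    best_interests: dict[str, str]     # 3. assess each principal's best interests
    loyalty_checks: list[str] = field(default_factory=list)  # 4. duty of loyalty
    care_checks: list[str] = field(default_factory=list)     # 5. duty of care

    def incomplete_steps(self) -> list[str]:
        """Return the names of procedure steps not yet documented."""
        steps = {
            "loyalty": bool(self.loyalty_checks),
            "care": bool(self.care_checks),
            "context": bool(self.context),
            "principals": bool(self.principals),
            "best_interests": bool(self.best_interests),
        }
        return [name for name, done in steps.items() if not done]

audit = FiduciaryAudit(
    context="robo-adviser recommending retirement portfolios",
    principals=["account holder"],
    best_interests={"account holder": "risk-appropriate long-term returns"},
)
print(audit.incomplete_steps())  # → ['loyalty', 'care']
```

An auditor could use such a structure to track which duties remain unexamined before sign-off; here the loyalty and care steps are still outstanding.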
Related papers
- Who to Trust, How and Why: Untangling AI Ethics Principles,
Trustworthiness and Trust [0.0]
We argue for the need to distinguish these concepts more clearly.
We discuss that trust in AI involves not only reliance on the system itself, but also trust in the developers of the AI system.
arXiv Detail & Related papers (2023-09-19T05:00:34Z)
- Responsible Artificial Intelligence -- from Principles to Practice [5.5586788751870175]
AI is changing the way we work, live and solve challenges.
But concerns about fairness, transparency or privacy are also growing.
Ensuring responsible, ethical AI is more than designing systems whose results can be trusted.
arXiv Detail & Related papers (2022-05-22T09:28:54Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and
Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z) - Towards a Policy-as-a-Service Framework to Enable Compliant, Trustworthy
AI and HRI Systems in the Wild [7.225523345649149]
Building trustworthy autonomous systems is challenging for many reasons beyond simply trying to engineer agents that 'always do the right thing'.
There is a broader context that is often not considered within AI and HRI: that the problem of trustworthiness is inherently socio-technical.
This paper emphasizes the "fuzzy" socio-technical aspects of trustworthiness and the need for their careful consideration during both design and deployment.
arXiv Detail & Related papers (2020-10-06T18:32:31Z) - Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable
Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z) - AI loyalty: A New Paradigm for Aligning Stakeholder Interests [0.0]
We argue that AI loyalty should be considered during the technological design process alongside other important values in AI ethics.
We discuss a range of mechanisms that could support incorporation of AI loyalty into a variety of future AI systems.
arXiv Detail & Related papers (2020-03-24T23:55:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.