Identifying Roles, Requirements and Responsibilities in Trustworthy AI Systems
- URL: http://arxiv.org/abs/2106.08258v1
- Date: Tue, 15 Jun 2021 16:05:10 GMT
- Title: Identifying Roles, Requirements and Responsibilities in Trustworthy AI Systems
- Authors: Iain Barclay, Will Abramson
- Abstract summary: We consider an AI system from the domain practitioner's perspective and identify key roles that are involved in system deployment.
We consider the differing requirements and responsibilities of each role, and identify a tension between transparency and privacy that needs to be addressed.
- Score: 2.28438857884398
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Artificial Intelligence (AI) systems are being deployed around the globe in
critical fields such as healthcare and education. In some cases, expert
practitioners in these domains are being tasked with introducing or using such
systems, but have little or no insight into what data these complex systems are
based on, or how they are put together. In this paper, we consider an AI system
from the domain practitioner's perspective and identify key roles that are
involved in system deployment. We consider the differing requirements and
responsibilities of each role, and identify a tension between transparency and
privacy that needs to be addressed so that domain practitioners are able to
intelligently assess whether a particular AI system is appropriate for use in
their domain.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Security Challenges in Autonomous Systems Design [1.864621482724548]
With their independence from human control, the cybersecurity of such systems becomes even more critical.
This paper thoroughly discusses the state of the art, identifies emerging security challenges and proposes research directions.
arXiv Detail & Related papers (2023-11-05T09:17:39Z)
- Stronger Together: on the Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML [5.433040083728602]
The need for accountability of the people behind AI systems can be addressed by leveraging processes in three fields of study: ethics, law, and computer science.
We first contrast notions of compliance in the ethical, legal, and technical fields.
We then focus on the role of values in articulating the synergies between the fields.
arXiv Detail & Related papers (2023-05-09T15:35:31Z)
- Knowledge-intensive Language Understanding for Explainable AI [9.541228711585886]
Understanding how AI-led decisions are made, and which factors determined them, is crucial.
It is critical to have human-centered explanations that are directly related to decision-making.
It is necessary to involve explicit domain knowledge that humans understand and use.
arXiv Detail & Related papers (2021-08-02T21:12:30Z)
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
- Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system other than the one for which it was originally designed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
- Trustworthy AI in the Age of Pervasive Computing and Big Data [22.92621391190282]
We formalise the requirements of trustworthy AI systems through an ethics perspective.
After discussing the state of research and the remaining challenges, we show how a concrete use-case in smart cities can benefit from these methods.
arXiv Detail & Related papers (2020-01-30T08:09:31Z)
- AAAI FSS-19: Human-Centered AI: Trustworthiness of AI Models and Data Proceedings [8.445274192818825]
It is crucial for predictive models to be uncertainty-aware and yield trustworthy predictions.
The focus of this symposium was on AI systems to improve data quality and technical robustness and safety.
Submissions from broadly defined areas also discussed approaches addressing requirements such as explainable models, human trust, and ethical aspects of AI.
arXiv Detail & Related papers (2020-01-15T15:30:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.