Accountability in AI: From Principles to Industry-specific Accreditation
- URL: http://arxiv.org/abs/2110.09232v1
- Date: Fri, 8 Oct 2021 16:37:11 GMT
- Title: Accountability in AI: From Principles to Industry-specific Accreditation
- Authors: Chris Percy, Simo Dragicevic, Sanjoy Sarkar, Artur S. d'Avila Garcez
- Abstract summary: Recent AI-related scandals have cast a spotlight on accountability in AI.
This paper draws on literature from public policy and governance to make two contributions.
- Score: 4.033641609534416
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent AI-related scandals have cast a spotlight on accountability in AI,
with increasing public interest and concern. This paper draws on literature
from public policy and governance to make two contributions. First, we propose
an AI accountability ecosystem as a useful lens on the system, with different
stakeholders requiring and contributing to specific accountability mechanisms.
We argue that the present ecosystem is unbalanced, with a need for improved
transparency via AI explainability and adequate documentation and process
formalisation to support internal audit, leading up eventually to external
accreditation processes. Second, we use a case study in the gambling sector to
illustrate in a subset of the overall ecosystem the need for industry-specific
accountability principles and processes. We define and critically evaluate the
implementation of key accountability principles in the gambling industry,
namely addressing algorithmic bias and model explainability, before concluding
and discussing directions for future work based on our findings. Keywords:
Accountability, Explainable AI, Algorithmic Bias, Regulation.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- AI Governance and Accountability: An Analysis of Anthropic's Claude [0.0]
This paper examines the AI governance landscape, focusing on Anthropic's Claude, a foundational AI model.
We analyze Claude through the lens of the NIST AI Risk Management Framework and the EU AI Act, identifying potential threats and proposing mitigation strategies.
arXiv Detail & Related papers (2024-05-02T23:37:06Z)
- AI auditing: The Broken Bus on the Road to AI Accountability [1.9758196889515185]
The "AI audit" ecosystem is muddled and imprecise, making it difficult to work through various concepts and map out the stakeholders involved in the practice.
First, we taxonomize current AI audit practices as completed by regulators, law firms, civil society, journalism, academia, and consulting agencies.
We find that only a subset of AI audit studies translate to desired accountability outcomes.
arXiv Detail & Related papers (2024-01-25T19:00:29Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Responsible AI Governance: A Systematic Literature Review [8.318630741859113]
This paper aims to examine the existing literature on AI Governance.
The focus of this study is to analyse the literature to answer key questions: WHO is accountable for AI systems' governance, WHAT elements are being governed, WHEN governance occurs within the AI development life cycle, and HOW it is executed through various mechanisms like frameworks, tools, standards, policies, or models.
The findings of this study provide a foundational basis for future research and the development of comprehensive governance models that align with RAI principles.
arXiv Detail & Related papers (2023-12-18T05:22:36Z)
- Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK [1.5039745292757671]
We perform the first thematic and gap analysis of policies and standards on explainability in the EU, US, and UK.
We find that policies are often informed by coarse notions and requirements for explanations.
We propose recommendations on how to address explainability in regulations for AI systems.
arXiv Detail & Related papers (2023-04-20T07:53:07Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Towards Accountability in the Use of Artificial Intelligence for Public Administrations [0.0]
We argue that the phenomena of distributed responsibility, induced acceptance, and acceptance through ignorance constitute instances of imperfect delegation when tasks are delegated to computationally-driven systems.
We hold that both direct public accountability via public transparency and indirect public accountability via transparency to auditors in public organizations can be both instrumentally ethically valuable and required as a matter of deontology from the principle of democratic self-government.
arXiv Detail & Related papers (2021-05-04T11:50:04Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.