Think About the Stakeholders First! Towards an Algorithmic Transparency
Playbook for Regulatory Compliance
- URL: http://arxiv.org/abs/2207.01482v1
- Date: Fri, 10 Jun 2022 09:39:00 GMT
- Title: Think About the Stakeholders First! Towards an Algorithmic Transparency
Playbook for Regulatory Compliance
- Authors: Andrew Bell, Oded Nov, Julia Stoyanovich
- Abstract summary: Laws are being proposed and passed by governments around the world to regulate Artificial Intelligence (AI) systems implemented in the public and private sectors.
Many of these regulations address the transparency of AI systems and related citizen-focused issues, such as granting individuals the right to an explanation of how an AI system makes a decision that impacts them.
We propose a novel stakeholder-first approach that assists technologists in designing transparent, regulatory-compliant systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Increasingly, laws are being proposed and passed by governments around the
world to regulate Artificial Intelligence (AI) systems implemented in the
public and private sectors. Many of these regulations address the transparency
of AI systems and related citizen-focused issues, such as granting individuals
the right to an explanation of how an AI system makes a decision that impacts
them. Yet, almost all AI governance documents to date have a significant
drawback: they have focused on what to do (or what not to do) with respect to
making AI systems transparent, but have left the brunt of the work to
technologists to figure out how to build transparent systems. We fill this gap
by proposing a novel stakeholder-first approach that assists technologists in
designing transparent, regulatory-compliant systems. We also describe a
real-world case study that illustrates how this approach can be used in
practice.
Related papers
- Participatory Approaches in AI Development and Governance: A Principled Approach [9.271573427680087]
This paper forms the first part of a two-part series on participatory governance in AI.
It advances the premise that a participatory approach is beneficial to building and using more responsible, safe, and human-centric AI systems.
arXiv Detail & Related papers (2024-06-03T09:49:42Z)
- Artificial General Intelligence (AGI)-Native Wireless Systems: A Journey Beyond 6G [58.440115433585824]
Building future wireless systems that support services like digital twins (DTs) is difficult to achieve through advances in conventional technologies like meta-surfaces.
While artificial intelligence (AI)-native networks promise to overcome some limitations of wireless technologies, developments still rely on AI tools like neural networks.
This paper revisits the concept of AI-native wireless systems, equipping them with the common sense necessary to transform them into artificial general intelligence (AGI)-native systems.
arXiv Detail & Related papers (2024-04-29T04:51:05Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing bias within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination [1.4305544869388402]
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making.
The integration of KGs with neural learning is currently a topic of active research.
This paper conceptualises the foundational topics and research pillars to support KG-based AI for self-determination.
arXiv Detail & Related papers (2023-10-30T12:51:52Z)
- Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability [0.0]
As AI systems become increasingly sophisticated, ensuring their transparency and explainability becomes crucial.
We propose a user-centered, compliant-by-design approach to transparency in AI systems.
By providing a comprehensive understanding of the challenges associated with transparency in AI systems, we aim to facilitate the development of AI systems that are accountable, trustworthy, and aligned with societal values.
arXiv Detail & Related papers (2023-10-13T04:25:30Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play in making the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Governance of Ethical and Trustworthy AI Systems: Research Gaps in the ECCOLA Method [5.28595286827031]
This research analyzes the ECCOLA method for developing ethical and trustworthy AI systems.
The results demonstrate that while ECCOLA fully facilitates AI governance within corporate governance practices across all of its processes, some of its practices do not fully foster data governance and information governance.
arXiv Detail & Related papers (2021-11-11T13:54:31Z)
- Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems have been found vulnerable to imperceptible attacks, biased against underrepresented groups, and lacking in user privacy protection.
In this review, we strive to provide AI practitioners a comprehensive guide towards building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z)
- Signs for Ethical AI: A Route Towards Transparency [0.0]
We propose a transparency scheme to be implemented on any AI system open to the public. The scheme has two components: the first recognizes the relevance of data for AI and is supported by privacy; the second addresses aspects of AI transparency that are currently unregulated: AI capabilities, purpose, and source.
arXiv Detail & Related papers (2020-09-29T08:49:44Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable
Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.