Statutory Professions in AI governance and their consequences for explainable AI
- URL: http://arxiv.org/abs/2306.08959v1
- Date: Thu, 15 Jun 2023 08:51:28 GMT
- Title: Statutory Professions in AI governance and their consequences for explainable AI
- Authors: Labhaoise NiFhaolain, Andrew Hines, Vivek Nallur
- Abstract summary: Intentional and accidental harms arising from the use of AI have impacted the health, safety and rights of individuals.
We propose that a statutory profession framework be introduced as a necessary part of the AI regulatory framework.
- Score: 2.363388546004777
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intentional and accidental harms arising from the use of AI have impacted the
health, safety and rights of individuals. While regulatory frameworks are being
developed, there remains a lack of consensus on methods necessary to deliver
safe AI. The potential for explainable AI (XAI) to contribute to the
effectiveness of the regulation of AI is being increasingly examined.
Regulation must include methods to ensure compliance on an ongoing basis,
though there is an absence of practical proposals on how to achieve this. For
XAI to be successfully incorporated into a regulatory system, the individuals
who are engaged in interpreting/explaining the model to stakeholders should be
sufficiently qualified for the role. Statutory professionals are prevalent in
domains in which harm can be done to the health, safety and rights of
individuals. The most obvious examples are doctors, engineers and lawyers.
Those professionals are required to exercise skill and judgement and to defend
their decision-making process in the event that harm occurs. We propose that a
statutory profession framework be introduced as a necessary part of the AI
regulatory framework for compliance and monitoring purposes. We will refer to
this new statutory professional as an AI Architect (AIA). This AIA would be
responsible for ensuring that the risk of harm is minimised and would be
accountable in the event that harm occurs. The AIA would also be relied on to provide appropriate
interpretations/explanations of XAI models to stakeholders. Further, in order
to satisfy themselves that models have been developed responsibly, the AIA
would require those models to have appropriate transparency.
Therefore, it is likely that the introduction of an AIA system would lead to an
increase in the use of XAI, enabling AIAs to discharge their professional
obligations.
Related papers
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence [0.0]
We explore the applicability of approval regulation -- that is, regulation of a product that combines experimental minima with government licensure conditioned partially or fully upon that experimentation -- to the regulation of frontier AI.
There are a number of reasons to believe that approval regulation, simplistically applied, would be inapposite for frontier AI risks.
We conclude by highlighting the role of policy learning and experimentation in regulatory development.
arXiv Detail & Related papers (2024-08-01T17:54:57Z)
- AI Cards: Towards an Applied Framework for Machine-Readable AI and Risk Documentation Inspired by the EU AI Act [2.1897070577406734]
Despite its importance, there is a lack of standards and guidelines to assist with drawing up AI and risk documentation aligned with the AI Act.
We propose AI Cards as a novel holistic framework for representing a given intended use of an AI system.
arXiv Detail & Related papers (2024-06-26T09:51:49Z)
- False Sense of Security in Explainable Artificial Intelligence (XAI) [3.298597939573779]
We argue that AI regulations and current market conditions threaten effective AI governance and safety.
Unless governments explicitly tackle the issue of explainability through clear legislative and policy statements, AI governance risks becoming a vacuous "box-ticking" exercise.
arXiv Detail & Related papers (2024-05-06T20:02:07Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Is the U.S. Legal System Ready for AI's Challenges to Human Values? [16.510834081597377]
This study investigates how effectively U.S. laws confront the challenges posed by Generative AI to human values.
We identify notable gaps and uncertainties within the existing legal framework regarding the protection of fundamental values.
We advocate for legal frameworks that evolve to recognize new threats and provide proactive, auditable guidelines to industry stakeholders.
arXiv Detail & Related papers (2023-08-30T09:19:06Z)
- AI Liability Insurance With an Example in AI-Powered E-diagnosis System [22.102728605081534]
We use an AI-powered E-diagnosis system as an example to study AI liability insurance.
We show that AI liability insurance can act as a regulatory mechanism to incentivize compliant behaviors and serve as a certificate of high-quality AI systems.
arXiv Detail & Related papers (2023-06-01T21:03:47Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.