Statutory Professions in AI governance and their consequences for
explainable AI
- URL: http://arxiv.org/abs/2306.08959v1
- Date: Thu, 15 Jun 2023 08:51:28 GMT
- Authors: Labhaoise NiFhaolain, Andrew Hines, Vivek Nallur
- Abstract summary: Intentional and accidental harms arising from the use of AI have impacted the health, safety and rights of individuals.
We propose that a statutory profession framework be introduced as a necessary part of the AI regulatory framework.
- Score: 2.363388546004777
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intentional and accidental harms arising from the use of AI have impacted the
health, safety and rights of individuals. While regulatory frameworks are being
developed, there remains a lack of consensus on methods necessary to deliver
safe AI. The potential for explainable AI (XAI) to contribute to the
effectiveness of the regulation of AI is being increasingly examined.
Regulation must include methods to ensure compliance on an ongoing basis,
though there is an absence of practical proposals on how to achieve this. For
XAI to be successfully incorporated into a regulatory system, the individuals
who are engaged in interpreting/explaining the model to stakeholders should be
sufficiently qualified for the role. Statutory professionals are prevalent in
domains in which harm can be done to the health, safety and rights of
individuals. The most obvious examples are doctors, engineers and lawyers.
Those professionals are required to exercise skill and judgement and to defend
their decision-making process in the event of harm occurring. We propose that a
statutory profession framework be introduced as a necessary part of the AI
regulatory framework for compliance and monitoring purposes. We will refer to
this new statutory professional as an AI Architect (AIA). This AIA would be
responsible for ensuring that the risk of harm is minimised, and accountable in the
event that harms occur. The AIA would also be relied on to provide appropriate
interpretations/explanations of XAI models to stakeholders. Further, in order
to satisfy themselves that the models have been developed properly, the AIA
would require models to have appropriate transparency. It is therefore likely
that the introduction of an AIA system would lead to an increase in the use of
XAI, enabling AIAs to discharge their professional obligations.
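The paper proposes a regulatory framework rather than a technical method, but as a concrete illustration of the kind of transparency artifact an AIA might interpret for stakeholders, the minimal Python sketch below computes a model-agnostic feature-attribution report using scikit-learn's permutation importance. This is an illustrative assumption, not anything prescribed by the paper: permutation importance is only one of many XAI techniques an AIA could draw on, and the dataset and model here are placeholders.

```python
# Illustrative sketch only: the paper proposes a professional (AIA) framework,
# not code. This shows one kind of model-transparency artifact (a model-agnostic
# feature-attribution report) that an AI Architect might interpret for
# stakeholders. Dataset and model are hypothetical stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a toy tabular dataset as a DataFrame so features keep readable names.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature on held-out data and measure the
# drop in score; larger drops indicate features the model depends on more.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five most influential features with uncertainty estimates.
ranked = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda t: t[1],
    reverse=True,
)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

A report like this is auditable on an ongoing basis (it can be re-run against production data), which is the property the abstract's compliance-monitoring argument turns on.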
Related papers
- AI Cards: Towards an Applied Framework for Machine-Readable AI and Risk Documentation Inspired by the EU AI Act [2.1897070577406734] (arXiv 2024-06-26)
  Despite its importance, there is a lack of standards and guidelines to assist with drawing up AI and risk documentation aligned with the AI Act. We propose AI Cards as a novel holistic framework for representing a given intended use of an AI system.
- False Sense of Security in Explainable Artificial Intelligence (XAI) [3.298597939573779] (arXiv 2024-05-06)
  We argue that AI regulations and current market conditions threaten effective AI governance and safety. Unless governments explicitly tackle the issue of explainability through clear legislative and policy statements, AI governance risks becoming a vacuous "box-ticking" exercise.
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614] (arXiv 2024-03-21)
  Particip-AI is a framework for gathering current and future AI use cases, along with their harms and benefits, from the non-expert public. We gather responses from 295 demographically diverse participants.
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505] (arXiv 2024-03-07)
  Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal. We propose that major AI developers commit to providing a legal and technical safe harbor. We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669] (arXiv 2024-02-21)
  The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs). It examines the challenges introduced by AI components and their impact on testing procedures. The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
- A risk-based approach to assessing liability risk for AI-driven harms considering EU liability directive [0.0] (arXiv 2023-12-18)
  Historical instances of harm caused by AI have led to the European Union establishing an AI Liability Directive. The future ability of a provider to contest a product liability claim will depend on good practices adopted in designing, developing, and maintaining AI systems. This paper provides a risk-based approach to examining liability for AI-driven injuries.
- Is the U.S. Legal System Ready for AI's Challenges to Human Values? [16.510834081597377] (arXiv 2023-08-30)
  This study investigates how effectively U.S. laws confront the challenges posed by Generative AI to human values. We identify notable gaps and uncertainties within the existing legal framework regarding the protection of fundamental values. We advocate for legal frameworks that evolve to recognize new threats and provide proactive, auditable guidelines to industry stakeholders.
- AI Liability Insurance With an Example in AI-Powered E-diagnosis System [22.102728605081534] (arXiv 2023-06-01)
  We use an AI-powered E-diagnosis system as an example to study AI liability insurance. We show that AI liability insurance can act as a regulatory mechanism to incentivize compliant behaviors and serve as a certificate of high-quality AI systems.
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816] (arXiv 2022-01-26)
  Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations. It will allow examination and testing of AI system predictions to establish a basis for trust in the systems' decision-making.
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948] (arXiv 2021-05-07)
  This paper proposes a comprehensive analysis of existing concepts of intelligence drawn from different disciplines. The aim is to identify shared notions or discrepancies to consider when qualifying AI systems.
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249] (arXiv 2020-04-15)
  AI developers need to make verifiable claims to which they can be held accountable. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems. We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.