APPRAISE: a governance framework for innovation with AI systems
- URL: http://arxiv.org/abs/2309.14876v2
- Date: Mon, 11 Dec 2023 16:49:14 GMT
- Title: APPRAISE: a governance framework for innovation with AI systems
- Authors: Diptish Dey and Debarati Bhaumik
- Abstract summary: The EU Artificial Intelligence Act (AIA) is the first serious legislative attempt to contain the harmful effects of AI systems.
This paper proposes a governance framework for AI innovation.
The framework bridges the gap between strategic variables and responsible value creation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As artificial intelligence (AI) systems increasingly impact society, the EU
Artificial Intelligence Act (AIA) is the first serious legislative attempt to
contain the harmful effects of AI systems. This paper proposes a governance
framework for AI innovation. The framework bridges the gap between strategic
variables and responsible value creation, recommending audit as an enforcement
mechanism. Strategic variables include, among others, organization size and the
exploration-versus-exploitation and build-versus-buy dilemmas. The proposed
framework is based on primary and secondary research; the latter describes four
pressures that organizations innovating with AI experience. Primary research
includes an experimental setup through which 34 organizations in the Netherlands
were surveyed, followed by 2 validation interviews. The survey measures the
extent to which organizations coordinate technical elements of AI systems to
ultimately comply with the AIA. The validation interviews generated additional
in-depth insights and revealed root causes. The moderating effect of the
strategic variables is tested and found to be statistically significant for
variables such as organization size. Relevant insights from primary and
secondary research are eventually combined to propose the APPRAISE framework.
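The moderation test mentioned in the abstract is typically operationalized as a regression with an interaction term. The following minimal sketch illustrates that setup, assuming an OLS model; the variable names (coordination, org_size, compliance) and the simulated data are hypothetical, not the authors' survey data or analysis.

```python
# Hypothetical sketch of a moderation analysis: does organization size
# moderate the effect of coordination on AIA compliance readiness?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 34  # the survey covered 34 organizations
df = pd.DataFrame({
    "coordination": rng.uniform(1, 5, n),  # coordination of technical elements
    "org_size": rng.integers(0, 2, n),     # 0 = small/medium, 1 = large
})
df["compliance"] = (
    0.4 * df["coordination"]
    + 0.5 * df["coordination"] * df["org_size"]
    + rng.normal(0, 0.5, n)
)

# The interaction term carries the moderation effect: a significant
# coefficient on coordination:org_size indicates that organization size
# moderates the coordination-compliance relationship.
model = smf.ols("compliance ~ coordination * org_size", data=df).fit()
print(model.summary())
```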
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - Trustworthy AI: Deciding What to Decide [41.10597843436572]
We propose a novel framework of Trustworthy AI (TAI) encompassing crucial components of AI.
We aim to use this framework to conduct TAI experiments using quantitative and qualitative research methods.
We formulate an optimal prediction model for applying the strategic investment decision of credit default swaps (CDS) in the technology sector.
arXiv Detail & Related papers (2023-11-21T13:43:58Z) - Assessing AI Impact Assessments: A Classroom Study [14.768235460961876]
Artificial Intelligence Impact Assessments ("AIIAs"), a family of tools that provide structured processes to imagine the possible impacts of a proposed AI system, have become an increasingly popular proposal to govern AI systems.
Recent efforts from government or private-sector organizations have proposed many diverse instantiations of AIIAs, which take a variety of forms ranging from open-ended questionnaires to graded score-cards.
We conduct a classroom study at a large research-intensive university (R1) in an elective course focused on the societal and ethical implications of AI.
We find preliminary evidence that impact assessments can influence participants' perceptions of the potential ...
arXiv Detail & Related papers (2023-11-19T01:00:59Z) - AI Deception: A Survey of Examples, Risks, and Potential Solutions [20.84424818447696]
This paper argues that a range of current AI systems have learned how to deceive humans.
We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth.
arXiv Detail & Related papers (2023-08-28T17:59:35Z) - A Game-Theoretic Framework for AI Governance [8.658519485150423]
We show that the strategic interaction between the regulatory agencies and AI firms has an intrinsic structure reminiscent of a Stackelberg game.
We propose a game-theoretic modeling framework for AI governance.
To the best of our knowledge, this work is the first to use game theory for analyzing and structuring AI governance.
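As a toy illustration of the Stackelberg structure described above: the regulator (leader) commits to a penalty level, the AI firm (follower) best-responds with a compliance effort, and the regulator optimizes while anticipating that response. The payoff functions and parameters below are hypothetical assumptions for illustration, not taken from the paper.

```python
# Toy Stackelberg game between a regulator (leader) and an AI firm (follower).
# Payoffs and parameters are illustrative assumptions, not from the paper.
import numpy as np

def firm_best_response(penalty, cost=1.0):
    # Firm picks compliance effort e in [0, 1] maximizing
    # penalty * e - cost * e**2 (avoided fines grow with effort).
    # First-order condition: e* = penalty / (2 * cost), clipped to [0, 1].
    return min(1.0, max(0.0, penalty / (2 * cost)))

def regulator_payoff(penalty, social_weight=2.0, enforcement_cost=0.5):
    # Regulator values induced compliance but pays to enforce stiffer penalties.
    e = firm_best_response(penalty)
    return social_weight * e - enforcement_cost * penalty

# Leader moves first: search over penalty levels, anticipating the follower.
penalties = np.linspace(0, 3, 301)
best = max(penalties, key=regulator_payoff)
print(f"optimal penalty={best:.2f}, induced compliance={firm_best_response(best):.2f}")
```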
arXiv Detail & Related papers (2023-05-24T08:18:42Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - SoK: On the Semantic AI Security in Autonomous Driving [42.15658768948801]
Autonomous Driving systems rely on AI components to make safe and correct driving decisions.
For such AI component-level vulnerabilities to be semantically impactful at the system level, an attack must address non-trivial semantic gaps.
In this paper, we define such research space as semantic AI security as opposed to generic AI security.
arXiv Detail & Related papers (2022-03-10T12:00:34Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
However, there is currently no framework that guides the selection of the information needed to assess a component's portability to a system other than the one for which it was originally developed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
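The abstract does not specify the template's contents, but the kind of interface information it alludes to can be sketched as structured metadata. The dataclass below is a hypothetical illustration of such a description; its fields are guesses at the "main information" such a template might hold, not the template the paper proposes.

```python
# Hypothetical interface description for an AI-enabled component; the fields
# are illustrative assumptions, not the paper's actual template.
from dataclasses import dataclass, field

@dataclass
class AIComponentInterface:
    name: str
    task: str                  # e.g. "object detection"
    input_spec: dict           # shape/type/units of expected inputs
    output_spec: dict          # shape/type/semantics of outputs
    training_data_domain: str  # domain the component was trained for
    reported_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

card = AIComponentInterface(
    name="pedestrian-detector-v1",
    task="object detection",
    input_spec={"type": "RGB image", "shape": (480, 640, 3)},
    output_spec={"type": "bounding boxes", "format": "xyxy + confidence"},
    training_data_domain="daytime urban driving, European roads",
    reported_metrics={"mAP@0.5": 0.71},
    known_limitations=["untested in heavy rain", "night-time performance unknown"],
)
print(card.task, "->", card.training_data_domain)
```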
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.