An interdisciplinary conceptual study of Artificial Intelligence (AI)
for helping benefit-risk assessment practices: Towards a comprehensive
qualification matrix of AI programs and devices (pre-print 2020)
- URL: http://arxiv.org/abs/2105.03192v1
- Date: Fri, 7 May 2021 12:01:31 GMT
- Title: An interdisciplinary conceptual study of Artificial Intelligence (AI)
for helping benefit-risk assessment practices: Towards a comprehensive
qualification matrix of AI programs and devices (pre-print 2020)
- Authors: Gauthier Chassang (INSERM, PFGS), Mogens Thomsen (INSERM), Pierre
Rumeau, Florence Sèdes (IRIT), Alejandra Delfin (INSERM)
- Abstract summary: This paper proposes a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
- Score: 55.41644538483948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a comprehensive analysis of existing concepts
from different disciplines that tackle the notion of intelligence, namely
psychology and engineering, and from disciplines aiming to regulate AI
innovations, namely AI ethics and law. The aim is to identify shared notions
or discrepancies to consider for qualifying AI systems. Relevant concepts are
integrated into a matrix intended to help define more precisely when and how
computing tools (programs or devices) may be qualified as AI, while
highlighting critical features to serve a specific technical, ethical and
legal assessment of challenges in AI development. Some adaptations of existing
notions of AI characteristics are proposed. The matrix is a risk-based
conceptual model designed to allow an empirical, flexible and scalable
qualification of AI technologies from the perspective of benefit-risk
assessment practices, technological monitoring and regulatory compliance: it
offers a structured reflection tool for stakeholders in AI development who are
engaged in responsible research and innovation. Pre-print version (completed
in May 2020).
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that many of AI's shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- AI Literacy for All: Adjustable Interdisciplinary Socio-technical Curriculum [0.8879149917735942]
This paper presents a curriculum, "AI Literacy for All," to promote an interdisciplinary understanding of AI.
The paper presents four pillars of AI literacy: understanding the scope and technical dimensions of AI, learning how to interact with Gen-AI in an informed and responsible way, the socio-technical issues of ethical and responsible AI, and the social and future implications of AI.
arXiv Detail & Related papers (2024-09-02T13:13:53Z)
- Responsible Artificial Intelligence: A Structured Literature Review [0.0]
The EU has recently issued several publications emphasizing the necessity of trust in AI.
This highlights the urgent need for international regulation.
This paper introduces a comprehensive and, to our knowledge, first unified definition of responsible AI.
arXiv Detail & Related papers (2024-03-11T17:01:13Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- POLARIS: A framework to guide the development of Trustworthy AI systems [3.02243271391691]
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI - designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z)
- A Vision for Operationalising Diversity and Inclusion in AI [5.4897262701261225]
This study seeks to envision the operationalization of the ethical imperatives of diversity and inclusion (D&I) within AI ecosystems.
A significant challenge in AI development is the effective operationalization of D&I principles.
This paper proposes a vision for a framework for developing a tool that uses persona-based simulation by Generative AI (GenAI).
arXiv Detail & Related papers (2023-12-11T02:44:39Z)
- Artificial intelligence in government: Concepts, standards, and a unified framework [0.0]
Recent advances in artificial intelligence (AI) hold the promise of transforming government.
It is critical that new AI systems behave in alignment with the normative expectations of society.
arXiv Detail & Related papers (2022-10-31T10:57:20Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.